Ev Kontsevoy, Teleport | AWS re:Invent 2022


 

>>Hello everyone and welcome back to Las Vegas. I've got my jazz hands because I am very jazzed to be here at AWS re:Invent, live from the show floor all week. My name is Savannah Peterson, joined by the infamous John Furrier. John, how are you feeling? >>Feeling great. Love what's going on here. The vibe is cloud, cloud native. Lots of security conversation, data, stuff we love. Cloud native, I mean, big news. Security, a security data lake. I mean, who would've thought Amazon would have a security data lake? You know, EKS, I mean >>You might have, with that tweet you had out. >>Inside, outside the containers. Reminds me, it feels like KubeCon here. >>It honestly does, and there's a lot of overlap, and it's interesting that you mention KubeCon, because we talked to our next company when we were in Detroit just a couple weeks ago. Ev is the CEO and founder of Teleport. Ev, welcome to the show. How are you doing? >>I'm doing well. Thank you for having me today. >>We feel very lucky to have you. We hosted Drew, who works on the product marketing side of Teleport. Yeah, we got to talk caddies and golf last time on the show. We'll talk about some of your hobbies a little bit later, but just in case someone's tuning in unfamiliar with Teleport, you're all about identity. Give us a little bit of a pitch. >>A little bit of our pitch. Teleport is the first identity-native infrastructure access platform. It's used by engineers and it's used by machines. So notice that I used a very specific choice of words: first, identity-native. What does it mean, identity-native? It consists of three things, and we're writing a book about those, but I'll let you know. Stay >>Tuned on that front. >>Exactly, yes, but I can talk about 'em today. So the first component of identity-native access is moving away from secrets towards true identity. By secrets, I mean things like passwords, private keys, browser cookies, session tokens, API keys. All of these things are secrets, and they make you vulnerable.
The point is, as you scale, it's absolutely impossible to protect all of the secrets, because they keep growing and multiplying. So the probability of you getting hacked over time is high. So you need to get rid of secrets altogether. That's the first thing that we do. We use something called true identity. It's a combination of your biometrics as well as the identity of your machines. That's TPMs, HSMs, YubiKeys, and so on and so forth. >>Go >>Ahead. The second component is zero trust. Teleport is built to not trust the network. So every resource inside of your data center automatically gets configured as if there is no perimeter; it's as secure as if it were sitting on the public network. So that's the second thing: don't trust the network. And the third one is that we keep access policy in one place. So Kubernetes clusters, databases, SSH, RDP, all of these protocols, the access policy will be in one place. That's identity. Okay? >>So I'm, I'm a hacker. Pretend I'm a hacker. >>Easy. That sounds, >>That sounds really good to me. Yeah, I'm supposed to tell 'em you're a hacker. Okay. I can go to one place and hack that. >>I get this question a lot. The thing is, you want centralization when it comes to security. Think about your house being your AWS account. Okay? Everything inside, your furniture, your valuables, like your watch collection, that's your data, that's your servers, Kubernetes clusters, and so on and so forth. Now, you have a choice, and your house is in a really bad neighborhood. Okay, that's the bad internet. Do you wanna have 20 different doors, or do you want to have one, but an amazing one, extremely secure, very modern? So it's very easy for you to actually maintain it and enforce policy. So the answer is, oh, you probably need to have >>One. And so you're designing security identity from a perspective of what's best for the security posture. Exactly.
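The move away from secrets he's describing can be sketched as a simple conjunction of factors: a stolen password or SSO credential alone no longer grants access, because proofs bound to hardware and biometrics are also required. This is an illustrative sketch, not Teleport's implementation, and the factor names are invented for the example.

```python
# Illustrative sketch (not Teleport's implementation): "true identity" as a
# conjunction of independent factors. Factor names are invented; real systems
# bind these proofs cryptographically via TPMs, HSMs, or YubiKeys.
REQUIRED_FACTORS = {"sso_credential", "device_tpm_attestation", "biometric_match"}

def identity_verified(presented_factors):
    """All required factors must be present; a stolen secret alone fails."""
    return REQUIRED_FACTORS.issubset(presented_factors)

# A phished password or session token by itself is useless:
print(identity_verified({"sso_credential"}))       # False
# Credential plus hardware-bound proofs establishes identity:
print(identity_verified(set(REQUIRED_FACTORS)))    # True
```

The point of the sketch is only the shape of the check: compromising any single factor, the scenario he warns about with secrets, is no longer sufficient.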
Sounds like, okay, so now, isn't that against the conventional wisdom of "the perimeter's dead, the cloud's everywhere"? So in a way it kind of brings perimeter concepts into the posture, because, you know, the old model of the firewall, the moat >>Yeah. It just doesn't scale. >>It doesn't scale. You guys bring a different solution. How do you fit into the new perimeter-is-dead cloud paradigm? >>So the way it works is that if you are using Teleport to access your infrastructure, let's just use, for example, a server access perspective. That machine that you're accessing doesn't listen on a network if it runs Teleport. So instead, Teleport creates these trusted outbound tunnels to the proxy. So essentially you are managing devices using outgoing connections. It's kind of like how your phone runs. Yeah. Like your phone is actually the ultimate, it's like a teleport. It's >>Like teleporting into your environment. >>Yeah, well played, John. But think about one example of an amazing company that's true zero trust that we're all familiar with: Apple. Because every time you get a new iOS on your phone, how is that different from Apple running a massive software deployment into an enormous cloud with billions of devices sprinkled all over the world, without a perimeter? How is it possible? That's exactly the kind of technology that Teleport >>Gives you. I'm glad you clarified. I really wanted to get that out on the table. Cuz Savannah, this is, this is the paradigm shift around what an environment is. Exactly. Love the Apple example. So, okay, tell 'em about customer traction. Are people getting it right away? Are their teams ready? Do they go, oh my god, this is >>Great. Pretty much. You see, we're kind of lucky in this business, and I'm walking around looking at all these successful startups; every single one of them has a story about launching the right thing at just the right moment.
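Stepping back to the outbound-tunnel model Ev described a moment ago, a rough sketch of the idea: the managed node dials out to a proxy and receives requests over that connection, so the node itself never listens for inbound traffic. Class and method names here are invented for illustration; real Teleport uses mutually authenticated, encrypted tunnels, not in-process queues.

```python
import queue

# Rough sketch of "nothing listens on the network": the node registers an
# OUTBOUND connection with the proxy, and all requests flow back over it.
class Proxy:
    def __init__(self):
        self.tunnels = {}            # node name -> queue fed by the proxy

    def register(self, node_name):   # called when a node dials out
        q = queue.Queue()
        self.tunnels[node_name] = q
        return q

    def send(self, node_name, request):
        self.tunnels[node_name].put(request)

class Node:
    def __init__(self, name, proxy):
        self.inbox = proxy.register(name)   # outbound registration only

    def poll(self):
        return self.inbox.get_nowait()

proxy = Proxy()
node = Node("web-1", proxy)      # the node reaches out; no open ports
proxy.send("web-1", "uptime")    # requests travel back over the tunnel
print(node.poll())               # uptime
```

The design choice being illustrated is the direction reversal: because the node only ever makes outgoing connections, like a phone checking for an iOS update, there is no listening port for an attacker to scan.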
Like in technology, the window to launch something is extremely short. Like months, I'm literally talking months. So we built Teleport, started to work on it in like 2015. It was an internal project, believe it or not; it's a really popular path, actually: internal project, put it on GitHub, and it sat there relatively unnoticed for a while, and then it just took off around 2020. >>Because people started to feel the pain. They needed it. >>Exactly, exactly. >>Yeah, the timing. Well, and what a great way to figure out when the timing is right. When you do something like that, put it on GitHub. Yeah. >>People >>Tell you what's up. >>Yeah. It's like a basketball player who can just be suspended in the air over the hoop for like half the game, and then finally he scores and wins >>The game. Or a video gamer who's lagged, everyone else is lagging, and they got the latency thing. Exactly. Okay. Talk about the engineering side. Cause I liked this at KubeCon; you mentioned it at the opening of this segment, that you guys are for engineers, not IT >>Business people. That's right. >>Explain that. Interesting. This is super important. Explain why, and why that's resonating. >>So there is this ongoing shift of more and more responsibilities going to engineers. Remember back in the day, before we even had clouds, we had people actually racking servers, sticking cables into them, cutting their fingers trying to get 'em in. So those were not engineers; those were different teams. Yeah. But then you had system administrators who would maintain these machines for you. Now all of these things are done with code. And when these things are done with code and with APIs, that shifts to engineers. That is what Teleport does with policy. So if you want to have a set of rules that govern who or what, and when, under what circumstances, can access what data, on Kubernetes, on databases, on servers, wouldn't it be nice to use code for it?
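The policy-as-code idea he's getting at might look something like the following sketch: access rules live as data in a repository, reviewed and version-controlled like any other code, and one function enforces them everywhere. The rule format below is invented for illustration and is not Teleport's actual policy syntax.

```python
# Hypothetical "policy as code": rules are plain data, so every change is a
# reviewable diff in version control rather than a click in a GUI.
POLICY = [
    # (role, resource_kind, allowed_actions)
    ("developer", "kubernetes", {"read"}),
    ("sre", "kubernetes", {"read", "write"}),
    ("sre", "database", {"read"}),
]

def is_allowed(role, resource_kind, action):
    """True if any rule grants `action` on `resource_kind` for `role`."""
    return any(
        r == role and k == resource_kind and action in actions
        for r, k, actions in POLICY
    )

print(is_allowed("developer", "kubernetes", "read"))   # True
print(is_allowed("developer", "kubernetes", "write"))  # False
print(is_allowed("sre", "database", "read"))           # True
```

Because the policy is data under version control, "who changed what access, and when" falls out of the Git history for free, which is the part of the pitch aimed at engineers.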
So then you could use version control and you can keep track of changes. That's what Teleport enables. Traditionally, IT preferred more kind of clicky, graphical things, like clicking buttons. And so it's just a different world, a different way of doing it. So essentially, if you want security as code, that's what Teleport provides, and naturally this language resonates with this persona. >>Love that. Security as code. It's >>A great term. Yeah. Love it. I wanna, I wanna, >>Okay. We coined it; someone else uses it on the show. >>We'll borrow it >>With credit. When did you, when did you coin that? Just now? >>No, >>I think I coined it before >>You wanted it to be a scoop. I love that. >>I wish I had a story where I was a poor little 14-year-old kid dreaming about security as code, but >>Well, Dave Vellante will testify that I coined "data as code" before anyone else, but that was 10 years ago. You >>Didn't hear it this morning. Jimmy actually brought it back up. AWS, you're about startups, and he's >>Whoever came up with the Lisp programming language had this concept that data and code are the exact same thing, >>Right? We could debate nerd lexicon all day on theCUBE. In fact, that could even be a segment. >>First of all, we do. And the fact that Lisp came up on theCUBE is actually a milestone, because Lisp is a very popular language for object-oriented >>Grandfather of everything. >>Yes, yes, grandfather. Good, good. Good catch there. Yeah, well done. >>All right, I'm gonna bring us back. I wanna ask you a question. >>Talking about nerds, this Lisp stuff is really >>No, I think it's great. You know how nerdy we can get here, though. I mean, we can just hang out in the weeds the whole time. All right, I wanna ask you a question that I asked Drew when we were in Detroit, just because I think for some folks, and especially the audience, they may not have as distinctive a definition as y'all do. How do you define identity? >>Oh, that's a great question.
So identity as a term was always used for security purposes, but most people probably use identity in the context of single sign-on, SSO. Meaning that your company uses identity for access, where instead of having each application keep an account for you, a data entry with your first name, last name, emails, and your role, yeah, you instead have a central database, let's say Okta or something like that. Yep. And then you use that to access everything. That's kind of identity-based access, because there is a single source of identity. What we say is that that needs to be extended, because it's no longer enough, because that identity can be stolen. So if someone gets access to your Okta account using your credentials, then they can become you. So in order for identity to be attached to you and become your true identity, you have to rely on physical-world objects. That's biometrics: your facial print, your fingerprints, as well as the biometrics of your machine. Like, your laptops have TPM modules on them. They're absolutely unique. They cannot be cloned or stolen. So that is your identity as well. So if you combine whatever is in Okta with the TPM chip in this laptop and with your finger, that collectively is your true identity, which cannot be stolen. So it can't be hacked. >>And someone could take my finger like they did in the movies. >>So they would have to do that. And they would also have to, they'd >>Steal your Mac. Exactly, exactly. Yeah. And they'd have to have your eyes >>And they have to, and you have >>If they've gone that far, they get what >>They want. So that is what true identity is from Teleport: SSO and >>Biometrics. I mean, we're so there right now, it's really not an issue. It's only getting faster and better to >>Market. There is one important thing I said earlier that I want to go back to: I said that Teleport is not just for engineers, it's also for machines.
Cuz machines, they also need identity. So when we talk about access silos, and that there are many different doors into your apartment, there are many different ways to access your data. So on the infrastructure side, machines are doing more and more. So we are offloading more and more tasks to them. >>That's a really good point. What do machines use to access each other? Biometrics? >>They use API keys, they use private keys, they use basically passwords. Yeah. They're secrets, and we already know that that's bad, right? Yeah. So how do you extend biometrics to machines? So this is why AWS offers the CloudHSM service. HSM is a hardware security module. That's a unique private key for the machine that is not accessible by anyone. And Teleport uses that to give identities to machines. >>Do customers have to enable that themselves, or do they have that as part of Amazon? >>So it's available on AWS. It's available actually in good old bare-metal machines that have HSMs on them, on the motherboard. And it's optional, by the way; Teleport can work even if you don't have that capability. But the point is that we try to, you >>Have a biometric equivalent for the machines and >>Take advantage of it. Yeah. It's a hardware thing that you have to have, and we all have it. AWS sells it to us. Yeah. And Teleport allows you to leverage that to enhance the security of the infrastructure. >>So that classic hardware-software play that we're always talking about here on theCUBE. It's all important. I think this is really fascinating, though. So on the way to the show, I just enrolled in CLEAR, and I had used a different email. I enrolled for the second time, and my eyes wouldn't let me have two accounts. And this was the first time I had tried to sort of hack my own digital identity. And the girl, I think she was humoring me, that was kindly helping me, the CLEAR employee.
But I think she could tell I was trying to mess with it, and I wanted to see what would happen. I wanted to see if I could have two different accounts linked to my biometric data, and I couldn't; it picked it up right away. >>That's your true >>Identity. Yeah, my true identity. So, and forgive me, cuz this is kind of just a personal question, and it might be a little bit finger-to-the-wind, but just how much more secure, if you could give us a rating or a percentage or a number, how much more secure is leveraging biometric data for identity than the secrets we've been using historically? >>Look, I could play this game with you and answer, like, infinitely more secure, right? But you know how security works: it all depends on implementation. So let's say you can deploy Teleport, you can put us on your infrastructure, but if you're running, let's say, a compromised old copy of WordPress that has a vulnerability, you're gonna get hacked through that angle. But >>Happens to my personal website all the time. >>Yeah. But the fact is that I don't see how your credentials would be stolen in this system, simply because your TPM on your laptop and your fingerprint, they cannot be downloaded. A lot of people actually ask us a slightly different question; it's almost the opposite of it. Like, how can I trust you with my biometrics? When I use my fingerprint, that's my information. I don't want the company I work at to get my fingerprint. I think it's a legit question to ask. >>Yeah. And it's >>The answer to that question is your fingerprint doesn't really leave your laptop. Teleport doesn't see your fingerprint. What happens is, when your fingerprint gets validated, it's your laptop matching it against what's on the TPM. Basically, Apple does it, and then Apple simply tells Teleport, yep, that's Ev, or whoever. And that's what we are really using.
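That local-match-plus-signal flow can be sketched roughly as below. The biometric is checked on the laptop and never transmitted; only a signed assertion travels. HMAC with a shared key stands in for the hardware-backed signature here; real systems use asymmetric keys sealed in the TPM (WebAuthn-style attestation), so nothing secret is shared at all. All names are illustrative.

```python
import hashlib
import hmac

# Sketch of the flow: fingerprint matched locally, then only a signed
# "this is really this user" assertion is sent. No biometric data leaves.
DEVICE_KEY = b"key-sealed-inside-tpm"   # never exportable on real hardware

def laptop_authenticate(fingerprint_matches, username):
    """Local biometric check, then a signed assertion (no biometric data)."""
    if not fingerprint_matches:
        return None
    message = f"authenticated:{username}".encode()
    tag = hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()
    return message, tag

def server_verify(message, tag):
    """The verifier checks the signature; it never sees the fingerprint."""
    expected = hmac.new(DEVICE_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

message, tag = laptop_authenticate(True, "ev")
print(server_verify(message, tag))   # True
```

Note the privacy property this models: the only thing crossing the wire is the assertion and its signature, which answers the "how can I trust you with my biometrics" objection raised above.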
So when you are using this form of authentication, you're not sharing your biometrics with the company you work at. >>It's a machine-to-human confirmation first, and >>Then it's, it's basically you and the laptop agreeing that my fingerprint matches your TPM, and if your laptop agrees, basically hardware does the validation. So Teleport simply gets that signal. >>So Ev, my final question for you: here at the show, well, at KubeCon you had great conversations for your company. What are your conversations like here at re:Invent? Are you meeting with Amazon people, customers? What are some of the conversations? Because this is a much broader, I mean, it's still technical, yep, but you know, a lot of business kinds of discussions, architectural refactoring of organizations. What are some of the things that you're talking about here with Teleport? What are, >>So I will mention maybe two trends I've observed. The first one is not even security related. It's basically how, as the cloud becomes more mature, people at different organizations now develop their own internal ways of doing cloud properly. And they're not the same. Because when cloud was earlier, there were these best practices that everyone was trying to follow, and there was maybe just a lack of expertise in the world. And now we're finding that different organizations just do things completely differently. For example, some companies love having a handful, ideally just one, enormous Kubernetes cluster with a bunch of applications on it. And other companies create Kubernetes clusters for different workloads, and it's just all over the map, and both of them believe that they're doing it properly. >>Great example; that's Kubernetes with the complexity. And >>That's kind of one trend I'm noticing. And the second one is security related.
It's that everyone is struggling with the access silos. Ideally, every organization is dreaming about a day when they have one place, with a great user experience, that simply spells out: this is the policy to access this particular data. And it gets automatically enforced by every single cloud provider, by every single application, by every single protocol, by every single resource. But we don't have that, unfortunately. Teleport is slowly becoming that, of course. Excuse me for plugging >>Teleport. No, no worries. >>But it is this ongoing theme: everyone can't wait to have that single source of truth for accessing their data. >>The second person to say "single source of truth" on this stage in the last 24 >>Hours. Our nerds will love that. >>I know. Well, it all comes back to that. I keep using this tab analogy, but we all want everything in one place. We don't wanna have to be going all over the place to look for it. >>Both. Because if it's in all these other places, it means that different teams are responsible for it. Yeah. So it becomes this kind of internal information silo as well. >>And the risks and liabilities there, depending on who's overseeing everything. That's awesome. Right? So we have a new challenge on theCUBE specific to this show. Think of this as your, well, 30 minutes would be bold, your 30-second sizzle reel, your Instagram highlight. What is your hot take, the most important thing, the biggest theme of the show this year? >>This year. Okay, so here's my thing. I want cloud to become something I want it to be. And every time I come here, I'm like, are we closer? Are we closer? So here's what I want. I want all cloud providers collectively to kind of merge, so that when we use them, it feels like we are programming one giant machine. Kind of like in The Matrix, right? The movie.
So I want cloud to feel like a computer, to have this almost intimate experience you have with your laptop. Like, you can do this, and the laptop performs the instructions. And it feels to me that we are getting closer. So walking around here and seeing how everything works now, like on the single sign-on front, from a security perspective, that consolidation is finally happening. So it's >>The software mainframe, we used to call it back in 2010. >>Yeah, yeah. Just kind of a planetary-scale thing. Yes. It's not Zuckerberg who's building the metaverse; it's people here at re:Invent. >>Unlimited resources for developers. Just call in. Yeah, yeah. Give me some resources, spin me up some compute. >>I would alter that slightly. I would just basically go and do this, and you shouldn't even worry about how it gets done. Just put instructions into this planetary mainframe, and the mainframe will go and figure it out. Okay. >>We gotta take the blue or the red pill. >>I know. I was just gonna say, y'all, this segment is lit. >>We got The Matrix. We got brilliant. We didn't get supercloud in here, but we can weave that in. We got >>Lisp. We just said it. >>So we got Lisp. Great conversation. Cloud native. >>Outstanding conversation. And thank you so much for being here. We love having Teleport on the show. Obviously, we hope to see you back again soon, and Drew as well. And thank all of you for tuning in this afternoon, live from Las Vegas, Nevada, where we are hanging out at AWS re:Invent with John Furrier. I'm Savannah Peterson. This is theCUBE. We are the source for high-tech coverage.

Published Date : Nov 30 2022



Tom Sweet | Dell Technologies Summit


 

(upbeat music) >> As we said in our analysis of Dell's future, the transformation of Dell into Dell EMC and now Dell Technologies has been one of the most remarkable stories in the history of the technology industry. After years of successfully integrating EMC and becoming VMware's number one distribution channel, the metamorphosis of Dell culminated in the spin out of VMware from Dell and a massive wealth creation milestone, pending of course the Broadcom acquisition of VMware. So where does that leave Dell, and what does the future look like for this technology powerhouse? Hello, and welcome to theCUBE's exclusive coverage of Dell Technologies Summit 2022. My name is Dave Vellante and I'll be hosting the program. Today, in conjunction with the Dell Tech Summit, we'll hear from four of Dell's senior executives. Tom Sweet is the CFO of Dell Technologies. He's going to share his views of the company's position and opportunities and answer the question, why is Dell a good long-term investment? Then we'll hear from Jeff Boudreau, who's the president of Dell's ISG business unit. He's going to talk about the product angle, and specifically how Dell is thinking about solving the multi-cloud challenge. And then Sam Grocott is the Senior Vice President of Marketing. He's going to come on the program and give us the update on APEX, which is Dell's as-a-service offering, and a new edge platform called Project Frontier. By the way, it's also Cybersecurity Awareness Month, and we're going to see if Sam has any stories there. And finally, for a company that's nearly 40 years old, Dell has some pretty forward-thinking philosophies when it comes to its culture and workforce. And we're going to speak with Jen Saavedra, who's Dell's Chief Human Resources Officer, about hybrid work and how Dell is thinking about the future of work. We're going to geek out all day and talk multi-cloud and edge and latency, but first, let's talk wallet. Tom Sweet, CFO, and one of Dell's key business architects. 
Welcome back to "theCUBE." >> Dave, it's good to see you, and good to be back with you, so thanks for having me today. >> Yeah, you bet. Tom, it's been a pretty incredible past 18 months. Not only the pandemic and all that craziness, but the VMware spin. You had to give up your gross margin pinky, just kidding, and of course the macro environment. I'm so sick of talking about the macro. But putting that aside for a moment, what's really remarkable is that for a company of your size, you've had some success at the top line, which I think surprised a lot of people. What are your reflections on the last 18 to 24 months? >> Well Dave, it's been an incredible, not only last 18 months, but the whole transformation journey, if you think all the way back maybe to the LBO and forward from there. But stepping into the last 18 months, I think I remember talking with you and saying, "Hey, the scenario planning we did at the beginning of this pandemic journey was roughly 30 different scenarios, and none of them sort of panned out the way it actually did," which was a pretty incredible growth story, as we think about how we helped customers drive workforce productivity and enable their business model during the all-remote work environment that the pandemic created. And couple that with the rise then in infrastructure spend as we got towards the tail end of the pandemic, coupled with the spin out of VMware, which culminated last November as we completed that, which unlocked a pathway back to investment grade, which then unlocked, quite frankly, shareholder value and capital allocation frameworks. It's really been a remarkable 18, 24 months. It's never dull at Dell Technologies. Let me put it that way. >> Well, I was impressed with you, Tom, before the leveraged buyout, and then what I've seen you guys navigate through is truly amazing. Well, let's talk about the challenging macro. 
I mean, I've been through a lot of downturns, but I've never seen anything quite like this, with Fed tightening, and you're combating inflation, you got this recession looming, there's a bear market. But you got zero unemployment, you got rising wages, a strong dollar, and it's very confusing. But IT spending is somewhat softer, but it's still not bad. How are you seeing customers behave? How is Dell responding? >> Yeah, look, if you think about the markets we play in, Dave, we should start there as a grounding. The total market, the core market that we think about, is roughly $750 billion or so, if you think about our core IT services capability. If you couple that with some of the growth initiatives that we're driving and the adjacent markets that that brings in, you're roughly talking a 1.4 to $1.5 trillion market opportunity, total addressable market. And so from that perspective, we're extraordinarily bullish on where we are in the journey as we continue to grow and expand. We have number one share in just about every category that we play in, but yet when you look at that number one share in some of these, our highest share position may be in the low 30s, and maybe in the high end of storage we're at the upper end of the 30s or 40%. But the opportunity there to continue to expand the core and continue to take share and outperform the market is truly extraordinary. So if you step back and think about that, then you say, okay, what have we seen over the last number of months and quarters? It's been really great performance through the pandemic, as you highlighted. We actually had a really strong first half of our fiscal year '23, with revenue up 12% and operating income up 12% for the first half. What we talked about, if you might recall, in our second quarter earnings was the fact that we were starting to see softness. We had seen it in the consumer PC space, which is not a big area of focus for us in the sense of our total revenue stream. 
But we started to see commercial PC soften and we were starting to see server demand soften a bit, and storage demand was holding, quite frankly. And so we gave a framework around guidance for the rest of the year as a result of what we were seeing. The macro environment, as you highlighted, continues to be challenging. If you look at inflation rates and the efforts by central banks across the globe, through interest rate rises, to constrain growth and push down inflation, you couple that with supply chain challenges that continue, particularly in the ISG space, and then you couple that with the Ukraine war and the energy crisis that it's created, particularly in Europe, it's a pretty dynamic environment. But I'm confident, I'm confident in the long term. But I do think that there's navigation we're going to have to do over the coming number of quarters, who knows quite how long, to make sure the business is properly positioned. And we've got a great portfolio, and you're going to talk to some of the team later on as you think your way through some of the solution capabilities we're driving and what we're seeing around technology trends. So the opportunity is there. There's some short term navigation that we're going to need to do just to make sure that we address some of the environmental things that we're seeing right now. >> Yeah, and as a global company, of course you're converting local currencies back to appreciated dollars. That's another headwind. But as you say, I mean, that's math and you're navigating it. And again, I've seen a lot of downturns, but the best companies not only weather the storm, they invest in ways that allow them to come out the other side stronger. So I want to talk about that longer term opportunity, the relationship between the core and the business growth. You mentioned the TAM.
I mean, even as a lower margin business, if you can penetrate that big of a TAM, you could still throw off a lot of cash, and you've got other levers to turn in potentially acquisitions and software. But so ultimately what gives you confidence in Dell's future? How should we think about Dell's future? >> Yeah look, I think it comes down to this: we are extraordinarily excited about the opportunity over the long term. Digital transformation continues. I am on numerous customer and CIO conference calls every week. Customers are continuing to invest in digital transformation, in infrastructure, to enable their business model. Yes, maybe it's going to slow or pause, or maybe they're not going to invest quite at the same rate over the next number of quarters, but over the long term the needs are there. You look at what we're doing around the growth opportunities that we see, not only in our core space where we continue to invest, but also in what we call the strategic adjacencies. Things like 5G and modern telecom infrastructure, as the telecom providers across the globe open up their previously closed ecosystems to open architecture. You think about what we're doing around the edge and the distribution of compute and storage that we're now seeing back to the edge, given that data gravity and latency matter. And so we're pretty bullish on the opportunity in front of us. Yes, we will, and we're continuing to invest. And you'll hear Jeff Boudreau talk about that, I think, later on in the program. So I'm excited about the opportunities, and you look at our cash flow generation capability: we are, in normal times, a cash flow generation machine, and we'll continue to be so. We've got a negative CCC in terms of how we think about efficiency of working capital. And we look at our capital allocation strategy, which has now returned somewhere near 60% of our free cash flow back to shareholders.
And so, there's lots of reasons to think about why we are a great, I think, value creation opportunity over the long term. The long term trends are with us, and I expect them to continue to be so. >> Yeah, and you guys, you do what you say you're going to do. I mean, I said in my other piece that I did recently, I think you guys put $46 billion on the balance sheet in terms of debt. That's down to I think 16 billion in the core, which is quite remarkable. That gives you some other opportunities. Give us your closing thoughts. I mean, you kind of just addressed why Dell is a good long term play, but I'll give you an opportunity to bring us home. >> Hey Dave, yeah look, I just think if you look at the market opportunity, the size and scale of Dell, and how we think about the competitive advantages that we have: look at, say, the fact that we're a hundred billion dollar revenue company, which we were last year as we reported. Roughly 60 to 65 billion of that in the client or PC space, roughly 35 to 40 billion in the ISG or infrastructure space. Those markets are going to continue. The opportunity to grow share, grow at a premium to the market, drive cash flow, drive share gain is clearly there. And couple that with what we think the opportunity is in these adjacent markets, whether it's telecom, the edge, what we're thinking around data services, data management. You put that together with the long term trends around data creation and digital transformation, and we are extraordinarily well positioned. We have the largest direct selling organization in the technology space. We have the largest supply chain. Our services footprint. We're well positioned, in my mind, to take advantage of the opportunities as we move forward. >> Well Tom, I really appreciate you taking the time to speak with us. Good to see you again. >> Nice seeing you. Thanks Dave.
>> All right, you're watching theCUBE's exclusive behind the scenes coverage of Dell Technology Summit 2022. In a moment, I'll be back with Jeff Boudreau. He's the president of Dell's ISG Infrastructure Solutions Group. He's responsible for all the important enterprise business at Dell, and we're excited to get his thoughts. Keep it right there. (upbeat music)

Published Date : Oct 7 2022


Barak Schoster, Palo Alto Networks | CUBE Conversation 2022


 

>>Hello, everyone. Welcome to this cube conversation. I'm here in Palo Alto, California. I'm John Furrier, host of theCUBE, and we have a great guest here: Barak Schoster, who's in Tel Aviv, senior director and chief architect at Bridgecrew, a part of Palo Alto Networks. He was formerly the co-founder of the company, which was then sold to Palo Alto Networks. Barak, thanks for coming on this cube conversation. >>Thanks John. Great to be here. >>So one of the things I love about open source, and you're seeing a lot more of the trend now, people talking about, you know, people doing incubators all over the world, having open source and having builders, people who are starting companies; it's coming more and more, and you're one of them. And you've been part of this security open source cloud infrastructure, infrastructure as code, going back a while, and you guys had a lot of success. Now, open source infrastructure as code has moved up the stack; certainly there's a lot going on down at the network layer, but developers just want to build security in from day one, right? They don't want to have to get into the waiting game of slowing down their pipelining of code in the CI/CD; they want to move faster. And this has been one of the core conversations this year: how to make developers more productive, and not just as a cliche, but actually more productive, and not have to wait to implement cloud native. Right. So you're in the middle of it. And you're in Tel Aviv, so tell us, tell us what you guys are dealing with there. >>Right? Yeah. So I hear these needs, working fast, having a large velocity of releases, from many of my friends, the SREs, the DevOps, and the security practitioners in different companies. And the thing that we asked ourselves three years ago was how can we simplify the process and make the security teams an enabler instead of a gatekeeper that blocks the releases?
And the thing that we understood we should do is not only runtime scanning of the cloud infrastructure and the cloud native clusters, but also shifting left the findings and the fixing, the remediation of security issues, to the level of the code. So we started doing infrastructure as code: Terraform, Kubernetes manifests, CloudFormation, serverless, and the list goes on. And we created an open source product around it, named Checkov, which has an amazing community of hundreds of contributors. Not all of them are Palo Alto employees. Most of them are community users from various companies. And what we tried, and succeeded, to democratize is the creation of policy as code: the ability to inspect your infrastructure as code and tell you, hey, this is the best practice that you should use, consider using it, before applying a misconfigured S3 bucket into production, or before applying a misconfigured Kubernetes cluster into your production or dev environment. And the goal, >>The goal, >>The goal is to do that from the IDE, from the moment that you write code, and also to inspect your configuration in CI and CD and in runtime, and also to understand if there is any drift out there and have the ability to fix that in the source code, in the blueprint itself. >>So what I hear you saying is really two problems you're solving. One is the organizational policies around how things were done in an environment before, the old way: you know, the security teams do a review, you send a ticket, things are waiting, stop, wait, hurry up and wait kind of thing. And then there's the technical piece of it, right? There's two pieces to that. >>Yeah, I think that one thing is the change of the methodologies. We understood that we should just work differently than what we used to do. Tickets are slow. They have priorities. You have a bottleneck, which is a small team of security practitioners.
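A rough sketch of the policy-as-code idea described above: a policy is just code that inspects a parsed resource block and flags violations before anything is applied to production. This is a toy illustration, not Checkov itself; the rule names and the resource shape are invented for the example.

```python
# Toy sketch of policy as code -- not Checkov itself. Each policy is a plain
# function that inspects one parsed resource block (e.g. from a Terraform plan)
# and returns True when the block complies.

def bucket_is_encrypted(resource: dict) -> bool:
    """Require server-side encryption on the bucket."""
    return resource.get("server_side_encryption", False) is True

def bucket_is_private(resource: dict) -> bool:
    """Reject ACLs that grant public read access."""
    return resource.get("acl", "private") != "public-read"

POLICIES = {
    "DEMO_1 bucket must be encrypted": bucket_is_encrypted,
    "DEMO_2 bucket must not be public": bucket_is_private,
}

def scan(resources):
    """Return a human-readable finding for every failed policy."""
    return [
        f"{res['name']}: FAILED {rule}"
        for res in resources
        for rule, policy in POLICIES.items()
        if not policy(res)
    ]

if __name__ == "__main__":
    plan = [
        {"name": "logs", "acl": "private", "server_side_encryption": True},
        {"name": "assets", "acl": "public-read"},  # misconfigured on purpose
    ]
    for finding in scan(plan):
        print(finding)
```

Running a scan like this in the IDE or in CI, before apply, is what turns the security team's checklist into an automated gate rather than a ticket queue.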
And honestly, a lot of the work is repetitive and can be democratized out to the engineering teams. They should be able to understand: hey, I wrote the piece of code that provisions this instance, so I am the most suitable person, as a developer, to fix that piece of code and reapply it to the runtime environment. >>And then it also sets the table for automation. It sets the table for policies, things that make things more efficient at scale. Because you mentioned SREs are a big part of this, too, DevOps and SRE. Those folks are trying to move as fast as possible at scale, a huge scale challenge. How does the scale piece come into it here? >>So both teams, SREs and security teams, are aligned on deploying new application releases into the production environment. And the thing that you can do is inspect all kinds of best practices, not only security best practices, but also make sure that you have provisioned concurrency on your serverless functions, or that the number of auto-scaling groups is what you expect it to be. And you can scan all of those things at the level of your code before applying it to production. >>That's awesome. So good benefits: it scales a security team, it sounds like, too. You could get that policy out there. So great stuff. I want to really quickly ask you about the event. You're hosting the Code to Cloud Summit. What are we going to see there? I'm going to host a panel, of course; I'm looking forward to that as well. You've got a lot of experts coming in there. Why are you having this event, and what topics will be covered? >>So we wanted to talk on all of the shift-left movement and all of the changes that have happened in the cloud security market since inception till today.
And we brought in great people and great practitioners from both the DevOps side, the chaos engineering side, and the security practitioners, and everybody has their opinion on what's the current state, how things should be implemented in a mature environment, and what the future might hold for the code and cloud security markets. The thing that we're going to focus on is all of the supply chain: from securing the CI/CD itself, making sure your actions are not vulnerable to a shell injection, to making sure your version control systems are configured correctly with single sign-on, MFA, and branch protection rules, but also open source security like SCA (software composition analysis), infrastructure as code security, and obviously runtime security, drift, and Kubernetes security. So we're going to talk on all of those different aspects and how each and every team is mitigating the different risks that come with them. >>You know, one of the things that you bring up when I hear you talking is the range of infrastructure as code. How has infrastructure as code changed? Because, you know, there's DevOps and SREs, now application developers; you still have to have programmable infrastructure. I mean, if infrastructure as code is really realized up and down the stack, all aspects need to be programmable, which means you've got to have the data, you've got to have the ability to automate. How would you summarize the state of infrastructure as code? >>So a few years ago, we started with physical servers, where we carried the infrastructure on our backs. I mounted them on the rack myself a few years ago and connected all of the different cables. Then came the revolution of VMs. We didn't do that anymore; we had one beefy appliance and we had 60 virtual servers running on one appliance, so we didn't have to carry new servers into the data center every time. Then came the cloud and made everything API first.
And the cloud, being API first, enabled us to write scripts to provision those resources. But it was not enough, because we wanted to have a reproducible environment. The infrastructure is written either in a declarative language like Terraform or CloudFormation, or an imperative one like CDK or Pulumi, but either way you have a consistent way to deploy your application to multiple environments. And the stage after that is having some kind of a service catalog that will allow an application developer to get new releases up and running.
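The declarative model described here, where you state the desired end state and the tooling computes only the changes needed, can be sketched as a toy reconciler. This is a deliberately simplified illustration, not how Terraform actually plans changes:

```python
# Toy reconciler illustrating the declarative idea above -- not Terraform.
# Desired state is plain data; the reconciler diffs it against current state
# and emits only the create/update/delete actions needed to converge.

def reconcile(current: dict, desired: dict) -> list:
    """Return the minimal action list that turns `current` into `desired`."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))
        elif current[name] != spec:
            actions.append(("update", name))
    for name in current:
        if name not in desired:
            actions.append(("delete", name))
    return actions

if __name__ == "__main__":
    current = {"web": {"instances": 2}, "db": {"size": "small"}}
    desired = {"web": {"instances": 3}, "cache": {"nodes": 1}}
    print(reconcile(current, desired))
    # Applying the same declaration twice is a no-op, which is what makes
    # the environment reproducible:
    print(reconcile(desired, desired))
```

The second call returning an empty plan is the "only changes will be applied" property the interview comes back to later: the declaration can be peer reviewed once and reapplied safely.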
And you mentioned some of those Terraform and one of the projects and you started one checkoff, they're all good, but there's some holes in there and it's open source, it's free, everyone's building on it. So, you know, you have, and that's what it's for. And I think now is open source goes to the next level again, another generational inflection point it's it's, there's more contributors there's companies are involved. People are using it more. It becomes a really strong integration opportunity. So, so it's all free and it's how you use it. So this is a new kind of extension of how open source is used. And if you can factor in some of the things like, like threat vectors, you have to know the code. >>So there's no way to know it all. So you guys are scanning it doing things, but it's also huge system. It's not just one piece of code. You talking about cloud is becoming an operating system. It's a distributed computing environment, so whole new area of problem space to solve. So I love that. Love that piece. Where are you guys at on this now? How do you feel in terms of where you are in the progress bar of the solution? Because the supply chain is usually a hardware concept. People can relate to, but when you bring in software, how you source software is like sourcing a chip or, or a piece of hardware, you got to watch where it came from and you gotta track track that. So, or scan it and validate it, right? So these are new, new things. Where are we with? >>So you're, you're you're right. We have a lot of moving parts. And really the supply chain terms of came from the automobile industry. You have a car, you have an engine engine might be created by a different vendor. You have the wheels, they might be created by a different vendor. So when you buy your next Chevy or Ford, you might have a wheels from continental or other than the first. And actually software is very similar. 
When we build software, we host it on a cloud provider like AWS, GCP, Azure, not on our own infrastructure anymore. And when we're building software, we're using open-source packages that are maintained in the other half of the war. And we don't always know in person, the people who've created that piece. And we do not have a vetting process, even a human vetting process on these, everything that we've created was really made by us or by a trusted source. >>And this is where we come in. We help you empower you, the engineer, we tools to analyze all of the dependency tree of your software, bill of materials. We will scan your infrastructure code, your application packages that you're using from package managers like NPM or PI. And we scan those open source dependencies. We would verify that your CIC is secure. Your version control system is secure. And the thing that we will always focus on is making a fixed accessible to you. So let's say that you're using a misconfigured backup. We have a bot that will fix the code for you. And let's say that you have a, a vulnerable open-source package and it was fixed in a later version. We will bump the version for you to make your code secure. And we will also have the same process on your run time environment. So we will understand that your environment is secure from code to cloud, or if there are any three out there that your engineering team should look at, >>That's a great service. And I think this is cutting edge from a technology perspective. What's what are some of the new cloud native technologies that you see in emerging fast, that's getting traction and ultimately having a product market fit in, in this area because I've seen Cooper. And you mentioned Kubernetes, that's one of the areas that have a lot more work to do or being worked on now that customers are paying attention to. >>Yeah, so definitely Kubernetes is, has started in growth companies and now it's existing every fortune 100 companies. 
So you can find anything, every large growler scale organization and also serverless functions are, are getting into a higher adoption rate. I think that the thing that we seeing the most massive adoption off is actually infrastructure as code during COVID. A lot of organization went through a digital transformation and in that process, they have started to work remotely and have agreed on migrating to a new infrastructure, not the data center, but the cloud provider. So at other teams that were not experienced with those clouds are now getting familiar with it and getting exposed to new capabilities. And with that also new risks. >>Well, great stuff. Great to chat with you. I want to ask you while you're here, you mentioned depth infrastructure as code for the folks that get it right. There's some significant benefits. We don't get it. Right. We know what that looks like. What are some of the benefits that can you share personally, or for the folks watching out there, if you get it for sure. Cause code, right? What does the future look like? What does success look like? What's that path look like when you get it right versus not doing it or getting it wrong? >>I think that every engineer dream is wanting to be impactful, to work fast and learn new things and not to get a PagerDuty on a Friday night. So if you get infrastructure ride, you have a process where everything is declarative and is peer reviewed both by you and automated frameworks like bridge and checkoff. And also you have the ability to understand that, Hey, once I re I read it once, and from that point forward, it's reproducible and it also have a status. So only changes will be applied and it will enable myself and my team to work faster and collaborate in a better way on the cloud infrastructure. Let's say that you'd done doing infrastructure as code. You have one resource change by one team member and another resource change by another team member. 
And the different dependencies between those resources are getting fragmented and broken. You cannot change your database without your application being aware of that. You cannot change your load Bonser without the obligation being aware of that. So infrastructure skullduggery enables you to do those changes in a, in a mature fashion that will foes Le less outages. >>Yeah. A lot of people getting PagerDuty's on Friday, Saturday, and Sunday, and on the old way, new way, new, you don't want to break up your Friday night after a nice dinner, either rock, do you know? Well, thanks for coming in all the way from Tel-Aviv really appreciate it. I wish you guys, everything the best over there in Delhi, we will see you at the event that's coming up. We're looking forward to the code to cloud summit and all the great insight you guys will have. Thanks for coming on and sharing the story. Looking forward to talking more with you Brock thanks for all the insight on security infrastructures code and all the cool things you're doing at bridge crew. >>Thank you, John. >>Okay. This is the cube conversation here at Palo Alto, California. I'm John furrier hosted the cube. Thanks for watching.

Published Date : Mar 18 2022


Ranga Rajagopalan, Commvault & Stephen Orban, AWS | Commvault Connections 2021


 

>>We're here with theCUBE covering Commvault Connections 21. We're gonna look at the data protection space and how cloud computing has advanced the way we think about backup, recovery, and protecting our most critical data. Ranga Rajagopalan, who is the vice president of products at Commvault, and Stephen Orban, who's the general manager of AWS Marketplace and Control Services. Gents, welcome to theCUBE. Good to see you. >>Thank you. Always a pleasure to see you here, Steve. >>Thanks for having us. >>Very welcome. Stephen, let's start with you. Look, the cloud has become a staple of digital infrastructure. I don't know where we'd be right now without being able to access enterprise IT services remotely. But specifically, how are customers looking at backup and recovery in the cloud? Is it a kind of a replacement for existing strategies? Is it another layer of protection? How are they thinking about that? >>Yeah. Great question, David. Again, thanks for having me. And I think, you know, look, if you look back to 15 years ago, when the founders of AWS had the hypothesis that many enterprises, governments, and developers were gonna want access to on-demand, pay-as-you-go IT resources in the cloud, none of us would have been able to predict that it would have matured and, um, you know, become the staple that it has today over the last 15 years. But the reality is that a lot of these enterprise customers, many of whom have been doing their own IT infrastructure for the last 10, 20, or multiple decades, do have to kind of figure out how they deal with the change management of moving to the cloud. And while a lot of our customers initially come to us because they're looking to save money or costs, almost all of them decide to stay and go big because of the speed at which they are able to innovate on behalf of their customers. And when it comes to storage and backup, that just plays right into where they're headed.
And there's a variety of different techniques that customers use, whether it be, you know, a lift and shift for a particular set of applications or a data center, where they do very much look at how they can replace the backup and recovery that they have on premises in the cloud, using solutions like what we're partnering with Commvault to do, or completely reimagining their architecture for net new developments so that they can really move quickly for their customers, and completely developing something brand new, where it is really, you know, a brand new replacement and innovation for what they've done in the past. >>Great, thank you, Stephen. Ranga, I want to ask you about digital. Look, if you're not a digital business today, you're basically out of business. So my question to you is, how have you seen customers change the way they think about data protection during what I call the forced march to digital over the last 18, 19 months? Are customers, you know, thinking about data protection differently today? >>Definitely, Dave, and thank you for having me and Stephen. Pleasure to join you on this cube interview. First, going back to Stephen's comments, I can't agree more. Almost every business that we talk with today has a cloud-first strategy, a cloud transformation mandate, and, you know, the reality is, back to your digital comment, there are many different paths to the hybrid multi-cloud, and different customers, you know, are at different parts of the journey. So as Stephen was saying, most often customers, at least from the data protection perspective, start the conversation by thinking: here, I have all these tapes. Can I start using cloud as my air gap, long-term retention target? And before they realize it, they start moving their workloads into the cloud, and none of the backup and recovery requirements are going to change.
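The "cloud as an air-gapped, long-term retention target" idea comes down to backup copies that cannot be altered or deleted until a retention lock expires. The code below is a generic toy model of that behavior, not Commvault's or AWS's actual API; the class and method names are invented for the sketch.

```python
# Toy model of retention-locked backup copies -- a generic sketch, not
# Commvault's or AWS's actual API. While a copy's lock is active, delete
# requests are refused, so backups survive ransomware that tries to purge them.

from datetime import date, timedelta

class BackupVault:
    def __init__(self):
        self._lock_until = {}  # copy name -> date the retention lock expires

    def write(self, name: str, created: date, retention_days: int) -> None:
        """Store a copy with an immutable retention window."""
        self._lock_until[name] = created + timedelta(days=retention_days)

    def delete(self, name: str, today: date) -> bool:
        """Delete a copy; refuse while its retention lock is still active."""
        if today < self._lock_until[name]:
            return False  # immutable: lock has not expired yet
        del self._lock_until[name]
        return True

if __name__ == "__main__":
    vault = BackupVault()
    vault.write("db-full", created=date(2021, 11, 1), retention_days=30)
    # Inside the 30-day window the copy cannot be purged, even by an attacker:
    print(vault.delete("db-full", today=date(2021, 11, 15)))  # False
    # After the lock expires, normal lifecycle cleanup succeeds:
    print(vault.delete("db-full", today=date(2021, 12, 15)))  # True
```

This is the property that makes cloud backups useful against the ransomware scenarios discussed next: the recovery copies stay intact even if the production environment is compromised.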
So you need to continue protecting the clothes, which is where the cloud native data protection comes in and then they start innovating around er, can I use cloud as media sites so that you know, I don't need to meet in the other side. So this year is all around us. Cloud transformation is all around us and and the real essence of this partnership between AWS and calm vault is essentially to dr and simplify all the paths to the club regardless of whether you're going to use it as a storage started or you know, your production data center, all your dear disaster recovery site. >>Yeah, it really is about providing that optionality for customers. I talked to a lot of customers and said, hey, our business resilience strategy was really too focused on D. R. I've talked to all the customers at the other end of the spectrum said we don't even have a D. R. Strategy now, we're using the cloud for that. So it's really all over the map and you want that optionality. So steven and then go ahead. >>Please, ransomware plays a big role in many of these considerations that greatly. It's unfortunately not a question of whether you're going to be hit by ransomware, it's almost we can like, what do you do when you're hit by ransomware and the ability to use the clothes scaled immediately, bring up the resources, use the cloud backups has become a very popular choice simply because of the speed with which you can bring the business back to normal our patients. The agility and the power that cloud brings to the table. >>Yeah, ransomware is scary. You don't, you don't even need a high school diploma to be a ransomware ist you can just go on the dark web and by ransomware as a service and do bad things and hopefully you'll end up in jail. Uh Stephen we know about the success of the AWS marketplace, uh you guys are partnering here. I'm interested in how that partnership, you know, kind of where it started and how it's evolving. >>Yeah, happy to highlight on that. 
So, look, when we, when we started AWS, or when the founders of AWS started AWS, as I said, 15 years ago, we realized very early on that while we were going to be able to provide a number of tools for customers to have on demand access to compute, storage, networking, databases, that many, particularly enterprise and government customers, still use a wide range of tools and solutions from hundreds, if not in some cases thousands, of different partners. I mean, I talk to enterprises who literally use thousands of different vendors to help them deliver their solutions for their customers. So almost 10 years ago, we're almost at our 10 year anniversary for AWS Marketplace, we launched the first instantiation of AWS Marketplace, which allowed builders and customers to find, try, buy, and then deploy third-party software solutions running on Amazon Machine Instances, also known as AMIs, natively, right in their AWS and cloud accounts, to complement what they were doing in the cloud. And over the last nearly 10 years we've evolved quite a bit, to the point where we support software in multiple different packaging types, whether it be Amazon Machine Instances, containers, machine learning models, and of course SaaS and the rise of software as a service, so customers don't have to manage the software themselves. But we also support data products through the AWS Data Exchange, and professional services for customers who want to get services to help them integrate the software into their environments. And we now do that across a wide range of procurement options. So what used to be pay as you go Amazon Machine Instances now includes multiple different ways to contract directly; the customer can do that directly with the vendor, with their channel partner, or using kind of our public e-commerce capabilities.
And we're super excited, um, over the last couple of months we've been partnering with Commvault to get their industry leading backup and recovery solutions listed on AWS Marketplace, which is available for our collective customers now. So not only do they have access to Commvault's awesome solutions to help them protect against ransomware, as we talked about, and to manage their backup and recovery environments, but they can find and deploy that directly in one click, right into their AWS accounts, and consolidate their billing relationship right on the AWS invoice. And it's been awesome to work with Ranga and the product teams at Commvault to really, um, expose those capabilities, where Commvault's using a lot of different AWS services to provide a really great native experience for our collective customers as they migrate to the cloud. >>Yeah, the Marketplace has been amazing. We've watched it evolve over the past decade, and it's a key characteristic of cloud; everybody has a cloud today, we're a cloud too. But Marketplace is unique, uh, in that it's the power of the ecosystem versus the resources of one. And Ranga, I wonder, from your perspective, if you could talk about the partnership with AWS from your view, and then specifically, you've got some hard news; I wonder if you could talk about that as well. >>Absolutely. So the partnership has been extending for more than 12 years, right? So AWS and Commvault have been bringing together solutions that help customers solve their data management challenges, and everything that we've been doing has been driven by the customer demand that we see, right? Customers are moving their workloads to the cloud. They're finding new ways of deploying their workloads and protecting them. Um, you know, earlier we introduced cloud native integration with the EBS APIs, which has driven almost 70% performance improvements in backups and restores.
And when you look at huge customers like Coca-Cola, who have standardized on AWS and Commvault, that is the scale that they want to operate at. They manage around 150,000 snapshots and 1,200 EC2 instances across six regions, but with just one resource dedicated to the data management strategy, right? So that's where the real built-in integration comes into play, and we've been extending it to make use of the cloud efficiencies like power management and auto-scale and so on. Another aspect is our commitment to a radically simple customer experience, and that's, you know, I'm sure Stephen would agree, a big mantra at AWS as well. That's really, together with the customer demand, what brought us together to introduce Commvault into the AWS Marketplace, exactly the way Stephen described it. Now the hot announcement is Commvault Backup and Recovery is available natively in AWS Marketplace. So the exact four steps that Stephen mentioned, find, try, buy, and deploy, everything is simplified through the Marketplace, so that our AWS customers can start using Commvault backup software in less than 20 minutes. A 60 day trial version is included in the product through Marketplace, and, you know, it's a single click buy; we use CloudFormation templates to deploy. So it becomes a super simple approach to protect AWS workloads, and we protect a lot of them, starting from EC2, RDS, DynamoDB, DocumentDB, um, you know, the containers, the list just keeps going on. So it becomes a very natural extension for our customers, making it super simple to start using Commvault data protection for their AWS workloads. >>Well, the Commvault stack is very robust. You have an extremely mature stack. I'm curious as to how this sort of came about, and it had to be customer driven, I'm sure, with your customers saying, hey, we're moving to the cloud, we have a lot of workloads in the cloud, we're a Commvault customer. That intersection between Commvault and AWS customers. So again, I presume this was customer driven.
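The "single click buy, deploy via CloudFormation" flow described above means the Marketplace listing hands the subscriber a CloudFormation template that stands up the product from its AMI inside the customer's own account. A heavily simplified, hypothetical fragment of what such a template looks like; the resource names, parameters, and the AMI ID here are placeholders for illustration, not Commvault's actual template:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Illustrative one-click deployment of a backup/recovery server
Parameters:
  InstanceType:
    Type: String
    Default: m5.xlarge
  KeyName:
    Type: AWS::EC2::KeyPair::KeyName
  SubnetId:
    Type: AWS::EC2::Subnet::Id
Resources:
  BackupServer:
    Type: AWS::EC2::Instance
    Properties:
      # Marketplace listings map a product AMI per region;
      # this ID is a placeholder, not a real AMI.
      ImageId: ami-0123456789abcdef0
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyName
      SubnetId: !Ref SubnetId
Outputs:
  BackupServerId:
    Value: !Ref BackupServer
```

Because the template runs in the subscriber's account, the software lands next to the workloads it protects, and the subscription itself is metered on the AWS invoice as described in the interview.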
But maybe you can give us a little insight and add some color to that. >>Everything in this collaboration has been customer driven. We were earlier talking about the multiple paths to cloud, and a very good example, Stephen might probably add more color from his own experience at Dow Jones, but I'll bring in a reference, Parsons, who's a civil engineering leader. They started with a cloud first mandate saying, we need to start moving all our backups to the cloud, but we were worried that bad actors might find it easy to go and access the backups. AWS and Commvault came together, with AWS security features and Commvault bringing in its own authorization controls, and now we have moved more than 14 petabytes of backup data into the cloud, and it's so robust that not even the backup administrator can go and touch the backups without multiple levels of authorization, right? So the customer needs, whether it is from a security perspective, a performance perspective, or in this case from a simplicity perspective, is really what is driving this. And the need came exactly like that. There are many customers who have now standardized on AWS because they want to find everything through the AWS Marketplace. They want to use their existing, you know, AWS contracts and also bring their data strategy as part of that, so that's the real, um, driver behind this. Um, Stephen and I were hoping we could actually announce some of the customers that have actively started using it; you know, many notable customers have been behind this, uh, innovation. Stephen, I don't know if you wanted to add more to that. >>I would just, I would, I would just add, Dave, you know, look, if I look back before I joined AWS seven years ago, I was the CIO at Dow Jones, and I was leading a, a fairly big cloud migration there over a number of years.
And one of the impetuses for us moving to the cloud in the first place was when Hurricane Sandy hit; we had a real disaster recovery scenario in one of our New Jersey data centers, um, and we had to act pretty quickly. Commvault was, was part of that solution. And I remember very clearly, even back then, back in 2013, there being options available to help us accelerate our move to the cloud. And just to reiterate some of the stuff that Ranga was talking about, Commvault's done a great job over the last more than a decade taking features from things like EBS and S3 and EC2 and some of our networking capabilities and embedding them directly into their services, so that customers are able to more quickly move their backup and recovery workloads to the cloud. So each and every one of those features was a result of, I'm sure, Commvault working backwards from their customer needs, just as we do at >>AWS, >>and we're super excited to take that to the next level, to give customers the option to then also buy that right on their AWS invoice on AWS Marketplace. >>Yeah, I mean, we're gonna have to leave it there. Stephen, you've mentioned several times the sort of early days, when back then we were talking about gigabytes and terabytes, and now we're talking about petabytes and beyond. Guys, thanks so much. I really appreciate your time and sharing the news with us. >>Dave, thanks for having us. >>All right. Keep it right there. More from Commvault Connections 21. You're watching theCUBE. Mm hmm.

Published Date : Nov 1 2021


Ranga Rajagopalan & Stephen Orban


 

(Techno music plays in intro) >> We're here with theCUBE covering Commvault Connections 21. And we're going to look at the data protection space and how cloud computing has advanced the way we think about backup, recovery and protecting our most critical data. Ranga Rajagopalan who is the Vice President of products at Commvault, and Stephen Orban who's the General Manager of AWS Marketplace & Control Services. Gents! Welcome to theCUBE. Good to see you. >> Thank you, always a pleasure to see you Dave. >> Dave, thanks for having us. Great to be here. >> You're very welcome. Stephen, let's start with you. Look, the cloud has become a staple of digital infrastructure. I don't know where we'd be right now without being able to access enterprise services, IT services remotely, Um, but specifically, how are customers looking at backup and recovery in the cloud? Is it a kind of a replacement for existing strategies? Is it another layer of protection? How are they thinking about that? >> Yeah. Great question, Dave. And again, thanks. Thanks for having me. And I think, you know, look. If you look back to 15 years ago, when the founders of AWS had the hypothesis that many enterprises, governments, and developers were going to want access to on demand, pay as you go, IT resources in the cloud. None of us would have been able to predict that it would have matured and, um, you know become the staple that it has today over the last 15 years. But the reality is that a lot of these are enterprise customers. Many of whom have been doing their own IT infrastructure for the last 10, 20 or or multiple decades do have to kind of figure out how they deal with it. The change management of moving to the cloud, and while a lot of our customers will initially come to us because they're looking to save money or costs. Almost all of them decide to stay and go big because of the speed at which they're able to innovate on behalf of their customers. 
And when it comes to storage and backup, that just plays right into where they're headed and there's a variety of different techniques that customers use. Whether it be, you know, a lift and shift for a particular set of applications. Or a data center or where it, where they do very much look at how can they replace the backup and recovery that they have on premises in the cloud using solutions like what we're partnering with Commvault to do. Or completely re-imagining their architecture for net new developments that they can really move quickly for, for their customers and, and completely developing something brand new, where it is really a, um, you know a brand new replacement and innovation for, for, for what they've done in the past. >> Great. Thank you, Stephen. Ranga, I want to ask you about the D word, digital. Look, if you're not a digital business today, you're basically out of business. So my question to you Ranga is, is how have you seen customers change the way they think about data protection during what I call the forced march to digital over the last 18, 19 months? Are customers thinking about data protection differently today? >> Definitely Dave, and and thank you for having me and Stephen pleasure to join you on this CUBE interview. First, going back to Stephen's comments, can't agree more. Almost every business that we talk with today has a cloud first strategy, a cloud transformation mandate. And, you know, the reality is back to your digital comment. There are many different paths to the hybrid multi cloud. And different customers, you know, are at different parts of the journey. So as Stephen was saying, most often customers, at least from a data protection perspective, start the conversation thinking, hey, I have all these tapes, can I start using cloud as my air gap, long-term retention target. And before they realize, they start moving their workloads into the cloud, and none of the backup and recovery facilities are going to change.
So you need to continue protecting the cloud, which is where the cloud native data protection comes in. And then they start innovating around DR: can I use cloud as my DR site so that, you know, I don't need to maintain another site? So this is all around us, cloud transformation is all around us. And, and the real essence of this partnership between AWS and Commvault is essentially to drive, and simplify, all the paths to the cloud. Regardless of whether you're going to use it as a storage target or, you know, your production data center or your DR, Disaster Recovery, site. >> Yeah. So really, it's about providing that optionality for customers. I talked to a lot of customers who said, hey, our business resilience strategy was really too focused on DR. I've talked to all the customers at the other end of the spectrum who said, we didn't even have a DR strategy. Now we're using the cloud for that. So it's a, it's really all over the map and you want that optionality. So Stephen, >> (Ranga cuts in) >> Go ahead, please. >> And sorry. Ransomware plays a big role in many of these considerations as well, right? Like, it's unfortunately not a question of whether you're going to be hit by ransomware. It's almost become like, what do you do when you're hit by ransomware? And the ability to use the cloud scale to immediately bring up the resources, use the cloud backups, has become a very popular choice simply because of the speed with which you can bring the business back to normal operations. The agility and the power that cloud brings to the table. >> Yeah. Ransomware is scary. You don't, you don't even need a high school diploma to be a ransomware-ist. You could just go on the dark web and buy ransomware as a service and do bad things. And hopefully you'll end up in jail. Stephen, we know about the success of the AWS Marketplace. You guys are partnering here. I'm interested in how that partnership, you know, kind of where it started and how it's evolving. >> Yeah.
And happy to highlight on that. So look, when we, when we started AWS or when the founders of AWS started AWS, as I said, 15 years ago. We realized very early on that while we were going to be able to provide a number of tools for customers to have on demand access to compute, storage, networking, databases, that many, particularly, enterprise and government customers still use a wide range of tools and solutions from hundreds, if not in some cases, thousands of different partners. I mean, I talk to enterprises who, who literally use thousands of, of different vendors to help them deliver those solutions for their customers. So almost 10 years ago, we're almost at our 10 year anniversary for AWS Marketplace. We launched the first instantiation of AWS Marketplace, which allowed builders and customers to find, try, buy, and then deploy third-party software solutions running on Amazon Machine Instances, also known as AMI's. Natively, right in their AWS and cloud accounts, to complement what they were doing in the cloud. And over the last, nearly 10 years, we've evolved quite a bit. To the point where we support software in multiple different packaging types. Whether it be Amazon Machine Instances, containers, machine learning models, and of course, SaaS and the rise of software as a service, so customers don't have to manage the software themselves. But we also support data products through the AWS Data Exchange and professional services for customers who want to get services to help them integrate the software into their environments. And we now do that across a wide range of procurement options. So what used to be pay as you go Amazon Machine Instances now includes multiple different ways to contract directly. The customer can do that directly with the vendor, with their channel partner or using kind of our, our public e-commerce capabilities.
And we're super excited, um, over the last couple of months, we've been partnering with Commvault to get their industry leading backup and recovery solutions listed on AWS Marketplace. Which is available for our collective customers now. So not only do they have access to Commvault's awesome solutions to help them protect against ransomware, as we talked about and, and to manage their backup and recovery environments. But they can find and deploy that directly in one click right into their AWS accounts and consolidate their, their billing relationship right on the AWS invoice. And it's been awesome to work with, with Ranga and the, and the product teams at Commvault to really expose those capabilities where Commvault's using a lot of different AWS services to, to provide a really great native experience for our collective customers as they migrate to the cloud. >> Yeah. The Marketplace has been amazing. We've watched it evolve over the past decade and it's just, it's a key characteristic of cloud. Everybody has a cloud today, right? Ah, we're a cloud too, but Marketplace is unique in, in, in that it's the power of the ecosystem versus the resources of one. And Ranga, I wonder if from your perspective, if you could talk about the partnership with AWS from your view, and and specifically you've got some hard news. Would, if you could, talk about that as well. >> Absolutely. So the partnership has been extending for more than 12 years, right? So AWS and Commvault have been bringing together solutions that help customers solve the data management challenges and everything that we've been doing has been driven by the customer demand that we see, right. Customers are moving their workloads to the cloud. They are finding new ways of deploying the workloads and protecting them. You know, earlier we introduced cloud native integration with the EBS APIs, which has driven almost 70% performance improvements in backup and restore.
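The performance gain attributed here to the native EBS API integration comes largely from incrementality: an EBS snapshot only has to copy the blocks that changed since the previous snapshot, instead of streaming the whole volume. A toy model of that changed-block arithmetic, purely for illustration; real EBS snapshots track changed blocks inside AWS at fixed block granularity, not in user code like this:

```python
def changed_blocks(prev, curr):
    """Indices of blocks that differ between two versions of a
    volume, modeled as equal-length lists of per-block hashes."""
    return [i for i, (a, b) in enumerate(zip(prev, curr)) if a != b]

# A ten-block volume where only two blocks were written since the
# last snapshot: the incremental copy moves 2 blocks, not all 10.
prev = ["h"] * 10
curr = list(prev)
curr[3], curr[7] = "h3x", "h7x"
delta = changed_blocks(prev, curr)
```

On a lightly-written production volume, the changed fraction per backup window is small, which is where improvements on the order of the quoted 70% can come from.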
When you look at huge customers like Coca-Cola, who have standardized on AWS and Commvault, that is the scale that they want to operate on. They manage around 150,000 snapshots and 1,200 EC2 instances across six regions, but with just one resource dedicated for the data management strategy, right? So that's where the real built-in integration comes into play. And we've been extending it to make use of the cloud efficiencies like power management and auto-scale, and so on. Another aspect is our commitment to a radically simple customer experience. And that's, you know, I'm sure Stephen would agree. It's a big mantra at AWS as well. That's really, together, the customer demand that's brought us together to introduce Commvault into the AWS Marketplace, exactly the way Stephen described it. Now the hot announcement is Commvault Backup and Recovery is available in AWS Marketplace. So the exact four steps that Stephen mentioned: find, try, buy, and deploy, everything simplified through the Marketplace so that our AWS customers can start using Commvault backup software in less than 20 minutes. A 60 day trial version is included in the product through Marketplace. And, you know, it's a single click buy. We use CloudFormation templates to deploy. So it becomes a super simple approach to protect the AWS workloads. And we protect a lot of them starting from EC2, RDS, DynamoDB, DocumentDB, you know, the, the containers, the list just keeps going on. So it becomes a very natural extension for our customers to make it super simple, to start using Commvault data protection for the AWS workloads. >> Well, the Commvault stack is very robust. You have an extremely mature stack. I want to, I'm curious as to how this sort of came about? I mean, it had to be customer driven, I'm sure. When your customers say, hey, we're moving to the cloud, we have a lot of workloads in the cloud. We're a Commvault customer, that intersection between Commvault and AWS customers.
So, so again, I presume this was customer driven, but maybe you can give us a little insight and add some color to that, Ranga. >>Every, everything, you know, in this collaboration has been customer driven. We were earlier talking about the multiple paths to cloud, and a very good example, and Stephen might probably add more color from his own experience at Dow Jones, but I I'll, I'll bring in a reference, Parsons, who's, you know, a civil engineering leader. They started with the cloud first mandate saying, we need to start moving all our backups to the cloud, but we were worried that bad actors might find it easy to go and access the backups. AWS and Commvault came together with AWS security features and Commvault brought in its own authorization controls. And now we have moved more than 14 petabytes of backup data into the cloud, and it's so robust that not even the backup administrators can go and touch the backups without multiple levels of authorization, right? So the customer needs, whether it is from a security perspective, performance perspective, or in this case from a simplicity perspective, is really what is driving us and, and the need came exactly like that. There are many customers who have now standardized on AWS; they want to find everything through the AWS Marketplace. They want to use their existing, you know, AWS contracts and also bring data strategy as part of that. So that, that's the real driver behind this. Stephen and I were hoping that we could actually announce some of the customers that have actively started using it. You know, many notable customers have been behind this innovation. And Stephen, I don't know if you wanted to add more to that.
And one of the impetuses for us moving to the cloud in the first place was when Hurricane Sandy hit, we had a real disaster recovery scenario in one of our New Jersey data centers. And we had to act pretty quickly. Commvault was, was part of that solution. And I remember very clearly, even back then, back in 2013, there being options available to help us accelerate our move to the cloud. And, and just to reiterate some of the stuff that Ranga was talking about, you know, Commvault's done a great job over the last, more than a decade. Taking features from things like EBS, and S3, and EC2 and some of our networking capabilities and embedding them directly into their services so that customers are able to, you know, more quickly move their backup and recovery workloads to the cloud. So each and every one of those features was a result of, I'm sure, Commvault working backwards from their customer needs just as we do at AWS. And we're super excited to take that to the next level, to give customers the option to then also buy that right on their AWS invoice on AWS Marketplace. >> Yeah. I mean, we're going to have to leave it there. Stephen, you've mentioned this several times, the sort of early days of AWS, when back then we were talking about gigabytes and terabytes, and now we're talking about petabytes and beyond. Guys, thanks so much. We really appreciate your time and sharing the news with us. >> Dave, thanks for having us. >> All right, keep it right there, more from Commvault Connections 21, you're watching theCUBE.

Published Date : Oct 27 2021


Jordan Sher, OpsRamp | CUBE Conversation


 

>>Welcome to the AWS Startup Showcase: new breakthroughs in DevOps, data analytics and cloud management tools. I'm Lisa Martin. I've got Jordan Sher here next, the vice president of corporate marketing at OpsRamp. Jordan, welcome to the program. >>Lisa, it's great to be here. Great to talk about some of this stuff. Thanks for having me. >>Yeah, let's break this down. Tell me, first of all, about OpsRamp. How is it facilitating the transformation of IT ops, helping companies, as your website says, control the chaos? >>Sure. So OpsRamp is an availability platform for the modern enterprise. We consolidate digital IT operations management into one place. So availability, as you can imagine, um, is a consistent challenge for IT operations teams in large enterprises: maintaining service assurance, making sure that services are up, available, performing. Uh, OpsRamp is the platform that powers all of that, and we bring a lot of different features and functions to bear in driving availability. Think about AIOps, think about hybrid infrastructure monitoring, multi cloud monitoring; that's all part of the OpsRamp offering for the modern enterprise. >>Talk to me about back in 2014, what the founders of OpsRamp saw. What were some of the gaps in the market that they saw that needed to be addressed and no one was addressing? >>It's a great question. So OpsRamp was originally founded as part of an MSP offering. So we were a platform serving managed service providers who wanted to consolidate the infrastructure of their clients onto one multi tenant platform. What they noticed was that these enterprise customers of the MSPs whom we served really appreciated that promise of being able to consolidate infrastructure, being able to visualize different alerts, different critical incidents that might arise, all on one platform.
And so that's when we decided to raise a round and take it directly to the enterprise, so they could have the same kind of visibility and control that MSPs were delivering back to them. >>Visibility and control is essential, especially if your objective is to help control the chaos. Talk to me about some of the trends that you've seen, especially in the last 18 months, as we've been in such a dynamic market; we've seen the rapid acceleration of digital business transformation. What are some of those key trends, especially with respect to AIOps, that you think are really poignant? >>Yeah. You know, we like to think over here that the pandemic didn't really change a whole lot, it accelerated a whole lot. And so we started to see, at least within the past 12 to 18 months, this acceleration of moving to the cloud. You know, Gartner forecasted that enterprises, large enterprises, are going to be spending upwards of 300 billion, um, in the move to the public cloud. So that has really facilitated some of the decisions that we have made and the promises that we offer to our customers, number one. Number two, with the move to remote work and the adoption of a lot of different digital tools and, uh, the creation and implementation of a lot of different digital customer services, um, it has forced these enterprises whom we serve to really rethink how they provide flexibility and control to their larger enterprise IT teams that might be distributed, might be working remote, might be in different locations. How can they consolidate infrastructure as it gets more and more complex? So that's where OpsRamp has really created the most value. So we think about two things. Number one, I want to consolidate my multi cloud environments, so services via AWS, for example, or other cloud providers. How do I bring that within? How do I bring that control within my enterprise, within the context of maybe additional private cloud offerings or public cloud infrastructure? Number one.
Number two, how do I get control over the constant flood of alerts I'm getting from these different digital services and tools, all in one place? So we are responding to that need by, for example, implementing really rich, robust AIOps functionality within the OpsRamp platform, to both consolidate the alerts coming through and escalate the critical ones, allowing IT operations teams to be a little more proactive, understand how incidents are happening, and remediate those incidents before they become business critical and can really shut down the business. >>Speaking of the enterprise, I'm curious whether your customer conversations have changed in level in the last 18 months, as everything has become chaotic for quite a while. We've been in a hybrid cloud world for a while, and we are in a hybrid workforce situation. Have you noticed an escalation up the stack, in terms of the C-suite going: we need to make sure that we're leveraging cloud properly and financially responsibly, and ensuring that we have visibility into all the services that we're delivering? >>You mean, are they sweating more, and are they coming to us when they're sweating more? Yeah, for sure; the short answer is yes. So let me give you a great example. One of our recent customers manufactures microchips, and what they've noticed is that, number one, demand has grown due to the increase in digital transformation. Number two, supply chains have become more constricted for them specifically. So they're asking themselves: how can we equip our IT operations teams to maintain the availability of the different logistics services within our organization, so that they can both maintain service availability of those logistics services and stay on deadline as much as they possibly can during the supply chain crisis we're facing right now?
And number two, how can we, as we move to the cloud and see a distribution of our workforce, still maintain IT operations services regardless? That need, and in particular the supply chain constraint issue, has arisen only in the last 18 months, and it is a perfect use case for OpsRamp: a platform that allows you to consolidate IT operations in one place and gives you flexibility and control across a distributed environment, with a number of different new digital services that have been implemented to solve some of these challenges. >>Talk to me about AIOps as a facilitator of that availability and visibility in this hybrid world that is still somewhat chaotic. >>Yeah, great question. So originally it was algorithmic IT operations, as coined by Gartner; today it stands for artificial intelligence for IT operations. The notion there is simple: there's a lot of data coming in across the IT operations organization. How can we look for patterns within that data to help us understand and act more proactively from an operational perspective? Now, there are a lot of promises that go along with AIOps: that it's going to completely transform these IT organizations, that it's going to reduce headcount. We don't necessarily find that to be true. What we do find true, though, is that the original promise behind AIOps still exists: we need to look for patterns in the data, and we need to be able to drive insights from those patterns. That is what the AIOps feature functionality within OpsRamp really does: it looks for patterns within alerts and helps you understand what those patterns ultimately mean.
Let me give you a great example. We have different algorithms within the OpsRamp platform for co-occurring events, or for downstream events, that help us indicate: okay, if a number of these events are happening across one geography or one business service, for example, we can look for those co-occurring patterns, and we can see that there may be one resource, or set of resources, that is actually causing a bunch of these incidents and alerts, upstream of all the alerts themselves. So instead of the IT operations organization having to go in and remediate a bunch of different distributed alerts, they can look at that upstream alert and say: okay, that's the one that really matters, that's where I need to pay most of my attention. And that's where I'm going to deploy a team, or open up a ticket, or escalate to ITSM, or a variety of other things, because I know that these co-occurring alerts are creating a pattern that's driving some insight. So that's just part of the overall OpsRamp AIOps promise; there's tons more that goes along with AIOps, but we really want to take some of the load off and reduce the alerts that these IT operations teams have to deal with on a daily basis. >>So let's talk about how you do that from a practical perspective. Looking at some of the notes that your team provided, according to an IDC report covering Asia Pacific excluding Japan, 75% of Global 2000 enterprises are going to adopt AIOps by 2023, but a lot of AIOps projects that have been built haven't been successful. How does OpsRamp flip the script on that? >>So it really comes down to the quality of the data. If you have a bolt-on tool, you have to optimize that tool for the different data lakes, data warehouses, or sources of data that exist within your operational organization.
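The co-occurring-events idea above can be pictured in a few lines: group alerts that fire within one short window, and when several of them share an upstream dependency, surface that upstream resource as the likely root cause. This is only a minimal sketch; the topology map and alert format are invented for illustration and are not OpsRamp's actual data model.

```python
from collections import Counter

# Hypothetical service topology: resource -> its upstream dependency.
# In a real platform this would come from discovered infrastructure.
UPSTREAM = {
    "web-1": "db-primary",
    "web-2": "db-primary",
    "api-1": "db-primary",
    "cache-1": "cache-cluster",
}

def likely_root_cause(alerts, window_seconds=300):
    """Group alerts firing within one time window and vote on a shared
    upstream resource; the most-implicated upstream is the candidate
    root cause. Each alert is a dict with "resource" and "ts" keys."""
    if not alerts:
        return None
    start = min(a["ts"] for a in alerts)
    in_window = [a for a in alerts if a["ts"] - start <= window_seconds]
    votes = Counter(
        UPSTREAM[a["resource"]] for a in in_window if a["resource"] in UPSTREAM
    )
    if not votes:
        return None
    upstream, count = votes.most_common(1)[0]
    # Only escalate when multiple co-occurring alerts point the same way.
    return upstream if count >= 2 else None

alerts = [
    {"resource": "web-1", "ts": 0},
    {"resource": "web-2", "ts": 40},
    {"resource": "api-1", "ts": 90},
]
print(likely_root_cause(alerts))  # db-primary
```

Instead of three separate tickets for the web and API tiers, an operator sees one upstream candidate (`db-primary`) to investigate first, which is the escalation behavior described in the interview.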
Think about apps across a multi-cloud environment: I have to optimize the data coming in from each of those different cloud providers onto a bolt-on tool, to make sure that the data being fed to the tool is accurate and a true reflection of what's going on in the operational organization. That's number one. If you look at OpsRamp, the differentiation is that OpsRamp is a big data platform at its core. So you bring OpsRamp in, you optimize it for your overall infrastructure mix, and then the data that gets fed into the AIOps feature functionality is the same across the board; there is no further optimization. What that means is that the insights driven by the OpsRamp platform are more sophisticated, more nuanced, a more accurate representation, and probably ultimately better than what you get by sticking a tool on top of five different existing data warehouses or data lakes. >>So if you've got a customer, and I'm sure you do, enterprises, as we said, are going to be adopting this substantially by 2023, which is just around the corner. How do you help them sort through the infrastructure and ecosystem they have, so that they're not bolting things on, but rather can build this intuitively, to deliver the availability and visibility that they need fast? >>Yeah, so a couple of different comments on the ways that we try to help. Number one, I think it's critical for us to understand the challenges of the modern IT infrastructure environment across different verticals, different industries. So when we walk into any of our clients, we already have a good sense of their challenges. Is it IoT; are they dealing with a bunch of different devices at the edge; are they a telecom with critical incidents in the network that they need to remediate?
Number two, we try to smooth the glide path into understanding the OpsRamp platform and promise early. So what does that mean? It means we offer a free trial of the platform itself on the OpsRamp website. You can set up up to 1,000 resources for free, with an unlimited number of users, for 14 days, and kick the tires, particularly on multi-cloud monitoring, and see what sorts of insights you can determine just within those two weeks. In fact, we put our cards on the table and say you can probably see your first insights into your infrastructure within 20 minutes of setting up the OpsRamp free trial. And if you don't want to bring your own resources to it, we'll even provide a collection of resources preloaded onto the platform, so you can try it out yourself without having to get a bunch of approvals to load infrastructure in there. So two pieces: number one, this proof of concept, proof of value, where we try to understand the client's pain; and number two, if you want to kick the tires on it yourself, we offer that with this free trial. >>So what I'm hearing in that is fast time to value, which these days is absolutely essential. How does that differentiate OpsRamp as a technology company, and from your customers' perspective? >>Yeah, so I appreciate that. And mean time to insight is one of the critical aspects of our product roadmap; we really want to drive down that time-to-value coefficient, because it's what these operations teams need as complexity grows. Really, if you take a step back, everything is getting more complex. It's not only the pandemic and the rise of multi-cloud; it's more digital customer experiences to compete, it's availability, it's the need of a modern enterprise to be agile. All of those things translate basically into speed, flexibility, and agility.
So if there's one guiding light for OpsRamp, it's really to equip the operations team with the tools they need to move flexibly with the business. If a department in any modern enterprise today needs access to the public cloud and has a credit card, they're getting on AWS right now and spinning up a host of services. We want to be the platform that still gives the central IT operations team some aspect of control over that, without taking away the ability of that siloed operations team, somewhere in some geographic region, to spin up that AWS service. We want to empower them to do that, but at the same time we want to know that it exists and be able to control it. >>How can AIOps be a facilitator of better alignment between IT ops and the business? You just gave a great example of the business getting the credit card and spinning up the services they need for their line of business or their function. From a cultural perspective, I'm just curious how AIOps can be a facilitator of those two groups working better together in a constantly complex environment. >>That's a great question. So imagine if IT operations did more than just keep the lights on. Imagine if you knew that your IT operations team could be more proactive and more productive about alerts, incidents, and insights from infrastructure monitoring. What that means is that you are free to create any kind of digital customer experience you would want, to drive value back to your end user. It means you no longer think about IT operations as this big hodgepodge of technology that you have to spend hundreds of millions of dollars a year on, in network operations teams and centers and technologies, just to keep control of. By consolidating everything down to one place, one SaaS-based platform like this, it frees up the business to be able to innovate.
To take advantage of new technologies that come around, and really to work flexibly with the needs of the business as it grows. That's the promise of OpsRamp. We're here to replace those old appliances, or the different management packs of tools, that you consistently have to add to, optimize, and tune, in order to empower the operations team to act like that. The truth is that everything is SaaS-based now, and when you get to the core of infrastructure, it needs to be managed as SaaS. That's OpsRamp in a nutshell. >>I like that nutshell, that's excellent. I want to know a little bit about your go-to-market with AWS. Talk to me a little bit about the partnership there. What's your go-to-market like, essentially? >>Yeah, so we're included in the AWS Marketplace, and we have an integration with AWS. As the de facto biggest cloud provider in the world, we have to play nice with them. And obviously, the insights that we drive on the OpsRamp platform have to be insights that you need from your AWS experience. It has to be similar to CloudWatch; in a lot of cases it has to be as rich as the CloudWatch experience in order for you to want to use OpsRamp within the context of the different other multi-cloud providers. So that's how OpsRamp works. We understand that; there are a lot of AWS-certified professionals who work at OpsRamp, who understand what AWS is doing, and who consistently introduce new features that play well with the service library that AWS currently offers today. >>Got it. As we look ahead to 2022, hopefully a better year than 2020 and 2021, what are some of the things that you're excited about? What are some of the things on the OpsRamp roadmap that you can share with us? >>Yeah, so the other big aspect of the new landscape of IT operations is observability.
We're really excited about observability; we think it is the new landscape of monitoring. The idea of being able to find unknown unknowns that exist within your operational stack is important to us, and consolidating that with the power of AIOps means you now have machine learning on top of your ability to find unknown-unknown issues. That's going to be super exciting for us. I know the product team is taking a hard look at how to drive hybrid observability within the OpsRamp platform: how do we give a better operational perspective on on-prem, public cloud, and private cloud infrastructure moving forward, and how do we ingest alerts before they're even alerts? I mean, that's observability in a nutshell. If I'm getting in and checking the OpsRamp platform every day, then that's a workflow we can remove by creating a better observability posture within the platform. So now the platform runs unsupervised in the background, and AIOps is able to take action on predictive incidents before they ever occur. That's what we're looking at in the future. Everything is getting more complex; we've heard this story a million times before. We want to be the platform that can handle that complexity at massive scale. >>Finding the unknown unknowns and converting them into knowns, I imagine, is going to be more and more critical across every industry. Last question for you: given the culture and the dynamics of the market that we're in, are there any industries that OpsRamp sees as really key targets for this type of technology? >>The nice thing about OpsRamp is we are really vertical-neutral. Any industry that has complexity, and that's every industry, can really take advantage of a platform like this.
We have seen recent success particularly in finance, manufacturing, and health care, because they deal with new, emerging types of complexity that they are not necessarily prepared for. So I think about some of our clients, some of our friends in the finance industry: as transactions accelerate and new customer experiences arise, these are things that their operations teams need to be equipped for, and that's where OpsRamp really drives value. What's more, these industries are also somewhat legacy, so they have a foot in the old way of doing things; they have a foot in the data center. There are many financial institutions that have a large data center footprint for security considerations. And so if they are living in the data center and they want to make the move to cloud, then they need something like OpsRamp to be able to keep a foot in both sides of the equation. >>Right, keep that availability and that visibility. Jordan, thank you for joining me today and talking to us about OpsRamp, the capabilities that AIOps can deliver to enterprises in any industry, the facilitation of the IT folks and the business folks, and what you guys are doing with AWS. We appreciate your time. >>Absolutely, Lisa, thank you very much. Thanks for the great questions. If you ever need a job in corporate marketing, you seem like you're a natural fit. I'll call you. >>Awesome. Thank you. >>For Jordan Sher, I'm Lisa Martin. You're watching the AWS Startup Showcase.

Published Date : Sep 21 2021



JT Giri, nOps | AWS Startup Showcase


 

>> Welcome to the AWS Startup Showcase: New Breakthroughs in DevOps, Data Analytics, and Cloud Management Tools. I'm Lisa Martin. I'm pleased to welcome JT Giri, the CEO and founder of nOps, to the program. JT, welcome. It's great to have you. >> Thank you, Lisa. Glad to be here. >> Talk to me about nOps. It was founded in 2017, and you're the founder. What do you guys do? >> Yeah. So just a little bit of background on myself: I've been migrating companies to AWS ever since EC2 was in beta. In the beginning I had to convince people, "Hey, you should move to the cloud," and the question people used to ask me was, "Is cloud secure?" I'm glad no one is asking that question anymore. As I was building and migrating customers to the cloud, one of the things I realized very early on is that in the cloud there are so many resources and so many teams provisioning resources; how do you align all of that with your business goals? So we created nOps with a mission: build a platform where you make sure every single change and every single resource in the cloud is aligned with the business needs. We really help people to make the right trade-offs. >> So you mentioned you've been doing this since EC2 was in beta, and we just celebrated, with AWS, EC2's 15th birthday. So you've been doing this awhile. You don't look old enough, but you've been doing this for awhile. So one of the things that I read on the website, I always love to understand messaging, is that nOps says about itself: "The first cloud ops platform designed to sync revenue growth across your teams." Talk to me about how you do that. >> Yeah. So one of the problems we see in the market right now is that there are a lot of tools, a lot of dashboards, that show: "Hey, you have this many issues, here's the opportunity to fix issues, and here are the security issues." We're more focused on how to take those issues out of a backlog and actually fix them.
Right? So our focus is more on operationalizing, so your teams can actually own, prioritize, and remediate those issues. That's where we focus our energy. >> Got it. Let's talk about cloud ops now, and how it varies or differs from traditional cloud management. >> Yeah, like I mentioned, cloud management tends to be more about visibility, and everyone knows that there are challenges in their cloud environment. When you focus more on the operations side, what we really try to do is go from an issue to actually fixing that issue. How do you prioritize? How do you make the right trade-offs? Trade-offs are important because we make a lot of decisions in the cloud when building infrastructure. Sometimes you might have to prioritize for cost; sometimes you might have to prioritize based on the SLA, and you might have to add more resources to hit your SLAs. So we really help you to prioritize, and we build in accountability. You can create roles. I truly believe that if it's everyone's responsibility, it's no one's responsibility. So within the tool we help establish clear roles and responsibilities, and we show an audit log of people reviewing and fixing security issues, and an audit log of people reviewing and fixing cost issues. That's one way we're trying to bring accountability. >> I like what you said: if everyone's responsible, then really no one is. And that seems to be a persistent problem that we see in businesses across industries; there's still that challenge of aligning IT and business. And especially with the dynamics of the market, JT, that we've seen in the last 18 months, things are moving so quickly. Talk to me about how you've been helping companies, especially in the last 18 months, with so much change, to get that alignment, so that visibility and those clear roles are established and functional. >> Yeah.
You know, what we really do is obviously listen to the customers. And one of the challenges we hear over and over is: "I know I have issues in my cloud environment that I really need help prioritizing." They're really looking for a framework where they can come in and say, "Okay, these are the people who are responsible for security; these are the people who are responsible for cost." So as part of onboarding with nOps, that's one of the things you do: you define your workloads. By the way, we automatically create your workloads across all your accounts, and then we allow you to move resources around if you like. One of the first things we do is assign roles and responsibilities for each one of these workloads. It's been incredible to see that when you have that kind of accountability, people actually do make sure that the resources are aligned with the business needs. >> So are you seeing, I mean, that's kind of a cultural shift. Change management is a challenging process. How are you seeing that evolve in organizations that have been used to doing things maybe in a little bit of a blinders-on kind of mode? >> Yeah. Well, change management is an area where we spend a lot of time, because in cloud, change management is almost like a fire hose. There are so many changes, and you could have 20 people or 20 different teams making changes. I think what people really want is root cause analysis: "Hey, this is what changed here, here's why it changed, and here are the actions we could take, or you could take." So this is where nOps focuses. We really help people to see the root cause analysis: these three or four changes are related to this cost increase or these security issues. And we show a clear path to taking action on those issues. >> That's critical.
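The root-cause flow JT just described, tying a handful of changes back to a cost increase, can be illustrated with a toy correlation: find the first day daily spend jumps, then flag the changes that landed just before it. The data shapes and thresholds here are made up for the sketch; nOps' actual analysis is, of course, far richer.

```python
def changes_behind_spike(daily_cost, changes, jump_ratio=1.3):
    """Return descriptions of changes that happened on or just before the
    first day daily spend jumped by more than `jump_ratio` over the prior
    day. `daily_cost` maps day index -> spend; `changes` is a list of
    (day, description) tuples."""
    days = sorted(daily_cost)
    for prev, cur in zip(days, days[1:]):
        if daily_cost[cur] > daily_cost[prev] * jump_ratio:
            # Suspect anything changed within the two days before the jump.
            return [desc for day, desc in changes if cur - 2 <= day <= cur]
    return []

cost = {1: 100, 2: 105, 3: 104, 4: 180, 5: 185}
changes = [
    (1, "patched web tier"),
    (3, "enabled cross-region replication"),
    (4, "resized analytics cluster"),
]
print(changes_behind_spike(cost, changes))
# ['enabled cross-region replication', 'resized analytics cluster']
```

The point of the sketch is the shape of the answer: not "costs went up," but "these two or three changes are the likely reason," which is what makes the insight actionable.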
The ability to show the paths, to take the action to remediate or make course corrections. As we've learned in the last 18 months, for so many industries real time is no longer a nice-to-have; the ability to pivot on the fly is a survival and thriving mechanism. So that is really key. I do want to talk about the relationship between nOps and AWS. Here we are at the AWS Startup Showcase; give me a little overview of the partnership. >> It's been incredible. Like I said, I have a long history of working with AWS. I started a consulting company, a very, very successful one, so I have years of working with AWS partner teams. I think it's incredible. We were the first company, maybe not the first, but very early, to be part of the Well-Architected framework. And when I came out of that Well-Architected training, I was so excited. I was like, "Wow, this is amazing." Because, to me, whenever you're building infrastructure, you really are making trade-offs. Sometimes you optimize for cost, sometimes you optimize for reliability. So it has been incredible to work with the Well-Architected team. Amazon also has another program called FTR, the Foundational Technical Review, and we've been working closely with that team. So yeah, it's been amazing to collaborate with AWS. >> It sounds pretty synergistic. Have you had a chance to influence infrastructure and some of the technical direction? >> Oh, absolutely. We work very closely. One of the cool things about AWS is that they take customers' feedback very, very seriously. And, Lisa, it works the other way around too: if AWS is going to build something, having insight into that roadmap is very beneficial, because if they're doing it, there's no point in us trying to reinvent the wheel. So that kind of synergy is very helpful. >> That's excellent. Let's talk about customers now. I always love talking about customer use cases and outcomes.
You guys have a great story with Uber. Walk us through what the challenges were, how nOps came in, what you deployed, and how the business is being impacted positively. >> Yeah, I think Uber, and all enterprises really, have the same challenge: there are many teams provisioning infrastructure. How do you make sure all those resources are aligned with your business needs? And in addition to different teams provisioning resources, there are different workloads, and these workloads have different requirements. Some are production workloads; some are maybe just task workloads. So one of the things Uber did was really embrace the nOps way of managing infrastructure, building accountability, and sharing these dashboards with all the different teams. And it was incredible, because within the first 30 days they were able to save up to 15%. This was in their autonomous vehicle unit, and they spend a lot of money there, so seeing that kind of cost saving was just amazing. And we see this over and over: when customers are using the platform, it's incredible how much cost savings are there. >> So Uber, you said, saved 15% in their autonomous vehicle department in just the first 30 days alone. And you said that's a common positive business outcome you're seeing from customers across industries, that immediate cost saving. Tell me a little bit more about that as a differentiator for nOps' business. >> Yeah. As I mentioned earlier, one of the things we do is bring accountability. Most of the time, before nOps, there are resources that are not accounted for: there are no clear owners, no budgets, no chargebacks. So I do think that's a huge differentiator for nOps, because, as part of onboarding, as you establish these roles and responsibilities, you find so many unaccounted resources.
And sometimes you don't even need those resources, so you shut them down. Those are the easiest next steps: you don't need to re-architect anything, you just shut it down, because no one needs those resources. So that, I do believe, is our strength, and we've been able to demonstrate it over and over: on average, a 15-30% cost saving in the first month or so. >> That's excellent. That's a lot of what customers, especially these days, are looking for: cost optimization across the organization. What are some of the things that nOps has experienced in the last 18 months, with so much acceleration? Anything that surprised you, any industries that you see as really leading-edge here, or prime candidates for your technology? >> Yeah, a couple of things. We see a lot of partners, a lot of other consulting companies, leveraging nOps as part of their offering. That's been amazing; we have a lot of partners who leverage nOps for go-to-market and the ongoing management of their customers. And I do see that shift from the customer side as well. The complexity of cloud continues to increase; like you just mentioned, it sounds like the last 18 months accelerated it even more. How do you stay up to date? And how do you always make sure that you're following best practices? So companies bring in partners to help them implement solutions, and then these partners are leveraging tools like nOps. We've seen a lot of momentum around that. >> Tell me a little bit about how partners are leveraging nOps. What are some of the synergistic benefits on both sides? >> Yeah, so normally partners leverage nOps for Well-Architected assessments. I've personally done a lot of these Well-Architected assessments, and early on I learned that assessments are only good if you're able to move forward with fixing issues in the customer's environment.
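The "unaccounted resources" step JT described earlier, surfacing resources with no clear owner during onboarding, is easy to picture as a tag audit; those are exactly the kind of findings an assessment turns up first. The required-tag set and inventory below are fabricated for illustration; in practice this data would come from the cloud provider's APIs, and nOps' workload model goes well beyond tags.

```python
# Hypothetical governance policy: every resource must carry these tags.
REQUIRED_TAGS = {"owner", "workload"}

def unaccounted(resources):
    """Return IDs of resources missing an owner or workload tag --
    the first candidates for review or shutdown."""
    return [
        r["id"]
        for r in resources
        if not REQUIRED_TAGS.issubset(r.get("tags", {}))
    ]

inventory = [
    {"id": "i-001", "tags": {"owner": "data-eng", "workload": "etl"}},
    {"id": "i-002", "tags": {"workload": "dev"}},  # no owner
    {"id": "vol-9", "tags": {}},                   # orphaned volume
]
print(unaccounted(inventory))  # ['i-002', 'vol-9']
```

Everything the audit flags either gets an owner assigned or becomes a shutdown candidate, which is where the easy 15-30% first-month savings mentioned above tend to come from.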
So what we really do, we really help customers, or sorry, we really help partners, to actually do these Well-Architected assessments automatically. We auto-discover issues, and then we help them to create proposals so they can present them to the customers, like, "Hey, here are the five things we can help you with, and here's how much it will cost." And, you know, we really streamline that whole process. And it's amazing: some partners used to take days to do these assessments, now they can do it in an hour. And we also increase the close rate on SOWs, because they are a lot clearer. You know, like, here are the issues, and here's how we can help you to fix those issues. >> You've got some great business metrics there, in terms of speed and reduction in cost. But it sounds like what you're doing is helping those partners build a business case for their customers far more efficiently and more clearly than they've ever been able to do before. >> Absolutely. Yeah, yeah. And... >> Go ahead. >> Yeah, so absolutely. Before nOps, everyone is using spreadsheets most of the time. Right? It's spreadsheets to collect information, and emails back and forth. And after the partners start using nOps, they use nOps to facilitate these assessments. And once they have these customers as ongoing customers, they use nOps for checks and balances to make sure they're constantly aligned. Right? And we have a lot of success of having real revenue impact on partners' business, by leveraging nOps. >> Excellent. That's true value and true trust there. Last question. Where can you point folks, a CTA or URL that you want people to go to to learn more about nOps? >> Yeah. Basically just go to nops.io and just hit sign up. Yeah. I love doing this stuff. I love talking to the customers. Feel free to reach out to me, as well: jt@nops. I would love to have a conversation. But yeah, just nops.io is the best place to get started. >> Awesome, nops.io.
And I can hear enthusiasm for this, and your genuineness comes through the zoom screen here, JT. I totally thought that the whole time. Thank you for talking to me about nOps, how you guys are helping organizations really embrace cloud ops and evolve from traditional cloud management tools. We appreciate your time. >> Thanks, Lisa. >> For JT Giri, I'm Lisa Martin. You're watching the AWS Startup Showcase.

Published Date : Sep 16 2021



JT Giri, nOps | CUBE Conversation


 

>>Hello and welcome to this CUBE Conversation here in Palo Alto, California. I'm John Furrier, your host of theCUBE. We're here with a great guest, JT Giri, CEO and founder of nOps, a hot startup. JT, welcome to the CUBE Conversation. >>Hey John, thanks for having me. It sounds like we know each other, we used to run into each other at meetups. So yeah, >>it's fun to talk to you, because I know you've been scratching the DevOps itch from the beginning, before DevOps was DevOps, before infrastructure as code was infrastructure as code. All that's played out, so it's really been a great ride. I know you had a good time doing it, a lot of action though. If you look at DevOps, it's kind of like this new, I won't say DevOps 2.0 because it's kind of cliche, but you're starting to see the maturation of companies beyond the early adopters and the people who are hardcore adopting. They realize this is amazing, they re-platform in the cloud and they go, great, let's do more, and next thing you know, they have an operations issue, and they've got to really kind of stabilize and also not break anything. So this is kind of the wheelhouse of what you guys are doing. nOps reminds me of NoOps, no operations, you know, we don't want to have a lot of extra stuff. This is a big thing. Take a minute to explain the company, what you guys stand for and what you're all about. >>Yeah, so you know, our main focus is more on the operations side. The reason why you move to cloud, or the reason why you have DevOps practices, is you want to go fast. But you know, when you're building cloud infrastructure, you have to make trade-offs, right? Maybe in some environment you have to optimize for SLA, and maybe for another workload you have to optimize for, you know, maybe cost, right? So what we're on a mission to do is to make sure that companies are able to make the right trade-offs, right?
We help companies to make sure all their workloads, every single resource in the cloud, is aligned with the business needs. You know, so we do a lot of cool things, like, you know, bringing accountability, mapping workloads to different teams. But yeah, the end goal is, can we make sure that every single resource on AWS is aligned with the business needs. >>And they're also adding stuff. Every re:Invent, a zillion more services get announced. So a lot of stuff going on. I gotta ask you while I got you here, what is the definition of cloud ops these days, from your standpoint, and why is it important? A lot of folks are looking at this, and they want to have stable operations. They love the cloud, you really can't deny the cloud value at all. But cloud ops has become a big topic. What is cloud ops, and why is it important? >>Right. I mean, first of all, like you just mentioned, right, Amazon keeps on launching more services. It's over 200. So the environment is very complex, right? And the complexity within the services is such that you really need to be a domain expert, for example, to know everything about each one. So, you know, the question to us is, let's say you find a critical issue. Let's say you want to, you know, enable multi-AZ on your RDS, for example. And it's critical because, you know, you're running high availability workloads on AWS. How do you follow up on that? To us, operations is: how do you build a cloud backlog? How do you prioritize? How do you come together as a team to actually remediate those issues? No one is tackling that job. Everyone surfaces, like, hey, here's 1000 things that are wrong with your environment. No one is focused on how you go from these issues to prioritization, to backlog, to actually coming together as a team and fixing some of those issues. That's what operations means. >>I know it's totally hard, because sometimes I don't even know what's going on.
I gotta ask you, why is it harder now? Why are people, I mean, I get the impression that people are looking the other way, hoping the problem kind of goes away. What are the challenges? What's the big blocker from getting at the root cause or trying to solve these problems? What's the big thing that's holding people back? >>Yeah, I mean, when I first got into IT, you know, I was working in a data center, and every time we needed a server, you know, we had to ask for approvals, right? And you finally got a server. But nowadays anyone can provision resources. And normally you have different people within the teams provisioning resources, and you can have hundreds of different teams who are provisioning resources. So the complexity, and the speed at which we are, you know, provisioning resources across multiple people, it just continues to go higher and higher. So that's why, you know, on the surface it might look like, hey, maybe this biggest instance is, you know, aligned with the business needs, but looking at the changes, it's hard to know, are those aligned with the business or not? So that's where the complexity comes into play. >>So the question I get a lot from people, we talk about DevOps and cloud, cloud ops or cloud management or whatever kind of buzzwords are out there, and it kind of comes down to cloud ops, and cloud management seems to be the category people focus on. How is cloud ops different, then, from, say, traditional cloud management, and what impact does it have for customers, and why should they care, and what do they need, in your opinion? >>Right. So one of the things we do, and we do think that cloud operations is sort of an evolution from cloud management, we make sure that every single resource, first of all, belongs to a workload.
So, and you know, a workload could be a group of microservices. And then, you know, every single workload has owners, like defined owners who are responsible for making sure they manage the budget, that they're responsible for security. That normally doesn't exist, right? Cloud is this black box, you know, where multiple people are provisioning resources, and everyone tries to sort of build a structure to kind of see, like, what are these resources for? As part of onboarding to nOps, what we do, we actually, you know, analyze all your metadata. We create, like, five or six workloads, and then we say, here is a bucket where, this is totally unassigned, right? And then we actually walk them through assigning different roles, and also we walk them through looking into these unallocated resources and assigning owners for those as well. So once you're done, every single resource has a clear definition, right? Is this a compliant, you know, HIPAA workload? What are the runbooks? What is this for? John, I don't know if you've heard this before, sometimes there are workloads running and people don't know, don't even know who the owner is, right? So after you're done with onboarding, and after you're managing your workloads on nOps, you have full visibility and a clear understanding of what they are. >>It's funny, it's
And then you, you mentioned at the top of this interview that aligning with the business needs. I find that fastest. I would like to take him in to explain because it sounds really hard. I get how you can align the resources and do some things, identify what's going on, accountability kind of map that that's, that's good tech. How does that, how do you get that to the alignment on the business side. >>Yeah. I mean we start by, first of all, like I said, you know, we use machine learning to play these workloads? And then we asked basic questions about the workload. You know, what is this workload for? Uh Do you need to meet with any kind of compliance is for this workload? Uh What is your S. O. A. For this workload? You know, depending on that. We we make recommendations. Uh So we kind of ask those questions and we also walk them through where they create roles. Like we asked who was responsible for creating budgets or managing security for this workload and guess what also the you know the bucket where resources are allocated for. We ask for you know, owners for that as well like in this bucket who's the owner for who's going to monitor the budget and things like that. So you know we asked, you know, we start by just asking the question, having teams complete that sort of information and also you know, why do you a little bit more information on how this aligns with the business needs? You know, >>talk about the complexity side of it. I love that conversation around the number of services. You said 200 services depending how you count what you call services in the thousands of so many different things uh knobs to turn on amazon uh web services. So why are people um focused on the complexity and the partnering side? Because you know, it's the clouds at E. P. I. Based system. So you're dealing with a lot of different diverse resources. So you have complexity and diversity. Can you talk me through how that works? 
Because that seems to be a tough beast to tame, the difference between the complexity of services and also working with other people. >>Yeah, for sure. Like, it's normal to have, you know, maybe thousands of Lambda functions in an application. We're working with a customer where, within the last month, there were nine million containers that launched and got terminated, right, pretty much leveraging auto scaling and things like that. So these environments are very complex. You know, there's a lot of moving pieces, even, you know, depending on the type of services they're using. So again, what we do, you know, we look at tags, and we look at other variables, like environments, and we look at who's provisioning resources, and we try to group them together, and that way there's accountability. You know, if the cost goes up for one workload, we're able to show that team, like, your cost is going up. And also we can show the unallocated bucket, that hey, within the last week your cost is, you know, $4,000 higher in the unallocated bucket, where would you like to move these resources to? It's just like an ongoing game. You
So how would you guys position yourself to those buyers out there that might want to look at you guys as a solution and ups what game changing aspect of what you do is out there, how would you talk to that that C I O or C. So or buyer um out in the end the enterprise and the thieves ran his piece. What would you say to them? >>Yeah, I think the biggest uh advantage and I think right now it's a necessity, you hear these stories where, you know, people provision resources, they don't even know which project is it for. It's just very hard to govern the cloud environment, but I believe we're the only tool. Mhm where you want to compromise on the speed, right? The whole reason um cloud but they want to innovate faster. No one wants to follow that. Right? But I think what's important. We need to make sure everything is aligned with the business value. Uh, we allow people to do that. You know, we, we, we can both fast at the same time. You can have some sort of guard rails. So there are proper ownership. There's accountability. People are collaborating and people are also rightsizing terminating resources, they're not using. It's like, you know, I think if companies are looking for a tool that's gonna drive better accountability on how people build and collaborate on cloud, I think reply the best solution. >>So people are evolving with the cloud and you mentioned terminating services. That's a huge deal in cloud. Native things are being spun up and turned off all the time. So you need to have good law, You have a good visibility, observe ability is one of the hottest buzzwords out there. We see a zillion companies saying, hey, we're observe ability, which is to me is just monitoring stuff. They can sure you're tracking everything. So when you have all this and you start to operationalize this next gen, next level cloud scale, cost optimization and visibility is huge. Um, what is the, what is the secret sauce uh, for that you guys offer? 
Because the change management is a big one too. Teams are changing too, cost, team accountability. All this is kind of, it's not just speeds and feeds, it's kind of the intersection of both. What's your take on that? >>Yeah, I think it's the delta, right? So change management, what you're really looking for is not, like, a firehose. You're looking for: what changed, what's the root cause, who did it, what happened, right? Because it's totally normal for someone to provision maybe thousands or even millions of containers. But how many of those got shut down? What is the delta? And, you know, if there is an anomaly, what is the root cause, right? How do we fix it? So, you know, the way we do change management is a lot different. We really get to the root cause analysis, and we really help companies to show what changed, and how they can take action to remediate if there were issues. >>I want to put a little plug in for you guys. I noticed you guys have a really strong net promoter score. You have happy customers, you've also got partners, a lot of enablement there. You kind of got a lot of things going on. Explain what you guys are all about. How did you get here? What's the day in the life of a customer that you're serving? And why are the scores so high? Take us through a use case of someone getting that value. >>Yeah. So I come from a consulting background, John, so you know, I was migrating companies to AWS when EC2 was in beta, and then I, you know, founded a consulting company, over 100 employees, a really successful AWS Premier Partner called nClouds. And so nOps was born there, because, you know, it was born out of a consulting company, and there are a lot of other partners who are leveraging the tool to help their customers. And it goes back to our point earlier, John, like, Amazon has two hundred services, right?
We are noticing customers are open to working with partners, you know, with different partners that really help them to make sure they're making the right decisions when they are building on cloud. So a lot of the partners, a lot of the consulting companies, are leveraging nOps to deliver value to their customers. As far as, you know, how we actually operate, you know, we pay attention to what customers are looking for, where the next sort of challenges are, you know, that customers are facing in a cloud environment. We're, like, super obsessed, you know, we're trying to figure out how do we make sure every single resource is aligned with the business value without slowing companies down. So that really drives us, and we constantly work with customers, and stay true to the mission.
>>Jt great chatting with you have been there from early days of devops, born in the field, getting, getting close to the customers and you mentioned ec two and beta, they just celebrate their 15th birthday and I remember one of my starts that didn't actually get off the off the blocks, they didn't even have custom domains at that time was still the long remember the long you are else >>everything was ephemeral like when you restart server, everything will go away a cool >>time. And I just remember saying to myself man, every entrepreneur is going to use this service who would ever go out and buy and host the server. So you were there from the beginning and it's been great to see the success. Thanks for coming on the cube >>all That's >>okay. Jt thanks so much as a cube conversation here in Palo alto. I'm john for your host. Thanks for watching. Mhm.

Published Date : Sep 7 2021



Brian Hoffmann, IBM Global Financing | IBM Think 2021


 

>>From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM.
The pandemic is really, if you will throw in a wrinkle in here, right? So the clients that I'm talking to, the IBM clients, they have to operate their business very differently and and their business models, some of them are changing clearly. Their clients, their business models are changing their operating differently as well. Um So, so our clients have to react to that and Hybrid Cloud and that that that type of of a structure really can support that. So there's really an emphasis here now to act with much more speed on this journey to get moving on it to get there because you have to make these changes and doing those two things in concert really has a ton of business value. >>Yeah I mean the cfos that I've talked to in the C. I. O. S. It's really kind of industry dependent, right? If you're in airlines or hospitality was like uh we got to cut costs. A lot of organizations said okay we're gonna support remote workers put in V. D. I. Or deal with endpoint security or whatever it was. But we're actually gonna double down on our digital transformation. This is we're gonna lean into an opportunity for us to come out stronger. How did you guys approach it in terms of your own internal digital >>transformation? Yeah. We we we were working on our digital transformation uh a little bit before the pandemic and it kind of followed those those three uh those three items when they when they first started implementing it, they came in and said hey if we can if we can move to a cloud platform, our infrastructure savings will be pretty significant. You know the I. T. Infrastructure savings will be 30 to 40%. So you know, quick payback CFO types love that. So you know, we went forward with that. Um but then quickly we saw the real benefits of moving to a hybrid cloud strategy. 
So just as an example, as we were making some of these changes, we found a workflow tool in one of our markets in Europe that was a great tool. And if we wanted to implement that across the business in the old days, you know, we're in 40 countries, we've got 2500 employees, three lines of business, it would have been very complex, because our operating structure is very robust, very complex. It would probably have taken a year, two years to do that. But since we are now on a cloud platform, we got that workflow tool rolled out across our business in months, saving 20-30% of workload, more efficiently getting to our clients and reacting quickly to them. And in fact that tool got adopted across IBM, because that cloud platform enabled that to happen. And then the great thing, which I didn't even realize at the time but now see thinking more strategically: my IT resource earlier was running at about 50-50, 50% of people working on maintenance, those kinds of things, and 50% on development. As we've now transitioned to this cloud, my IT resources are now 70-plus percent dedicated to new development. So now we can go attack new things that really provide customer value. In the pandemic, you know, the first thing to look at is, can we get into more, you know, electronic contracts, e-signatures, things that would provide value to customers anyway, but in the pandemic are really a significant, you know, differentiator for us. So all those things were enabled by that journey that we've been taking. >>Interesting. I mean, most of the CFOs, in fact every CFO I know of a public company, took advantage of cheap debt and improved their balance sheets, and liquidity is not the problem today, especially in the tech industry. At the same time, you know, I'm interested in how companies are using financing. They don't want to necessarily build out data centers, but they do want to fund their digital transformation.
So what are you seeing in terms of how your customers are using financing? You know, what's the conversation like? What advice are you giving? >>Yeah. So, you know, it depends a little bit on the type of customer. Like you said, we deal with a lot of the biggest, strongest customers in the world, and as we deal with them, financing really helps the return on their investment, right, aligning the payments with those cash flows, for when they're getting the benefits. And we see real good value in improving the return on those investments, and helping, you know, if it's something that's going to go to the board, that really makes a difference to them. So, you know, that's always been a value proposition, and it continues to be. The other thing that's helping now, like you said, is even in this environment, people want to accelerate this transition, but it's a big time of uncertainty. So, you know, some of the smaller clients, some of the more, you know, cash-constrained industries, airlines, et cetera, you know, they're looking for the immediate cash flow benefits. But many of the CFOs are saying, hey, listen, you know, I want to go as fast as I can, help me put together a structure that lets me, you know, get this in place as quick as possible, but doesn't blow my budget, doesn't make me take too much risk in this time of uncertainty, but keeps me moving. And I think that's where financing really comes in as well. And we're kind of talking much more about that value proposition than just, if you will, the improved ROI proposition that we've had all along. >>I want to talk a little bit more about IBM Global Financing. I mean, a lot of times people misunderstand it. You know, when you look at IBM's debt, you've got to take out the piece that's IBM Global Financing, because that's a significant portion, and that's sort of self-funding.
But what do people need to know about IBM Global Financing? >>We actually run three different businesses, and we've been transforming our strategy over time. So, you know, right now, with IBM being all in on hybrid, we are very focused on helping IBM and IBM clients on this digital journey, and on IBM growing their revenue. In the past we have been more of, if you will, a really full-service IT financier, doing a lot more than just IBM, but we are really focused now on helping IBM. So I think the best thing for IBM clients to know is, as you're talking to IBM about the total solution, the total value proposition IBM brings, that financing, that cash flow solution should be embedded in what they're looking at and can provide a lot of value. The second thing I think most people know is we provide financing for IBM's channel, so, you know, distributors, resellers, et cetera. If you're an IBM distributor or reseller, you know about us, because just about 100% of IBM partners use us to provide that working capital financing. We have state-of-the-art platforms, and we're just so integrated with them; again, I don't have to do a sales pitch on that, because they know us. And the third one, just because people might not realize this, is we have what we call an asset recovery business. It's a pretty small business, bringing back equipment that comes off lease so that it can be reused by IBM internally. And while it's not as well known, I'm pretty proud of it, because it really does help with the focus that IBM has on sustainability and reuse, and making sure that, you know, we're treating the planet fairly here. So that's a small but powerful piece of our business. >>You're quite a bit broader than leasing mainframes in the 80s, that's for sure. 
>>Let's talk more about that; if you can, double click on that sort of hybrid cloud, and obviously machine intelligence is a big piece of those digital transformations. So how specifically are you helping clients really take advantage of things like hybrid cloud? >>So yeah, here's what we have typically been doing, and I can give you a couple of different examples, if you will. You know, for larger clients, what we tend to be doing is helping them, like I said, accelerate their business. So they're looking to modernize their applications, but they still have a big infrastructure in place, and so they'll run into, you know, budget constraints, and cash still has to be carefully managed. So for them we are much more typically focused on, if you will, project-based financing that allows those cash flows to line up with the savings. Again, those tend to be bigger projects that often go to boards, where that return benefit is very important. It's a little bit different value proposition for more mid-market customers. So, you know, as I was kind of just looking recently, we have a couple of different customers, like form engineering or Novi; they're two smaller companies compared to some of the other customers we have. They are again much more focused on, how do I conserve and best use my cash immediately? But they want to get this transformation going. So we provide flexible payment plans to them, so they can go at the rate and pace that they need to; they can line up those cash flows with their budgets, their business cycles, et cetera. So again, for smaller customers, where the timing of the cash flow in their business cycle is very important, we provide that benefit as well. >>You know, I wonder if I could ask you: you remember, of course, the early days of public cloud. One of the first tailwinds for public cloud was not the pandemic, but the financial crisis of 2007. 
And a lot of CFOs said, okay, let's shift to an opex model. Now, you could always provide financial solutions to customers, but it seems like today, when I talk to clients, it's much more integrated. It's not just the public cloud; you can do that for on-premises, and again, you always could do that, but it seems like there's much more simpatico in the way in which you provide that solution. Is that fair? >>Absolutely. And this might be a little too finance geeky, I don't know, but if you go back to the financial crisis and all that, at that time a lot of people were looking to financing for, you know, what you'd call leases, off-balance-sheet transactions, right? And, you know, between regulation, et cetera, that off-balance-sheet thing: first of all, people are seeing through it that much more clearly, but second, the financial disclosures say you kind of have to show that stuff, so that, if you will, window-dressing benefit has gone away. So now, which is great for me, we really get to talk about what the real benefit is. You know, you want to make sure that you have known, timed expenditures. You know that if your business grows, your expenses can grow evenly with that business growth; you don't have to take on big chunky things. And so, you know, financing under the covers of an integrated solution, and IBM has a lot of those integrated solutions, allows businesses to have that known timing, known quantities, most of the benefits that people were looking for from that opex cloud model, without, you know, some of the problems that you have when you try to go straight to a public cloud for very big, sensitive businesses, confidential data, et cetera. >>Thanks for that. So, okay, we're basically out of time. 
But I wonder if you could give us the bumper sticker and key takeaways; maybe you could summarize for our audience. >>Yeah. For those that know IBM Global Financing or are dealing with IBM, my view would be this: in the past we might have been a little more, you know, out there with our own banner, et cetera. In the future, I think you should expect to see us very well integrated into anything you're doing. I think our value prop is clear and compelling, and it will be included in these hybrid cloud transformations to the benefit of our clients. So that's our objective, and we're well on our way there. >>Great. Anywhere to go for more? Obviously ibm.com, you've got some resources there. But is there anywhere else? >>Absolutely, ibm.com, probably slash financing. We're loaded with information for people. >>Excellent. Brian, thanks so much for coming to theCUBE. Really great to have you today. >>I appreciate the time. >>My pleasure. Thank you for watching, everybody. This is Dave Vellante for theCUBE. Our coverage of IBM Think 2021, the virtual edition. We'll be right back.

Published Date : Apr 16 2021



Doc D'Errico, Infinidat | CUBE Conversation, December 2020


 

>>From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >>The external storage array business as we know it has changed forever. You can see that in the survey data that we do and in the financial information from the largest public storage companies. And it's not just because of COVID, although that's clearly a factor which has accelerated the shifts that we see in the market, specifically that CIOs are rationalizing their infrastructure portfolios by consolidating workloads to simplify, reduce costs and minimize vendor sprawl, so they can shift resources toward digital initiatives that include cloud, containers, machine intelligence and automation, all while reducing their risks. Hello everyone. This is Dave Vellante, and welcome to this CUBE Conversation, where we're going to discuss strategies related to workload consolidation at petabyte scale. And with me is Doc D'Errico. He's the vice president, office of the CTO at INFINIDAT. Welcome back to theCUBE, Doc, always a pleasure to see you. >>And great to be here. Always a pleasure to work with you, Dave. >>So Doc, I just published a piece over the weekend, and I pointed out that of the largest storage companies, only one showed revenue growth last quarter, and that was on a significantly reduced compare with last year. So my first question to you is: is INFINIDAT growing its business? >>Oh, absolutely. It's been a very interesting year all across, as you can quite imagine. But, you know, our footprint is such that with our elastic pricing models and the fact that we've got excess capacity in almost every single system that's out there, we were really giving our customers an opportunity to take advantage of that, to increase their capacity levels while maintaining the same levels of performance and availability, but not have to have anybody on premises during this crazy, you know, COVID-struck era. >>Yeah. 
So you're bringing that cloud model to the data center, which has obviously been a challenge. I mean, you mentioned the subscription-like pricing; we're going to get into the cloud more, but I wonder if we could step back a little bit and look at some of the macro trends that you're seeing in the market, and specifically as it relates to the on-prem storage strategies that CIOs are taking. >>Yeah. You know, it's been interesting. We've seen over the course of the past five years or so certainly a big uptick in people looking at next generation, or what they believe and perceive to be next generation, storage platforms, which are really just evolutions of media. They're not really taking advantage of any new innovations in storage, notwithstanding our own products, which are all software driven; we've talked about that before. But what's really happened in this past year, as you said, is that CIOs and CTOs are always looking for that next point of leverage and advantage. They're looking for more agility in application deployment, and they're looking for a way to rapidly respond to business requirements. So they're looking very much at those cloud-like requirements. They're looking at those capabilities to containerize applications. They're looking at how they can, you know, shift out virtual machines if they're not directly in a container, and how the storage, by the way, can have the same advantage. And in order to do so, they really need to look at storage consolidation. You know, I think, Dave, to sum it up from the storage perspective: I love that Ken Steinhardt was recently on a video, and he was challenged that, you know, people aren't looking at spinning rust, you know, a derogatory way of referring to disk. And Ken so rightly and accurately responded: yeah, but people weren't really looking for QLC either. 
You know, what they're looking for is performance, scale, availability, and certainly cost effectiveness and price. >>Yeah, it's like I said up front, Doc: if you're a C-level executive today, you don't want to worry about your storage infrastructure. You've got bigger problems to worry about. You just want it to work. And so when you talk about consolidating workloads, people often talk about the so-called blast radius. In other words, people who run data centers understand that things fail, and sometimes something as simple as a power supply can have a catastrophic downstream effect on application availability. So my question is, how do you think about architecting systems so as to minimize the effects of component failures on the business? >>Yeah, you know, it's a very interesting term, Dave, blast radius, right? We've heard this referred to storage over the last several decades, when in fact it really should refer to the data center and the application infrastructure. But, you know, if we're talking about just the storage footprint itself, one of the things that we really need to look at is the resilience and the reliability of the architecture. And when you look at something that is maybe dual controller, single or double power supply, there are issues and concerns that come into play. And what we've done is we've designed something that's really triple redundant, which has typically only been applied to the very high end of the market before. And we do it in a very active-active-active manner. And naturally we have suggestions for best practices for deployment within a data center as well, you know, multiple sources of power coming into the array and things of that nature. 
But everything needs to be this active-active-active type of architecture in order to bring those reliability levels up to the point where, as long as it's a component failure within the array, it's not going to cause an outage or a data unavailability event. >>Yeah. So imagine a heat map; when people talk about the blast radius, imagine the heat map is green, there's a yellow area, and there's a red area. And what you're saying is, as far as the array goes, you're essentially eliminating the red area. Now, if you take it to the broader installation, you know, that red area, you have to deal with it in different ways: remote replication, sync and async. But essentially what I'm hearing you say, Doc, is you're squeezing that red area out, so your customers can sleep at night. >>Absolutely; sleep at night is so appropriate. And in fact, a large portion of our customer base is running mission critical businesses. You know, we have some of the most mission critical companies in the world in our logo portfolio. We also have, by the way, some very significant service provider businesses who are providing, you know, mission critical capabilities to their customers in turn, and they need to sleep at night. And availability is only one factor. Certainly manageability is another, because, you know, not meeting a service level is just like data unavailability in some respects. So making manageability as automatic as it can be, making sure that the system is not only self-healing but can respond to variations in workload appropriately, is very, very critically important as well. >>Yeah. So you mentioned mission critical workloads, and those are the workloads that, let's face it, 
they're not moving into the cloud, certainly not in any big way, and why would they? Generally, CIOs and CTOs are putting a brick wall around them, saying, hey, it works, we don't want to migrate that piece. But I want to talk more about how your customers are thinking about workload consolidation and rationalizing their storage portfolios. What are those conversations like? Where do they start, and what are some of the outcomes that you're seeing with your customers? >>Yeah, I think the funny thing about that point, Dave, is that customers are really starting to think about cloud in an entirely different way. You know, at one point cloud meant public cloud, meaning this entity outside the walls of the data center, and people were starting to use services without realizing that that was another type of cloud. And then they were starting to build their own versions of cloud. You know, we were referring to them as private clouds, but they were really spread beyond the walls of a single data center. So now it's a very hybrid world, and there are lots of different ways to look at it: hybrid cloud, multi-cloud, whatever moniker you want to put on it. It really comes down to a consistency in how you manage that infrastructure, how you interface with that infrastructure, and then understanding what the practicality is of putting workloads in different places. 
And that's one of the reasons why some of these larger mission critical data centers are really, you know, repatriating their, their mission, critical workloads, at least the highest, highest levels of them and others are looking at other models, for example, AWS outposts, um, which, you know, talked about quite a bit recently in AWS reinvent. >>Yeah. I just wrote, again, this weekend that you guys were one of the, uh, partners that was qualified now, uh, to run on AWS outpost, it's interesting as Amazon moves, it's, you know, it's, it's it's model to the edge, which includes the data center to them. They need partners that can, that really understand how to operate in an on-premise world, how to service those customers. And so that's great to see you guys as part of that. >>Yeah. Thank you. And, you know, it was actually a very seamless integration because of the power and capability of all of the different interface models that we have is they all are fully and tightly integrated and work seamlessly. So if you want to use a, you know, a CSI type model, uh, you know, do you interface with your storage again, uh, with, with INFINIDAT and, you know, we work with all of the different flavors so that the qualification process, the certification process and the documentation process was actually quite easy. And now we're able to provide, you know, people who have particularly larger workloads that capability in the AWS on premises type environment. >>Yeah. Now I implied upfront that that cloud computing was the main factor, if not the primary factor, really driving some of the changes that we're seeing in the marketplace. Now, of course, it's all, not all pink roses with the cloud. We've seen numerous public cloud outages this year, certainly from Microsoft. We saw the AWS Kinesis outage in November. Google just had a major outage this month. Gmail was down G suite was down for an extended period of time. 
And that disrupted businesses that rely on them; schools, for example. So it's always caveat emptor, as we know. But talk about INFINIDAT's cloud strategy. You mentioned hybrid; I'm particularly interested in how you're dealing with things like orchestration and containers and Kubernetes. >>Yeah, well, of course we have a very feature-rich set of interfaces for containers, Kubernetes interfaces, you know, downloadable and native, so they're very easy to integrate with. But our cloud strategy is that, you know, we are a software centric model, and all of the value and feature function that we provide is through the software. The hardware of InfiniBox is really a reference architecture that we deliver to make it easier for customers to enjoy, say, a 100% availability model. But if you want to run something in a traditional on-premises data center, you know, a straight InfiniBox is fine, and we also give you the flexibility of cloud-like consumption through our elastic pricing models. So you don't need to consume an entire InfiniBox on day one. You can grow and shrink that environment with an opex model, or you can buy it as you consume it in a capex model. And you can switch from opex over to capex if it becomes more cost effective for you in time, which I think is what a lot of people are looking for. If you're looking for that public cloud experience, you know, we have our Neutrix Cloud offering, which is now being delivered more through partners. But, you know, some businesses, especially the mid tier, the SMB all the way through the mid enterprise, are also now looking to cloud service providers, many of which use InfiniBox as their backend. And now with AWS Outposts, of course, you know, we can give you that on-premises experience of the public cloud. >>You guys were early on 
that subscription-based model, obviously, and now everyone's doing it. I noticed in the latest Gartner Magic Quadrant on storage arrays, in which you guys were named a leader, I think they had a stat in there; I forget what the exact timeframe was, but it said 50% of customers would be using that type of model. And I guarantee you, by whatever timeframe that was, a hundred percent of the vendor community is going to be delivering that type of model. So congratulations on being named a leader. I will say this: there's consolidation happening in the market. So this, to me, bodes well; to the extent that you can guarantee high availability and consistent performance at scale, that bodes well for you guys in a consolidating market. And I know IDC just released a paper; I got a copy here. It's called a checklist for storage workload consolidation at petabyte scale. It was written by Eric Bergner, who I've known for a number of years. He's the VP of infrastructure; he knows his stuff, and the paper is very detailed. So I'm not going to go through the checklist items, but I think, if you don't mind, Doc, it's worth reading an excerpt from this, if I can, as part of his conclusions: when consolidating workloads, IT organizations should carefully consider their performance, availability, functionality, and affordability requirements. Of course, few storage systems in the market will be able to cost effectively consolidate different types of workloads with different IO profiles onto a single system, but that is INFINIDAT's forte; they're very good at it. So that's quite a testimonial. Why is that? Your thoughts on what Eric wrote? 
>>Well, you know, first of all, thank you for the kudos on the Gartner MQ, being a leader for the second year in a row for primary storage, only because that document has only existed for two years; we were also a leader in hybrid storage arrays before that. And, you know, we love Gartner. We think they're a real critical, reliable source for a lot of large companies. And IDC, you know, Eric of course is a name in the industry, so we very much appreciate when he writes something that positive about us. But to answer your question, Dave, there's a lot that goes on inside InfiniBox. It's the Neural Cache capabilities, the deep learning engine that is able to understand the different types of workloads, how they operate, and how to provide, you know, predictable performance. And that I think is ultimately key to an application. It's not just high performance; it's predictable performance, making sure the application knows what to expect. And of course it has to be performant; it can't just be slow but predictable. It has to be fast and predictable. Providing a multi-tenant infrastructure that is native to the architecture, so that these workloads can coexist, whether they're truly just workloads from multiple applications, or workloads from different business units, or potentially, as we mentioned with cloud service providers, workloads from different customers. You know, they need to be segmented in such a way that they can be managed and operated and provide that performance and availability, you know, at scale, because that's where data centers go; that's where data centers are. >>Great. Well, so we'll bring that graphic back up just to show you; obviously, this is available on your website. You can go download this paper from Eric, from IDC: www infinidat.com/ian/resource. 
I would definitely recommend you check it out. As I say, Eric's been in the business a long, long time, so that's great. Doc, we'll give you the last word. Anything we didn't cover, any big takeaways you want to share with the audience? >>Yeah, you know, I think I'll go back to that point: consolidation is absolutely key, not just for simplicity of management, but for the capability to respond quickly to changing business requirements and/or new business requirements, and also to do it in a way that is cost effective. You know, just buying the new shiny object is expensive, and it's very limited in shelf life; you're just going to be looking for the next one the next year. You want something that is going to provide you that predictable capability over time, because frankly, I have never met a CXO of anything that wasn't trying to increase their profit. >>You know, that's a great point. And I would just add, on the shiny new object thing: look, if you're in an experimental mode and playing around with, you know, artificial intelligence or automation, areas that you really don't know a lot about, then fine, check out the shiny new objects. But I would argue that with your on-prem storage, you don't want to be messing around with that. It's not a shiny new objects business. It's really about, you know, making sure that that base is stable and, as you say, predictable and reliable. So, Doc D'Errico, thanks so much for coming back on theCUBE. Great to see you. >>Great to see you, David, and look forward to next time. >>And thank you for watching, everybody. This is Dave Vellante, and we'll see you next time on theCUBE.

Published Date : Dec 17 2020



Jay Snyder, New Relic | AWS re:Invent 2020


 

>>From around the globe, it's the Cube, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS and our community partners.

>>Hello and welcome to the Cube Virtual, here with coverage of AWS re:Invent 2020. I'm your host, Justin Warren, and today I'm joined by Jay Snyder, who is the chief customer officer at New Relic. Jay, welcome to the Cube.

>>It is fantastic to be back with the Cube, one of my favorite things to do, and has been for years, so I appreciate you having me.

>>Yes, a bit of a Cube veteran, been on many times, so it's great to have you with us here again. So you've got some news about New Relic and an AWS strategic collaboration agreement, I believe. Maybe tell us a bit more about what that actually is and what it means.

>>Yes. So we've been partners with AWS for years, but most recently, in the last two weeks, we've just announced a five-year strategic partnership that really expands on the relationship we already had. We had a number of integrations and competencies already in place, but this is a big deal to us, and we believe a big deal to AWS as well, so it really takes all the work we've done to what I'll call the next level. It's joint technology development, where initially we're going to be embedding New Relic One right into the AWS Management Console, for ease of use and real agility for anyone who's developing and implementing a cloud strategy. Big news as well on adoption relative to purchasing power: you can purchase straight through the AWS Marketplace and leverage your existing AWS spend. And then we're going to be able to tap into the AWS premier partner ecosystem, so we get more skills and more scale as we look to drive consulting and skills development in any implementation, for faster value realization and overall success in the cloud. So that's the high level.
Happy to get into a more detailed level, if you're interested, around what I think it means to companies. But just setting the stage, we're really excited about it as a company. In fact, I just left a call with AWS to join this call, as we start to build out what the execution plan for the next five years looks like.

>>Fantastic. So for those who might be new to New Relic and aren't particularly across the field of observability, could you just give us a quick overview of what New Relic does? And then maybe talk about what the strategic partnership means for the nature of New Relic's business?

>>Yes. So when I think about observability and what it means to us, as opposed to the market at large, I would say our vision around observability comes down to one word, and that word is simplification. You know, I talk to a lot of customers; that's what I do all the time. And every time I do, there are three themes that come up over and over. It's the need to deliver a customer experience with improved uptime and ever-improving performance. It's the need to move more quickly to public cloud, to embrace the scale and efficiency public cloud services have to offer. And then it's the need to improve the efficiency and speed of their own engineering teams, so they can deliver innovation through software more quickly. And if you think about all those challenges and what observability is, it's the one common thread that cuts across all of them. It's taking all of the operational data that your system emits, and it helps you measure and improve the customer experience, your ability to move to public cloud and compare that experience before you start to after you get there, and the effectiveness of your team before you deploy to after you get there, and all the processes around that. It helps you almost be able to be there before you're there.
I mean, if that makes sense, right? You'll be able to troubleshoot before the event actually happens or occurs. So our vision for this, like I talked about earlier, is all about simplification, and we've broken this down into literally three piece parts, right? Three products. That's all we are. The first is about having as much data as you possibly can. I talked about emitting that transactional telemetry data, so we've created a telemetry data platform, which rides on the world's most powerful database, and we believe that if we can take all of that data, all that infrastructure and application data, and bring it into that database, including open source data, and allow you to query it, analyze it and take action against it, that's incredibly powerful. But that's only part one. Further, we have a really strong point of view that anybody who has the ability to break production should have the ability to fix production, and for us, that's giving them full-stack observability. It's the ability to take action against all of that data that sits in the data platform. And then finally, we believe that you need to have applied intelligence, because there are so many things happening in these complex environments. You want to be able to cut through the noise and reduce it, to find those insights and take action in a way that leverages machine learning, and that, for us, is AIOps. So really, for us, observability, when I talk about simplification: we've simplified what is a pretty large market with a whole bunch of products down to three simple things. A data platform, the ability to operationalize and take action against that data, and then, layered on top as the third layer of that cake, machine learning, so it can be smarter than you can be and see problems before they occur.
And that's what I would say observability is to us, and it's the ability to do that horizontally and vertically across your entire infrastructure and your entire stack. I hope that makes sense.

>>Yeah, there's a lot to dig into there. So let's start with some of that operational side of things, because I've long been a big believer in the idea of cloud as being a state of mind rather than a particular location. A lot of people have been embracing cloud, as we know, for about 10 or so years now, and the size of re:Invent has proven out how popular cloud can be. So, on those operational aspects you were talking about there, about the ability to react: I particularly liked that you were saying that anyone who can break production should be able to fix production. That's a very different way of working than what many organizations would be used to. So how is New Relic helping customers understand what they need to change about how they operate their business as they adopt some of these methods?

>>Well, it's a great question. There are a couple of things we do. We have an observability maturity framework that we deploy, and I don't want to bore the audience here, but needless to say, it's been built over the last year, year and a half, using hundreds of customers as a test case, to determine effectively that there is a process most companies go through to get to benefits realization. And we break those benefit categories into two different areas: one around operational efficiency and agility, the other around innovation and digital experience. So, you were talking about operational efficiency, and in there we have effectively three or four different ways, what I call boxes, on how we would double-click and triple-click into a set of actions that would lead you to an operational outcome. So we have learned over time to apply a methodology and approach to measure that.
So depending on what you're trying to do, whether it's mean time to recover or mean time to detect, or if you've got hundreds of developers and you're finding that they're ineffective or inefficient and you want to figure out how to deploy those resources to different parts of the environment so you can get them to better use their time, it all depends on what your business outcome and business objective is. We have a way to measure that current state and your effectiveness, apply rigor to it, and then design a process, using New Relic One, to fill in those gaps. And it can take on the burden of a lot of those people. I hate to say it, because I'm not looking to replace any individual; it's really about freeing up their time to allow them to go do something in a more effective and efficient manner. So I don't know if that's answering the question perfectly, but...

>>I don't think there is a perfect answer to it. Every customer is a bit different.

>>So this is exactly why we developed the methodology, because every customer is a little different. The rationale, though... yeah, so the rationale is that there are a lot of common themes. What we've been able to develop over time with this framework is a catalog of use cases and experiences that we can apply against you. So depending on what your business objectives are and what you're trying to achieve, we're able to determine, and really dig in there and assess, what is your maturity level of being able to deliver against these? Are you even using the platform to the level of maturity that would allow you to gain this benefit realization? And that's where we're adding a massive amount of value, and we see that every single day with our customers, who are actually quite surprised by the power of the platform. I mean, if you think traditionally, back not too far, two or even three years, people thought of New Relic as an APM company.
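The mean-time metrics mentioned above can be made concrete. Here is a minimal sketch in Python of computing mean time to detect (MTTD) and mean time to recover (MTTR) from incident records; the incident data and field names are hypothetical, purely for illustration, and not New Relic's API:

```python
from datetime import datetime

# Hypothetical incident log: when each issue began, when it was
# detected, and when service was restored.
incidents = [
    {"started": datetime(2020, 11, 1, 9, 0),
     "detected": datetime(2020, 11, 1, 9, 12),
     "resolved": datetime(2020, 11, 1, 10, 0)},
    {"started": datetime(2020, 11, 5, 14, 30),
     "detected": datetime(2020, 11, 5, 14, 34),
     "resolved": datetime(2020, 11, 5, 15, 0)},
]

def mean_minutes(deltas):
    """Average a list of timedeltas, expressed in minutes."""
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

# MTTD: how long problems went unnoticed; MTTR: how long until recovery.
mttd = mean_minutes([i["detected"] - i["started"] for i in incidents])
mttr = mean_minutes([i["resolved"] - i["started"] for i in incidents])

print(f"MTTD: {mttd:.0f} min, MTTR: {mttr:.0f} min")  # MTTD: 8 min, MTTR: 45 min
```

The maturity assessment described here amounts to baselining numbers like these before a change and re-measuring after, which is how improvements such as a 5x MTTR reduction can be claimed.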
And I think with the launch this summer, this past July, of New Relic One, we've really pivoted to a platform company. So while a lot of companies love New Relic for APM, they're now starting to see the power of the platform and what we can do for them by operationalizing those use cases around agility and effectiveness, to drive down cost and make people more useful and purposeful with their time, so they can create better software.

>>Yeah, I think that's something that people are realizing a lot more lately than they were previously. I think there was a lot of TCO analysis that was done on a replacement-of-FTE basis, but many organizations have realized that, well, actually, that doesn't mean those people go away; they get re-tasked to do new things. So you start with efficiency, and it turns out actually to be about business agility, about doing new things with the same people you have, who now don't have to do some of these more manual and fairly boring tasks.

>>Yeah, Justin, if this Cube interview thing doesn't work out for you, we're hiring some value engineers right now. It sounds like you've got the talk track down perfectly, because that's exactly what we're seeing in the marketplace. So I agree.

>>So give us some examples, if you can, of maybe one or two of the things you've seen where customers have used New Relic to strip out some of that make-work, the things they don't really need to be doing, and then turned that into new agility and created something new, something more individual. Have you got an example you could share with us?

>>You know, it's funny, I just finished doing our global customer advisory boards, which is, you know, rough and tough, about 100 customers around the world. We break it into the three theaters, and we were just talking with a particular customer.
I don't want to give their name, but we broke the sessions into two different buckets, because I think every customer buys products like New Relic for two reasons: one is to help them save money, the other is to help them make money. So we actually split the sessions into those two areas, and I think you're asking, how do we help them save money? This particular company, in the media industry, talked at great length about the fact that they are a massive news conglomerate with a whole bunch of individual business units. They were decentralized and non-standardized as it related to understanding how their software was getting created, and how they were defining and determining mean-time-to-recover performance metrics. All these things were happening around them in a highly complex environment, just like we see with a lot of our customers, right? The complexity of environments today is really driving the need for observability. So one of the things we did with them is we came in and applied the same type of approach that we just discussed. We did a maturity assessment for them, and we found a variety of areas where they were very immature in using capabilities that existed within the platform. So we were able to light up a variety of things around insights, and we were able to take more data in from a logging perspective. And again, I'm probably getting a little bit into the weeds for this particular session, but needless to say, we looked at the full gamut of metrics, events, logs and traces, which wasn't really being done in an observability-strategy manner, and deployed that across the entire enterprise. So we created a standard platform for all the data in this particular environment, across 14 different business units, and as a byproduct they were able to do a variety of things. One, the uptime for a lot of their customer-facing media applications improved greatly.
We actually started to pivot from driving down cost to showing how they could, quote-unquote, make money, because the digital experience they were creating for a lot of their customers reduced the time-to-glass, if you will, dramatically: from clicking the button to how quickly they could see the next page, or whatever online app they were looking to get to. So as a byproduct of this, they were able to repurpose, to the point you made, Justin, dozens of resources off of what was traditionally maintenance mode and fighting fires in a reactive capacity, towards building new code and driving new innovation in the marketplace. And they gave a couple of examples of new applications that they were able to bring to market without actually having to hire any net-new resources. So again, I don't want to give away the name of the company, and maybe it was a little too high level, but it actually plays perfectly into exactly what you're describing.

>>That is a good example. It's always nice to have a specific, concrete customer doing one of these kinds of things that you describe in generic terms: no, this is being applied very specifically to one customer. So we're seeing those sorts of things more and more.

>>Yeah, and I thought about, in advance of this session, what is a really good example of what's happening in the world around us today? And I thought of a particular company that we just recently worked with, which is Chegg. I don't know if you're familiar with Chegg, if you've heard of them, but they're an education technology company based in California, and they do digital and physical textbook rentals, online tutoring and online customer services.
So, Justin, if you're like me or the rest of the world and you have kids who are learning at home right now, think about the amount of pressure and strain that's now being put on this poor company, Chegg, to keep their platform operational 24/7, so that students can learn at pace and keep up, right? And it's an unbelievable success story for us, and one that I love, because it touches me personally: I have three kids all doing online learning in a variety of different manners right now. And, you know, we talked about it earlier, the complexity of some of the environments today. This is a company that you would never guess, but they run 500 microservices in a highly complex technical architecture, right? So we had to come in and help these folks, and we were able to reduce their mean time to recover, because they were having a lot of issues with their ability to provide a seamless performance experience; you can imagine the volume of folks hitting them these days. We reduced that mean time to recover by 5x. So it's just another real-world example where we were able to actually reduce the time to recover, to provide a better experience. And whether or not you want to call that saving money or making money, what I know for sure is it's giving an incredible experience, so that the next generation of great minds are focused on learning instead of waiting to learn, right? So, very cool.

>>That is very cool. And yes, I have gone through the whole teaching-kids-at-home thing, which was disruptive, not necessarily in a good way, but we all adapted and learned how to do it in a new way, and it was a lot easier towards the end than it was at the beginning.

>>I'd say we're still getting there at the Snyder household.
Justin, we're still getting there.

>>Well, practice makes perfect. So, for organizations who might be looking at Chegg and thinking that sounds like a bit of a success story, and I want to learn more about how New Relic might be able to help me: how should they start?

>>Well, there are a lot of ways they can start. One of the most exciting things about our launch in July was that we have a new free tier. So for anybody who's interested in understanding the power of observability, you can go right to our website, sign up for free, and start to play with New Relic One. I think once you start playing, we're going to find the same thing that happens to most of the folks who do that: they're going to play more and more and more, and they're going to start to really embrace the power. And there's an incredible New Relic University that has fantastic training online. So as you start to dabble in that free tier and start to see what the power and the potential is, you'll probably sign up for some classes, and next thing you know, you're off and running. So that is one of the easiest ways to get exposed to it. Certainly check us out at our website, and you can find out all about that free tier and what observability could potentially mean to you or your business.

>>And as part of the AWS re:Invent experience, are they able to engage with you in some way?

>>They can definitely come by our booth virtually, check us out, and see what we have to say. We'd love to talk to them, and we'd be happy to talk about all the powerful things we're doing with AWS in the marketplace to help meet you wherever you are in your cloud journey, whether it's pre-migration, during migration, post-migration or even optimization. We've got some incredible statistics on how we can help you maximize and leverage your investment in AWS, and we're really excited to be a strategic partner with them. And, you know, it's funny.
It's, uh, amazing for me to see how observability and this platform can really touch every single facet of that cloud migration journey. And, you know, I was thinking originally, as I got exposed to this, that it would be really useful for identity and entity relationship management at the pre-migration phase, and then possibly at the post-migration phase as you try to baseline and measure results. But what I've come to learn, through our own process of moving our own business to the AWS cloud, is that there's tremendous value everywhere along that journey. That's incredibly exciting. So not only are we a great partner, but I'm excited that we at New Relic will be what I call first and best customer of AWS ourselves, as we make our own journey to the cloud.

>>Fantastic. And I encourage any customers who might be interested in New Relic to definitely go and check you out as part of the show. Thank you, Jay Snyder from New Relic. You've been watching the Cube Virtual and our coverage of AWS re:Invent 2020. Make sure that you check out all the rest of the Cube's coverage of AWS re:Invent on your desktop, laptop or phone, wherever you are. I've been your host, Justin Warren, and I look forward to seeing you again soon.

Published Date : Dec 2 2020



Ruha Benjamin Transcript


 

>>Thank you. Thank you so much for having me. I'm thrilled to be in conversation with you today. And I thought I would just kick things off with some opening reflections on this really important session theme, and then we can jump into discussion. So I'd like us to, as a starting point, um, wrestle with these buzz words, empowerment and inclusion so that we can, um, have them be more than kind of big platitudes and really have them reflected in our workplace cultures and the things that we design and the technologies that we put out into the world. And so to do that, I think we have to move beyond techno determinism and I'll explain what that means in just a minute. And techno determinism comes in two forms. The first on your left is the idea that technology automate. Um, all of these emerging trends are going to harm us are going to necessarily, um, harm humanity. >>They're going to take all the jobs they're going to remove human agency. This is what we might call the techno dystopian version of the story. And this is what Hollywood loves to sell us in the form of movies like the matrix or Terminator. The other version on your right is the techno utopian story that technologies automation, the robots, as a shorthand are going to save humanity. They're going to make everything more efficient, more equitable. And in this case, on the surface, they seem like opposing narratives, right? They're telling us different stories. At least they have different endpoints, but when you pull back the screen and look a little bit more closely, you see that they share an underlying logic, that technology is in the driver's seat and that human beings, that social society can just respond to what's happening. But we don't, I really have a say in what technologies are designed. 
>>And so to move beyond techno determinism, the notion that technology is in the driver's seat, we have to put the human agents and agencies back into the story protagonists and think carefully about what the human desires, worldviews values assumptions are that animate the production of technology. We have to put the humans behind the screen back into view. And so that's a very first step in when we do that. We see as was already mentioned that it's a very homogenous group right now in terms of who gets the power and the resources to produce the digital and physical infrastructure that everyone else has to live with. And so, as a first step, we need to think about how to, to create more participation of those who are working behind the scenes to design technology. Now, to dig a little more deeper into this, I want to offer a kind of low tech example before we get to the more high tech ones. >>So what you see in front of you here is a simple park bench public it's located in Berkeley, California, which is where I went to graduate school. And on this one particular visit, I was living in Boston. And so I was back in California, it was February, it was freezing where I was coming from. And so I wanted to take a few minutes in between meetings to just lay out in the sun and soak in some vitamin D. And I quickly realized actually I couldn't lay down on the bench because of the way it had been designed with these arm rests at intermittent intervals. And so here I thought, okay, th th the armrests have a functional reason why they're there. I mean, you could literally rest your elbows there, or, um, you know, it can create a little bit of privacy of someone sitting there that you don't know. >>Um, when I was nine months pregnant, it could help me get up and down or for the elderly the same thing. So it has a lot of functional reasons, but I also thought about the fact that it prevents people who are, are homeless from sleeping on the bench. 
And this is the Bay area that we're talking about, where in fact, the tech boom has gone hand in hand with a housing crisis. Those things have grown in tandem. So innovation has grown with inequity because we have, I haven't thought carefully about how to address the social context in which technology grows and blossoms. And so I thought, okay, this crisis is growing in this area. And so perhaps this is a deliberate attempt to make sure that people don't sleep on the benches by the way that they're designed and where the, where they're implemented. And so this is what we might call structural inequity, by the way something is designed. >>It has certain yeah. Affects that exclude or harm different people. And so it may not necessarily be the intent, but that's the effect. And I did a little digging and I found, in fact, it's a global phenomenon, this thing that architect next call, hostile architecture around single occupancy, benches and Helsinki. So only one booty at a time, no Nolan down there. I've found caged benches in France. Yeah. And in this particular town, what's interesting here is that the mayor put these benches out in this little shopping Plaza and within 24 hours, the people in the town rally together and have them removed. So we see here that just because we, we have a discriminatory design in our public space, doesn't mean we have to live with it. We can actually work together to ensure that our public space reflects our better values. But I think my favorite example of all is the metered bench. >>And then this case, this bench is designed with spikes in them and to get the spikes to retreat into the bench, you have to feed the meter. You have to put some coins in, and I think it buys you about 15, 20 minutes, then the spikes come back up. 
And so you will be happy to know that in this case, uh, this was designed by a German artist to get people to think critically about issues of design, not the design of physical space, but the design of all kinds of things, public policies. And so we can think about how our public life in general is metered, that it serves those that can pay the price and others are excluded or harmed. Whether we're talking about education or healthcare. And the meter bench also presents something interesting for those of us who care about technology, it creates a technical fix for a social problem. >>In fact, it started out as art, but some municipalities in different parts of the world have actually adopted this in their public spaces, in their parks in order to deter so-called loiters from using that space. And so by a technical fix, we mean something that creates a short-term effect, right? It gets people who may want to sleep on it out of sight. They're unable to use it, but it doesn't address the underlying problems that create that need to sleep outside of the first place. And so, in addition to techno determinism, we have to think critically about technical fixes, that don't address the underlying issues that the tech tech technology is meant to solve. And so this is part of a broader issue of discriminatory design, and we can apply the bench metaphor to all kinds of things that we work with, or that we create. >>And the question we really have to continuously ask ourselves is what values are we building in to the physical and digital infrastructures around us? What are the spikes that we may unwittingly put into place? Or perhaps we didn't create the spikes. Perhaps we started a new job or a new position, and someone hands us something, this is the way things have always been done. So we inherit the spiked bench. What is our responsibility? 
When we notice that it's creating these kinds of harms or exclusions or technical fixes that are bypassing the underlying problem, what is our responsibility? All of this came to a head in the context of financial technologies. I don't know how many of you remember these high profile cases of tech insiders and CEOs who applied for apples, >>The Apple card. And in one case, a husband and wife applied, and the husband, the husband received a much higher limit, almost 20 times the limit as his, >>His wife, even though they shared bank accounts, they lived in common law state. Yeah. >>And so the question there was not only the fact that >>The husband was receiving a much better rate and a high and a better >>The interest rate and the limit, but also that there was no mechanism for the individuals involved to dispute what was happening. They didn't even know how, what the factors were that they were being judged that was creating this form of discrimination. So >>In terms of financial technologies, it's not simply the outcome, that's the issue, or that can be discriminatory, >>But the process that black box is all of the decision-making that makes it so that consumers and the general public have no way to question it, no way to understand how they're being judged adversely. And so it's the process, not only the product that we have to care a lot about. And so the case of the Apple card is part of a much broader phenomenon >>Of, um, races >>And sexist robots. This is how the headlines framed it a few years ago. And I was so interested in this framing because there was a first wave of stories that seemed to be shocked at the prospect, that technology is not neutral. Then there was a second wave of stories that seemed less surprised. Well, of course, technology inherits its creators biases. And now I think we've entered a phase of attempts to override and address the default settings of so-called racist and sexist robots for better or worse than here. 
Here, robots is just a kind of shorthand for the way that people are talking about automation and emerging technologies more broadly. And so, as I was encountering these headlines, I was thinking about how these are not problems simply brought on by machine learning or AI. They're not all brand new. And so I wanted to contribute to the conversation a kind of larger context and a longer history for us to think carefully about the social dimensions of technology. And so I developed a concept called the new Jim code, >>which plays on the phrase >>Jim Crow, which is the way that the regime of white supremacy and inequality in this country was defined in a previous era. And I wanted us to think about how that legacy continues to haunt the present, how we might be coding bias into emerging technologies, and the danger being that we imagine those technologies to be objective. And so this gives us a language to be able to name this phenomenon so that we can address it and change it. Under this larger umbrella of the new Jim code are four distinct ways that this phenomenon takes shape, starting from the more obvious, engineered inequity. Those are the kinds of tech-mediated inequalities that we can generally see coming. They're kind of obvious, but then we go down the line and we see it becomes harder to detect; it's happening in our own backyards, it's happening around us, and we don't really have a view into the black box. And so it becomes more insidious. And so in the remaining couple of minutes, I'm just going to give you a taste of the last three of these, and then move towards a conclusion. Then we can start chatting. So when it comes to default discrimination, this is the way that social inequalities >>become embedded in emerging technologies because the designers of these technologies aren't thinking carefully about history and sociology.
A great example of this came to the headlines last fall, when it was found that a widely used healthcare algorithm, affecting millions of patients, was discriminating against black patients. And so what's especially important to note here is that this healthcare algorithm does not explicitly take note of race. That is to say, it is race neutral. But by using cost to predict healthcare needs, this digital triaging system unwittingly reproduces health disparities, because on average, black people have incurred fewer costs for a variety of reasons, including structural inequality. So in my review of this study by Obermeyer and colleagues, I want to draw attention to how indifference to social reality can be even more harmful than malicious intent. It doesn't have to be the intent of the designers to create this effect. >>And so we have to look carefully at how indifference is operating and how race neutrality can be a deadly force. When we move on to the next iteration of the new Jim code, coded exposure, there's a tension, because on the one hand, you see this image where the darker-skinned individual is not being detected by the facial recognition system, right, on the camera, on the computer. And so coded exposure names this tension between wanting to be seen and included and recognized, whether it's in facial recognition or in recommendation systems or in tailored advertising. But the other side of that tension is when you're over->>included, when you're surveilled, when you're >>too centered. And so we should note that it's not simply in being left out that's the problem, but in being included in harmful ways. And so I want us to think carefully about the rhetoric of inclusion and understand that inclusion is not simply an endpoint, it's a process, and it is possible to include people in harmful processes. And so we want to ensure that the process is not harmful for it to really be effective.
The last iteration of the new Jim code, the most insidious, let's say, is technologies that are touted as helping us address bias. So they're not simply including people, but they're actively working to address bias. And so in this case, there are a lot of different companies that are using AI to create hiring software and hiring algorithms, including this one, HireVue. >>And the idea is that there's a lot that AI can keep track of that human beings might miss, and so the software can make data-driven talent decisions. After all, the problem of employment discrimination is widespread and well documented; so the logic goes, wouldn't this be even more reason to outsource decisions to AI? Well, let's think about this carefully. And this is the idea of techno benevolence: trying to do good without fully reckoning with how technology can reproduce inequalities. So some colleagues of mine at Princeton tested a natural language processing algorithm, looking to see whether it exhibited the same tendencies that psychologists have documented among humans. And what they found was that, in fact, the algorithm associated black names with negative words and white names with pleasant-sounding words. And so this particular audit builds on a classic study done around 2003, before all of the emerging technologies were on the scene, where two University of Chicago economists sent out thousands of resumes to employers in Boston and Chicago. >>And all they did was change the names on those resumes. All of the other work history and education were the same. And then they waited to see who would get called back, and the fictional applicants with white-sounding names received 50% more callbacks than the black applicants. So if you're presented with that study, you might be tempted to say, well, let's let technology handle it, since humans are so biased.
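The kind of audit described here can be sketched in a few lines. This is a toy illustration in the spirit of such word-embedding association tests, not the Princeton team's actual code; the three-dimensional vectors and the names are made up for demonstration, whereas a real audit would load pretrained embeddings such as GloVe or word2vec.

```python
# Toy word-embedding audit: measure whether names sit closer to pleasant
# or unpleasant attribute words. All vectors below are hypothetical.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = sum(cosine(word_vec, p) for p in pleasant) / len(pleasant)
    neg = sum(cosine(word_vec, n) for n in unpleasant) / len(unpleasant)
    return pos - neg

# Hypothetical embeddings standing in for a trained model's vectors.
emb = {
    "emily":    (0.9, 0.2, 0.1),
    "lakisha":  (0.1, 0.9, 0.2),
    "pleasant": (0.8, 0.1, 0.3),
    "awful":    (0.1, 0.8, 0.1),
}
score_a = association(emb["emily"], [emb["pleasant"]], [emb["awful"]])
score_b = association(emb["lakisha"], [emb["pleasant"]], [emb["awful"]])
# A systematic gap between the two scores is the bias signal the audit measures.
```

With real embeddings trained on web text, the same computation is what surfaced the associations described above.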
But my colleagues here in computer science found that this natural language processing algorithm actually reproduced those same associations with black and white names. So too with gender-coded words and names, as Amazon learned a couple years ago when its own hiring algorithm was found discriminating against women. Nevertheless, it should be clear by now why technical fixes that claim to bypass human biases are so desirable. If only there was a way to slay centuries of racist and sexist demons with a social justice bot. Beyond desirable, more like magical. Magical for employers, perhaps, looking to streamline the grueling work of recruitment, but a curse for many job seekers, as this headline puts it: >>Your next interview could be with a racist bot. Bringing us back to that problem space we started with just a few minutes ago. So it's worth noting that job seekers are already developing ways to subvert the system by trading answers to employers' tests and creating fake applications as informal audits of their own. In terms of a more collective response, there's a federation of European trade unions called UNI Global that's developed a charter of digital rights for workers that touches on automated and AI-based decisions to be included in bargaining agreements. And so this is one of many efforts to change the ecosystem, to change the context in which technology is being deployed, to ensure more protections and more rights for everyday people. In the U.S., there's the algorithmic accountability bill that's been presented, and it's one effort to create some more protections around this ubiquity of automated decisions. >>And I think we should all be calling for more public accountability when it comes to the widespread use of automated decisions. Another development that keeps me somewhat hopeful is that tech workers themselves are increasingly speaking out against the most egregious forms of corporate collusion with state-sanctioned racism.
And to get a taste of that, I encourage you to check out the hashtag #TechWontBuildIt, among other statements that they've made at Google and Microsoft, walking out and petitioning their companies. One group at Microsoft wrote: as the people who build the technologies that Microsoft profits from, we refuse to be complicit. In terms of education, which is my own ground zero, it's a place where we can grow a more historically and socially literate approach to tech design. And this is just one resource that you all can download, developed by some wonderful colleagues at the Data & Society Research Institute in New York. >>And the goal of this intervention is threefold: to develop an intellectual understanding of how structural racism operates in algorithms, social media platforms, and technologies not yet developed; an emotional intelligence concerning how to resolve racially stressful situations within organizations; and a commitment to take action to reduce harms to communities of color. And so as a final way to think about why these things are so important, I want to offer a couple of last provocations. The first is to think anew about what actually is deep learning when it comes to computation. I want to suggest that computational depth, when it comes to AI systems, without historical or social depth, is actually superficial learning. And so we need to have a much more interdisciplinary, integrated approach to knowledge production and to observing and understanding patterns that doesn't simply rely on one discipline in order to map reality. >>The last provocation is this: if, as I suggested at the start, inequity is woven into the very fabric of our society, built into the design of our policies, our physical infrastructures, and now even our digital infrastructures, that means that each twist, coil, and code is a chance for us to weave new patterns, practices, and politics.
The vastness of the problems that we're up against will be their undoing once we accept that we are pattern makers. So what does that look like? It looks like refusing colorblindness as an antidote to tech-mediated discrimination. Rather than refusing to see difference, let's take stock of how the training data and the models that we're creating have these built-in decisions from the past that have often been discriminatory. It means actually thinking about the underside of inclusion, which can be targeting, and asking how we create a more participatory rather than predatory form of inclusion. And ultimately, it also means owning our own power in these systems so that we can change the patterns of the past. If we inherit a spiked bench, that doesn't mean that we need to continue using it. We can work together to design more just and equitable technologies. So with that, I look forward to our conversation.

Published Date : Nov 25 2020



Corey Quinn, The Duckbill Group | AWS Summit Online 2020


 

>>From the Cube Studios in Palo Alto and Boston, connecting with thought leaders all around the world. >>
Now, it's just that sort of dread that never goes away because they won't deliver the keynote If you'll pardon me if using a metal >>Yeah, it's been really interesting to watch, you know, for you, of course. I mean, Amazon is, you know, a big player in the industry before this Amazon one that gets talked about a lot in the news. You know Amazon overall, You know, when this first pitch they announced they were hiring 1000 then they went through that faster than anyone could believe. You know, you think about having to hire a driver remotely. You know, my joke was you know, Alexa the screen. Everybody and I are everyone. But then they hired another 75,000. And it's not just the warehouse and the whole food people, because I've seen a number of people that I know getting hired by AWS do. So you know, you talk. It's all about the people that you know, the number one in the endemic people. How's Amazon doing? What feedback are you getting for? How they're doing? >>Well, I don't have too many internal sources that confirm or deny things of strategic import because it turns out that I'm generally not for those things. Who knew something I'm picking up on across the industry has been that if you're building a hyper scale cloud provider, you're not looking to next border. The investments you make today are going to be realized 3 to 5 years No one is currently predicting a dramatic economic impact community felt for a decade, based on the current question. So, yeah, AWS is still investing in people, which is always going to be the limiting constraint there still launching regions we have to launch within a month, and we're still seeing a definite acceleration of anything of the pace of innovation as a W was like. Now my perspective, that's both reassuring that some things never change. And, of course, the usual level of depression where oh, good, there's still more services to learn what they do. 
Learn how the names work, find ways to poke holes in their various presentational aspects, and, of course, try to keep the content relatively fresh. There's only so many times you can make the same joke for people. >>Yeah, absolutely. And of course, you bring up a really good point. You know, Amazon, they have a long strategic plan there. If they're building new data centers, they're building the power infrastructure for these things; it's not something that they're going to change on a dime. They plan these things out far in advance, and AWS does, of course, have a global scope. Um, you know, I really wonder, you know, from an operational standpoint, are there any pressures on them? You wrote an article, you know, relatively recently talking about one of the other public cloud providers that has, by our customers' accounts, even had performance issues. AWS seems to be running through this without much trouble. You know, I've had phone systems that have problems; you know, everybody, when they're working from home, strains things internally, even if you've got a gig of bandwidth, when the entire neighborhood has children on, you know, their classrooms online for video. There are pressures there. So, from what I've seen, you know, AWS operationally is running well and, you know, keeping things all up and running. Am I missing anything? >>No. I mean, Andy Jassy is fond of saying there's no compression algorithm for experience; as I'm fond of saying, that's why they charge per gigabyte. But what that means is that they've gone through a lot of these growing pains, and there are instructional stories: the 2010-to-2012 EBS outages causing cascading failures as everyone saturates links rolling from region to region or availability zone to availability zone. They understand what those workloads look like and what those failure patterns are, and they've put an incredible amount of engineering into solving these problems.
I think that anyone who looks at this and doesn't see that is in an unfortunate place, because AWS has approached a utility level of reliability. You don't wonder every time you turn the faucet on whether water is going to come out, and we're now at a point of seeing that with AWS resources. Now, there are still going to be recurring issues, and there have been, basically since this thing started: a particular instance size and family in a particular availability zone of a particular region may be constrained for a period of weeks, and that is something that we've seen across the board. But that has less to do with the fact that they didn't see this stuff coming and plan appropriately, and more to do with the fact that there's a lot of different options and customer demand is never going to be an exact thing. We are seeing some customers dramatically turning capacity off and others sporadically scaling capacity up. It comes down to what the nature of this pandemic's effect is on them. >>Yeah, well, this absolutely does. But, you know, some of those promises of the cloud: that I should be able to spin things down, some things I should be able to turn off, and if I have to, you know, shut down my business, I should be able to do that. Um, I'm curious what you've heard on changing demand out there. You know, on the one hand, you know, customers, they're pre-buying, they're getting reserved instances, they're making commitments so that they can, you know, optimize every dollar. But when something like this comes up and they need a major change, you know, are they stuck with a lot of capacity that they didn't necessarily want? >>Sometimes. It comes down to a lot of interesting variables. For me, the more interesting expression of this is when companies see demand falling off a cliff, as users are no longer using what they built out, but their infrastructure spend doesn't change. That tells me that it's not a particularly elastic infrastructure.
And in fact, when people are building elasticity into their applications, they always interpret that as scaling up rather than scaling down, because the failure mode of not scaling up fast enough is that you're dropping customer requests on the floor. The failure mode of not scaling down fast enough just means you're spending money. So when you see user demand for an environment cut by 80%, but the infrastructure cost remains constant, or the infrastructure usage isn't descending, that's a more interesting problem. And you're not going to have a lot of success asking any cloud provider for an adjustment when, well, okay, you're suddenly not seeing the demand, but your spend remains the same. What is this based upon? You need to actually demonstrate a shortfall first: wow, you know, we normally spend a million dollars a month; well, now we're spending 200 grand a month, let's talk about that. And once you can do that, there are paths forward. I have not yet heard stories about, frankly, any of the big three cloud providers hanging customers out to dry in the cloud. I have heard whispers about, for example, G Suite, where they're not willing to budge. And this feels like a very dark way to go, but I'm going for it: where we just laid off a third of our staff; do we get a break on the annual licensing for those seats on G Suite? And the answer is no. That feels like it stings and is more than a little capricious. >>Yeah, no, absolutely. You know, one of the things that's the underbelly of SaaS is, you know, oh, it should be elastic like cloud, but oftentimes you're locked into a one-year contract, and if all of a sudden you find yourself needing only half of it and you call them up, you know, are they going to give you that break? So, you know, on pricing, Corey, you know better >>than most. So, all right, let me spoil it for you: every provider is going to give you a break on this, because this is a temporary aberration.
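The scale-up/scale-down asymmetry Corey describes can be sketched as a toy control loop. The thresholds and step sizes below are illustrative, not taken from any real autoscaler: doubling under load is cheap insurance against dropped requests, while shedding capacity one instance at a time is why spend lags demand.

```python
# Toy autoscaler illustrating the asymmetry: scale up aggressively
# (dropping requests is the expensive failure), scale down conservatively
# (being slow to shrink only costs money). All numbers are hypothetical.
def next_capacity(current, utilization, scale_up_at=0.7, scale_down_at=0.3):
    if utilization > scale_up_at:
        return current * 2          # double immediately under load
    if utilization < scale_down_at:
        return max(1, current - 1)  # shed one instance at a time
    return current

# Demand falls off a cliff: utilization pinned at 10%.
cap = 16
for _ in range(5):
    cap = next_capacity(cap, 0.10)
print(cap)  # five ticks later the fleet has only shrunk from 16 to 11
```

Five intervals after demand collapses, the fleet is still paying for most of its peak capacity, which is exactly the "demand fell but the bill didn't" pattern described above.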
As far as the way the world works, we're not going to start seeing global pandemics every year, I hope. And when this crisis passes, people are going to remember how their vendors treated them. And if it's 'well, we held your feet to the fire and made you live up to that contract,' that sticks with me. And it doesn't take too many stories like that, or people pulling lawsuits out of a drawer to demonstrate that a company beat the crap out of them, to say: huh, maybe that's not where I want to park my sizable cloud investment. >>Yeah. So, Corey, how about, you know, are there certain areas where, I've heard, you know, certain companies that maybe were slow-rolling cloud all of a sudden realized, when they're working from home and can't plug in and adjust their servers, that they're saying, oh jeez, maybe I need to hop on this? Then there's other services: you'd think VPN usage must be through the roof. WorkSpaces, when first announced, you know, many years ago, was a bit of a slow roll, but it's been a growth, ah, area for Amazon for the last couple of years. Are you hearing anything specific on new services or increased growth in certain services? >>There are two patterns we're seeing overall. One is the traditional company you just described, where they built out a VPN that assumes some people will occasionally be working from home, at a 5% rate, versus the entire workforce 40 hours a week; that model is straining everywhere. Whereas if you go back over the last 10 years or so and look at a bunch of small businesses that have started up, or startups that have launched, where everything they're using is a SaaS service or a cloud service, then there is no VPN. I don't have a VPN. For example, the fact that I have a wireless network here in my house and I'm at this location: this IP address isn't whitelisted anywhere. The only benefit that this network has over others is that there's a printer plugged in here, and that's it.
The identity model is: I authenticate to these services with the credentials of a username and password, or they send an email, I click the link, and that winds up handling the authentication, and there is no bottleneck in a single direction. I feel like this is going to be the death knell for a lot of VPN-centric environments. >>All right, Corey, one of the other things about AWS is they don't stop. And what I mean is, you know, you talked about them always being online, but, you know, every week there's a new announcement. It keeps feeding your newsletter, feeding your feed, you know, everything going on there. So, number one, how is the announcement train from AWS going? And anything specific? You know, John Furrier was, you know, interested in, you know, Amazon Apollo, something that was released relatively recently.
And now suddenly they're releasing something into a time when people don't I care about it enough to invest the effort that, yeah, you bring up a really good >>point. Corey, you know, there's certain things. If I was working on a project that was going to help me be more agile and be more flexible, I needed that yesterday. But I still need that today. Um, some other projects, you know, might take years to roll out a eyes. Technology that has been growing bring over the last couple of years were I O T solutions are a little bit more nascent. So is what you're thinking. It's a little bit more Stick to your knitting and the solutions and the products that you're leveraging today. And some of the, you know, more visionary and futuristic ones might be a little bit of a pause button for the next couple months. >>Exactly if you're looking at exploring something that isn't going to pay dividends for 18 months. Right now, the biggest question everyone has is what is the long term repercussion of this going to be? What is the year? What we're gonna look like in three years? Because that's where a lot of these planning horizons are stretching to. And the answer is, Look, when I wind up doing a pre recorded video or podcast where I talk about this stuff and it's not going to release for four days, I'm worried about saying something that was going to be eclipsed by the new site. I worry on my podcast reporting, for example, that I'm going to wind up saying something about that dynamic, and by the time it airs in two months, it's Oh, look at this guy. He's talking about the pandemic. He doesn't even mention the meteor, and that's the place right now where people are operating from, it becomes much more challenging to be able to adequately and intelligently address the long term. When you don't know what it's going to look like, >>Yeah, absolutely. 
For our viewers, when you hear my segment with Corey and you wonder why we didn't talk about something, it's because we missed that one-week window that we're in right now, when we're talking about murder hornets: not when we recorded it, not when we released it. Really good point, Corey. You know, data is one of the most important things. You've talked a lot about data portability, you know, all the costs involved in cloud. Amazon's trying to help people, you know, with, you know, bringing data together. You know, I said in one of the interviews with Andy Jassy a couple years ago that while customers were really the flywheel for AWS for a number of years, I think it is data that is that next flywheel. So I'm curious about your thoughts as, you know, enterprises think about their data, and AWS's role there. >>Incorrectly, if you want me to be blunt. There's an awful lot of movement, especially as we look at AI and machine learning, to gather all of the data. I've been on cost optimization projects where: wow, that's an awful lot of data sitting there in that S3 bucket; do you need it all? And I'm assured that yes, all of the sales transaction logs from 2012 are absolutely going to be a treasure trove of data just as soon as they figure out what to do with it, and they're spending >>piles of money on it. But >>it's worse than that, because it's not just that you have this data that's costing you money; that's almost a byproduct. There's risk to an awful lot of forms of data, with regulation that continues to expand. Data can become a toxic asset in many respects, but there's this belief of 'never throw anything away' that's not really ideal. Part of the value of a sane data management strategy is making sure that you can remove all of the stuff that you don't absolutely need. Right now, with AI and ML being where they are, there's this movement to keep everything, because we don't know what that's going to be useful for.
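One concrete form that "remove what you don't absolutely need" can take is an S3 lifecycle rule that tiers aging data to cheaper storage and eventually expires it. The prefix, bucket name, and retention windows below are hypothetical, a sketch rather than a recommendation for any particular dataset:

```python
# A sketch of a data-retention guardrail: an S3 lifecycle rule that moves
# aging logs to cold storage and then deletes them. All names and windows
# here are made up for illustration.
lifecycle_rule = {
    "ID": "expire-stale-transaction-logs",
    "Filter": {"Prefix": "sales-logs/"},
    "Status": "Enabled",
    "Transitions": [
        {"Days": 90, "StorageClass": "GLACIER"},  # cold after ~3 months
    ],
    "Expiration": {"Days": 2555},  # delete after ~7 years
}

# Applied with boto3, this would look roughly like:
#   s3 = boto3.client("s3")
#   s3.put_bucket_lifecycle_configuration(
#       Bucket="example-data-lake",
#       LifecycleConfiguration={"Rules": [lifecycle_rule]},
#   )
print(lifecycle_rule["ID"])
```

The point of codifying retention this way is that deletion becomes a reviewed, automatic policy instead of a decision nobody ever gets around to making.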
Down the road, it's a double-edged sword, and enterprises are at this point not looking at this through a lens of 'this thing could hurt me' so much as 'this thing could possibly benefit the business in the future.' >>All right, so Corey, I've really noticed over the last few months you've spent a bit more time talking publicly about some of the other clouds that aren't AWS, even though, you know, we are covering AWS Summit Online. Give us what you're hearing from Microsoft, Google, and others. Any strategies that are new, any, you know, customer movement that is worth noting? >>Sure. I think that we're seeing customers move in the way that they've always been moving. People made a bit of a kerfuffle about a blog post I put out with the extremely clickbait title of 'Zoom chose Oracle Cloud over AWS. Maybe you should, too.' And there were a few conclusions people drew, understandably, from that particular headline, which was, for example, the idea that AWS had lost a workload that was being moved from AWS to Oracle. Not true; it was net new. Zoom already has existing relationships with both Azure and AWS, by their own admission. But what I took that particular example to be, in my case, was an illustration of something that's been bugging me for a while. If you look at AWS data transfer pricing as publicly posted, which, again, no one at this scale is going to pay, it is over 10 times more expensive than Oracle's. And what that tells me is that I'm now sitting here in a position where I can make a good-faith recommendation to choose Oracle for cost reasons, which sounds nuts, but that's the world in which we live. It's a storytelling problem far more than it is a technical shortcoming. But that was interpreted to mean that Oracle's on the rise and AWS is in decline. Zoom is a very strong AWS customer and has made public commitments that they will remain so. Right now, this is what we're seeing across the board.
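The egress arithmetic behind that recommendation is back-of-the-envelope stuff. The per-gigabyte rates below are approximate public list prices circa 2020, and, as Corey notes, no one at this scale pays list, so treat the numbers as illustrative:

```python
# Rough comparison of data-transfer (egress) cost, using approximate
# 2020-era list prices: AWS ~$0.09/GB in the first pricing tiers versus
# Oracle Cloud ~$0.0085/GB. Real bills depend on tiers and negotiated
# discounts; the point is the order-of-magnitude gap.
aws_per_gb = 0.09
oracle_per_gb = 0.0085

monthly_egress_gb = 500_000  # 500 TB/month, a hypothetical video-heavy workload

aws_bill = monthly_egress_gb * aws_per_gb        # $45,000
oracle_bill = monthly_egress_gb * oracle_per_gb  # $4,250
ratio = aws_bill / oracle_bill                   # roughly 10x
print(round(ratio, 1))
```

For a bandwidth-dominated workload like video conferencing, that gap can swamp every other line item on the bill, which is why the recommendation "sounds nuts" but isn't.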
You see Zoom doing super well. They're not building out a whole lot of net new, either. What they're doing, what they're building, is just, it's desperately trying to stay up under crushing, unprecedented demand. That's where the value is coming from right now, cloud's elasticity. And they're not doing, you know, "we're going to go ahead and figure out if we can build a new continuous deploy process or something that makes on-call a little bit less brutal." That's not what anyone's focusing on. It's, "Wow, this boat is sinking. If we don't stay up, grab a bucket, start bailing." And that is what they're doing. The fact that they're working with every cloud provider, it shouldn't come as a surprise. >>Yeah, well, it's interesting. I'm thinking about Zoom, and one of the things that I've been watching them for, the last couple of weeks as everybody has, is, you know, the daily updates that are happening related to security. Um, you know, I think back, you know, six, seven years ago, Amazon had "this is our security model, we're not changing it for anyone." Now, you know, Amazon is much more flexible and nuanced. So there are >>still inviolate principles that Amazon will not and cannot shift. So, to be clear, they have different ways of interfacing with security and different ways of handling data classification. But there are rules that you know are not changing. It's not, well, surprise, now suddenly every Amazonian who works there can look through your private data. None of that is >>happening. I >>just want to be very clear on >>that. Yeah, no, you're absolutely right. It's more, security, you know, is getting more attention even than ever, and it was already one of the hot topics coming into 2020, before everything changed. Great. You know, I'm curious. You know, we're looking at a virtual event for AWS. Have you been to some of these? You know, are you getting burnt out from all of the online content? I'm sure everybody's getting tired of you. So are you getting tired of everyone else?
>>I don't accept that anyone ever gets tired of me. I'm a treasure and a delight. But as far as online events go, I think that people are getting an awful lot profoundly wrong about that. For example, I think that people focus on, well, I need to get the best video and the best microphone, and that's the thing that people are going to focus on, rather than, maybe I should come up with something that someone wants to listen to. People are also assuming that the same type of delivery and content that works super well on a stage for 45 minutes is not going to work when people can tab over to something else and stop paying attention. You've got to be more dynamic. You've got to be able to grab people's attention, and I think that people are missing the forest for the trees here. You're just trying to convert an existing format into something that will work online. In the immediate short term, everyone is super sympathetic. It's not going to last. People are going to get very tired of the same tired formats and tropes, and there's only so much content people are going to consume. You've got to stand out and you've got to make it compelling and interesting. I've been spending a lot of time trying to find ways to make that >>work. Yeah, I had a great conversation with John Troyer. He said, you know, we can learn something from some of the late night shows. Ah, you know, I think there's a new opportunity for you to say there's a house band, you know. You have a small child at home, there's your house band. You know, you can have a lot of fun with >>Oh, absolutely, especially during a tantrum. That's going to go super well. I'm just gonna watch one of her meltdowns about some various innocuous topic, and then I'm going to wind up having "toddler meltdown: the Amazon S3 remix," and I'm sure we could wind up tying it back to something that is hilarious in the world of cloud. But I'm trying to hold off a little bit longer before I start actively exploiting her for Internet points.
I mean, I'm going to absolutely do it. I just wanted her to get a little older. >>All right. Well, Corey, want to give you the final word on AWS, the online events happening. You know, give our audience what they should be looking at when it comes to their AWS estate. >>Cool. As usual, pay attention to what's coming out. It's always good to have a low-level awareness of what's coming out on stage. I don't feel you need to jump in and adopt any of it immediately. Focus on the things that matter to your business. Just because something new and shiny is announced on stage does not mean it's a fit for you. Doesn't mean it's not, but remain critical. I tend not to be one of the early adopters in production of things that have a potential to wind up causing challenges, and I'm not saying, oh, stay on the exact old stuff from 2010 and nothing newer, but there is a bit of a happy medium. Don't think that just because they released something that, A, you need to try it, or B, it's even for you. No AWS service is for everyone, but every AWS service is for someone. >>Alright. Well, Corey Quinn, always a pleasure to catch up with you. Thanks so much for joining us. >>Thank you. It was worth the suffering, the slings and arrows. Appreciate >>it. All right. Thank you for watching, everyone. Lots of coverage of theCUBE at the AWS Summit online. Check out thecube.net for all the offerings. And thank you for watching. >>Yeah, yeah, yeah.

Published Date : May 13 2020



A Deep Dive into the Vertica Management Console Enhancements and Roadmap


 

>> Jeff: Hello, everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is entitled "A Deep Dive into the Vertica Management Console Enhancements and Roadmap." I'm Jeff Healey of Vertica Marketing. I'll be your host for this breakout session. Joining me are Bhavik Gandhi and Natalia Stavisky from Vertica engineering. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait, just type your question or comment in the question box below the slides and click submit. There will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com and post your question there after the session. Our engineering team is planning to join the forums to keep the conversation going well after the event. Also, a reminder that you can maximize the screen by clicking the double arrow button in the lower right corner of the slides. And yes, this virtual session is being recorded and will be available to you on demand this week. We'll send you a notification as soon as it's ready. Now let's get started. Over to you, Bhavik. >> Bhavik: All right. So hello, and welcome, everybody, to this presentation of "Deep Dive into the Vertica Management Console Enhancements and Roadmap." Myself, Bhavik, and my team member, Natalia Stavisky, will go over a few useful announcements on Vertica Management Console, discussing a few real scenarios. All right. So today we will go forward with a brief introduction of the Management Console, then we will discuss the benefits of using Management Console by going over a couple of user scenarios: a query taking too long to run, and receiving email alerts from Management Console.
Then we will go over a few MC features for what we call Eon Mode databases, like provisioning and reviving the Eon Mode databases from MC, managing subclusters, and understanding the Depot. Then we will go over some of the future announcements on MC that we are planning. All right, so let's get started. All right. So, do you want to know how to provision a new Vertica cluster from MC? How to analyze and understand a database workload by monitoring the queries on the database? How do you balance the resource pools and use alerts and thresholds on MC? The Management Console is basically our answer, and we'll talk about its capabilities and new announcements in this presentation. So just to give a brief overview of the Management Console: who uses Management Console? It's generally used by IT administrators and DB admins. Management Console can be used to monitor both Eon Mode and Enterprise Mode databases. Why use Management Console? You can use Management Console for provisioning Vertica databases and clusters. You can manage the already existing Vertica databases and clusters you have, and you can use various tools on Management Console like query execution, Database Designer, Workload Analyzer, and set up alerts and thresholds to get notified about some of your activities on the MC. So let's go over a few benefits of using Management Console. Okay. So using Management Console, you can view and optimize resource pool usage. Management Console helps you to identify some critical conditions on your Vertica cluster. Additionally, you can set up various thresholds in MC and get alerted if those thresholds are triggered on the database. So now let's dig into the couple of scenarios. For the first scenario, we will discuss queries taking too long, and using Workload Analyzer to possibly help solve the problem.
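As an aside, the resource pool picture that MC renders graphically can also be pulled directly from Vertica's system tables in SQL. This is a hedged sketch, assuming a live cluster; the exact column set of this system table may vary by Vertica version:

```sql
-- Current usage vs. configured limits for every resource pool
-- (v_monitor.resource_pool_status is a standard Vertica system table).
SELECT pool_name,
       memory_inuse_kb,
       max_memory_size_kb,
       running_query_count
FROM   v_monitor.resource_pool_status
ORDER BY pool_name;
```

Watching these numbers over time is the raw form of the same signal MC's thresholds alert on.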
In the second scenario, we will go over an alert email that you received from your Management Console, analyzing the problem and taking the required actions to solve it. So let's go over the scenario where queries are taking too long to run. In this example, we have this one query that we are running using the query execution on MC. And for some reason we notice that it's taking about 14.8 seconds to execute this query, which is higher than the expected run time of the query. The query that we are running happens to be the query used by MC during extended monitoring. Notice the table name and the schema name: ds_requests_issued is in the schema used for extended monitoring. Now in 10.0 MC we have redesigned the Workload Analyzer and Recommendations feature to show the recommendations and allow you to execute those recommendations. In our example, we have taken the table name and filtered the tuning descriptions to see if there are any tuning recommendations related to this table. As we see over here, there are three tuning recommendations available for that table. So now in 10.0 MC, you can select those recommendations and then run them. So let's run the recommendations. All right. So once the recommendations have run successfully, you can go and see all the processed recommendations that you have run previously. Over here we see that the three recommendations that we had selected earlier have successfully processed. Now we take the same query and run it on the query execution on MC and, hey, it's running really faster, and we see that it takes only 0.3 seconds to run the query, which is about a 98% decrease in the original runtime of the query. So in this example we saw that using the Workload Analyzer tool on MC you can possibly triage and solve issues for queries which are taking too long to execute. All right.
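The recommendations MC surfaces here come from Vertica's Workload Analyzer, which can also be invoked directly in SQL. A hedged sketch, assuming a live cluster, using the ds_requests_issued table from the example (the recorded-recommendation columns may differ slightly by version):

```sql
-- Ask Workload Analyzer for tuning recommendations scoped to one table.
SELECT ANALYZE_WORKLOAD('ds_requests_issued');

-- Review recommendations the analyzer has recorded.
SELECT tuning_description, tuning_command
FROM   v_monitor.tuning_recommendations;
```

The `tuning_command` column holds the statement you would run to apply a recommendation, which is essentially what the 10.0 MC "run recommendations" button does for you.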
So now let's go over another user scenario where the DB admin received some alert email messages from MC and would like to understand and analyze the problem. To know more about what's going on on the database and proactively react to problems, DB admins using the Management Console can create a set of thresholds and get alerted about conditions on the database if the threshold values are reached, and then respond to the problem thereafter. Now as a DB admin, I see some email message notifications from MC, and upon checking the emails, I see that there are a couple of email alerts received from MC. So one of the messages that I received was for Query Resource Rejections greater than 5 for pool midpool7. And around the same time, I received another email from the MC for Failed Queries greater than 5, and in this case I see there are 80 failed queries. So now let's go on the MC and investigate the problem. Before going into a deep investigation of the failures, let's review the threshold settings on MC. As we see, we have set up a threshold under the database settings page for failed queries in the last 10 minutes greater than 5, and MC should send an email to the individual if the threshold is triggered. And also we have a threshold set up for query resource rejections in the last five minutes for midpool7, set to greater than 5. There are various other thresholds on this page that you can set if you desire to. Now let's go and triage those email alerts about the failed queries and resource rejections that we had received. To analyze the failed queries, let's take a look at the query statistics page on the database Overview page on MC. Let's take a look at the Resource Pools graph, and especially the failed queries for each resource pool. And over to the right, under the failed query section, I see that in the last 24 hours there are about 6,000 failed queries for midpool7.
And now I switch the view to see the statistics for each user, and on this page I see that user MaryLee, on the right hand side, has a high number of failed queries in the last 24 hours. To know more about the failed queries for this user, I can click on the graph for this user and get the reasons behind it. So let's click on the graph and see what's going on. Clicking on this graph takes me to the failed queries view on the Query Monitoring page for the database, on the Database Activities tab. And over here, I see there are a high number of failed queries for this user, MaryLee, with the reason stated as exceeding high limit. To drill down more and learn the reasons behind it, I can click on the plus icon on the left hand side for each failed query to get the failure reason for each node on the database. So let's do that. Clicking the plus icon, I see for the two nodes that are listed, over here it says there are insufficient resources like memory and file handles for midpool7. Now let's go and analyze the midpool7 configuration and the activity on it. To do so, I will go over to the Resource Pool Monitoring view and select midpool7. I see the resource allocations for this resource pool are very low. For example, the max memory is just 1MB and the max concurrency is set to 0. Hmm, that's a very odd configuration for this resource pool. Also, the bottom right graph for the resource rejections for midpool7 shows very high values for resource rejections. All right. So since we saw some odd configurations and odd resource allocations for midpool7, I would like to see when the settings were changed on the resource pool. To do this, I can review the audit logs that are available on the Management Console. So I can go onto the Vertica Audit Logs and see the logs for the resource pool. So I just (mumbles) for the logs, filtering the logs for midpool7.
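The same evidence MC shows in these views can be cross-checked from Vertica's system tables. A hedged sketch, assuming a live cluster; midpool7 is the pool from this scenario:

```sql
-- Why is the pool rejecting work, and on which nodes?
SELECT node_name, pool_name, reason, resource_type, rejection_count
FROM   v_monitor.resource_rejections
WHERE  pool_name = 'midpool7';

-- What is the pool's current (suspiciously low) configuration?
SELECT name, memorysize, maxmemorysize, maxconcurrency, plannedconcurrency
FROM   v_catalog.resource_pools
WHERE  name = 'midpool7';
```

Seeing `MEMORYSIZE '1M'` and `MAXCONCURRENCY 0` in the catalog confirms the odd settings the MC graphs point to.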
I see that on February 17th the memory and other attributes for midpool7 were modified. So now let's analyze the resource activity for midpool7 around the time when the configurations were changed. In our case we are using extended monitoring on MC for this database, so we can go back in time and see the statistics over a larger time range for midpool7. Viewing the activities for midpool7 around February 17th, around the time when these configurations were changed, we see a decrease in resource pool usage. Also, on the bottom right, we see the resource rejections for midpool7 show a linear increase after the configurations were changed. I can select a point on the graph to get more details about the resource rejections. Now, to analyze the effects of the modifications on midpool7, let's go over to the Query Monitoring page. All right, I will adjust the time range around the time when the configurations were changed for midpool7 and look at the completed queries for user MaryLee. And I see there are no completed queries for this user. Now I'm taking a look at the Failed Queries tab and adjusting the time range around the time when the configurations were changed. I can do so because we are using extended monitoring. So again, adjusting the time, I can see there are a high number of failed queries for this user: about 10,000 failed queries after the configurations were changed on this resource pool. So now let's go and modify the settings, since we know that after the configurations were changed this user was not able to run queries. You can change the resource pool settings using Management Console's database settings page, under the Resource Pools tab. Selecting midpool7, I see the same odd configurations for this resource pool that we saw earlier. So now let's go and modify the settings.
So I will increase the max memory and modify the settings for midpool7 so that it has adequate resources to run the queries for the user. Hit Apply on the top right to save the settings. Now let's do the validation after we changed the resource pool attributes. Let's go over to the same Query Monitoring page and see if the MaryLee user is able to run queries on midpool7. We see that now, after we changed the configuration for midpool7, the user can run queries successfully, and the count for Completed Queries has increased after we modified the settings for the midpool7 resource pool. And also, viewing the Resource Pool Monitoring page, we can validate that the new configuration for midpool7 has been applied, and the resource pool usage after the configuration change has increased. And also, on the bottom right graph, we can see that the resource rejections for midpool7 have decreased over time after we modified the settings. And since we are using extended monitoring for this database, I can see the trend in data for this resource pool, the before and after effects of modifying the settings. So initially, when the settings were first changed, there were high resource rejections, and after we modified the settings again, the resource rejections went down. Right. So now let's go work with provisioning and reviving an Eon Mode Vertica database cluster using the Management Console on different platforms. Management Console supports provisioning and reviving of Eon Mode databases in various cloud environments like AWS, the Google Cloud Platform, and Pure Storage. For Google, for provisioning the Vertica Management Console on Google Cloud Platform, you can use a launch template. Or in an AWS environment, you can use the CloudFormation templates available for different OSes. Once you have provisioned Vertica Management Console, you can provision the Vertica cluster and databases from MC itself.
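What MC does through the settings page corresponds to an ALTER RESOURCE POOL statement. A hedged sketch; the specific sizes below are illustrative, not the values used in the demo:

```sql
-- Give midpool7 a workable memory budget and concurrency again.
-- (Values are illustrative; tune them to your workload.)
ALTER RESOURCE POOL midpool7
    MEMORYSIZE '2G'
    MAXMEMORYSIZE '4G'
    MAXCONCURRENCY 10;
```

Either route, the MC page or the statement, ends up changing the same pool attributes recorded in the audit log.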
So to provision a Vertica cluster, you can select the Create new database button available on the homepage. This will open up the wizard to create a new database and cluster. In this example, we are using the Google Cloud Platform, so the wizard will ask me for various authentication parameters for the Google Cloud Platform. And if you're on AWS, it'll ask you for the authentication parameters for the AWS environment. Going forward in the wizard, it'll ask me to select the instance type I will select for the new Vertica cluster, and also to provide the communal location URL for my Eon Mode database and all the other preferences related to the new cluster. Once I have selected all the preferences for my new cluster, I can preview the settings, and I can hit Create if all looks okay. If I hit Create, MC will create new GCP instances, because we are in the GCP environment in this example. It will create a cluster on those instances, and it'll create a Vertica Eon Mode database on that cluster. And additionally, you can load test data onto it if you'd like to. Now let's go over and revive an existing Eon Mode database from the communal location. You can do the same using the Management Console by selecting the Revive Eon Mode database button on the homepage. This will again open up the wizard, for reviving the Eon Mode database. Again, in this example, since we are using the GCP Platform, it will ask me for the Google Cloud storage authentication attributes. And for reviving, it will ask me for the communal location, so I can enter the Google Storage bucket and my folder, and it will discover all the Eon Mode databases located under this folder. I can select the database that I would like to revive, and it will ask me for the other Vertica preferences for this database revive.
And once I enter all the preferences and review them, I can hit the Revive the database button on the wizard. After I hit Revive database, it will create the GCP instances. The number of GCP instances that it creates will be the same as the number of hosts in the original Vertica cluster. It will install the Vertica cluster on these instances, revive the database, and then start the database. And after starting the database, it will be imported into the MC so you can start monitoring it. So in this example, we saw you can provision and revive a Vertica database on the GCP Platform. Additionally, you can use an AWS environment to provision and revive. So now, since we have the Eon Mode database on MC, Natalia will go over some Eon Mode features on MC, like managing subclusters and Depot activity monitoring. Over to you, Natalia. >> Natalia: Okay, thank you. Hello, my name is Natalia Stavisky. I am also a member of the Vertica Management Console team. And I will talk today about the work I did to allow users to manage subclusters using the Management Console, and also the work I did to help users understand what's going on in their Depot in the Vertica Eon Mode database. So let's look at the picture of the subclusters. On the Manage page of Vertica Management Console, you can see here is a page that has blue tabs, and the tab that's active is Subclusters. You can see that there are two subclusters available in this database. And for each of the subclusters, you can see the subcluster properties: whether this is a primary subcluster or secondary. In this case, primary is the default subcluster; it's indicated by a star. You can see what nodes belong to each subcluster. You can see the node state and node statistics. You can also easily add a new subcluster, and we're quickly going to do this. So once you click on the button, you'll launch the wizard that'll take you through the steps.
You'll enter the name of the subcluster and indicate whether this is a secondary or primary subcluster. I should mention that Vertica recommends having only one primary subcluster, but we have both options available here. You will enter the number of nodes for your subcluster, and once the subcluster has been created, you can manage the subcluster. What other options for managing subclusters do we have here? You can scale up an existing subcluster, and that's a similar approach: you launch the wizard and (mumbles) the nodes you want to add to your existing subcluster. You can scale down a subcluster, and MC validates requirements for maintaining a minimal number of nodes to prevent database shutdown. So if you cannot remove any nodes from a subcluster, this option will not be available. You can stop a subcluster, and depending on whether this is a primary subcluster or a secondary subcluster, this option may or may not be available. Like in this picture, we can see that for the default subcluster this option is not available, and this is because shutting down the default subcluster would cause the database to shut down as well. You can terminate a subcluster. And again, the MC warns you not to terminate the primary subcluster and validates requirements for maintaining a minimal number of nodes to prevent database shutdown. So now we are going to talk a little more about how the MC helps you to understand what's going on in your Depot. The Depot is one of the core components of an Eon Mode database. And what are the frequently asked questions about the Depot? Is the Depot size sufficient? Are a subset of users putting a high load on the database? What tables are fetched and evicted repeatedly, we call it "re-fetched," in the Depot? So here, on the Depot Activity Monitoring page, we now have four tabs that allow you to answer those questions. And we'll go into a little more detail on each of them, but I'll just mention what they are for now.
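The subcluster layout MC draws on the Manage page can also be read out of the catalog. A hedged sketch, assuming a live Eon Mode database (the SUBCLUSTERS system table appeared around Vertica 9.3; column names may vary slightly by version):

```sql
-- Which nodes belong to which subcluster, and which subcluster is primary?
SELECT subcluster_name, node_name, is_primary, is_default
FROM   v_catalog.subclusters
ORDER BY subcluster_name, node_name;
```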
At a Glance shows you the basic Depot configuration and also shows you query execution. Depot Efficiency, we'll talk more about that and the other tabs. Depot Content shows you what tables are currently in your Depot. And Depot Pinning allows you to see what pinning policies have been created and to create new pinning policies. Now let's go through a scenario: monitoring the performance of workloads on one subcluster. As you know, an Eon Mode database allows you to have multiple subclusters, and we'll explore how this feature is useful and how we can use the Management Console to make decisions about whether you would like to have multiple subclusters. So here we have, in my setup, a single subcluster called default_subcluster. It has two users that are running queries accessing tables, mostly in schema public. So the queries started executing, and we can see that after fetching tables from Communal, which is the red line, the rest of the time the queries are executing in Depot. The green line indicates queries running in Depot. The Depot on all nodes is about 88% full, a steady flow, and the Depot size seems to be sufficient for query execution from Depot only. That's the good case scenario. Now, at around 17:15, user Sherry got an urgent request to generate a report, and she started running her queries. We can see that the picture is quite different now. The tables Sherry is querying are in a different schema and are much larger. Now we can see multiple lines in different colors. We can see a bunch of fetches and evictions, which are indicated by blue and purple bars, and a lot of queries are now spilling into Communal. This is the red and orange lines. The orange line is an indicator of a query running partially in Depot and partially getting fetched from Communal. And the red line is data fetched from Communal storage. Let's click on one of the lines.
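To answer the "is the Depot size sufficient?" question outside of MC, the Depot's configured size per node can be read from the storage catalog. A hedged sketch, assuming a live Eon Mode cluster:

```sql
-- Per-node Depot location and its configured maximum size.
SELECT node_name, location_path, max_size
FROM   storage_locations
WHERE  location_usage = 'DEPOT';
```

Comparing `max_size` against the fullness MC reports (88% in this setup) tells you how much headroom remains before evictions start.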
Each data point, each point on the line, will take you to the Query Details page, where you can see more about what's going on. So this is the page that shows us what queries have been run in this particular time interval, which is shown at the top of this page in orange. That's about a one minute time interval, and now we can see user Sherry among the users that are running queries. Sherry's queries involve large tables and are running against a different schema. We can see the clickstream schema in part of the query request. So what is happening? There is not enough Depot space for both the schema that's already in use and the one Sherry needs. As a result, evictions and fetches have started occurring. What other questions can we ask ourselves to help us understand what's going on? How about, what tables are most frequently re-fetched? For that, we will go to the Depot Efficiency page and look at the middle chart here. We can see the larger version of this chart if we expand it. So now we have 10 tables listed that are most frequently being re-fetched. We can see that there is the clickstream schema and there are other schemas, so all of those tables are being used in the queries, fetched, and then, since there is not enough space in the Depot, they get evicted and they get re-fetched again. So what can be done to enable all queries to run in Depot? Option one can be to increase the Depot size. We can do this by running the following query, which (mumbles) which nodes and storage location and the new Depot size. And I should mention that we can run this query from the Management Console, from the query execution page. So this would have helped us to increase the Depot size. What other options do we have, for example, when increasing the Depot size is not an option? We can also provision a second subcluster to isolate workloads like Sherry's. So we are going to do this now, and we will provision a second subcluster using the Manage page.
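The Depot-resize query mentioned in passing above is not readable from the slide; it most likely corresponds to Vertica's ALTER_LOCATION_SIZE function. A hedged sketch with an illustrative node name and size (check the function's exact arguments for your Vertica version):

```sql
-- Grow the Depot on one node to 80 GB; repeat per node as needed.
-- The node name and size here are illustrative placeholders.
SELECT ALTER_LOCATION_SIZE('depot', 'v_mydb_node0001', '80G');
```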
Here we're creating a subcluster for Sherry, or for workloads like hers. So Sherry's subcluster has been created. We can see it here, added to the list of the subclusters. It's a secondary subcluster. Sherry has been instructed to use the new SherrySubcluster for her work. Now let's see what happened. We'll go again to the Depot Activity page and we'll look at the At a Glance tab. We can see that around 18:07, Sherry switched to running her queries on SherrySubcluster. On top of this page, you can see the subcluster selected. So we currently have two subclusters, and I'm looking at what happened to SherrySubcluster once it was provisioned. So Sherry started using it, and after the initial fetching from Communal, which is the red line, all Sherry's queries fit in Depot, which is indicated by the green line. Also the Depot is pretty full on those nodes, about 90% full, but the queries are processed efficiently and there is no spilling into Communal. So that's a good case scenario. Let's now go back and take a look at the original subcluster, default_subcluster. On the left portion of the chart we can see multiple lines; that was the activity before Sherry switched to her own designated subcluster. At around 18:07, after Sherry switched to her designated subcluster, she is no longer putting a load on default_subcluster. So the lines after that are turning a green color, which means the queries that are still running in default_subcluster are all running in Depot. We can also see that the Depot fetches and evictions bars, those purple and blue bars, are no longer showing significant numbers. We can also check the second chart that shows Communal Storage Access, and we can see that the bars have also dropped, so there is no significant access to Communal Storage. So this problem has been solved.
Each of the subclusters is serving queries from Depot, and that's our most efficient scenario. Let's also look at the other tabs that we have for Depot monitoring. Let's look at the Depot Efficiency tab. It has six charts and I'll go through each one of them quickly. File Reads by Location gives an indicator of where the majority of query execution took place, in Depot or in Communal. Top 10 Re-Fetches into Depot, as in the charts earlier in our use case, shows tables that are most frequently fetched and evicted and then fetched again. These are good candidates to get pinned if increasing Depot size is not an option. Note that both of these charts have an option to select a time interval using a calendar widget, so you can get information about the activity that happened during that time interval. Depot Pinning shows what portion of your Depot is pinned, both by byte count and by table count. And the three tables at the bottom show Depot structure: how long tables stay in Depot (we would like tables to be fetched into Depot and stay there for a long time), how often they are accessed (again, we would like to see the tables in Depot accessed frequently), and what the size range of tables in Depot is. Depot Content: this tab allows us to search for tables that are currently in Depot and also to see stats like table size in Depot, how often tables are accessed, and when they were last accessed. The same information that's available for tables in Depot is also available at the projection and partition levels for those tables. Depot Pinning: this tab allows users to see what policies currently exist; you can do this by clicking on the first little button and clicking search. This will show you all existing policies that have already been created. The second option allows you to search for a table and create a policy. You can also use the action column to modify existing policies or delete them.
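The pinning policies created in this tab can also be managed in SQL. As a hedged sketch (the table name is made up for illustration, and the exact meta-function names may vary by Vertica version — recent releases use the functions shown here):

```sql
-- Pin a frequently re-fetched table so it is kept in the depot
-- rather than being evicted and re-fetched.
SELECT SET_DEPOT_PIN_POLICY_TABLE('clickstream.page_views');

-- List existing pinning policies:
SELECT * FROM depot_pin_policies;

-- Remove the policy when it is no longer needed:
SELECT CLEAR_DEPOT_PIN_POLICY_TABLE('clickstream.page_views');
```

Pinning trades depot space for predictability, so pin only the tables the Top 10 Re-Fetches chart flags.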
And the third option provides details about most frequently re-fetched tables, including fetch count, total access count, and number of re-fetched bytes. So all this information can help in making decisions regarding pinning specific tables. So that's about it on the Depot. And I should mention that the server team also has a very good webinar on Eon Mode database Depot management and subcluster management; I strongly recommend attending it or downloading the slide presentation. Let's talk quickly about the Management Console roadmap, what we are planning to do in the future. We are going to continue focusing on subcluster management; there are still a lot of things we can do here: promoting/demoting subclusters, load balancing across subclusters, scheduling subcluster actions, support for large cluster mode. We'll continue working on Workload Analyzer enhancement recommendations, on backup and restore from the MC, building custom thresholds, and Eon on HDFS support. Okay, so we are ready now to take any questions you may have. Thank you.

Published Date : Mar 30 2020


Optimizing Query Performance and Resource Pool Tuning


 

>> Jeff: Hello, everybody, and thank you for joining us today for the virtual Vertica BDC 2020. Today's breakout session is titled "Optimizing Query Performance and Resource Pool Tuning." I'm Jeff Healey, I lead Vertica marketing. I'll be your host for this breakout session. Joining me today are Rakesh Bankula and Abhi Thakur, Vertica product technology engineers and key members of the Vertica customer success team. But before we begin, I encourage you to submit questions or comments during the virtual session. You don't have to wait. Just type your question or comment in the question box below the slides and click Submit. There will be a Q&A session at the end of the presentation. We'll answer as many questions as we're able to during that time. Any questions we don't address, we'll do our best to answer offline. Alternatively, visit the Vertica forums at forum.vertica.com to post your questions there after the session. Our engineering team is planning to join the forums to keep the conversation going. Also a reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of your slides. And yes, this virtual session is being recorded and will be available to view on demand this week. We'll send you a notification as soon as it's ready. Now let's get started. Over to you, Rakesh. >> Rakesh: Thank you, Jeff. Hello, everyone. My name is Rakesh Bankula. Along with me, we have Abhimanu Thakur. We both are going to cover the present session on "Optimizing Query Performance and Resource Pool Tuning." In this session, we are going to discuss query optimization, how to review query plans and how to get the best query plans with proper projection design. Then we will discuss resource allocation and how to find resource contention. And we will continue the discussion with important use cases. In general, to successfully complete any activity or any project, the main thing it requires is a plan.
Plan for that activity: what to do first, what to do next, what are the things you can do in parallel. The next thing you need is the best people to work on that project as per the plan. So, first thing is a plan and next is the people or resources. If you overload the same set of people or resources by involving them in multiple projects or activities, or if any person or resource is sick in a given project, it is going to impact the overall completion of that project. The same analogy we can apply to query performance too. For a query to perform well, it needs two main things. One is the best query plan and the other is the best resources to execute the plan. Of course, in some cases, resource contention, whether from the system side or within the database, may slow down the query even when we have the best query plan and best resource allocations. We are going to discuss each of these three items a little more in depth. Let us start with the query plan. The user submits the query to the database and the Vertica optimizer generates the query plan. In generating query plans, the optimizer uses the statistics information available on the tables. So, statistics play a very important role in generating good query plans. As a best practice, always maintain up-to-date statistics. If you want to see how a query plan looks, add the EXPLAIN keyword in front of your query and run that query. It displays the query plan on the screen. Another option is DC explain plans, which saves all the explain plans of the queries run on the database. So, once you have a query plan, check it to make sure the plan is good. The first thing I would look for is "no statistics" or "predicted out of range". If you see any of these, it means a table involved in the query does not have up-to-date statistics. It is then time to update the statistics. The next things to check in explain plans are broadcasts and re-segments around the Join operators, and global re-segments around group by operators.
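The two steps just described — refreshing statistics and inspecting the plan — look like this in practice (the table and join here are illustrative, not from the session's slides):

```sql
-- Keep statistics current so the optimizer estimates row counts well.
SELECT ANALYZE_STATISTICS('store.store_sales');
SELECT ANALYZE_STATISTICS('store.product');

-- Prefix the query with EXPLAIN to print the plan without running it.
-- Look for "NO STATISTICS" warnings, BROADCAST/RESEGMENT steps, and
-- which projections each path uses.
EXPLAIN
SELECT p.product_name, COUNT(*)
FROM store.store_sales s
JOIN store.product p ON s.product_key = p.product_key
GROUP BY p.product_name;
```

Running `EXPLAIN` costs nothing at execution time, so it is worth checking any new or slow query this way first.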
These indicate that during the runtime of the query, data flows between the nodes over the network and will slow down the query execution. As far as possible, prevent such operations. How to prevent this, we will discuss in the projection design topic. Regarding the Join order, check which tables are used on the inner side and outer side, and how many rows each side is processing. For the inner side, picking the table having the smaller number of rows is good because, as the Join hash table is built in memory, the smaller the number of rows, the faster it is to build the hash table, and it also helps in consuming less memory. Then check if the plan is picking a query specific projection or the default projections. If the optimizer is ignoring any query specific projection but picking the default super projection, we will show you how to use query specific hints to force the plan to pick query specific projections, which helps in improving the performance. Okay, here is one example query plan of a query trying to find the number of products sold from a store in a given state. This query has Joins between the store table and product table, and a group by operation to find the count. So, first look for no statistics, particularly around the storage access path. This plan is not reporting any no statistics. This means statistics are up to date and the plan is good so far. Then check what projections are used. This is also around the storage access part. For the Join order check, we have a Hash Join in Path ID 4 with its inner in Path ID 6 processing 60,000 rows and its outer in Path ID 7 processing 20 million rows. The inner side processing fewer records is good. This helps in building the hash table quicker using less memory. Check for any broadcasts or re-segments: the Joins in Path ID 4 and also Path ID 3 both have inner broadcasts; inners having 60,000 records are broadcast to all nodes in the cluster. This could impact the query performance negatively. These are some of the main things which we normally check in the explain plans.
Until now, we have seen how to get good query plans. To get good query plans, we need to maintain up-to-date statistics, and we also discussed how to review query plans. Projection design is the next important thing in getting good query plans, particularly in preventing broadcasts and re-segments. Broadcasts and re-segments happen during Join operations when the existing segmentation clause of the projections involved in the Join does not match the Join columns in the query. These operations cause data flow over the network and negatively impact query performance, particularly when they transfer millions or billions of rows. These operations also cause the query to acquire more memory, particularly in network send and receive operations. One can avoid these broadcasts and re-segments with proper projection segmentation. Say a Join is involved between two fact tables T1 and T2 on column I; then segment the projections on these T1 and T2 tables on column I. These are also called identically segmented projections. In other cases, where a Join is involved between a fact table and a dimension table, replicating the dimension table, that is, creating an unsegmented projection on it, will help avoid broadcasts and re-segments during the Join operation. During a group by operation, global re-segment groups cause data flow over the network. This can also slow down query performance. To avoid these global re-segment groups, create the segmentation clause of the projection to match the group by columns in the query. In the previous slides, we have seen the importance of projection segmentation in preventing broadcasts and re-segments during Join operations. The order by clause of projection design plays an important role in picking the Join method. We have two important Join methods, Merge Join and Hash Join. Merge Join is faster and consumes less memory than Hash Join. The query plan uses Merge Join when both projections involved in the Join operation are segmented and ordered on the Join keys.
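The two projection patterns described above can be sketched as follows (table and column names T1, T2, I, and the dimension table are the session's generic placeholders):

```sql
-- Identically segmented projections: both fact tables segmented on the
-- join column I, so the join needs no broadcast or re-segment.
CREATE PROJECTION t1_seg AS
    SELECT * FROM t1 ORDER BY i SEGMENTED BY HASH(i) ALL NODES;
CREATE PROJECTION t2_seg AS
    SELECT * FROM t2 ORDER BY i SEGMENTED BY HASH(i) ALL NODES;

-- Small dimension table: replicate it to every node instead.
CREATE PROJECTION dim_rep AS
    SELECT * FROM dim ORDER BY dim_key UNSEGMENTED ALL NODES;

-- Populate the new projections with existing data.
SELECT REFRESH('t1, t2, dim');
```

Ordering each projection on the join key as well (as here) additionally makes the faster Merge Join eligible, per the paragraph above.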
In all other cases, the Hash Join method will be used. In the case of group by operations too, we have two methods: group by pipeline and group by hash. Group by pipeline is faster and consumes less memory compared to group by hash. The requirement for group by pipeline is that the projection must be segmented and ordered by the grouping columns. In all other cases, the group by hash method will be used. So far, we have seen the importance of stats and projection design in getting good query plans. As statistics are based on estimates over a sample of data, it is possible, in very rare cases, that the default query plan may not be as good as you expected, even after maintaining up-to-date stats and good projection design. To work around this, Vertica provides some query hints to force the optimizer to generate even better query plans. Here are some example Join hints which help in picking the Join method and how to distribute the data, that is, broadcast or re-segment on the inner or outer side, and also which group by method to pick. The table level hints help to force a query specific projection to be picked, or to skip any particular projection in a given query. All of these hints are available in the Vertica documentation. Here are a few general hints useful in controlling how to load data, with late materialization, et cetera. We are going to discuss some examples of how to use these query hints. Here is an example of how to force a query plan to pick a Hash Join. The hint used here is JTYPE, which takes arguments H for Hash Join, M for Merge Join. How to place this hint: just after the Join keyword in the query, as shown in the example here. Another important Join hint is JFMT, the Join format hint. This hint is useful in the case when Join columns are large varchars. By default Vertica allocates memory based on the column data type definition, not by looking at the actual data length in those columns.
Say for example a Join column is defined as varchar(1000), varchar(5000) or more, but the actual length of the data in this column is, say, less than 50 characters. Vertica is going to use more memory to process such columns in the Join and also slow down the Join processing. The JFMT hint is useful in this particular case. The JFMT parameter makes Vertica use the actual length of the Join column. As shown in the example, using the JFMT(V) hint helps in reducing the memory requirement for this query, and it executes faster too. The DISTRIB hint helps in forcing the inner or outer side of the Join operator to be distributed using broadcast or re-segment. DISTRIB takes two parameters: the first is the outer side and the second is the inner side. As shown in the example, DISTRIB(A,R) after the Join keyword in the query helps to force re-segmenting the inner side of the Join, leaving the outer side's distribution method to the optimizer to choose. The GBYTYPE hint helps in forcing the query plan to pick group by hash or group by pipeline. As shown in the example, GBYTYPE(HASH), used just after the group by clause in the query, helps to force this query to pick group by hash. So now, we have discussed the first part of query performance, which is query plans. Now, we are moving on to discuss the next part of query performance, which is resource allocation. The Resource Manager allocates resources to queries based on the settings of the resource pools. The main resources which resource pools control are memory, CPU, and query concurrency. The important resource pool parameters which we have to tune according to the workload are memory size, planned concurrency, max concurrency and execution parallelism. The query budget plays an important role in query performance. Based on the query budget, the query planner allocates worker threads to process the query request. If the budget is very low, the query gets fewer threads, and if that query requires processing huge data, then the query takes a longer time to execute because of fewer threads, or less parallelism.
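Putting the hints discussed above together in one query, a hedged sketch (the exact slides' queries are not reproduced here; `fact`/`dim` and the columns are illustrative, and hint placement follows the description above):

```sql
-- JTYPE(H): force a hash join.
-- DISTRIB(A,R): let the optimizer pick the outer side's distribution (A = any),
--               force the inner side to be re-segmented (R).
-- GBYTYPE(HASH): force the group-by-hash method.
SELECT f.region_id, COUNT(*)
FROM fact f
JOIN /*+JTYPE(H),DISTRIB(A,R)*/ dim d
    ON f.region_id = d.region_id
GROUP BY /*+GBYTYPE(HASH)*/ f.region_id;
```

Hints are a last resort after statistics and projection design; they freeze a choice the optimizer would otherwise revisit as data changes.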
In the other case, if the budget is very high and the query executed on the pool is a simple one, this results in a waste of resources; that is, the query which acquires the resources holds them till it completes execution, and those resources are not available to other queries. Every resource pool has its own query budget. This query budget is calculated based on the memory size and planned concurrency settings of that pool. The resource pool status table has a column called query_budget_kb, which shows the budget value of a given resource pool. The general recommendation for the query budget is to be in the range of one GB to 10 GB. We can do a few checks to validate whether the existing resource pool settings are good or not. The first thing we can check is whether queries are getting resource allocations quickly, or waiting in the resource queues longer. You can check this in the resource queues table on a live system multiple times, particularly during your peak workload hours. If a large number of queries are waiting in resource queues, it indicates the existing resource pool settings are not matching your workload requirements. It might be that the memory allocated is not enough, or the max concurrency settings are not proper. If queries are not spending much time in resource queues, it indicates resources are allocated to meet your peak workload, but you are not sure whether you have over- or under-allocated the resources. For this, check the budget in the resource pool status table to find any pool having a budget much larger than eight GB or much smaller than one GB. Both over-allocation and under-allocation of budget are not good for query performance. Also check the DC resource acquisitions table to find any transaction that acquired additional memory during query execution. This indicates the originally given budget is not sufficient for the transaction. Having too many resource pools is also not good. How should you create resource pools, or tune existing resource pools? Resource pool settings should match the present workload.
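The budget and queue checks described above map to two simple monitoring queries (these system tables are named in the session; the 1 GB / 10 GB bounds come from the recommendation just given):

```sql
-- Flag pools whose computed budget falls outside the recommended range.
SELECT pool_name, query_budget_kb
FROM resource_pool_status
WHERE query_budget_kb < 1024 * 1024      -- under ~1 GB
   OR query_budget_kb > 10 * 1024 * 1024; -- over ~10 GB

-- See who is waiting for resources right now, and on which pool.
SELECT pool_name, COUNT(*) AS queued_queries
FROM resource_queues
GROUP BY pool_name
ORDER BY queued_queries DESC;
```

Run the second query several times during peak hours, as the paragraph suggests; a single snapshot can miss transient queuing.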
You can categorize the workload into well-known workload and ad-hoc workload. In the case of well-known workload, you will be running the same queries regularly, like daily reports having the same set of queries processing similar sizes of data, or daily ETL jobs, et cetera. In this case, queries are fixed. Depending on the complexity of the queries, you can further divide them into low, medium, and high resource required pools. Then try setting the budget to 1 GB, 4 GB, 8 GB on these pools by allocating the memory and setting the planned concurrency as per your requirement. Then run the queries and measure the execution time. Try a couple of iterations by increasing and then decreasing the budget to find the best settings for your resource pools. For the category of ad-hoc workload, there is no control over the number of users going to run queries concurrently, or the complexity of the queries users are going to submit. For this category, we cannot estimate, in advance, the optimum query budget. So for this category of workload, we have to use cascading resource pool settings, where a query starts on one pool and, once it exceeds the runtime cap that has been set, its resources move to a secondary pool. This helps prevent smaller queries from waiting a long time for resources when a big query is consuming all the resources and running for a long time. Some important resource pool monitoring tables: on a live system, you can query the resource queues table to find any transaction waiting for resources. You will also find on which resource pool the transaction is waiting, how long it is waiting, and how many queries are waiting on the pool. Resource pool status gives info on how many queries are in execution on each resource pool, how much memory is in use, and additional info. For the resource consumption of a transaction which has already completed, you can query DC resource acquisitions to find how much memory a given transaction used per node.
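A cascading-pool setup like the one described can be sketched as below (pool names, sizes, and the two-minute cap are assumptions for illustration, not values from the session):

```sql
-- Secondary pool for long-running ad-hoc queries: more memory per query.
CREATE RESOURCE POOL adhoc_long
    MEMORYSIZE '40G'
    PLANNEDCONCURRENCY 4;

-- Primary pool: short queries finish here; anything exceeding the
-- runtime cap cascades to adhoc_long instead of being killed.
CREATE RESOURCE POOL adhoc_short
    MEMORYSIZE '8G'
    PLANNEDCONCURRENCY 8
    RUNTIMECAP '2 minutes'
    CASCADE TO adhoc_long;
```

Users are then assigned to `adhoc_short`; small queries keep a small budget while runaway queries move out of their way automatically.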
The DC resource pool move table shows info on which transactions moved from primary to secondary pools in the case of cascading resource pools. DC resource rejections gives info on which node and for which resource a given transaction failed or was rejected. The query consumptions table gives info on how much CPU, disk, and network resources a given transaction utilized. Till now, we discussed query plans and how to allocate resources for better query performance. It is possible for queries to perform slower when there is any resource contention. This contention can be within the database or from the system side. Here are some important system tables and queries which help in finding resource contention. The table DC query executions gives information at the transaction level on how much time it took for each execution step, like how much time it took for planning, resource allocation, actual execution, etc. If the time taken is more in planning, which is mostly due to catalog contention, you can query the DC lock releases table as shown here to see how long transactions are waiting to acquire the global catalog lock (GCLX), and how long transactions are holding GCLX. Normally, GCLX acquire and release should be done within a couple of milliseconds. If transactions are waiting a few seconds to acquire GCLX, or holding GCLX longer, it indicates some catalog contention, which may be due to too many concurrent queries, or due to long running queries, or system services holding catalog mutexes and causing other transactions to queue up. The queries given here, particularly against the system tables, will help you further narrow down the contention. You can query the sessions table to find any long-running user queries. You can query the system services table to find any service, like analyze row counts, moveout, or mergeout, running for a long time. The DC slow events table gives info on what slow events are happening.
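As a hedged sketch of the GCLX check described above (the `dc_lock_releases` column names here are from memory of the data collector schema and may differ by Vertica version; verify against your catalog before relying on them):

```sql
-- Recent global catalog lock releases: compare when the lock was granted
-- to when it was released. Long gaps suggest catalog contention.
SELECT node_name,
       object_name,
       mode,
       grant_time,
       time AS release_time,
       time - grant_time AS held_for
FROM dc_lock_releases
WHERE object_name = 'Global Catalog'
ORDER BY time DESC
LIMIT 20;
```

If `held_for` is regularly in seconds rather than milliseconds, look next at concurrent DDL volume and long-running system services, as the paragraph suggests.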
You can also query the system resource usage table to find any particular system resource, like CPU, memory, disk I/O or network throughput, saturating on any node. It is possible that one slow node in the cluster could impact the overall performance of queries negatively. To identify any slow node in the cluster, we use a 'SELECT 1' query. A 'SELECT 1' query just executes on the initiator node. On a good node, the 'SELECT 1' query returns within 50 milliseconds. As shown here, you can use a script to run this 'SELECT 1' query on all nodes in the cluster. You can repeat this test multiple times, say five to 10 times, then review the time taken by this query on all nodes in all iterations. If there is any one node taking more than a few seconds, compared to other nodes taking just milliseconds, then something is wrong with that node. To find what is going on with the node which took more time for the 'SELECT 1' query, run perf top. Perf top gives info on the functions in which the system is spending most of its time. These functions can be kernel functions or Vertica functions, as shown here. Based on where the system is spending most of the time, we'll get some clue on what is going on with that node. Abhi will continue with the remaining part of the session. Over to you, Abhi. >> Abhi: Hey, thanks, Rakesh. My name is Abhimanu Thakur, and today I will cover some performance cases which we addressed recently in our customer clusters, applying the best practices just shown by Rakesh. Now, to fix a performance problem, it is always easier if we know where the problem is. And to understand that, like Rakesh just explained, the life of a query has different phases. The phases are pre-execution, which is the planning; execution; and post-execution, which is releasing all the acquired resources. This is something very similar to a plane taking a flight path, where it prepares itself, gets onto the runway, takes off and lands back onto the runway.
So, let's prepare our flight to take off. This is a use case from a dashboard application where the dashboard fails to refresh once in a while, and there is a batch of queries which are sent by the dashboard to the Vertica database. Let's see how we can find where the failure is or where the slowness is. As these are very short queries, we need to look at the historical executions, and from the historical executions we basically try to find where exactly the time is spent, whether it is in the planning phase, the execution phase or in the post-execution, and whether they are pretty consistent all the time, which means the plan has not changed between executions. This will also help us determine what the memory used is and whether the memory budget is ideal. As just shown by Rakesh, the budget plays a very important role. So DC query executions is a one-stop place to go and find your timings, whether it is planning time or execution time, or whether it was an abandoned plan. So, looking at the queries which we received and the times from the scrutinize, we find the average execution is pretty consistent and there is some extra time spent in the planning phase, which points to some resource contention. This is a very simple matrix which you can follow to find if you have issues: system resource contention, catalog contention and resource contention all contribute, mostly because of concurrency. Let's see if we can drill down further to find the issue in these dashboard application queries. So, to get the concurrency, we pull out the number of queries issued, what is the max concurrency achieved, what are the number of threads, what is the overall percentage of query duration, and all this data is available in the V advisor report. So, as soon as you provide scrutinize, we generate the V advisor report, which helps us get complete insight into this data.
So, based on this we definitely see there is very high concurrency, and most of the queries finish in less than a second, which is good. There are queries which go beyond 10 seconds and over a minute, but definitely, the cluster had concurrency. What is more interesting to find from this graph is... I'm sorry if this is not very readable, but the topmost line that you see is the selects, and the bottom two or three lines are the creates, drops and alters. So definitely this cluster has a lot of DDLs and DMLs being issued, and what do they contribute: a lot of DDLs and DMLs cause catalog contention. So, we need to make sure that the batch we're sending is not causing too much catalog contention in the cluster, which delays the complete planning phase as the system resources are busy. At the same time, what we also noticed is analyze statistics running every hour, which is very aggressive, I would say. It should be scheduled only as needed, so if a table has not changed drastically, do not schedule analyze statistics for that table. As shared by Rakesh, a couple more settings definitely play an important role in the moveout and mergeout operations. So now, let's look at the budget of the query. The budget of the resource pool is currently at about two GB, and it is the 75th percentile memory. Queries are definitely executing at that same budget, which is good and bad, because these are dashboard queries; they don't need such a large amount of memory. The max memory, as shown here from the captured data, is about 20 GB, which is pretty high. So what we did is, we found that there are some queries run by a different user who is running in the same dashboard pool, which should not be happening, as the dashboard pool is something like a premium pool, or kind of a private runway to run your own private jet. And why I made that statement is, as you will see, resource pools are like runways.
You have different resource pools, different runways, to cater to different types of planes, different types of flights. So, as you manage your resource pools, your flights can take off and land easily. From this we determined that the budget is something which could be tuned well. Now let's look... As we saw in the previous numbers, there were some resource waits, and like I said, resource pools are like your runways. So if you have everything ready and your plane is waiting just to get onto the runway to take off, you would definitely not want to be in that situation. In this case, what we found is that there are quite a number of queries which have waited in the pool, and they waited almost a second, which can be avoided by modifying the amount of resources allocated to the resource pool. So in this case, we increased the resource pool to provide more memory, which is 80 GB, and reduced the budget from two GB to one GB, also making sure that the planned concurrency is increased to match the memory budget, and we also moved the other user out of the dashboard query pool. Something else we found in the resource pool is the execution parallelism, and how this affects things and what changing the number does. So, execution parallelism is something which allocates the number of threads, network buffers and all the data around it before the query even executes. In this case, this pool had AUTO, which defaults to the core count. Dashboard queries, not being too high on resources, just need to get what they want. So we reduced the execution parallelism to eight, and this drastically brought down the number of threads which were needed, without changing the time of execution. So, this is all of what we saw about how we could tune before the query takes off. Now, let's see what path we followed. This is the exact path we followed.
Hope this diagram helps; these are the things which we took care of. So, tune your resource pool, adjust your execution parallelism based on the type of queries the resource pool is catering to, match your memory sizes, and don't be too aggressive on your resource budget. And see if you can replace your staging tables with temporary tables, as they help a lot in reducing the DDLs and DMLs, reducing the catalog contention; and in the places where you cannot replace them, use truncate table instead. Reduce your analyze statistics duration, and if possible, follow the best practices for the other operations. So moving on, let's let our query take a flight and see what best practices can be applied here. This is another, I would say, very classic example of a query where the query has been running and suddenly starts to fail. I think the most common of these failures is "inner Join did not fit in memory". What does this mean? It basically means the inner table is trying to build a large hash table, and it needs a lot of memory to fit. There are only two reasons why it could fail: one, your statistics are outdated, or two, your resource pool is not letting you grab all the memory needed. In this particular case, the resource pool is not allowing all the memory it needs. As you see, the query acquired 180 GB of memory, and it failed. In most cases, you should be able to figure out the issue by looking at the explain plan of the query, as shared by Rakesh earlier. But in this case, if you see, the explain plan looks awesome. There's no operator like an inner broadcast or inner re-segment or anything like that; it's just a hash Join. So looking further, we look into the projections. The inner is an unsegmented projection, the outer is segmented. Excellent. This is what is needed. So in this case, what we would recommend is to go find further what the cost is. The cost to scan this row seems to be pretty high.
There's the dc_query_executions table among the profiling tables in Vertica, which helps you drill down to the smallest amounts of time and memory, and the number of rows used, by individual operators per path. While looking into the execution engine profile details for this query, we found the time is spent in the join operator, and it's the join's inner hash table build time which was taking a huge amount of time: it's basically just waiting for the lower operators, scan and storage union, to pass the data. So how can we avoid this? Clearly, we can avoid it by creating a segmented projection instead of an unsegmented projection on such a large table with one billion rows. Following that practice, this is the projection which was created, segmented on the column which is part of the select clause here. Now, the plan still looks nice and clean, the query now executes in 22 minutes 15 seconds, and the most important thing you see is the memory: it executes in just 15 GB of memory. Basically, the unsegmented projection, which acquires a lot of memory per node, is replaced by one that does not take that much memory and executes faster, as the data has been divided across the nodes so that each node handles only a small share of it. But the customer was still not happy, as 22 minutes is still high, so let's see if we can tune it further to bring the cost and the execution time down. Looking at the explain plan again, like I said, most of the time you can look at the plan and see what's going on; in this case, there is an inner resegment. So how can we avoid the inner resegment? We can avoid the inner resegment, and most resegments in general, just by creating projections which are identically segmented, which means your inner and outer tables both have the same segmentation clause.
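A sketch of that pattern, with hypothetical table names keyed on a sales_id column: segmenting both sides of the join identically, and ordering them on the join key, lets each node join its own slice of the data locally.

```sql
-- Inner and outer projections segmented AND ordered on the join key.
CREATE PROJECTION sales_by_id AS
SELECT * FROM sales
ORDER BY sales_id
SEGMENTED BY HASH(sales_id) ALL NODES;

CREATE PROJECTION returns_by_id AS
SELECT * FROM returns
ORDER BY sales_id
SEGMENTED BY HASH(sales_id) ALL NODES;

-- With matching segmentation there is no inner resegment, and with a
-- matching sort order the optimizer can use a merge join instead of
-- building a hash table.
SELECT s.sales_id, r.amount
FROM sales s
JOIN returns r ON s.sales_id = r.sales_id;
```

After creating a new projection you would typically refresh it and re-run statistics so the optimizer actually picks it.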
The same was done over here. As you see, the projection is now segmented on sales ID and also ordered by sales ID, which helped the query execution drop from 22 minutes to eight minutes, and now the memory acquired just equals the pool budget, which is 8 GB. And most importantly, if you look, the hash join is converted into a merge join, the join column being both the order-by and the segmentation clause. So what this gives us is a new global data distribution: by changing the projection design, we have improved the query performance. But there are times when you cannot change the projection design, and there's nothing much that can be done. In all those cases, as in the first case where the inner join failed and on the second attempt Vertica replanned (mumbles) to spill for this operator, you could either let the system degrade by acquiring 180 GB for however many minutes the query runs, or you could simply use the hint and run the query with the spill in the very first go, and let the system have all the resources it needs. So, use hints wherever possible, and spill to disk is definitely your option where there are no other options for you to change your projection design. Now, there are times when you find that you have gone through your query plan, you have gone through everything else, and there's not much you see anywhere, but you look at the query and you feel, "I think I can rewrite this query." What makes you decide that is that you look at the plan and see that the same table has been accessed several times, and ask: how can I rewrite this query to access the table just once? In this particular use case, a very simple one, a table is scanned three times for several different filters and then a union. In Vertica, union is kind of a costly operator, I would say, because union does not know the amount of data which will be coming from the underlying queries.
So we allocate a lot of resources to keep the union running. Now, we could simply replace all these unions with a simple OR clause. The simple OR clause changes the complete plan of the query, and the cost drops drastically; now the optimizer knows almost the exact number of rows it has to process. So, look at your query plans and see if you could make the execution engine or the optimizer do a better job just by doing some small rewrites. If there are tables which are frequently accessed, you could even use a WITH clause, which will do an early materialization and give you better performance; or, like the union rewrite I just shared, replace your left joins with right joins and, as (mumbles) shared earlier, change your hash table types. This is the exact path we have followed in this presentation. Hope this presentation was helpful in addressing, or at least finding, some performance issues in your queries or in your clusters. So, thank you for listening to our presentation. Now we are ready for Q&A.
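The union-to-OR rewrite is easiest to see side by side; the table and filter values here are made up. Because all three branches scan the same table with disjoint filters, the two forms return the same rows, but the OR form scans the table once and lets the optimizer estimate the row count directly:

```sql
-- Before: three scans of the same table, glued together with unions.
SELECT order_id, amount FROM orders WHERE status = 'NEW'
UNION ALL
SELECT order_id, amount FROM orders WHERE status = 'HELD'
UNION ALL
SELECT order_id, amount FROM orders WHERE status = 'FAILED';

-- After: one scan, one predicate.
SELECT order_id, amount
FROM orders
WHERE status = 'NEW' OR status = 'HELD' OR status = 'FAILED';
```

Note the equivalence only holds with UNION ALL, or with plain UNION when the branches cannot overlap; plain UNION also pays for a deduplication step the OR form never needs.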

Published Date : Mar 30 2020



Hardik Modi, NETSCOUT | RSAC USA 2020


 

>> Live from San Francisco, it's theCUBE, covering RSA Conference 2020, San Francisco. Brought to you by Silicon Angle Media.
>> Hey, welcome back. Ready? Jeff Frick here with the Cube. We're in downtown San Francisco. It is an absolutely spectacular day outside; I'm not sure why we're inside Moscone, but that's where we are. It's the RSA Conference, I think 50,000 people, the biggest security conference in the world, here in Moscone this week. We've been here, wall-to-wall coverage, and we'll be here all the way till Thursday. So thanks for joining us. We're excited to have our next guest; he's got a lot of great data to share, so let's jump into it. It's Hardik Modi, VP of Engineering, Threat and Mitigation Products for NETSCOUT. Hardik, great to meet you.
>> Thank you. Good to be here.
>> So for people who aren't familiar with NETSCOUT, give them kind of the basic overview. What are you guys all about?
>> Yes, we consider ourselves the guardians of the connected world. Our job is to protect companies, enterprises, service providers, anybody who is on the Internet, and help keep their services running, the applications and things you're trying to deliver to your customers, and make sure they're up there performing the way you want them to, but also give you visibility and protect you against DDoS attacks and other kinds of security threats. That's basically, in a nutshell, what we do as a company, and yeah, we're the guardians of the connected world.
>> So just from a vendor point of view, I always feel so sorry for buyers in this environment, because you walk around, and I don't know how many vendors are in here, a lot of big booths, little booths. So how do you kind of help separate, you know, NETSCOUT from the noise? What's your guys' secret sauce? What's your kind of special thing?
>> Really, it's 30 years of investment in network-based visibility, and we truly believe in the network. Our CEO, he says, when you monitor the network, it's like taking a blood test: it tells you the truth, right? It's really how you find out whether things are right or wrong. I mean, my background is in network monitoring too. A lot of what we think of as the endpoint is actually contested territory; that's where the adversary is. When you're on the network and you're monitoring all activity, it really gives you a vantage point that's really special. So we really focus on the network. Our heritage in the network is one of our key strengths, and then, as part of us as a company, Arbor Networks, which got acquired some years ago, is very much part of NETSCOUT with our brand of products. Part of that Arbor legacy includes huge visibility into what's happening across the Internet, visibility like nobody else, in terms of the number of service providers and large enterprises who work with us and help us understand what's happening across the landscape. That's like nobody else out here, and that is what we consider a key differentiator.
>> Okay, great. So one of the things you guys do a couple of times a year, I understand, is publish a report and give people some information as to what's going on. So we've got version four here, right, the NETSCOUT Threat Intelligence Report. So you said this comes out twice a year. What's the latest? Give us some scoop.
>> Hot off the presses: we published last week, so it's really just a few days old, and our focus here is what happened in the last six months of last year. And then what we do is compare it against data that we've collected a year prior.
>> So really, a few things that we want you to remember. The first number is 8.4 million. That's the number of DDoS attacks that we saw. This doesn't mean that we've seen every attack in the world, but that's just how many DDoS attacks we saw through the eyes of our customers.
>> That's in six months?
>> The 8.4 number is actually for the entire year, the entire year of 2019. There's a little bit of seasonality to it, so maybe something like 4.4 was in the second half of the year. But that's where I want to start; that's just how many DDoS attacks we observed. In the course of the report, we slice and dice that number: we talk about different sizes, what we're seeing between zero and 100 gigabits per second and in the ranges above, and give you a sense of just what kind of separation there is and who is being targeted, at a very broad level, in some of the verticals and geographies. We lay out this number and give you a lot of context. So if you're in finance and you're in the UK, you want to know, hey, what happened in Europe, for example, in the past six months? We have that data, and we want to give you that awareness of what's happening. The second number I want you to remember is seven: seven is the number of new attack vectors, reflection and application attack vectors, that we observed being used widely in the second half.
>> Seven new ones?
>> Seven new ones, so that now kind of brings our tally up to 31. We have those listed out in here; we talk about just how many of these vectors there are and how they're used. Each of these vectors leverages vulnerabilities in devices that are deployed across the Internet, so we lay out just how many of them are out there. But that seven, to us, reflects how the adversary is innovating: they're looking for new ways to attack us, they found seven in the last year, and they're going to work, right?
>> Right.
>> And that's kind of what we focus on.
>> Let's go back to the 8.4. So of those 8.4 million, how many would you declare successful, from the attacker's point of view?
>> You know, this is always difficult to estimate precisely, or even within some level of precision. The adversary is, of course, always trying to deliver a knockout blow and take all your services down, but every attack inflicts a cost, right? Whether or not it's made its way all the way through to the end target, you're now using more network and computing resources just to keep your services going while you're under attack. Even when the attack is blunted, you're still paying that cost, or the cost is paid upstream, maybe by the service provider, somebody who was defending your network for you. So in that way, there's a cost to every one of these, in terms of outages. I should also point out that with these attacks, you might think there was a specific victim and that victim suffered as a result, but in many cases the adversary is going after people who are providing services to others. So if a Turkish bank goes down and cannot service its customers for a month, or maybe even a few hours, the number of victims in this case is fairly broad. It might be one attack, it might be one target; however, the impact is very large.
>> What's interesting is, that begs the question: how do you define success or failure, from both the attacker's point of view as well as the defender's?
>> Yeah, and again, there's a lot of conversation in the industry about, for every attack, any kind of attack, when do I say, you know what, I was ready for it and I was fine? Ultimately, there's a cost to each of these things, and I'd say that everybody comes at it with their own view. If you're a bank, you might go, okay, you know what, if I'm paying a little bit extra to keep the service up and running while the attacker is coming at me, no problem. If some subset of my customers aren't able to log in, maybe I can live through that. If a large number of my customers can't log in, that's actually a really big problem. And if it's sustained, then you make your way into the media, or you're forced to report the outages to the government, or maybe you have to go to your board and go, sorry, right? Something just happened.
>> But are there escalation procedures in the definition, a consistency? You're getting banged all the time, right? And like you said, there's some disruption at some level before it fires off triggers and remediation. So is there some level of, okay, that's kind of a cost of doing business, versus, you know, we caught it at this point? Are there escalation points that define something short of a full outage?
>> I think when we talk to our service provider customers, and to the very large, critical enterprises, they tend to be more methodical about how they think of degradation of the service right now relative to the attack. I think for a lot of people it's in the eyes of the beholder: here's an SLA that I missed as a result of the attack, and at that point I certainly have a failure. But up until there, it's kind of like, okay.
>> Is it, in the eyes of the attacker, to delay service at the Turkish bank, because now their teams operate at twice the duration per transaction? Is it just holding for ransom? What's the benefit?
>> It raises a range of motivations, basically the full range of human nature. There are certainly still attacks that are straight vandalism: I did it just because I could, I wanted to show my friend that I could do this. There are definitely a lot of attacks that are, hey, I'm a gamer, and I know the person I'm competing with is coming from this IP address, let me bombard them with an attack. And there can be a lot of collateral damage along the way, because you think you're going after this one person in their house, but you're actually taking out the network upstream, and there are a lot of other people on that network. So there's a certain competitive element to it. Then, definitely, from time to time there are extortion campaigns: pay up or we'll do this again, right? And in some parts of the world, we think of it as almost like business dispute resolution: you'd better settle my invoice, or maybe I'll take you out.
>> Crazy.
>> Yeah. And Jeff, like we talked about in previous reports, and it's still true, especially with DDoS, there's what we think of as a democratization of the attack tools, where you don't have to be technical, you don't have to have a lot of knowledge. There are services available: here's who I'd like to go after, and here's my $50 or the Bitcoin equivalent. All right?
>> Let's jump to the seven. We talked about 8.4, and the seven new attack vectors, and you outline, I think, the top-level themes I took from the summary, right? Weaponizing new attack vectors, leveraging mobile hotspots, targeting compromised endpoints.
>> About the endpoints...
>> IoT is all the rage, people have mesh networks, and 5G is just rolling out, which is going to drive this huge IoT expansion, especially in industrial, all these connected devices in factories and infrastructure that powers people. How are people protecting those differently now, as we're getting to this kind of exponential curve in the deployment of all these devices?
>> I mean, there are a lot of serious people thinking about how to protect individual devices and infrastructure at large, so I'm not going to go, hey, it's all bad; there's plenty of thought on it. The next number would be 17: 17 is the number of architectures for which Mirai variants exist. I mean, Mirai was really popular malware from a few years ago, and it still exists, but over time what's happened is people have ported Mirai to different architectures. Think of it like this: if you have your refrigerator connected to the Internet, it comes with a little board that has a CPU on it, and it runs a little OS. Well, there's a Mirai variant ready for that. Essentially, as new devices are getting deployed, that's our observation: even as new CPUs are introduced, new chips or even new OSes, there's somebody out there ready to port Mirai to that. Now, the next-level challenge is that these devices don't often get upgraded; in many cases, there's very little thought given to security around them, and there are backdoors and default passwords used on a lot of them. So you take this combination: we talk about large deployments of devices every year, and a bot is just waiting, ready for them. Now again, I will say that it's not all bad. There are serious people who are thinking about this, and there are devices that are deployed on private networks from the get-go, with a VPN tunnel back to a particular control point that the commercial vendor operates. There are things like that, hardening that people have done, so not every device is going to find its way into a botnet. However, you get a toy for Christmas, and for $20 it can connect to the Internet, and the odds are nobody's thinking about security.
>> Not at all. The thing we've heard, too, about the coming together of IT and operational technology is that a lot of those devices weren't developed for upgrades and patches, and Lord knows what OS is running underneath the covers. It was a single-use device; it wasn't really ever going to be connected to the outside world. But now you're connecting it with the IT, suddenly exposing a whole host of issues that were never part of the plan when whoever designed that thing in the first place.
>> For sure, for sure.
>> Crazy. All right, so that's that. Carpet-bombing tactics, increased sector attack availability. What is carpet bombing, generally? What's going on in this space?
>> Well, carpet bombing is a term that we applied a few years ago to a variation of attack. Traditionally, we see an attack against a specific IP address or a specific domain, right? That's what I'm targeting. Carpet bombing is taking a range of IPs and almost cycling through every single one of them. So if your defense is based on, hey, if my one server sees a spike, let me block traffic, well, now you're actually not seeing enough of a spike on any individual IP, but across the range there's a lot of traffic that you're going to see. This trips people up from time to time; we certainly have defenses built for it. But now what we're seeing is the use of more of our other known vectors with it. CLDAP is a protocol where we see attacks all the time; now what we're seeing is CLDAP with carpet bombing. We're seeing even other reflection application protocols where the attack isn't against an individual system, but instead a range. And so that's what has changed. We saw a lot of TCP reflection attacks last year, and then the novelty was that alongside those came the carpet-bombing technique.
>> The amount never stops, right? Hardik, we're out of time, so I'll give you the final word. One, where can people go get the information in this report? And more importantly, for people that aren't as technical, that are kind of observers, how should they be thinking about security when this is such a rapidly evolving space?
>> So let me give you two resources really quickly. There's this report, available at www.netscout.com/threatreport; that's where this report is available, and you can Google "NETSCOUT Threat Report" and you'll find your way there. We've also made another platform available that gives you more continuous visibility into the landscape. So if you read this and go, okay, what's happening now?, then you would go to what we call NETSCOUT Cyber Threat Horizon. That kind of tells you what's happening over the horizon: not just, hey, what am I seeing, or what are people like me seeing, but what other people elsewhere in the world are seeing. That's netscout.com/horizon. And I think between those two resources, you get access to all of our visibility. And then really, our focus is not just to drive awareness: all of this knowledge is being built into our products, the NETSCOUT Arbor line of products. We're continually innovating and evolving and driving more intelligence into them. That's really how we help protect our customers.
>> Hardik, thanks for taking a few minutes and sharing the story.
>> Thank you.
>> All right. Scary, but I'm glad you said it's not all bad, so that's good. He's Hardik, I'm Jeff. You're watching the Cube. We're at the RSA Conference 2020, Moscone. Thanks for watching. We'll see you next time.
>> Yeah, yeah, yeah.
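The detection gap Hardik describes, where per-IP thresholds miss traffic smeared across a whole range, can be illustrated with a small Python sketch. The thresholds and packet counts here are made-up numbers for illustration, not anything NETSCOUT ships:

```python
from collections import defaultdict

PER_IP_THRESHOLD = 1000      # packets before a single-target alarm fires
PER_PREFIX_THRESHOLD = 5000  # packets before a range-wide alarm fires

def detect(flows):
    """flows: iterable of (dst_ip, packet_count); returns alarm keys."""
    per_ip = defaultdict(int)
    per_prefix = defaultdict(int)
    for dst, pkts in flows:
        per_ip[dst] += pkts
        # Also aggregate by /24, so traffic cycled across a range adds up.
        prefix = ".".join(dst.split(".")[:3]) + ".0/24"
        per_prefix[prefix] += pkts
    alarms = [ip for ip, n in per_ip.items() if n > PER_IP_THRESHOLD]
    alarms += [p for p, n in per_prefix.items() if n > PER_PREFIX_THRESHOLD]
    return alarms

# Carpet bombing: 50 packets to each of 254 hosts in one /24. No single
# IP crosses the per-IP threshold, but the prefix totals 12,700 packets,
# so only the range-level check catches the attack.
flows = [(f"10.0.0.{h}", 50) for h in range(1, 255)]
```

Real DDoS detection works on flow telemetry at far larger scale, but the shape of the problem is the same: the per-destination counter never trips, while the aggregate over the range clearly does.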

Published Date : Feb 26 2020



Derek Manky, Fortinet | CUBEConversation, November 2018


 

[Music] Hi, I'm Peter Burris, and welcome to another Cube Conversation from the Cube Studios here in beautiful Palo Alto, California. Today we're going to talk about some new things that are happening in the security world. Obviously, this is one of the most important domains within the technology industry, and increasingly, because of digital business, in business overall. To do that, we've asked Derek Manky to come back. Derek is the chief of security insights and global threat alliances at Fortinet. Derek, welcome back to the Cube. Absolutely, I feel the same way. Okay, Derek, so we're going to get into some predictions about what the bad guys are doing, and some predictions about what the defenses are doing, how we're going to see defense opportunities improve. But let's set the stage, because predictions are always made on some platform, some understanding of where we are, and that has also changed pretty dramatically. So what's the current state of the overall security world, Derek? Yeah, so what we saw a lot this year is a big increase in automation, and I'm talking from an attacker's point of view; I think we talked about this a little bit earlier in the year. What we've been seeing is the use of frameworks to enhance the day-to-day cycles that cybercriminals and attackers go through, to make their criminal operations that much more efficient, a well-oiled machine. So we're seeing toolkits that take things within the attack cycle and attack chain, such as reconnaissance, penetration, exploitation, getting into systems, and make them that much quicker, so that the window to attack, the time to breach, has been shrinking thanks to a lot of these crime kits and services that are offered out there. Now, one other comment on this, or another question that I might have: so speed is becoming an issue, but also, as digital business takes on a larger portion of overall business activities, the risks and costs of doing things wrong are also going up, if I've got that right.
Yeah, absolutely, for sure. It's one of those things where the longer a cybercriminal has a foothold in your system, or has the opportunity to move laterally and gain access to other systems, maybe your IoT or other platforms, the higher the risk: the deeper they are within an attack cycle, the higher the risk, and because these automated toolkits are allowing them to facilitate that, they act as a catalyst, really. They can get in and get out that much quicker, so the risk is much higher. And when we talk about risk, we're talking about things like intellectual property exfiltration and client information, the sort of stuff that can be quite damaging to organizations. So with this new foundation, speed is becoming an increasingly important feature of how we think about security, and the risks are becoming greater because digital assets are being recognized as more valuable. Why don't you take us through some of Fortinet's predictions on some of the new threats; how is the threat landscape changing? Yeah, so as I said, we've already seen this shift in automation in what I would call the basics: knowing the target and trying to break into that target. When it comes to breaking into the target, cybercriminals right now are following the path of least resistance: they're finding easy ways to get into IoT devices and into other systems. In our world, when we talk about penetration, or breaking into systems, it's through zero days. The idea of a zero day is essentially a cyber weapon; there are movies in Hollywood that have been made off of this. You look at attacks like Stuxnet in the past: they all used zero-day vulnerabilities to get into systems. So one of the predictions we're seeing is that cybercriminals are going to start to use
artificial intelligence. Right, so we talk about machine learning models and artificial intelligence; the prediction is that attackers will use them to actually find these zero days for them. In the world of an attacker, to find a zero day they have to do a practice called fuzzing, and fuzzing is basically trying to trip up computer code. You're throwing unverified parameters at it, you're throwing unanticipated sequences into code parameters and input validation and so forth, to the point that the code crashes, and from an attacker's point of view, that's when you can take control of that code. This is how finding ways into systems, cyber weapons into systems, works. It typically takes a lot of resources, a lot of cycles, a lot of intelligence, and a lot of time for the discovery; we can be talking a month or longer. So one of the predictions we're hitting on is that cybercriminals are going to start to use artificial intelligence fuzzing, or AIF as I call it, to have AI do all of that intelligent work for them: basically having a system that will find these gateways, if you will, these new vulnerabilities into systems. So, sustained use of AIF to probe code so that they can find vulnerabilities that can then be exploited? Yeah, absolutely. And when it comes to the world of hacking, fuzzing is one of the toughest things to do; it is the reason that zero days are worth so much money. They can fetch hundreds of thousands of dollars on the darknet, in the cybercriminal economy, because they take a lot of resources, a lot of intelligence, and a lot of effort, not only to find the vulnerability but then to actively attack it and exploit it; there are two phases to that. So the idea is that, by using the power of artificial intelligence, cybercriminals will start to leverage that and harness it in a bad way to be able to not
only discover you know these vulnerabilities but also create that weapon right create the exploit so that they can find more you know more holes if you will or more angles to be able to get into systems now another one is that virtualization is happening in you know what the good guys as we virtualized resources but is it also being exploited or does it have the potential be exploited by the bad guys as well especially in a swarming approach yeah virtualization for sure absolutely so the thing about virtualization too is you often have a lot of virtualization being centralizes especially when we talk about cloud right so you have a lot of potential digital assets you know valuable digital assets that could be physically located in one area so when it comes to using things like artificial intelligence fuzzing not only can it be used to find different vulnerabilities or ways into systems it can also be combined with something like I know we've talked about the const that's warm before so using you know multiple intelligence infected pieces of code that can actually try to break into other virtual resources as well so virtualization asked definitely it because of in some cases close proximity if you will between hypervisors and things like this it's also something of concern for sure now there is a difference between AI fai fuzzing and machine learning talk to us a little bit about some of the trends or some of the predictions that pertain to the advancement of machine learning and how bad guys are going to exploit that sure so machine learning is a core element that is used by artificial intelligence right if you think of artificial intelligence it's a larger term it can be used to do intelligent things but it can only make those decisions based off of a knowledge base right and that's where machine learning comes into place machine learning is it's data it's processing and it's time right so there's various machine learning learning models that are put in place it 
can be used from everything from autonomous vehicles to speech recognition to certainly cybersecurity and defense that we can talk about but you know the other part that we're talking about in terms of reductions is that it can be used like any tool by the bad guys so the idea is that machine learning can be used to actually study code you know from from a black hat attacker point of view to studying weaknesses in code and that's the idea of artificial intelligence fuzzing is that machine learning is used to find software flaws it finds the weak spots in code and then it actually takes those sweet spots and it starts probing starts trying to attack a crisis you know to make the code crash and then when it actually finds that it can crash the code and that it can try to take advantage of that that's where the artificial intelligence comes in right so the AI engine says hey I learned that this piece of software or this attack target has these weak pieces of code in it that's for the AI model so the I fuzzy comes into place to say how can I actually take advantage how can i exploit this right so that's where the AI trussing comes into play so we've got some predictions about how black hats and bad guys are going to use AI and related technologies to find new vulnerabilities new ways of exploiting things and interacting new types of value out of a business what are the white hats got going for them what are their some of the predictions on some of the new classes of defense that we're going to be able to put to counter some of these new classes of attacks yeah so that's that's you know that's honestly some of the good news I believe you know it's always been an armor an arms race between the bad guys and the good guys that's been going on for decades in terms of cybersecurity often you know the the bad guys are in a favorable position because they can do a million things wrong and they don't care right from the good guys standpoint we can do a million things right one 
thing wrong and that's an issue so we have to be extra diligent and careful with what we do but with that said you know as an example of 49 we've deployed our forty guard AI right so this is six years in the making six years using machine learning using you know precise models to get higher accuracy low false positives to deploy this at reduction so you know when it comes to the defensive mechanism I really think that we're in the drivers position quite frankly we have better technology than the Wild West that they have out on the bad guys side you know from an organization point of view how do you start combating this sort of onslaught of automation in AI from from the bad guys side well you gotta fight fire with fire right and what I mean by that is you have to have an intelligent security system you know perimeter based firewalls and gateways they don't cut it anymore right you need threat intelligence you need systems that are able to orchestrate and automate together so in different security products and in your security stack or a security fabric that can talk to each other you know share intelligence and then actually automate that so I'm talking about things like creating automated security policies based off of you know threat intelligence finding that a potential threat is trying to get into your network that sort of speed through that integration on the defensive side that intelligence speed is is is the key for it I mean without that any organization is gonna be losing the arms race and I think one of the things that is also happening is we're seeing a greater willingness perhaps not to share data but to share information about the bad things that are happening and I know that fort and it's been something at the vanguard of ensuring that there's even better clearing for this information and then driving that back into code that actually further automates how customers respond to things if I got that right yeah you hit a dead-on absolutely you know that 
is one of the key things that were focused on is that we realized we can't win this war alone right nobody can on a single point of view so we're doing things like interoperating with security partners we have a fabric ready program as an example we're doing a lot of work in the industry working with as an example Interpol and law enforcement to try to do attribution but though the whole endgame what we're trying to do is to the strategy is to try to make it more expensive for cyber criminals to operate so we obviously do that as a vendor you know through good technology our security fabric I integrated holistic security fabric and approach to be able to make it tougher you know for attackers to get into systems but at the same time you know we're working with law enforcement to find out who these guys are to go after attribution prosecution cut off the head of the snake as I call it right to try to hit cyber criminal organizations where it hurts we're also doing things across vendor in the industry like cyber threat Alliance so you know forty knots a founding member of the cyber threat Alliance we're working with other security vendors to actually share real time information is that speed you know message that we're talking about earlier to share real time information so that each member can take that information and put it into you something actionable right in our case when we get intelligence from other vendors in the cyber threat Alliance as an example we're putting that into our security fabric to protect our customers in new real-time so in sum we're talking about a greater value from being attacked being met with a greater and more cooperative use of technology and process to counter those attacks all right yeah absolutely so open collaboration unified collaboration is is definitely key when it comes to that as well you know the other thing like I said is is it's the is the technology piece you know having integration another thing from the defensive side 
too which is becoming more of a topic recently is deception deception techniques this is a fascinating area to me right because the idea of deception is the way it sounds instead of to deceive criminals when they're coming knocking on your door into your network so it's really what I call like the the house of a thousand mirrors right so they get into your network and they think they're going to your data store but is it really your data store right it's like it's there's one right target and a thousand wrong targets it's it's a it's a defensive strategy that organizations can play to try to trip up cyber criminals right it makes them slower it makes them more inaccurate it makes them go on the defensive and back to the drawing board which is something absolutely I think we have to do so it's very interesting promising you know technology moving forward in 2019 to essentially fight back against the cyber criminals and to make it more expensive to get access to whatever it is that they want Derek max Lilly yeah Derrick McKey chief of security insights and global threat Alliance this is for net thanks once again for being on the cube it's a pleasure anytime look forward to the next chat and from Peter Burroughs and all of us here at the cube in Palo Alto thank you very much for watching this cube conversation until next time you
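The fuzzing practice Derek describes — throwing unanticipated input at code until it crashes — can be sketched as a toy example. This is a hypothetical illustration, not Fortinet's tooling: `fragile_parser` is a made-up target with a planted flaw, and `naive_fuzz` is the simplest possible random fuzzer. Notably, the naive fuzzer never finds the five-byte trigger in 50,000 tries, which is exactly the cost problem that makes ML-guided fuzzing attractive to attackers.

```python
import random

def fragile_parser(data: bytes) -> int:
    # Toy target with a hidden flaw: it "crashes" on inputs starting with b"MAGIC".
    if data[:5] == b"MAGIC":
        raise RuntimeError("parser crash")  # stand-in for a memory-safety bug
    return len(data)

def naive_fuzz(target, trials=50_000, max_len=8, seed=7):
    """Throw random byte strings at `target` and collect any that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        try:
            target(data)
        except Exception:
            crashes.append(data)
    return crashes

crashes = naive_fuzz(fragile_parser)
print(f"crashing inputs found: {len(crashes)}")
```

Real fuzzers (and the AI-assisted fuzzing predicted here) replace the blind random generator with coverage feedback or learned models of where the weak spots are, which is what collapses the "months or longer" discovery time.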

Published Date : Nov 16 2018

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Peter Burris | PERSON | 0.99+ |
| Derrick McKey | PERSON | 0.99+ |
| Derek Manky | PERSON | 0.99+ |
| 2019 | DATE | 0.99+ |
| Derick | PERSON | 0.99+ |
| six years | QUANTITY | 0.99+ |
| Peter Burroughs | PERSON | 0.99+ |
| Palo Alto | LOCATION | 0.99+ |
| Eric manki | PERSON | 0.99+ |
| November 2018 | DATE | 0.99+ |
| each member | QUANTITY | 0.99+ |
| Derek max Lilly | PERSON | 0.99+ |
| hundreds of thousands of dollars | QUANTITY | 0.99+ |
| cyber threat Alliance | ORGANIZATION | 0.98+ |
| today | DATE | 0.97+ |
| two phases | QUANTITY | 0.97+ |
| Palo Alto California | LOCATION | 0.97+ |
| cyber threat Alliance | ORGANIZATION | 0.97+ |
| zero days | QUANTITY | 0.97+ |
| one right target | QUANTITY | 0.97+ |
| forty knots | QUANTITY | 0.97+ |
| zero days | QUANTITY | 0.97+ |
| Hollywood | ORGANIZATION | 0.97+ |
| one | QUANTITY | 0.97+ |
| Derek | PERSON | 0.97+ |
| decades | QUANTITY | 0.96+ |
| zero day | QUANTITY | 0.96+ |
| zero days | QUANTITY | 0.95+ |
| a thousand wrong targets | QUANTITY | 0.95+ |
| zero day | QUANTITY | 0.95+ |
| a thousand mirrors | QUANTITY | 0.93+ |
| single point | QUANTITY | 0.93+ |
| Fortinet | ORGANIZATION | 0.9+ |
| one area | QUANTITY | 0.88+ |
| one thing | QUANTITY | 0.88+ |
| one of the key things | QUANTITY | 0.88+ |
| a million | QUANTITY | 0.87+ |
| one of the predictions | QUANTITY | 0.78+ |
| four | QUANTITY | 0.78+ |
| 49 | QUANTITY | 0.77+ |
| Fort Net Derek | ORGANIZATION | 0.76+ |
| lot | QUANTITY | 0.75+ |
| West | LOCATION | 0.75+ |
| forty guard | QUANTITY | 0.73+ |
| this year | DATE | 0.72+ |
| one of the predictions | QUANTITY | 0.7+ |
| million | QUANTITY | 0.7+ |
| global threat Alliance | ORGANIZATION | 0.7+ |
| one other | QUANTITY | 0.69+ |
| one of those | QUANTITY | 0.68+ |
| a lot of resource | QUANTITY | 0.68+ |
| Donets | ORGANIZATION | 0.59+ |
| earlier in the | DATE | 0.59+ |
| most important domains | QUANTITY | 0.54+ |
| things | QUANTITY | 0.49+ |
| resources | QUANTITY | 0.49+ |
| Wild | ORGANIZATION | 0.46+ |
| Stuxnet | PERSON | 0.45+ |
| Interpol | TITLE | 0.45+ |
| insights | ORGANIZATION | 0.43+ |
| Cube | ORGANIZATION | 0.42+ |

Apurva Davé, Sysdig | CUBEConversation, Sept 2018


 

(dramatic orchestral music) >> Hey, welcome back everybody. Jeff Frick, here, at theCUBE. We're at the Palo Alto studios taking a very short break in the middle of the crazy fall conference season. We'll be back on the road again next week. But we're excited to take an opportunity to take a breath. Again, meet new companies, have CUBE conversations here in the studio, and we're really excited to have our next guest. He's Apurva Dave, the CMO of Sysdig. Apurva, great to see you. >> Thanks, Jeff, thanks for having me here. >> Yea, welcome, happy Friday. >> Appreciate it, happy Friday, always worth it. >> So give us kind of the 101 on Sysdig. >> Yep, Sysdig is a really cool story. It is founded by a gentleman named Loris Degioanni. And, I think the geeks in your audience will probably know Loris in a heartbeat because he was one of the co-creators of a really famous open source project called Wireshark. It's at 20 million users worldwide, for network forensics, network visibility, troubleshooting, all that great stuff. And, way back when, in 2012, Loris realized what cloud and containers were doing to the market and how people build applications. And he stepped back and said, "We're going to need "a totally new way to monitor "and secure these applications." So he left all that Wireshark success behind, and he started another open source project, which eventually became Sysdig. >> Okay. >> Fast-forward to today. Millions of people are using the open source Sysdig and the sister project Sysdig Falco to monitor and secure these containerized applications. >> So what did Sysdig the company delineate itself from Sysdig the open source project? >> Well, you know, that's part of the challenge with open source, it's like part of your identity, right. Open source is who you are. 
And, what we've done is, we've taken Loris's vision and made it a reality, which is, using this open source technology and instrumentation, we can then build these enterprise class products on top for security monitoring and forensics at scales that the biggest banks in the world can use, governments can use, pharma, healthcare, insurance, all these large companies that need enterprise class products. All based on that same, original open source technology that Loris conceived so many years ago. >> So would you say, so the one that we see all the time and kind of use as a base for the open source model, you kind of, Hortonworks, it's really pure, open source Hadoop. Then you have, kind of, MapR, you know, it's kind of proprietary on top of Hadoop. And then you have Cloudera. It's kind of open core with a wrapper. I mean, how does the open piece fit within the other pieces that you guys provide? >> That's a really insightful question because Loris has always had a different model to open source, which is, you create these powerful open source projects that, on their own, will solve a particular problem or use case. For example, the initial Sysdig open source project is really good at forensics and troubleshooting. Sysdig Falco is really good at runtime container security. Those are useful in and of themselves. But then for enterprise class companies, you operate that at massive scale and simplicity. So we add powerful user interfaces, enterprise class management, auditing, security. We bundle that all on top. And that becomes this Cloud-Native intelligence platform that we sell to enterprise. >> And how do they buy that? >> You can, as a subscription model. You can use it either as software as a service, where we operate it for you, or you can use it as on-premise software, where we deliver the bits to you and you deploy it behind your firewall.
Both of those products are exactly the same functionally, and that's kind of the benefit we had as a younger company coming to market. We knew when we started, we'd need to deliver our software in both forms. >> Okay and then how does that map to, you know, Docker, probably the most broadly known container application, which rose and really disturbed everything a couple years ago. And then that's been disturbed by the next great thing, which is Kubernetes. So how do you guys fit in within those two really well-known pieces of the puzzle? >> Yeah, well you know, like we were talking about earlier, there's so much magic and stardust around Kubernetes and Docker and you just say it to an IT person anywhere and either they're working on Kubernetes, they're thinking about working on Kubernetes, or they're wondering when they can get to working on Kubernetes. The challenge becomes that, once the stardust wears off, and you realize that yeah, this thing is valuable, but there's a lot of work to actually implementing it and operationalizing it, that's when your customers realize that their entire life is going to be upended when they implement these new technologies and implement this new platform. So that's where Sysdig and other products come in. We want to help those customers actually operationalize that software. For us, that's solving the huge gaps around monitoring, security, network visibility, forensics, and so on. And, part of my goal in marketing, is to help the customers realize that they're going to need all these capabilities as they start moving to Kubernetes. >> Right, certainly, it's the hot topic. I mean, we were just at VMworld, we've been covering VMworld forever, and both Pat and Sanjay had Kubernetes as parts of their keynotes on day one and day two. So they're all in, as well, all time for Amazon, and it goes without saying with Google. >> Yeah, so it's funny is, we released initial support for Kubernetes, get this, back in 2015. 
And, this was the point where, basically the world hadn't yet really, they didn't really know what Kubernetes was. >> Unless they watched theCUBE. >> Unless they watched-- >> They had Craig McLuckie-- >> Okay, alright. >> On Google Cloud Platform Next 2014. I looked it up. >> Awesome. Very nice-- >> Told us, even the story of the ship wheel and everything. But you're right, I don't think that many people were there. It was at Mission Bay Conference Center, which is not where you would think a Google conference would be. It's a 400 person conference facility. >> Exactly, and I think this year, CubeCon is probably going to be 7,000 people. Shows you a little bit of the growth of this industry. But, even back in 2015, we kind of recognized that it wasn't just about containers, but it was about the microservices that you build on top of containers and how you control those containers. That's really going to change the way enterprises build software. And that's been a guiding principle for us, as we've built out the company and the products.
I think some companies deal with open source as a side project that gives engineers an outlet to do some fun, interesting things they wouldn't otherwise do. For a company like Sysdig, open source is core to what we do. We think of these two communities that we serve, the open source community and the enterprise community. But it's all based on the same technology. And our job in this mix is to facilitate the activity going on in both of these communities in a way that's appropriate for how those communities want to operate. I think most people understand how an enterprise, you know, a commercial enterprise community wants to operate. They want Sysdig to have a roadmap and deliver on that roadmap, and that's all well and good. That open source element is really kind of new and challenging. Our model has always been that the core open source technology fuels our enterprise business, and what we need to do is put as much energy as we can into the open source, such that the community is inspired to interact with us, experiment, and give back. And if we do it right, two things happen. We see massive contribution from the community, the community might even take over our open source projects. We see that happening with Sysdig Falco right now. For us, our job then is to sit back, understand how that community is innovating, and how we can add value on top of it. So coming back all the way to your question around engineers and what they should be doing, step one, always contribute to the open source. Make our open source better, so that the community is inspired to interact with us. And then from there, we'll leverage all that goodness in a way that's right for our enterprise community. >> So really getting in almost like a flywheel effect. Just investing in that core flywheel and then spin off all kinds of great stuff. 
>> You got it, you know, my motto's always been like, if the open source is this thing off to the side, that you're wondering, oh, should our engineers be working on it, or shouldn't they, it's going to be a tough model to sustain long-term. There has to be an integrated value to your overall organization and you have to recognize that. And then, resource it appropriately. >> Right, so let's kind of come up to the present. You guys just had a big round of funding, congratulations. >> Yep, thank you. >> So you got some new cash in the bank. So what's next for Sysdig? Now you got this new powder, if you will, so what's on the horizon, where are you guys going next? Where are you taking the company forward? >> Great question, so, we just raised a $68.5 million Series D round, led by Insight Venture Partners with follow-on investment from our previous investors, Accel and Bain. 68.5 doesn't happen overnight. It's certainly been a set of wins since Loris first introduced those open source projects to releasing our monitoring product, adding our security product. In fact, earlier this year, we brought on a very experienced CEO, Suresh Vasudevan, who was the previous CEO of Nimble Storage, as a partner to Loris, so that they could grow the business together. Come this summer, we're having massive success. It feels like we've hit a hockey stick late last year, where we signed up some of the largest investment banks in the world, large government organizations, Fortune 500s, all the magic is happening that you hope for, and all of a sudden, we found these investors knocking at our door, we weren't actually even out looking for funds, and we ended up with an over-subscribed round. >> Right. >> So our next goal, like what are you going to do with all that money, is first of all, we're moving to a phase where, it's not just about the product, but it's about the overall experience with Sysdig the company.
We're really building that out, so that every enterprise has an incredible experience with our product and the company itself, so that they're just, you know, amazed with what Sysdig did to help make Cloud-Native a reality. >> That's great and you got to bring in an extra investor, like in a crunch phase, you guys haven't had that many investors in the company, relatively a small number of participants. >> It's been very tightly held, and we like it that way. We want to keep out community small and tight. >> Well, Apurva, exciting times, and I'm sure you're excited to have some of that money to spend on marketing going forward. >> Well, we'll do our part. >> Well, thanks for sharing your story, and have a great weekend. I'm happy it's Friday, I'm sure you are, too. >> Thanks so much, have a great weekend. Thanks for having me. >> He's Apurva, I'm Jeff, you're watching theCUBE. It's theCUBE conversation in Palo Alto, we'll be back on the road next week, so keep on watching. See you next time. (dramatic orchestral music)

Published Date : Sep 28 2018

SUMMARY :

in the middle of the crazy fall conference season. And he stepped back and said, "We're going to need and the sister project Sysdig Falco that the biggest banks in the world can use, So would you say, so the one that we see all the time For example, the initial Sysdig open source project and you deploy it behind your firewall. Okay and then how does that map to, you know, and Docker and you just say it to an IT person anywhere Right, certainly, it's the hot topic. Yeah, so it's funny is, we released initial support I looked it up. which is not where you would think That's really going to change the way and you guys are foundationally built on that, Make our open source better, so that the community and then spin off all kinds of great stuff. if the open source is this thing off to the side, Right, so let's kind of come up to the present. So you got some new cash in the bank. all the magic is happening that you hope for, so that they're just, you know, amazed with what Sysdig haven't had that many investors in the company, It's been very tightly held, and we like it that way. to have some of that money I'm happy it's Friday, I'm sure you are, too. Thanks so much, have a great weekend. See you next time.

SENTIMENT ANALYSIS :

ENTITIES

| Entity | Category | Confidence |
| --- | --- | --- |
| Suresh Vasudevan | PERSON | 0.99+ |
| Jeff Frick | PERSON | 0.99+ |
| Jeff | PERSON | 0.99+ |
| Loris Degioanni | PERSON | 0.99+ |
| Loris | PERSON | 0.99+ |
| 2012 | DATE | 0.99+ |
| 2015 | DATE | 0.99+ |
| Nimble Storage | ORGANIZATION | 0.99+ |
| Sysdig | ORGANIZATION | 0.99+ |
| Sept 2018 | DATE | 0.99+ |
| Sanjay | PERSON | 0.99+ |
| Palo Alto | LOCATION | 0.99+ |
| Pat | PERSON | 0.99+ |
| $68.5 million | QUANTITY | 0.99+ |
| Amazon | ORGANIZATION | 0.99+ |
| 400 person | QUANTITY | 0.99+ |
| Kubernetes | TITLE | 0.99+ |
| Accel | ORGANIZATION | 0.99+ |
| Apurva | PERSON | 0.99+ |
| Craig Mcklecky | PERSON | 0.99+ |
| next week | DATE | 0.99+ |
| Google | ORGANIZATION | 0.99+ |
| both | QUANTITY | 0.99+ |
| VMworld | ORGANIZATION | 0.99+ |
| Wireshark | TITLE | 0.99+ |
| Hadoop | TITLE | 0.99+ |
| 7,000 people | QUANTITY | 0.99+ |
| Apurva Davé | PERSON | 0.99+ |
| Both | QUANTITY | 0.99+ |
| 20 million users | QUANTITY | 0.99+ |
| Inside Ventures | ORGANIZATION | 0.98+ |
| Sysdig | PERSON | 0.98+ |
| CubeCon | EVENT | 0.98+ |
| two things | QUANTITY | 0.98+ |
| Friday | DATE | 0.98+ |
| two communities | QUANTITY | 0.98+ |
| Mission Bay Conference Center | LOCATION | 0.97+ |
| day one | QUANTITY | 0.97+ |
| Docker | TITLE | 0.97+ |
| both forms | QUANTITY | 0.97+ |
| day two | QUANTITY | 0.97+ |
| Bane | ORGANIZATION | 0.97+ |
| earlier this year | DATE | 0.96+ |
| one | QUANTITY | 0.96+ |
| CUBE | ORGANIZATION | 0.96+ |
| first | QUANTITY | 0.96+ |
| Apurva Dave | PERSON | 0.95+ |
| Fortune 500s | ORGANIZATION | 0.94+ |
| two resource | QUANTITY | 0.93+ |
| two really well-known pieces | QUANTITY | 0.92+ |
| late last year | DATE | 0.92+ |
| couple years ago | DATE | 0.9+ |
| this summer | DATE | 0.9+ |
| Cloudera | TITLE | 0.89+ |
| Series D | OTHER | 0.88+ |
| today | DATE | 0.87+ |
| Millions of people | QUANTITY | 0.87+ |
| step one | QUANTITY | 0.87+ |
| this year | DATE | 0.87+ |
| 68.5 | QUANTITY | 0.86+ |
| Hortonworks | ORGANIZATION | 0.84+ |
| years | DATE | 0.79+ |
| Sysdig Falco | ORGANIZATION | 0.79+ |
| Loris | ORGANIZATION | 0.79+ |

Dave Rensin, Google | Google Cloud Next 2018


 

>> Live from San Francisco, it's The Cube. Covering Google Cloud Next 2018, brought to you by Google Cloud and its ecosystem partners. >> Welcome back everyone, it's The Cube live in San Francisco. At Google Cloud's big event, Next 18, GoogleNext18 is the hashtag. I'm John Furrier with Jeff Frick, our next guest, Dave Rensin, director of CRE and network capacity at Google. CRE stands for Customer Reliability Engineering, not to be confused with SRE, which is Google's heralded Site Reliability Engineering program, a category changer in the industry. Dave, great to have you on. Thanks for coming on. >> Thank you so much for having me. >> So we had a meeting a couple months ago and I was just so impressed by how much thought and engineering and business operations have been built around Google's infrastructure. It's a fascinating case study in the history of computing, you guys obviously power yourselves and the Cloud is just massive. You've got the Site Reliability Engineer concept that now is, I won't say a boilerplate, but it's certainly the guiding architecture for how enterprises are going to start to operate. Take a minute to explain the SRE and the CRE concept within Google. I think it's super important that you guys, again, pioneered something pretty amazing with the SRE program. >> Well, I mean, like everything it was just formed out of necessity for us. We did the calculation 12 or 13 years ago, I think. We sat down with a piece of paper and we said, well, the number of people we need to run our systems scales linearly with the number of machines, which scales linearly with the number of users, and the complexity of the stuff you're doing. Alright, carry the two, divide by six, plot the line. In ten years, now this is 13 or 14 years ago, we're going to need one million humans to run Google. And that was at the growth and complexity of 10 years ago or 12 years ago. >> Yeah, Search. (laughs) >> Search, right?
We didn't have Android, we didn't have Cloud, we didn't have Assistant, we didn't have any of these things. We were like, well that's not going to work. We're going to have to do something different and so that's kind of where SRE came from. It's like, how do we automate, the basic philosophy is simple, give to the machines all the things machines can do. And keep for the humans all the things that require human judgment. And that's how we get to a place where like 2,500 SREs run all of Google. >> And that's massive and there's billions and billions of users. >> Yeah. >> Again, I think this is super important because at that time it was a tell sign for you guys to wake up and go, well I can't get a million humans. But it's now becoming, in my opinion, what this enterprise is going through in this digital transformation, whatever we call it these days, consumer's agent of IT now it's digital trasfor-- Whatever it is, the role of the human-machine interaction is now changing, people need to do more. They can collect more data than ever before. It doesn't cost them that much to collect data. >> Yeah. >> We just heard from the BigQuery guys, some amazing stuff happening. So now enterprises are almost going through the same changeover that you guys had to go through. And this I now super important because now you have the tooling and the scale that Google has. And so it's almost like it's a level up fast. So, how does an enterprise become SRE like, quickly, to take advantage of the Cloud? >> So, you know, I would like to say this is all sort of a deliberate march of a multi-year plan. But it wasn't, it was a little accidental. Starting two or three years ago, companies were asking us, they were saying, we're getting mired in toil. Like, we're not being able to innovate because we're spending all of our budget and effort just running the things and turning the crank. How do you have billions of users and not have this problem? We said, oh we use this thing called SRE. 
And they're like, please use more words. And so we wrote a book. Right? And we expected maybe 20 people would read the book, and it was fine. And we didn't do it for any other reason other than that seemed like a very scalable way to tell people the words. And then it all just kind of exploded. We didn't expect that it was going to be true and so a couple of years ago we said, well, maybe we should formalize our interactions of, we should go out proactively and teach every enterprise we can how to do this and really work with them, and build up muscle memory. And that's where CRE comes from. That's my little corner of SRE. It's the part of SRE that, instead of being inward focused, we point out to companies. And our goal is that every firm from five to 50 thousand can follow these principles. And they can. We know they can do it. And it's not as hard as they think. The funny thing about enterprises is they have this inferiority complex, like they've been told for years by Silicon Valley firms in sort of this derogatory way that, you're just an enterprise. We're the innovate-- That's-- >> Buy our stuff. Buy our software. Buy IT.
The next level is how do you scale it, how do I get more apps, how am I going to drive more revenue, not just reduce the cost? But now you've got operators, now I have to operate things. So I think the persona of what operating something means, what you guys have hit with SRE, and CRE is part of that program, and that's really I think the aha moment. So that's where I see, and so how does someone read the book, put it in practice? Is it a cultural shift? Is it a reorganization? What are you guys seeing? What are some of the successes that you guys have been involved in? >> The biggest way to fail at doing SRE is to try to do all of it at once. Don't do that. There are a few basic principles that, if you adhere to them, the rest of it just comes organically at a pace that makes sense for your business. The easiest thing to think of is simply-- If I had to distill it down to a few simple things, it's just this. Any system involving people is going to have errors. So any goal you have that assumes perfection, 100% uptime, 100% customer satisfaction, zero errors, that kind of thing, is a lie. You're lying to yourself, you're lying to your customers. It's not just unrealistic, it's, in a way, kind of immoral. So you've got to embrace that. And then that difference between perfection and the closeness to perfection that your customers really need, cuz they don't really need perfection, should be just a budget. We call it the error budget. Go spend the budget, because above that line your customers are indifferent, they don't care. And that unlocks innovation. >> So this is important, I want to just make sure I slow down on this, error budget is a concept that you're talking about. Explain that, because this is, I think, interesting. Because you're saying it's BS that there's no errors, because there's always errors, right? >> Sure.
>> So you just got to factor it in, and how you deal with them is-- But explain this error budget, because this operating philosophy of saying deal with errors, so explain this error budget concept. >> It comes from this observation, which is really fascinating. If you plot reliability and customer satisfaction on a graph, what you will find is, for a while as your reliability goes up, your customer satisfaction goes up. Fantastic. And then there's a point, a magic line, after which you hit this really deep knee. And what you find is if you are much under that line your customers are angry, like pitchforks, torches, flipping cars, angry. And if you operate much above that line they are indifferent. Because the network they connect with is less reliable than you. Or the phone they're using is less reliable than you. Or they're doing other things in their day than using your system, right? And so, there's a magic line, actually there's a term for it, it's called an SLO, Service Level Objective. And the difference between perfection, 100%, and the line you need, which is very business specific, we say treat as a budget. If you overspend your budget your customers aren't happy, cuz you're less reliable than they need. But if you consistently underspend your budget, because they're indifferent to the change and because it is exponentially more expensive to make incremental improvements, that's literally resources you're wasting. You're wasting the one resource you can never get back, which is time. Spend it on innovation. And just that mental shift, that we don't have to be perfect, lets people do open and honest, blameless postmortems. It lets them embrace risk in innovation. We go out of our way at Google to find people who accidentally broke something, took responsibility for it, redesigned the system so that the next unlucky person couldn't break it the same way, and then we promote them and celebrate them.
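The error-budget arithmetic Rensin describes can be sketched in a few lines. This is an illustrative example rather than anything from the interview itself; the function name and the 30-day window are assumptions made for the sketch.

```python
# Illustrative sketch (not from the interview): the error budget is the
# gap between perfection (100%) and the SLO your users actually need,
# expressed here as minutes of allowed unreliability per window.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of permitted unavailability for a given SLO over the window."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

# A 99.9% SLO over 30 days leaves roughly 43.2 minutes to "spend" on
# risky launches and experiments; consistently underspending it is,
# in Rensin's framing, wasted time you could have used to innovate.
budget = error_budget_minutes(0.999)
print(round(budget, 1))
```

The point of expressing the budget in minutes is that it turns "how reliable should we be?" into a concrete allowance a team can deliberately spend.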
>> So you push the error budget but then it's basically a way to do some experimentation, to do some innovation >> Safely. >> Safely. And what you're saying is, obviously the line of unhappy customers, it's like Gmail. When Gmail breaks, people are like, the world freaks out, right? But, I'm happy with Gmail right now. It's working. >> But here's the thing, Gmail breaks very, very little, very, very often. >> I never noticed it breaking. >> Will you notice the difference between 10 milliseconds of delivery time? No, of course not. Now, would you notice an hour or whatever? There's a line, you would for sure notice. >> That's the SLO line. >> That's exactly right. >> You're also saying that if you try to push above that, it costs more and there's not >> And you don't care >> An incremental benefit >> That's right. >> It doesn't affect my satisfaction. >> Yeah, you don't care. >> I'm at nirvana, now I'm happy. >> Yeah. >> Okay, and so what does that mean now for putting things in practice? What's the ideal error budget, that's an SLO? Is that part of the objective? >> Well that's part of the work to do as a business. And that's part of what my team does, is helping you figure out, what is the SLO, what is the error budget that makes sense for you for this application? And it's different. A medical device manufacturer is going to have a different SLO than a bank or a retailer, right? And the shapes are different. >> And it's interesting, we hear SLA, the Service Level Agreement, it's an old term >> Different things. >> Different things, here the objective, if I get this right, is not just about speeds and feeds. There's also qualitative user experience objectives, right? So, am I getting that right? >> Very much so. SLOs and SLAs get confused a lot because they share two letters. But they don't mean anywhere near the same thing. An SLA is a legal agreement. It's a contract with your user that describes a penalty if you don't meet a certain performance.
Lawyers, and sometimes sales or marketing people, drive SLAs. SLOs are different things, driven by engineers. They are quantitative measures of your users' happiness right now. And exactly to your point, it's always from the user's perspective. Like, your user does not care if the CPU in your fleet spiked. Or the memory usage went up x. They care, did my mail delivery slow down? Or is my load balancer not serving things? So, focus from your user backwards into your systems and then you get much saner things to track. >> Dave, great conversation. I love the innovation, I love the operating philosophy, cuz you're really nailing it in terms of, you want to make people happy but you're also pushing the envelope. You want to get these error budgets so we can experiment and learn, and not repeat the same mistake. That sounds like automation to me. But I want you to take a minute to explain, what SRE, that's an inward-facing thing for Google, you are called a CRE, Customer Reliability Engineer. Explain what that is because I heard Diane Greene saying, we're taking a vertical focus. She mentioned healthcare. Seems like Google is starting to get in, and applying a lot of resources, to the field, customers. What is a CRE? What does that mean? How is that a part of SRE? Explain that. >> So a couple of years ago, when I was first hired at Google I was hired to build and run Cloud support. And one of the things I noticed, which you notice when you talk to customers a lot, is, you know, the industry's done a really fabulous job of telling people how to get to Cloud. I used to work at Amazon. Amazon does a fantastic job! Telling people, how do you get to Cloud? How do you build a thing? But we're awful, as an industry, about telling them how to live there. How do you run it? Cuz it's different running a thing in a Cloud than it is running it On-Prem. And you find that's the cause of a lot of friction for people.
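Rensin's point about measuring from the user's perspective can be made concrete with a toy metric. This is a hedged illustration, not anything from the interview: the mail-delivery scenario, threshold, and sample data are all invented, but the shape matches his advice to track what users experience rather than internal signals like CPU.

```python
# Hedged illustration: an SLO measured from the user's side -- the
# fraction of mail deliveries completing within a "fast enough"
# threshold -- rather than internal signals like CPU or memory spikes.

def slo_compliance(delivery_times_s, threshold_s=300.0):
    """Fraction of deliveries a user would consider acceptable."""
    good = sum(1 for t in delivery_times_s if t <= threshold_s)
    return good / len(delivery_times_s)

# Nine of these ten sample deliveries finish within five minutes,
# so measured compliance is 0.9 against this user-facing objective.
samples = [12, 45, 30, 600, 20, 15, 8, 90, 110, 42]
print(slo_compliance(samples))
```

Comparing that measured fraction against the SLO target, rather than watching fleet CPU, is what "focus from your user backwards into your systems" looks like in practice.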
Not that they built it wrong, but they're just operating it in a way that's not quite compatible. It's a few degrees off. And so we have this notion of, well, we know how to operate these things at scale, that's what SRE is. What if, what if, we did a crazy thing? We took some of our SREs and instead of pointing them in at our production systems, we pointed them out at customers? Like what if we genetically screened our SREs for, can talk to human, instead of can talk to machine? Which is what you optimize for when you hire an engineer. And so we started CRE, it's this part of our SRE org that we point outwards to customers. And our job is to walk that path with you and really do it to get like-- sometimes we go so far as even to share a pager with you. And really get you to that place where your operations look a lot like we're talking that same language. >> It's custom too, you're looking at their environment. >> Oh yeah, it's bespoke. And then we also try to do scale things. We did the first SRE book. At the show just two days ago we launched the companion volume to the book, which is like-- cheap plug segment, where it's the implementation details. The first book's sort of a set of principles, these are the implementation details. Anything we can do to close that gap, I don't know if I ever told you the story, but when I was a little kid, when I was like six, like 1978, my dad, who's always loved technology, decided he was going to buy a personal computer. So he went to the largest retailer of personal computers in North America, Macy's in 1978, (laughs) and he came home with two things. He came home with a huge box and a human named Fred. And Fred the human unpacked the big box and set up the monitor, and the tape drive, and the keyboard, and told us about hardware and software and booting up, because who knew any of these things in 1978? And it's a funny story that you needed a human named Fred. My view is, I want to close the gap so that CREs are the Freds.
Like, in a few years it'll be funny that you would ever need humans, from Google or anyone else, to help you learn how-- >> It's really helping people operate their new environment as a whole. It's a new first generation problem. >> Yeah. >> Essentially. Well, Dave, great stuff. Final question, I want to get your thoughts. Great that we can have this conversation. You should come to the studio and go deeper on this, I think it's a super important, and new, role with SREs and CREs. But the show here, if you zoom out and look at Google Cloud, look down on the stage of what's going on this week, what's the most important story that should be told that's coming out of Google Cloud? Across all the announcements, what's the most important thing that people should be aware of? >> Wow, I have a definite set of biases, I won't lie. To me, the three most exciting announcements were GKE On-Prem, the idea that managed Kubernetes is something you can actually run in your own environment. People have been saying for years that hybrid wasn't really a thing. Hybrid's a thing and it's going to be a thing for a long time, especially in enterprises. That's one. I think the introduction of machine learning to BigQuery, like anything we can do to bring those machine learning tools into these petabytes-- I mean, you mentioned it earlier. We are now collecting so much data, not only can we not, as companies, manage it, we can't even hire enough humans to figure out the right questions. So that's a big thing. And then, selfishly, in my own view of it because of reliability, the idea that Stackdriver will let you set up SLO dashboards and SLO alerting, to me that's a big win too. Those are my top three. >> Dave, great to have you on. Our SLO at The Cube is to bring the best content we possibly can, the most interviews at an event, and get the data and share that with you live. It's The Cube here at Google Cloud Next 18, I'm John Furrier with Jeff Frick.
Stay with us, we've got more great content coming. We'll be right back after this short break.

Published Date : Jul 26 2018

Pat Wadors, ServiceNow & Patricia Tourigny, Magellan Health | ServiceNow Knowledge18


 

(techno music) >> Narrator: Live from Las Vegas, it's the Cube. Covering Service Now Knowledge 2018. Brought to you by Service Now. >> Welcome back to the Cube's coverage of Service Now Knowledge 18 here in Las Vegas, Nevada. I'm your host Rebecca Knight. I'm joined by Pat Wadors. She is the Chief Talent Officer of Service Now, and Pat Tourigny, who is the Senior Vice President of HR Global Shared Services at Magellan Health. Pat and Pat, thanks so much for coming on the show. >> Pat Wadors: Thank you for having us. We're excited. >> Pat Tourigny: It's so great to be here, Rebecca, thank you. >> Rebecca: Well you were both on the main stage this morning talking about Magellan Health's Service Now journey. We started talking about a personal health scare that you had, Pat, that really changed the way you think about the world of work, and the employers' role in that. Can you tell our viewers a little more about it? >> Pat: I'd be happy to, Rebecca. So, obviously I had been working and had taken some time off to start and raise my family. And when I went back to work I started to feel unwell. And it took about two and a half years for me to finally get an answer. I had searched for many doctors, et cetera. But literally one day I was rushed to a hospital emergency room. After a few days I was diagnosed with stage three B colon cancer, and I was told I had probably about a three percent survival chance. So at that time I faced four years of surgery, and hospitalizations, and chemo and radiation. And of course during all this time you're hearing the probable outcomes and the statistics. But what I truly focused on was my purpose. Which was my family. I had two small children and they needed me, and I needed to be there for them. And so I learned a lot of lessons during that time, and I think anyone who goes through that would say that. But the two things that have really stuck with me are knowing my purpose, and leading with empathy.
And it's truly changed how I live, how I work, how I interact with other people. And I think it's made a huge difference in what I do every day. >> Rebecca: What Pat was just talking about, the leading with empathy, and the finding your purpose, these are two of the things that are central to the culture at Service Now. Can you describe a little bit more for our viewers, how you view this sort of purpose driven life? >> Pat Wadors: For me and for the company, it's as essential to our success as our customers. So I know that purpose driven companies outperform those that don't have a purpose. And I know from a talent brand, and how we recruit and retain talent, if their personal purpose is aligned with the company purpose, not only do you get higher engagement and higher productivity, but that impacts our customers. And they have higher engagement and higher sat. So it's great business. It's something that I think creates a competitive differentiation, and it's something that our employees seek in an employer. So it's just something that I totally believe in and so does our company. >> Rebecca: So talk a little bit about VERN. First of all, what does VERN stand for? >> Pat: Oh I love VERN. (laughing) >> Pat: Everyone loves VERN. VERN stands for the Virtual Employee Resource Network. And a couple things that I would probably want to say about that is number one, you don't see HR in there at all. Because it's about the employee. This is a way that we are helping our employees fundamentally change how they work and how they engage with us. The reason I think VERN works is our employees voted on that name. So we had a whole campaign to launch VERN, and we offered up four different names, and our employees voted. And when VERN won we created a VERN persona, and everything else that goes with that. And he's just become part of our team. >> Rebecca: So what does VERN do? >> Pat: Well VERN is really sort of the, it took the place of our call center.
VERN is a way for our employees to learn information, and answer their basic questions, and learn to work in new ways. And it helps, it's basically a consumerized HR product. If an employee can use Google or shop online, they can use VERN. It's very simple, it's easy and fun. And truly VERN has become a part of our team. So we don't have a call center anymore. We don't use email to answer questions. Our employees know that VERN is there for them twenty four seven.
Can you describe what that's like, from your company perspective, from talent management and HR, and how catering to these very different segments of people, whose comfort with technology is one thing, but also their phase of life. How do you do that? >> Pat: Well I think, honestly, there's this joyfulness, you used that word and I love that word, of how all these different generations really do work together and help one another. In a way we're all learning from each other. And we're not afraid to learn in front of each other. And that really makes a difference I think. And I think there's just this mutual respect of, we're all there to help each other and do the right thing for the company. And I think the empathy piece of it really comes across because, when you truly understand one another in a way that you care and you're showing that, it's not about age anymore or anything else, it's that we're all people working together trying to do our best work and we're there for each other. To me that's what it means. >> Pat: The only thing I would add to that is, when you look at consumerization of the enterprise, when you look at seamless, what they call frictionless, solutions, it demystifies the technology. So if you have the older generation going "I've not used a bot" or "I don't know what machine learning is," I'm like, can you type in your question? I can do that. And if I serve you knowledge bites that you can digest, that answers my question and move on with my life, that's a gift. And so I think that if you make it more human, if you make it more approachable, then every generation appreciates that. And I also know that from my studies and from working in the valley for a long time in tech, is that every generation wants the same thing. They want to be heard, they want to be appreciated, treated respectfully, and know that they can do their best work. That they matter. >> Rebecca: So Pat, you are relatively new to Service Now. You're from LinkedIn.
You are so committed to the company you dyed your hair to match the brand identity. What drew you to Service Now? >> Pat: I was a customer of Service Now while at LinkedIn. And my goldilocks is a growth company. I'm a builder. I love creating culture and leading through change. And I also love geeking out with my peeps in HR. And so Service Now has a talent place, they are helping HR solve problems, and I get to geek out with them. I get to meet people like Pat, and have a wonderful dinner and a great conversation. That feeds my soul. I don't think I am unique in the problems I'm facing, and I copy shamelessly. I'm trying to steal VERN from her. (Pat laughing) I think that's awesome, I want a VERN button. >> Pat: I'm going to get you one. >> Pat: And then the added sauce for me, where I fell in love, is when John Donahoe became the CEO and wanted my partnership to build an enduring, high performing, healthy company. And I'm like, sign me up. >> Rebecca: Talking about the culture of Service Now and Magellan Health, culture is so hard. It's just one of those things that, or maybe it's not, maybe I'm making it out to be, but when you have large companies, dispersed employees, it's sort of hard to always stay on message and to have everyone pulling in the same direction. How do you do it? What would you say you do at Magellan? I'm interested in how you do it at Service Now too. >> Pat: Want to go first? >> Pat: I'll take a stab. So, you've got to think about where you're going. So what's your purpose? I'm going back to purpose. How do you serve the customer? What are those four key milestones that matter? And repeat, and I say rinse, and then repeat. So everyone hears it. You know the top five goals in the company. And we talk about them at all hands, we refer to them in our internal portal, we talk about them, we measure them. We tell the employees this is what we wanted to do, this is what we did or didn't do. This is what we do next.
And we're as transparent as we possibly can be. And the magic comes when every employee can look up and say, I made that goal happen. And when they start seeing those dots connect, they can't wait to connect more dots. And that's when the journey starts accelerating. That's when you get more flywheel going in the organization, where what I do is actually impacting profit, impacting customer success, impacting joy. >> Rebecca: And taking some ownership of it. >> Pat: I agree. I think that when everyone sort of shares in that purpose, and they understand what they do, how it affects that, it makes a huge difference. But I also think as an organization from a leadership perspective, if you model the behavior that you're seeking, and you set your expectations really high for that, and, in a very sort of respectful way, when you see things that aren't right you say something about it, the culture does start to shift. And you start to build this feeling of we're there, we're together, we have each other's backs, we treat each other with dignity and respect, and honesty and openness, and you can really start to just shift it almost organically. >> Rebecca: Pat Tourigny, Pat Wadors, thanks so much for coming on the Cube. It was a great conversation. >> Pat: Oh thank you, Rebecca. It's been great. >> Pat: Thank you for having us. >> Rebecca: We'll have more with the Cube's live coverage of Service Now just after this. (techno music)

Published Date : May 9 2018

Eva Velasquez, Identity Theft Resource Center | Data Privacy Day 2018


 

>> Hey, welcome back everybody, Jeff Frick here with The Cube. We're at Data Privacy Day 2018, I still can't believe it's 2018, in downtown San Francisco, at LinkedIn's headquarters, the new headquarters, it's a beautiful building just down the road from the Salesforce building, from the new Moscone that's being done, there's a lot of exciting things going on in San Francisco, but that's not what we're here to talk about. We're here to talk about data privacy, and we're excited to have a return visit from last year's Cube alumni, she's Eva Velasquez, president and CEO, Identity Theft Resource Center. Great to see you again. >> Thank you for having me back. >> Absolutely, so it's been a year, what's been going on in the last year in your world? >> Well, you know, identity theft hasn't gone away >> Shoot. >> And data-- >> I thought you told me it was last time. >> I know, I wish, and in fact, unfortunately we just released our data breach information, and there was tremendous growth. It was a little over 1,000 the previous year, and over 1,500 data breaches... in 2017. >> We're almost immune, they're like every day. And it used to be like big news. Now it's like, not only was Yahoo breached at some level, which we heard about a while ago, but then we hear they were actually breached like 100%. >> There is some fatigue, but I can tell you that it's not as pervasive as you might think. Our call center had such a tremendous spike in calls during the Equifax breach. It was the largest number of calls we'd had in a month since we'd been measuring our call volume. So people were still very, very concerned. But a lot of us who are in this space are feeling, I think we may be feeling the fatigue more than your average consumer out there. Because for a lot of folks, this is really the first exposure to it. We're still having a lot of first exposures to a lot of these issues.
>> So the Equifax one is interesting, because most people don't have a direct relationship with Equifax, I don't think. I'm not a direct paying customer, I did not choose to do business with them. But as one of the two or three main reporting agencies, right, they've got data on everybody for their customers who are the banks, financial institutions. So how does that relationship get managed? >> Oh my gosh, there's so much meat there. There's so much meat there. Okay, so, while it feels like you don't have a direct relationship with the credit reporting agencies, you actually do, you get a benefit from the services that they're providing to you. And every time you get a loan, I mean this is a great conversation for Data Privacy Day. Because when you get a loan, get a credit card, and you sign those terms and conditions, guess what? >> They're in there? >> You are giving that retailer, that lender, the authority to send that information over to the credit reporting agencies. And let's not forget that the intention of forming the credit reporting agencies was for better lending practices, so that your creditworthiness was not determined by things like your gender, your race, your religion, and those types of really, I won't say arbitrary, but just not pertinent factors. Now your creditworthiness is determined by your past history of, do you pay your bills? What is your income, do you have the ability to pay? So it started with a good, very good purpose in mind, and we definitely bought into that as a society. And I don't want to sound like I'm defending the credit reporting agencies and all of their behavior out there, because I do think there are some changes that need to be made, but we do get a benefit from the credit reporting agencies, like instant credit, much faster turnaround when we need those financial tools. I mean, that's just the reality of it. >> Right, right. So, who is the person that's then... 
been breached, I'm trying to think of the right word of the relationship between those who've had their data hacked from the person who was hacked. If it's this kind of indirect third party relationship through an authorization through the credit card company. >> No, the, Equifax is absolutely responsible. >> So who would be the litigant, just maybe that's the word that's coming to me in terms of feeling the pain, is it me as the holder of the Bank of America Mastercard? Is it Bank of America as the issuer of the Mastercard? Or is it Mastercard, in terms of retribution back to Equifax? >> Well you know, I can't really comment on who actually would have the strongest legal liability, but what I can say is, this is the same thing I say when I talk to banks about identity theft victims. There's some discussion about, well, no, it's the bank that's the victim in existing account identity theft, because they're the ones that are absorbing the financial losses. Not the person whose data it belongs to. Yet the person who owns that data, it's their identity credentials that have been compromised. They are dealing with issues as well, above and beyond just the financial compromise. They have to deal with cleaning up other messes and other records, and there's time spent on the phone, so it's not mutually exclusive. They're both victims of this situation. And with data breaches, often the breached entity, again, I hate to sound like an apologist, but I am keeping this real. A breached entity, when they're hacked, they are a victim, a hacker has committed that crime and gone into their systems. Yes, they have a responsibility to make those security systems as robust as possible, but the person whose identity credentials those are, they are the victim. Any entity or institution, if it's payment card data that's compromised, and a financial services institution has to replace that data, guess what, they're a victim too. 
That's what makes this issue and this crime so terrible, is that it has these tentacles that reach down and touch more than one person for each incident. >> Right. And then there's a whole 'nother level, which we talked about before we got started that we want to dig into, and that's children. Recently, a little roar was raised with these IOT connected toys. And just a big, giant privacy hole, into your kid's bedroom. With eyes and ears and everything else. So wonder if you've got some specific thoughts on how that landscape is evolving. >> Well, we have to think about the data that we're creating. That does comprise our identity. And when we start talking about these toys and other... internet connected, IOT devices that we're putting in our children's bedroom, it actually does make the advocacy part of me, it makes the hair on the back of my neck stand up. Because the more data that we create, the more that it's vulnerable, the more that it's used to comprise our identity, and we have a big enough problem with child identity theft just now, right now as it stands, without adding the rest of these challenges. Child and synthetic identity theft are a huge problem, and that's where a specific Social Security number is submitted and has a credit profile built around it, when it can either be completely made up, or it belongs to a child. And so you have a four year old whose Social Security number is now having a credit profile built around it. Obviously they're not, so the thieves are not submitting this belongs to a four year old, it would not be issued credit. So they're saying it's a, you know, 23 year old-- >> But they're grabbing the number. >> They're grabbing the number, they're using the name, they build this credit profile, and the biggest problem is we really haven't modernized how we're authenticating this information and this data. 
I think it's interesting and fitting that we're talking about this on Data Privacy Day, because the solution here is actually to share data. It's to share it more. And that's an important part of this whole conversation. We need to be smart about how we share our data. So yes, please, have a thoughtful conversation with yourself and with your family about what are the types of data that you want to share and keep, and what do you want to keep private, but then culturally we need to look at smart ways to open up some data sharing, particularly for these legitimate uses, for fraud detection and prevention. >> Okay, so you said way too much there, 'cause there's like 87 followup questions in my head. (Eva laughs) So we'll step back a couple, so is that synthetic identity, then? Is that what you meant when you said a synthetic identity problem, where it's the Social Security number of a four year old that's then used to construct this, I mean, it's the four year old's Social Security number, but a person that doesn't really exist? >> Yes, all child identity theft is synthetic identity theft, but not all synthetic identity theft is child identity theft. Sometimes it can just be that the number's been made up. It doesn't actually belong to anyone. Now, eventually maybe it will. We are hearing from more and more parents, I'm not going to say this is happening all the time, but I'm starting to hear it a little bit more often, where the Social Security number is being issued to their child, they go to file their taxes, so this child is less than a year old, and they are finding out that that number has a credit history associated with it. That was associated years ago. >> So somebody just generated the number. >> Just made it up. >> So are we ready to be done with Social Security numbers? I mean, for God's sake, I've read numerous things, like the nine-digit number that's printed on a little piece of paper is not protectable, period. 
And I've even had a case where they say, bring your little paper card that they gave you at the hospital, and I won't tell you what year that was, a long time ago. I'm like, I mean come on, it's 2018. Should that still be the anchor-- >> You super read my mind. >> Data point that it is? >> It was like I was putting that question in your head. >> Oh, it just kills me. >> I've actually been talking quite a bit about that, and it's not that we need to get, quote unquote, get rid of Social Security numbers. Okay, Social Security numbers were developed as an identifier, because we have, you can have John Smith with the same date of birth, and how do we know which one of those 50,000 John Smiths is the one we're looking for? So that unique identifier, it has value. And we should keep that. It's not a good authenticator, it is not a secret. It's not something that I should pretend only I know-- >> Right, I write it on my check when I send my tax return in. Write your number on the check! Oh, that's brilliant. >> Right, right. So it's not, we shouldn't pretend that this is, I'm going to, you, business that doesn't know me, and wants to make sure I am me, in this first initial relationship or interaction that we're having, that's not a good authenticator. That's where we need to come up with a better system. And it probably has to do with layers, and more layers, and it means that it won't be as frictionless for consumers, but I'm really challenging, this is one of our big challenges for 2018, we want to flip that security versus convenience conundrum on its ear and say, no, I really want to challenge consumers to say... I'm happier that I had to jump through those hoops. I feel safer, I think you're respecting my data and my privacy, and my identity more because you made it a little bit harder. And right now it's, no, I don't want to do that because it's a little too, nine seconds! I can't believe it took me nine seconds to get that done. 
>> Well, yeah, and we have all this technology, we've got fingerprint readers that we're carrying around in our pocket, I mean there's, we've got geolocation, you know, is this person in the place that they generally, and having 'em, there's so many things-- >> It's even more granular >> Beyond a printed piece of >> Than that-- >> paper, right? >> It's the angle at which you look at your phone when you look at it. It's the tension with which you enter your passcode, not just the passcode itself. There are all kinds of very non-invasive biometrics, for lack of a better word. We tend to think of them as just, like our face and our fingerprint, but there are a lot of other biometrics that are non-invasive and not personal. They're not private, they don't feel secret, but we can use them to authenticate ourselves. And that's the big discussion we need to be having. If I want to be smart about my privacy. >> Right. And it's interesting, on the sharing, 'cause we hear that a lot at security conferences, where one of the best defenses is that teams at competing companies, security teams, share data on breach attempts, right? Because probably the same person who tried it against you is trying it against that person, is trying it against that person. And really an effort to try to open up the dialogue at that level, as more of just an us against them versus we're competing against each other in the marketplace 'cause we both sell widgets. So are you seeing that? Is that something that people buy into, where there's a mutual benefit of sharing information to a certain level, so that we can be more armed? >> Oh, for sure, especially when you talk to the folks in the risk and fraud and identity theft mitigation and remediation space. They definitely want more data sharing. And... I'm simply saying that that's an absolutely legitimate use for sharing data. 
We also need to have conversations with the people who own that data, and who it belongs to, but I think you can make that argument, people get it when I say, do you really feel like the angle at which you hold your phone, is that personal? Couldn't that be helpful, that combined with 10 other data points about you, to help authenticate you? Do you feel like your personal business and life is being invaded by that piece of information? Or compare that to things like your health records. And medical conditions-- >> Mom's maiden name. >> That you're being treated for, well, wow, for sure that feels super, super personal, and I think we need to do that nuance. We need to talk about what data falls into which of these buckets, and on the bucket that isn't super personal, and feeling invasive and that I feel like I need to protect, how can I leverage that to make myself safer? >> Great. Lots of opportunity. >> I think it's there. >> Alright. Eva, thanks for taking a few minutes to stop by. It's such a multi-layered and kind of complex problem that we still feel pretty much early days at trying to solve. >> It's complicated, but we'll get there. More of this kind of dialogue gets us just that much closer. >> Alright, well thanks for taking a few minutes of your day, great to see you again. >> Thanks. >> Alright, she's Eva, I'm Jeff, you're watching The Cube from Data Privacy Days, San Francisco. (techno music)

Published Date : Jan 27 2018

Cricket Liu, Infoblox | CyberConnect 2017


 

>> Announcer: Live from New York City It's TheCube. Covering CyberConnect 2017. Brought to you by Centrify and the Institute for Critical Infrastructure Technology. >> It got out of control, they were testing it. Okay, welcome back everyone. We are here live in New York City for CyberConnect 2017. This is theCUBE's coverage, presented by Centrify. It's an industry event, bringing all the leaders of industry and government together around all the great opportunities to solve the crisis of our generation. That's cyber security. We have Cricket Liu, chief DNS architect and senior fellow at Infoblox. Cricket, great to see you again. Welcome to theCUBE. >> Thank you, nice to be back John. >> So we're live here and really this is the first inaugural event of CyberConnect. Bringing government and industry together. We saw the retired general on stage talking about some of the history, but also the fluid nature. We saw Jim from Aetna, talking about unconventional tactics, and talking about domains and how he was handling email. That's a DNS problem. >> Yeah, yeah. >> You're the DNS guru. DNS has come to play a role in this. What's going on here around DNS? Why is it important to CyberConnect? >> Well, I'll be talking tomorrow about the first anniversary, well, a little bit later than the first anniversary, of the big DDoS attack on Dyn, the DNS hosting provider up in Manchester, New Hampshire, and trying to determine if we've actually learned anything. Have we improved our DNS infrastructure in any way in the ensuing year plus? Are we doing anything from the standards standpoint on protecting DNS infrastructure? Those sorts of things. >> And certainly one of the highlight examples was mobile users are masked by the DNS on, say, email for example. Jim was pointing that out. I got to ask you, because we heard things like sink-holing addresses, hackers create domain names in the first 48 hours to launch attacks.
So there's all kinds of tactical things that are being involved with, let's say, domain names for instance. >> Cricket: Yeah, yeah. >> That's part of the critical infrastructure. So, the question is, how are DDoS attacks, denial-of-service attacks, coming in in the tens of thousands per day? >> Yeah, well that issue that you talked about, in particular the idea that the bad guys register brand new domain names, domain names that initially have no negative reputation associated with them, my friend Paul Vixie and his new company Farsight Security have been working on that. They have what is called a -- >> John: What's the name of the company again? >> Farsight Security. >> Farsight? >> And they have what's called a Passive DNS Database, which is basically a database of DNS telemetry that is accumulated from big recursive DNS servers around the internet. So they know when a brand new domain name pops up somewhere on the internet, because someone has to resolve it. And they pump all of these brand new domain names into what's called a response policy zone feed. And you can get, for example, different thresholds: I want to see the brand new domain names created over the last 30 minutes, or seen over the last 30 minutes. And if you block resolution of those brand new domain names, it turns out you block a tremendous amount of really malicious activity. And then after, say, 30 minutes, if it's a legitimate domain name it falls off the list and you can resolve it. >> So this says you're doing DNS signaling as a service for new name registrations, because the demand is for software APIs to say "Hey, I want to create some policy around some techniques to sink-hole domain address hacks." Something like that? >> Yeah, basically this goes hand in hand with this fairly new mechanism, the response policy zone, which allows you to implement DNS policy. Something that we've really never before done with DNS servers, which, that's actually not quite true. There have been proprietary solutions for it.
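The feed Cricket describes is distributed as an ordinary DNS zone whose records encode policy. A minimal sketch of what a newly-observed-domains RPZ might look like (zone and domain names here are hypothetical):

```
; Hypothetical RPZ feed of newly observed domains.
; In RPZ, the CNAME target encodes the policy action:
;   CNAME .             -> answer NXDOMAIN (block resolution)
;   CNAME *.            -> answer NODATA
;   CNAME rpz-passthru. -> exempt this name from policy
$ORIGIN newdomains.rpz.example.
brand-new-domain.example   CNAME .             ; blocked while reputation is unknown
*.brand-new-domain.example CNAME .             ; and everything underneath it
known-good.example         CNAME rpz-passthru. ; allow-list override
```

Once a name ages off the feed, after the 30-minute window Cricket mentions, the record simply disappears from the zone and resolution succeeds again.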
But response policy zones are an open solution that give you the ability to say "Hey, I do want to allow resolution of this domain name, but not this other domain name". And then you can say "Alright, all these brand new domain names, for the first 30 minutes of their existence I don't want-- >> It's like a background check for domain names. >> Yeah, or like a wait list. Okay, you don't get resolved for the first 30 minutes, and that gives the sort of traditional reputational analyzers, Spamhaus and SURBL and people like that, a chance to look you over and say "yeah, it's malicious" or "it's not malicious". >> So Farsight, run by Paul Vixie, who is a contributor to the DNS protocol-- >> Right, enormous contributor. >> So we should keep an eye on that. Check it out, Paul Vixie. Alright, so DNS is the critical infrastructure that we've been talking about, that you and I love to riff about, DNS and its role. What's it enabled? Obviously it's ASCII, but I got to ask you, all this Unicode stuff, the emoji and the open source, really highlights the Unicode phenomenon. So this is a potential hacker haven. The DNS and Unicode distinction. >> It's really interesting from a DNS standpoint, because we went to a lot of effort within the IETF, the Internet Engineering Task Force, some years ago, back when I was more involved in the IETF; some people spent a tremendous amount of effort coming up with a way to allow people to use Unicode within domain names. So that you could type something into your browser that was in traditional or simplified Chinese, or that was in Arabic, or was in Hebrew, or any number of other scripts. And you could type that in and it would be translated into something that we call Punycode in the DNS community, which is an ASCII equivalent to that. The issue with that, though, becomes that there are, we would say glyphs, most people I guess would say characters, but there are characters in Unicode that look just like, say, Latin alphabet characters.
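The Unicode-to-ASCII translation Cricket describes can be demonstrated with Python's standard library; the mixed-script check below is a deliberately crude illustration (real confusable detection, as in Unicode UTS #39, is more involved):

```python
import unicodedata

def scripts(label: str) -> set:
    """Crude script detection: the first word of each alphabetic
    character's Unicode name (e.g. LATIN, CYRILLIC)."""
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

# The label below uses CYRILLIC SMALL LETTER A (U+0430) in place of Latin 'a'.
spoofed = "p\u0430ypal"
print(spoofed.encode("idna"))  # the ASCII (Punycode) form, beginning "xn--"
print(scripts(spoofed))        # mixes LATIN and CYRILLIC: suspicious
print(scripts("paypal"))       # pure LATIN
```

A resolver-side filter could flag any label whose letters span more than one script, which is exactly the lookalike problem in the PayPal example.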
So there's a lowercase 'a' for example, in Cyrillic; it's not a lowercase 'a' in the Latin alphabet, it's a Cyrillic 'a', but it looks just like an 'a'. So it's possible for people to register names, domain names, that in their Unicode representation look like, for example, PayPal, which of course has two a's in it, and those two a's could be Cyrillic a's. >> Not truly the ASCII representation of PayPal which we resolve through the DNS. >> Exactly, so imagine how subtle an attack that would be if you were able to send out a bunch of email, including the links that said www.-- >> Someone's hacked your PayPal account, click here. >> Yeah, exactly. And if you eyeballed it you'd think, well, sure that's www.PayPal.com, but little do you know it's actually not the -- >> So Jim Ruth talked about applying some unconventional methods, because the bad guys don't subscribe to the conventional methods. They don't buy into it. He said that they change up their standards, is what I wrote down, but that was maybe their sort of security footprint, 1.5 times a day. How does that apply to your DNS world, how do you even do that? >> Well, we're beginning to do more and more with DNS analytics. The passive DNS database that I talked about: more and more big security players, including Infoblox, are collecting passive DNS data. And you can run interesting analytics on that passive DNS data. And you can, in some cases, automatically detect suspicious or malicious behavior. For example you can say "Hey, look, this name-to-IP-address mapping is changing really, really rapidly", and that might be an indication of, let's say, fast flux. Or you can say "These domain names have really high entropy. We did an n-gram analysis of the labels of these". The consequence of that, we believe, is that the resolution of these domain names is actually being used to tunnel data out of an organization or into an organization.
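The entropy test Cricket mentions is easy to sketch. Shannon entropy over the characters of a label is low for word-like names and high for the random-looking labels typical of DNS tunneling or domain-generation algorithms (the labels and any cutoff threshold here are illustrative, not Infoblox's):

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy, in bits per character, of a DNS label."""
    counts = Counter(label.lower())
    total = len(label)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A word-like label scores low; a label carrying encoded (tunneled)
# data looks closer to random and scores much higher.
print(label_entropy("mail"))                      # low
print(label_entropy("x9f2qv7rk1mz0pw8c3ht5bn4"))  # high
```

In practice this would be one signal among several (query volume, label length, n-gram frequencies) rather than a verdict on its own.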
So there's some things you can do with these analytical algorithms in order to suss out the suspicious and the malicious. >> And you're doing that in as close to real time as possible, presumably right? >> Cricket: That's right. >> And so, now everybody's talking about Edge, Edge computing, Edge analytics. How will the Edge affect your ability to keep up? >> Well, the challenge I think with doing analytics on passive DNS is that you have to be able to collect that data from a lot of places. The more places that you have, the more sensors that you have collecting passive DNS data, the better. You need to be able to get it out from the Edge, from those local recursive DNS servers that are actually responding to the queries that come from, say, your smartphone or your laptop or what have you. If you don't have that kind of data, if you've only got, say, big ISPs, then you may not detect the compromise of somebody's corporate network, for example. >> I was looking at some stats when I asked the IoT questions, 'cause you're kind of teasing out the edge of the network, and with mobile and wearables, as the general was pointing out, it's going to create more surface area. But I just also saw a story, I don't know if it's from Google or wherever, but 80% plus, roughly, of websites are going to have SSL/HTTPS that they're resolving through. And there's reports out here that a lot of the antivirus provisions have been failing because of compromised certificates. And to quote some research, and we want to get your reaction to this, "Our results show", this is from University of Maryland College Park, "Our results show that compromised certificates pose a bigger threat than we previously believed, and is not restricted to advanced threats, and digitally signed malware was common in the wild" well before Stuxnet. >> Yeah, yeah. >> And so breaches have been caused by compromised certificate authorities.
So this brings up the whole issue. SSL was supposed to be solving this; that's just one problem. Now you've got the certificates, well before Stuxnet. So Stuxnet really was kind of going on before Stuxnet. Now you've got the edge of the network. Who has the DNS control for these devices? Is it kind of like failing? Is it crumbling? How do we get that trust back? >> That's a good question. One of the issues that we've had is that at various points CAs, Certificate Authorities, have been conned into issuing certificates for websites that they shouldn't have. For example, "Hey, generate a cert for me". >> John: The Chinese do it all the time. >> Exactly. "I run www.bankofamerica.com". They give it to the wrong guy. He installs it. We have, I think, something like 1,500 top-level certificate authorities. Something crazy like that. Dan Kaminsky had a number in one of his blog posts and it was absolutely ridiculous, the number of different CAs that we trust that are built into the most common browsers, like Chrome and Firefox and things like that. We're actually trying to address some of those issues with DNS, so there are two new resource records being introduced to DNS. One is TLSA. >> John: TLSA? >> Yeah, TLSA. And the other one is called CAA I think, which always makes me think of a California Automotive Association. (laughter) But TLSA is basically a way of publishing data in your own zone that says "My cert looks like this". You can say "This is my cert." You can just completely go around the CA. And you can say "This is my cert", and then you DNSSEC-sign your zone and you're done. Or you can do something short of that, and you can say "My cert should look like this, and it should have this CA. This is my CA. Don't trust any other one". >> So it's metadata about the cert, or the cert itself. >> Exactly, so that way if somebody manages to go get a cert for your website, but they get that cert from some untrustworthy CA. I don't know who that would be.
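In zone-file terms, the two records Cricket describes might look like this (the domain, CA name, and hash are hypothetical; the TLSA fields shown are certificate usage 3, DANE-EE, i.e. the go-around-the-CA case, selector 1 for SubjectPublicKeyInfo, and matching type 1 for SHA-256):

```
; Hypothetical TLSA record for the HTTPS service on www.example.com:
; "my certificate's public key hashes to exactly this value".
_443._tcp.www.example.com. IN TLSA 3 1 1 (
        8755cdaa8fe24ef16cc0f2c918063185e433faaf0d6b9b38e0a44de5f4f9a258 )

; Hypothetical CAA record: only the named CA may issue certs for example.com.
example.com. IN CAA 0 issue "ca.example.net"
```

The TLSA record is what a validating client consults; the CAA record is consumed by the CAs themselves at issuance time, as described next.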
>> John: Or a compromised-- >> Right, or a compromised CA. Nobody would trust it. Nobody who actually looks up the TLSA record, because they'll go "Oh, okay. I can see on Infoblox's cert that their CA is Symantec. And this is not a Symantec-signed cert. So I'm not going to believe it". And at the same time this CAA record is designed to be consumed by the CAs themselves, and it's a way of saying, say, Infoblox can say "We are a customer of Symantec or whoever". And when somebody goes to the CA and says "Hey, I want to generate a certificate for www.Infoblox.com", they'll look it up and say "Oh, they're a Symantec customer, I'm not going to do that for you". >> So it creates trust. So how does this impact the edge of the network? Because the question really is, the question that's on everyone's mind is, does the internet of things create more trust or does it create more vulnerabilities? Everyone knows it's a surface area, but still there are technical solutions when you're talking about, how does this play out in your mind? How does Infoblox see it? How do you see it? What's Paul Vixie working on, does that tie into it? Because out in the hinterlands and the edge of the network and the wild, is it like a DNS server on the device? It could be a sensor. How are they resolving things? What is the protocol for these? >> At least this gives you a greater assurance, if you're using TLS to encrypt communication between a client and a web server or some other resource out there on the internet, it at least gives you a better assurance that you really aren't being spoofed. That you're going to the right place. That your communications are secure. So that's all really good. IoT, I think of as slightly orthogonal to that. IoT is still a real challenge. I mean, there are so many IoT devices out there. I look at IoT, though, and I'll talk about this tomorrow, and actually I've got a live event on Thursday, where I'll talk about it some more with my friend Matt Larson.
>> John: Is that going to be here in New York? >> Actually we're going to be broadcasting out of Washington, D.C. >> John: Are you streaming that? >> It is streamed. In fact it's only streamed. >> John: Put a plug in for the URL. >> If you go to www.Infoblox.com I think it's one of the first things that will slide into your view. >> So you're putting it onto your company site. Infoblox.com. You and Matt Larson. Okay, cool. Thursday event, check it out. >> It is somewhat embarrassingly called Cricket Liu Live. >> You're a celebrity. >> It's also Matt Larson Live. >> Both of you guys know what you're talking about. It's great. >> So there's a discussion among certain boards of directors that says, "Look, we're losing the battle, we're losing the war. We've got to shift more onto response, and at least cover our butts, and get some of our response mechanisms in place." What do you advise those boards? What's the right balance between sort of defense perimeter, core infrastructure, and response? >> Well, I would certainly advocate, as a DNS guy, that people instrument their DNS infrastructure to the extent that they can, to be able to detect evidence of compromise. And that's a relatively straightforward thing to do. And most organizations haven't gone through the trouble to plumb their DNS infrastructure into, for example, their SIEM infrastructure, so they can get query log information, they can use RPZs to flag when a client looks up the domain name of a known command-and-control server, which is a clear indication of compromise. Those sorts of things. I think that's really important. It's a pretty easy win. I do think at this point that we have to resign ourselves to the idea that we have devices on our network that are infected. That game is lost. There's no more crunchy outer shell security. It just doesn't really work. So you have to have defense in depth, as they say. >> Now, DDoS has been around for such a long time.
It's been one of those threats that just keeps coming. It's like waves and waves. So it looks like there's some things happening, that's cool. So I got to ask you, CyberConnect is the first real inaugural event that brings industry and, obviously, government and tech geeks together, but it's not Black Hat or the IETF. It's not those geeky forums. It's really a business community coming together. What's your take on this event? What are your observations? What are you seeing here? >> Well, I'm really excited to actually get the opportunity to talk to people who are chiefly security people. I think that's kind of a novelty for me, because most of the time I think I speak to people who are chiefly networking people, and in particular that little niche of networking people who are interested in DNS. Although truth be told, maybe they're not really interested in DNS, maybe they just put up with me. >> Well the community is really strong. The DNS community has always been organically grown and reliable. >> But I love the idea of talking about DNS security to a security audience. And hopefully some of the folks we get to talk to here will come away from it thinking, oh, wow, I didn't even realize that my DNS infrastructure could actually be a security tool for me. Could actually be helpful in some way in detecting compromise. >> And what about this final question, 'cause I know we got a time check here. But, operational impact of some of these DNS changes that are coming down, from Paul Vixie, you and Matt Larson doing some things together. What's the impact to the customer when they say "okay, DNS will play a role in how I roll out my architecture. New solutions for cyber, IoT is right around the corner"? What's the impact to them, in your mind, operationally? >> There certainly is some operational impact. For example, if you want to subscribe to RPZ feeds, you've got to become a customer of somebody who provides a commercial RPZ feed, or somebody who provides a free RPZ feed.
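Mechanically, subscribing to a feed like that usually means configuring it as a zone the resolver transfers in and naming it in the server's policy. A sketch in BIND 9 syntax (the feed name and provider address are hypothetical):

```
// Hypothetical BIND 9 configuration consuming a subscribed RPZ feed.
options {
    response-policy { zone "newdomains.rpz.example"; };
};
zone "newdomains.rpz.example" {
    type slave;                   // transferred from the feed provider
    masters { 192.0.2.53; };      // provider's distribution server
    file "newdomains.rpz.example.db";
};
```

Query logging and RPZ-hit logging on that server are then what gives the SIEM something to consume.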
You have to plumb that into your DNS infrastructure. You have to make sure that it continues transferring. You have to plumb that into your SIEM, so that when you get a hit against an RPZ, your security folks are notified about it. All that stuff is routine, day-to-day stuff. Nothing out of the ordinary. >> No radical plumbing changes. >> Right, but I think one of the big challenges in so many of the organizations that I go to visit is that the security organization and the networking organization are in different silos, and they don't necessarily communicate a lot. So maybe the more difficult operational challenge is just making sure that you have that communication. That the security guys know the DNS guys and the networking guys, and vice versa, and they cooperate to work on problems. >> This seems to be the big collaboration thing that's happening here: it's more of a community model coming together, rather than security in isolation. Cricket Liu here, Chief DNS Architect and Senior Fellow at Infoblox, a legend in the DNS community, Paul Vixie among his peers. Really, that community is holding down the fort against a lot of the exploits that they have to watch out for. Thanks for your commentary here at the CyberConnect 2017 inaugural event. This is theCUBE. We'll be right back with more after this short break. (techno music)
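The workflow Cricket describes, pulling DNS query logs, checking lookups against an RPZ-style list of known command-and-control domains, and surfacing the hit to the security team, can be sketched in a few lines. This is a purely illustrative sketch, not Infoblox's implementation: the log format, the blocklist entries, and the alert shape are all invented for the example.

```python
# Illustrative sketch (not Infoblox's implementation): scan DNS query logs
# for lookups of known command-and-control domains, the check that an RPZ
# feed automates inside the DNS server itself, and emit SIEM-style alerts.
# The log format and blocklist contents below are hypothetical.

# Hypothetical RPZ-style blocklist of known C2 domains.
C2_BLOCKLIST = {"evil-c2.example", "botnet-controller.example"}

def parse_query_log_line(line):
    """Parse one hypothetical log line: '<timestamp> <client_ip> <qname>'."""
    ts, client_ip, qname = line.split()
    return ts, client_ip, qname.rstrip(".").lower()

def find_compromise_indicators(log_lines):
    """Yield an alert for every client that looked up a blocklisted name."""
    for line in log_lines:
        ts, client_ip, qname = parse_query_log_line(line)
        if qname in C2_BLOCKLIST:
            yield {"time": ts, "client": client_ip, "domain": qname,
                   "alert": "possible C2 lookup, investigate this host"}

logs = [
    "2017-10-31T12:00:01Z 10.0.0.5 www.example.com.",
    "2017-10-31T12:00:02Z 10.0.0.9 evil-c2.example.",
]
for alert in find_compromise_indicators(logs):
    print(alert["client"], alert["domain"])
```

In a real deployment the DNS server's RPZ handling does this matching inline at query time; a log-scanning script like this is only an after-the-fact approximation of the same check.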

Published Date : Nov 6 2017


Mike Flaum, HPE | VMworld 2017


 

>> Narrator: Live from Las Vegas, it's the CUBE, covering VMworld 2017. Brought to you by VMware and its ecosystem partners. >> Welcome back to the CUBE's live continuing coverage of VMworld 2017. We're on day two. I'm Lisa Martin, thanks so much for joining us. I'm joined by my cohost Keith Townsend, and Keith and I are excited to be joined by CUBE first-time visitor Mike Flaum, senior product manager at HPE. Welcome Mike. >> Thank you for inviting me here. I appreciate the opportunity. >> Great announcements over the last day and a half. Tell us what's new with HPE and VMware. >> Sure, so today our announcement went out: VMware Cloud Foundation on top of Synergy. This is a follow-on to the announcement that we had for VMware Cloud Foundation on top of the DL380, which is the industry-leading rack-based server. What we've done is we've now extended it to our composable platform on Synergy, and that was the announcement that went out earlier today. >> Composable infrastructure and VMware Cloud Foundation, on paper, doesn't kind of make sense. You have this thing that's super flexible and what's supposed to be a reference, kind of validated design; how does that work? >> It really accomplishes two things. What we're hearing from our customers very specifically is, how do we make it easier? It's really not about technology, it's about how people consistently do these deployments. So using a composable platform allows them to standardize the implementations. Then on top of that, VMware Cloud Foundation has its own installation appliance that installs vSphere, vSAN, and NSX. We're totally aligned with VMware on making the customer implementations easier, and then the ongoing maintenance and support of it. >> Sorry, I was going to say, from a go-to-market perspective, yesterday I think Pat Gelsinger had said 10,000 customers on vSAN, a huge install base with vSphere.
Talk to us about sort of the specific joint customer opportunities globally that you are seeing. >> Sure, so with the vSphere install base and then the vSAN install base, our customers are really asking for this. One of the things that we've done also is that we have OEM SKUs. We're actually taking the VCF and the vSAN licenses, and you're able to buy these products directly from us rather than from VMware. There's a synergy there, no pun intended, in having our customers be able to buy from just one vendor. So they're able to purchase the VMware software and the Synergy hardware from HPE. That's been ongoing. >> Customer reaction in general? The concept is kind of abstract. We get VMware on AWS; it took us a while to get that. Are customers getting that they can have that type of flexibility in their own data center? >> Absolutely. What happens is that when the DL380 announcement happened, it was great for a rack-based system. But that really doesn't scale super large. Think about customers that have multiple cabinets, multiple rows, multiple data centers; that's really where VCF on Synergy makes a huge difference. It's for the large data center deployments. Those customers are like, wow, we really see the value in VCF, but we really want to have it on the Synergy platform because we have large data centers. And those customers with large data centers also want to be able to leverage VCF on AWS. They want a hybrid approach, with workloads both in the cloud and on premises. >> So let's talk a little bit about day-two operations. What is it like, or what's the differentiator for Synergy and VCF versus any other solution? >> The difference is what makes it composable. On the Synergy platform we have actual hardware, the composer, and it runs OneView. OneView has certain templates in order to make the compute, network, and storage all run appropriately for the VCF on top of it.
The part that the customers like about VCF is the SDDC Manager; they look at this and, wow, it manages all of vSphere, NSX, and vSAN. They still need the composable, OneView management of the underlying hardware, and that's where we come in from the composable side. >> One of the things that, I think it was Michael Dell who talked about this morning, is this growing volume of data. Everybody knows data is fuel and a pathway to other sources of economy within an organization. As we look at servers and storage, what is the C-level conversation around these technologies in terms of the benefits, beyond speeds and feeds and things like that? With the HPE-VMware announcement today on composable, what are some of the key business problems that it's going to solve for a CEO, CIO, CTO? >> One of the things that happens is this proliferation of equipment. They buy a blade system. They buy a storage array. They buy networking. It ends up being from three different vendors. One of the benefits of doing it on Synergy is that we're using the local storage. The local storage is great, but it requires the vSAN software that comes from VMware. Then VCF is what puts it all together. It's not that you can't use vSphere and NSX and vSAN separately; the benefit is to put it onto one system, the Synergy, that combines it all. For the CIO, what happens is that instead of buying three different pieces of equipment from three different vendors and managing three different firmware streams, now you have it all converged onto one system that's purpose-built for this. So that's really the main difference. >> I hear cost reduction. Reduced CapEx, reduced OpEx. Are you seeing customers able to move resources around and utilize resources for other strategies within their companies? >> Absolutely. On the Synergy we have a technology called Virtual Connect, which is actual hardware.
One of the things that it does when you run these composable templates on top of it is that it makes everything one resource pool. If you have compute resources or storage resources that are in different cabinets, it presents them to the VCF manager and you're able to move them as needed. It makes it easier because it sets everything up as one giant computer, whereas before it might be segmented at the cabinet level. That's really one of the main differences: having the fluid resource pools. But it really relies on having VCF on top of it. >> Talking about that data-center-wide resource pool: I'm a customer, I have a complicated data center, I have DL380s, I have DL580s, I have Synergy, I have original chassis. Help me move forward to this vision of VCF. What's the road map for a typical customer who has a diverse data center? >> This question comes up all the time. The customers say, look, we're on your existing products and we've had those for years. What I tell them is, if you just need to do an incremental add, then buy that particular hardware platform. If you're building a new data center, you want to pick the next-generation platform, so what you want to do is your proof of concept on the Synergy and then build that for the future. It's not that the other platforms don't work, and it's not that they're not going to continue to be supported. They will. But you're always taking a look at where you want to be two years from now. That's the big difference: I'm going to look at Synergy and leverage the vSphere which I've been using for years. I'm going to use the vSAN which was just recently certified. But it's also a component of VCF, and I'm able to leverage that local storage, which compresses it all down into one hardware platform. That's where the customers really get the added benefit.
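The "one giant computer" idea Mike describes, compute and storage sitting in different cabinets but presented as a single fluid pool, can be modeled with a small sketch. This is conceptual only: the class names and the greedy allocation policy are invented for illustration and are not OneView's or VCF's actual interfaces.

```python
# Conceptual sketch of a composable resource pool: nodes in different
# cabinets are aggregated and consumed as one pool, the "one giant
# computer" idea. Class names and the allocation policy are invented
# for illustration; this is not OneView's or VCF's actual interface.

from dataclasses import dataclass

@dataclass
class Node:
    cabinet: str
    cpu_cores: int
    storage_tb: int

class ResourcePool:
    """Aggregates nodes from many cabinets into a single fluid pool."""

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def total_capacity(self):
        """Total (cores, TB) across every cabinet in the pool."""
        return (sum(n.cpu_cores for n in self.nodes),
                sum(n.storage_tb for n in self.nodes))

    def allocate(self, cores_needed):
        """Greedily pick nodes for a workload, ignoring cabinet boundaries."""
        chosen, cores = [], 0
        for node in sorted(self.nodes, key=lambda n: -n.cpu_cores):
            if cores >= cores_needed:
                break
            chosen.append(node)
            cores += node.cpu_cores
        if cores < cores_needed:
            raise RuntimeError("pool exhausted")
        return chosen

pool = ResourcePool([
    Node("cabinet-A", 32, 10),
    Node("cabinet-B", 64, 20),
    Node("cabinet-C", 32, 10),
])
print(pool.total_capacity())                  # capacity spans all cabinets
workload = pool.allocate(cores_needed=80)
print(sorted({n.cabinet for n in workload}))  # the workload crosses cabinets
```

The point of the model is that `allocate` never asks which cabinet a node lives in; the pool boundary is the data center, not the enclosure.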
>> Terrific. Well Mike, thank you so much for joining us on the CUBE today and explaining, from your perspective, what the announcement from HPE and VMware on composable means. We want to thank you for watching the CUBE again. I'm Lisa Martin with my cohost Keith Townsend. We are live on day two of VMworld 2017. Keep watching, we'll be right back. (funky music)

Published Date : Aug 29 2017


Jason Brown, Dell EMC | VMworld 2017


 

>> Announcer: Live from Las Vegas, it's the Cube. Covering VMworld 2017. Brought to you by VMware and its ecosystem partners. >> Welcome back to the Cube and our continuing coverage of VMworld 2017. I'm Lisa Martin with my co-host Stu Miniman. We're excited to be joined next by Jason Brown, a Cube alumnus who works in consultant and product marketing for Dell EMC ScaleIO. Welcome back to the Cube, Jason. >> Thank you for having me. >> Good to have you here. So, day two of the event, lots of announcements, lots of buzz. Talk to us about ScaleIO. What's the current state of the business? >> Well, it's actually really exciting right now. We're doing really well. We're seeing great customer adoption. We're seeing massive petabytes of ScaleIO deployed in data centers, and we're here at the show really to talk about ScaleIO for VMware customers. 'Cause everyone here's all about vSAN, obviously; they're doing awesome. We love it. They're doing great. But there are some differences and similarities between the two products that people get confused about, so we're here at the show really trying to ease confusion, talk about how it's like peanut butter and jelly, right? Some people like peanut butter, some people like jelly, but most people like 'em both, so we're just trying to help people understand when to choose which, and sometimes it's both. >> Alright, Jason, I've got a history watching ScaleIO since before the acquisition; you know, service providers usually kind of fit its model a little bit more than vSAN's, so when I think scale, I tend to think ScaleIO. I interviewed ADP yesterday. Big customer, rolling out like 30,000 nodes of compute with vSAN. So, scale's >> Yeah >> not the only piece of it. Maybe help us understand some of the, you know, of course there are going to be places that overlap, but what is the kind of ideal ScaleIO customer, what are they looking for, and how does that differ from vSAN's?
Sure. In particular, if you're looking at ScaleIO for VMware, there are a few things you need to understand. First and foremost, with ScaleIO we're talking about consolidating resources across the data center. So we're talking data-center-grade software-defined storage, which can run in a hyperconverged model or not. And that's a really key differentiator, 'cause look at these enterprises, especially the large enterprises that have built an IT organization over the past 20 years, right? When you introduce HCI to them, you're transforming the architecture of the data center but also the IT operating environment. And that's scary for a lot of people who have spent millions of dollars having a server team, a network team, and a storage team. So one of the key things for ScaleIO in a VMware environment is, if you want to transform the architecture to software-defined but preserve that IT operating model, this two-layer deployment as we call it, you can do that with ScaleIO. But on the flip side you can also do a more modern, hyperconverged architecture as well. So you can get the best of both worlds. Service providers will go hyperconverged out of the gate, but enterprises usually start more traditional and then move to hyperconverged, and ScaleIO provides that pathway to get there. >> Yeah, bring us inside those customers a little, 'cause I've talked to a couple of very large customers of ScaleIO. Actually, I did a case study at Citi, and Citi told me, internally, we're just not ready to go fully hyperconverged. >> Jason: Exactly. >> So they kept that separation, at massive scale. I talked to a large global hospitality company that, once again, looked more at kind of the storage usage of what they're doing. I mean, hyperconverged vSAN seems to be thriving; you know, they've got 10,000 customers, they're all-in on that model. >> Exactly. >> So, what is it that gets a customer ready for that?
What kind of pushes or pulls them towards being ready to embrace it? >> Well, I think it's understanding your business goals and your desired outcomes. With something like ScaleIO you're looking at simplicity in the data center. You're looking for scale, not tens of nodes; I hear it said that a traditional vSAN deployment is eight to 16 nodes, 'cause, you know, VMware's everywhere, right? There are a lot of ROBO, SMB, and VDI use cases right there, and that's not really where ScaleIO plays. ScaleIO is about the data center, so Tier 1 applications, databases, data analytics. It's looking at things like containers and microservices, Splunk, NoSQL; applications like that. So when you look at those types of applications and workloads, you have to understand that your scale will probably go from tens to hundreds of nodes. Your performance may go from a million IOPS to tens of millions of IOPS. You may need six-nines availability, 'cause again, you're running the data center. Customers are replacing their SAN arrays with ScaleIO. So you need all that enterprise-class, data-center-grade functionality with the scale, performance, and flexibility. The key thing is flexibility as well: if you want to run multiple workloads on a cluster, you need to be able to support VMware, Hyper-V, KVM, Linux, Windows, and ScaleIO enables all of those things. Therefore, when you look at your business goals, your business ops, and what your data center looks like, you need to understand that functionality. Then you decide, okay, is it going to be vSAN or ScaleIO, or is it going to be both, 'cause I have both of those use cases there. >> So you talked about vSAN and ScaleIO, peanut butter and jelly. Michael Dell on the main stage with Pat Gelsinger said VMware and Dell EMC are like peanut butter and chocolate. All good flavors, in my opinion.
I'd love to hear an example, though, to your point before I asked the question. We just had the CTO of Dell EMC storage speaking with Stu and me a few minutes ago, one year post-combination, and he said customers are starting to understand now the value of Dell EMC-- >> Yes. >> Together. So with that, a year later, with customers now understanding the value proposition of this company that also owns VMware, how much easier is the conversation, away from vSAN versus ScaleIO? I'd love to understand where you are seeing those peanut butter and jelly sandwiches play together. What are some of the industries or key use cases where a customer would need ScaleIO and vSAN? >> Sure. So if you think about financial services, Citi, as Stu mentioned, one of the larger ones there, definitely plays there. In healthcare there are a few big partner-network companies that have come together to be successful. Telco: Verizon, Comcast, right? Not just private cloud but public clouds as well. So when you look at your data center, you've got to look at the whole thing. For your VDI, your ROBO, your SMB, and maybe a few of your enterprise applications that only need, you know, 50,000 IOPS of performance for your VMs, vSAN is going to be great there. But then you look to the other side of your data center and you've got something like SAP HANA, Oracle, etc., or you're looking to build a private cloud of hundreds of nodes; well, that's where ScaleIO is going to sit. Over in that corner, you know? So it really is understanding what your workloads are and where they play. You know, it's important to know, too, that for ScaleIO our primary use cases are array consolidation: you've got silos of arrays in your data center, you want to stop managing those silos, and you want to bring everything together into a single resource, a single cluster; boom, ScaleIO.
You want to build the cloud environment, whether you're a service provider building a public cloud, like Swisscom for example, who built a public cloud based off of ScaleIO, or a private cloud, like Citigroup for example, which is pretty much a private cloud mixed with array consolidation as well. And then something like a gaming company that we've worked with, where they're doing this next-generation DevOps with containers and microservices; well, ScaleIO's great for that too, 'cause it has the flexibility to start small and grow and support the various things they need to be able to deploy their applications 32% faster. So you know, it really encompasses the whole data center. >> Yeah, a bunch of interesting points that I want to unpack a little bit there. Specifically, you're talking about all the new applications and the new technologies that people are adopting. One of the challenges most people have is that with the stack we've been using, I think, for my entire IT career, we spend, what, somewhere between 70 and 90% of our time keeping the lights on. >> Jason: Yes. >> And the whole wave of software-defined and all of these types of things is supposed to mean we simplify our environment, and therefore I can take those resources, reallocate them, retrain people, and put them on cool new things. What are you seeing from the customers organizationally: what happens to the storage people, and how do they take advantage of some of these tougher things like application modernization? >> Good question. Good question. So, you know, it depends on the company, right? Like you said, there are some customers that want to keep them separated, and that's perfectly fine; there are tools you can use with ScaleIO so that you can manage the storage independently of the compute. But then you've got things like our tight integration with vSphere, where the VM admin can manage the storage as well.
So, it depends on the preferences as well as the maturity of the organization and the skillset of the folks that are managing it. If you can have a storage admin become more agile and able to manage the compute and the VMs as well, then perfect. They become more generalist, right? We've talked about how these specialists become more generalist in these types of HCI and NextGen environments. So if they have that skillset, then perfect, and both ScaleIO and vSAN can enable that. And then if you're looking at app modernization: what do you need from an infrastructure and storage perspective to achieve that, and how can you let your application developers access that storage even faster? That's really what ScaleIO does, with all the automation behind everything: being able to add resources on the fly, remove resources on the fly, reallocate on the fly. Being flexible for what they need when they're suddenly ramping up a new application is really critical. >> Yeah. I guess I'm wondering if you have any specific examples. One of the critiques, if you talk about storage admins, is that fast is not something you usually think of. Flash is fast and everything like that, but how do we keep up with the pace of change, how do I move things? How does ScaleIO help change that equation, even just specifically for storage? >> Well, I think that in order to keep up with that change, it's about, as you said, simplifying their job and making it easier. If you've got the tools and the functionality in the product itself to help them move faster, to press a button as opposed to having to allocate an array group and (murmurs) things in the architecture, that's really how you do it. You know, I haven't talked to any storage admins lately, unfortunately.
So I can't give you a specific example, but that's really what we see at kind of the one-on-one level. >> And from a buyer's perspective, so much has changed and shifted towards the C-suite. When we look at things like data protection, with some announcements about that yesterday, and storage, and you said you haven't spoken with storage admins in a while: there's a lot of data showing that data protection and storage aren't an IT problem, they're a business problem. So how has the conversation with Dell EMC, whether it's about ScaleIO or whatnot, shifted upstream, if you will, to talking with more senior executives rather than the storage guys and gals that are managing specific pieces? Tell us about that-- >> Sure. >> Conversation and maybe cultural shift. >> Well, when you talk to any C-level executive, what's top of mind, right? Security, savings, cost savings, budget, right? So when we're talking to executives, we talk about data center transformation, how software-defined storage enables that both at the architectural level and at the IT level, but also about how we can make their business easier to run and how it can save them money. If you're able to get all this great flexibility and scalability and all this performance, but then preserve the features that you need, like compression and snapshots and being able to connect to your data protection suites as well? If you can tell them all that and say, hey, you know what, we have customers saving 50% on five-year TCO by doing that, without needing to do data migrations or tech refreshes anymore, they're like, alright, sign me up. Because you have to understand, too, when you talk to them: they don't need to go buy an array the next day and spend a couple million dollars on capacity they may or may not be able to utilize in the future. They can start very small.
Three nodes, four nodes, with pay-as-you-go licensing, so they love that as well, because it grows on their terms. Not on our terms; on their terms. And that's really important for people in those C-level suites who are trying to maximize the efficiency of the business. >> Alright, Jason, one thing is, when customers buy into a solution like this, it's more of a platform discussion these days, and of course one of the things they're looking for is: where are you taking me down the road? So it's great, here's what I can do today. One of the things I love about this whole wave is, you know, upgrades and migrations were like the four-letter words for anybody in storage. >> Dirty words. >> And I said, you know, when we have a pool of resources and I can kind of add and remove nodes, it was like, oh my God. We conservatively estimated, like five years ago, that 30% of the overall TCO was based on that alone. Wow. Scrap that. It's the last time you're ever going to need to migrate once you get on this platform. But I want you to talk to us a little bit about the vision and roadmap. What are >> Sure. >> you talking to customers about? >> Absolutely. So, you know, with a product like this, it's constantly evolving and innovating, so when we talk to customers about what's in the future, well, you have to first be thinking about data services. Data services are always very important, and with ScaleIO, you know, admittedly, we're a little short on some data services, because we focus more on scalability and performance and making sure that we have a six-nines architecture. So the first and biggest thing that's coming very soon for Dell EMC with ScaleIO is compression. So, for your block storage workloads, being able to maximize the efficiency of your storage even more with some inline compression? Very important. So we're doing that.
We're also enhancing our snapshot functionality, because when you talk snapshots in SDS and compare it to an enterprise array, it's probably not up to snuff. Well, what we're doing now with snapshots in ScaleIO is we're actually going to have them be better, or even much better, than something you'd find in an all-flash array. You know, where you can have thousands of snapshots in a v-tree and things like that. But it also goes to hardware as well, 'cause there's always hardware, right? With the innovation within Dell EMC, with Dell PowerEdge servers and our friends in CPSD, we're able to innovate a lot faster with ScaleIO and SDS. So, 14G was announced. Well, ScaleIO's going to be one of the first products within Dell EMC, through our ScaleIO Ready Node, to support NVDIMMs and NVMe. As you know, we support NVMe today, one of the few software-defined storage platforms out there today that supports it, in a roll-your-own-server model. With the 14G Ready Node coming out later this year, you get, immediately out of the gate, NVDIMM and NVMe technology in a ScaleIO Dell EMC hardware product, 'cause it's already, you know, Dell PowerEdge servers and ScaleIO software. And then we're enhancing our management capabilities as well: introducing VVols for our VMware customers, and providing something called AMS, our Automated Management Services for the Ready Node, so that you can deploy, configure, manage, and upgrade not only the storage software but the firmware as well as the ESX hypervisor, all with a single button, all in a single interface. So we're doing that as well. It's all about taking advantage of next-generation functionality from the hardware perspective, simplifying the management, and then introducing critical features and functionality that our customers have been asking for.
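The "thousands of snapshots in a v-tree" mentioned above refers to a volume tree: every writable snapshot is a child of the volume (or snapshot) it was taken from, so a base volume and all of its descendants form one tree. Here is a minimal sketch of the idea, illustrative only and not ScaleIO's actual data structure or API.

```python
# Minimal sketch of a snapshot "v-tree" (volume tree): each writable
# snapshot is a child of the volume or snapshot it was taken from, so a
# base volume and its snapshots form a tree. Illustrative only; this is
# not ScaleIO's actual data structure or API.

class Volume:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def snapshot(self, name):
        """Take a writable snapshot; it becomes a child in the v-tree."""
        return Volume(name, parent=self)

    def vtree_size(self):
        """Count this node plus every snapshot descended from it."""
        return 1 + sum(child.vtree_size() for child in self.children)

    def root(self):
        """Walk up to the base volume of the v-tree."""
        node = self
        while node.parent is not None:
            node = node.parent
        return node

base = Volume("db-vol")
nightly = base.snapshot("nightly-backup")
clone = nightly.snapshot("test-clone")   # a snapshot of a snapshot
dev = base.snapshot("dev-copy")

print(base.vtree_size())    # the base volume plus its three snapshots
print(clone.root().name)    # any node can find the v-tree's base volume
```

Snapshot limits in systems like this are naturally expressed per v-tree, which is why the count of nodes in the tree, rather than per volume, is the figure quoted above.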
>> Just to make sure I'm 100% on this, things like the data services, that's software, so everybody that's got it today will be able to upgrade to it. Obviously the next generation of hardware always helps along the way, but, you know, you manage those a little bit separately even though you want to handle both of those vectors. >> Yes, exactly. So when you upgrade to ScaleIO.next when it comes out, you'll get that feature functionality. Now there's a few things you need to understand, right? You should have NVDIMMs and some type of flash media to support it. >> Stu: Sure. >> Because you're trying to maximize scalability and performance while providing these features, there's some dependencies there. But yeah, out of the gate, those features will be available. That's why it's called software-defined storage. It's all in the software, all this world of goodness is. >> Okay, so take me upstream. Lots of new features and functionality coming out; what are the new business benefits, if I'm the CEO of Swisscom, that I'm going to be able to achieve from that? >> Well, I think definitely increased performance. Definitely increased efficiency of your storage with things like compression and snapshots. Now, if you're able to compress that data, get more out of your system-- >> But what kind of, like, in terms of TCO, how am I going to be able to reduce-- >> Oh, well. >> What are the factors of-- (grunts loudly) >> You know, we haven't run the numbers yet, but, you know, the fact that we already can achieve 50% TCO, it can only get better from there when we're introducing these types of features where you're maximizing efficiency, so we expect it to bump up a bit. We're hoping we can work with you guys to get some good numbers that come out of it. >> Excellent. So continued strengthening of those-- business outcomes is, >> Yeah, that's it. You know, making sure, >> what you're talking about.
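For a rough sense of how the figures in this exchange could compose, here is a back-of-the-envelope sketch. The 50% TCO claim and the roughly 30% migration share come from the conversation; the baseline dollar figure, the media cost share, and the 2:1 compression ratio are pure assumptions for illustration.

```python
# Back-of-the-envelope TCO sketch. Only the 50% and ~30% figures come from
# the talk; every other number is an arbitrary assumption.
baseline = 1_000_000              # assumed 5-year TCO of a legacy array ($)
migration_cost = 0.30 * baseline  # ~30% of legacy TCO tied to migrations alone
sds = 0.50 * baseline             # "achieve 50% TCO": roughly half the baseline

media_share = 0.40                # assumed share of SDS TCO that is storage media
compression = 2.0                 # assumed average compression ratio (2:1)

# Inline compression only shrinks the media portion of the bill.
with_compression = sds - sds * media_share * (1 - 1 / compression)
```

The point of the shape, not the numbers: compression improves an already-halved bill only through its media component, which is why the speaker hedges with "we expect it to bump up a bit" rather than promising another halving.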
>> Making sure that the customers that want to move to software-defined storage in their data center are able to achieve that in the most seamless way, and be able to reap the benefits. >> Fantastic. Well, Jason, thanks so much for sharing your insights into what's happening. Um, peanut butter and jelly. Makes me hungry. I think it's time for lunch. >> It is lunch time, yeah. >> We thank you so much for coming back-- on the Cube. >> Thanks for having me. I really appreciate it. >> And for my co-host Stu Miniman, I'm Lisa Martin. You are watching the Cube live, day two of our continuing coverage from VMworld 2017. Stick around. We'll be right back after a short break. (electronic music)

Published Date : Aug 29 2017



Cloud Monitoring and Analytics: First Steps In Successful Business Transformation


 

>> Welcome to our Palo Alto studio, all of you coming in over the airwaves. It's a wonderful opportunity today to talk about something very important with Computer Associates, or CA Tech, as they're now known. And I want to highlight one point about the title we chose for the day: Cloud and Hybrid IT Analytics for Digital Business. One of the most interesting things that you're going to hear about today is that it's going to keep coming back to business challenges and business problems. At the end of the day, that's what the focus needs to be on. While we certainly do want to do more with the technology we have and drive greater effectiveness and utilization out of the technology that we use in our digital business, increasingly the ability to tie technology decisions to business outcomes is possible, and all IT professionals must make that effort, as well as all IT vendors, if the community is going to be successful. Now what I'm going to talk about specifically is how cloud monitoring plays inside this drive to increase the effectiveness of business through digital technologies. And to do that, I'm going to talk about a few things. The first thing I'm going to talk about is what is a digital business and how does it impact strategic technology capabilities. Now the reason why this is so important is because there's an enormous amount of conversation in the industry about digital businesses: multi-channel for digital businesses, customer experience for digital businesses, some other attribute. And while those are all examples or potential benefits of digital business, at its core digital business is something else. We want to articulate what that is because it informs all decisions that we're going to make about a lot of different things.
The second thing I'm going to talk about is this notion of advanced analytics and how advanced analytics are crucial not only to achieving the outcomes of digital business but also to sustaining the effort in the transformation process. And as you might expect, if we're going to use analytics to improve our effectiveness, then we have to be in a position to gather the data that we need from the variety of resources necessary to succeed with a digital business strategy. Those are the three things I'm going to talk about, but let's start with this first one. What is digital business and how does it impact technology capabilities? Now to do that, I want to show you something that we're quite proud of here at Wikibon SiliconANGLE, because we're a research firm and a company that's dedicated to helping communities make better decisions. The power of digital community is clear. It's a very, very important resource, overall, inside any business. And what we do is we have a tool that we call CrowdChat. And the purpose of CrowdChat is to bring together members of the community and surface the best insights they have about their undertakings. Now I'm not using this to just pitch what CrowdChat is, I really want to talk through how this is a representation of the power of digital community. I want to point you to a few things in this slide. First off, note, very importantly, that this was from a CrowdChat that we did on 31 January 2017, but the thing to note here is a couple of things. Now let's see if I can click through them here. Well, the first thing to note is that it reached 3.4 million people tied to technology decision making. Think about that. Wikibon SiliconANGLE is not a huge company. We're a very focused company that strongly emphasizes the role that technology can play in helping to make decisions and improve business outcomes. But this CrowdChat reached 3.4 million decision makers as part of our ongoing effort.
And it clearly is an indication, ultimately, that today customers, in fact, are at the center of what goes on within digital business decision making. So customers are at the centers of these crucial market information flows. Now this is going to be something we come back to over and over and over. It used to be that folks who sold stuff were the primary centers of what happened with the information flows of the industry. But through social media, tools like CrowdChat, and others, today customers are in a much better position overall to establish their voices and share their insights about what works and what doesn't work. In many respects, that is the core focus of digital business. So that leads us to this question of what is digital business. Now I am a fan of Peter Drucker. It's hard to argue with Peter Drucker, and one of the reasons I start with him is because people don't typically argue with me when I start there. And Peter Drucker famously said many years ago that the purpose of a business is to create and keep a customer. Now you can go on about what about shareholder value, what about employees, and those are all true things. There's no question that that's also important. But it fundamentally comes back to this: if you don't have customers and you don't provide a great experience for those customers, you're not going to have a business. So what's the difference between digital business and business? The biggest difference, and in fact how we properly define the concept of digital business, is that digital businesses apply data to create and keep customers. That's the basis of digital business. It's how do you use your data assets to differentiate your business and especially to provide a superior experience, a superior value proposition, and superior outcomes for your customers. That is the core of digital business.
If you're using data to differentiate how you engage customers, how you provide that experience for customers, and how you improve their outcomes, then you are more digital business than you were yesterday. If you use more data, you are more digital business than your competition. So this is a way of properly thinking about the role of digital business. And to summarize it slightly differently, what we strongly believe is that what decision makers have to do over the course of the next number of years is find ways to put their data to work. That is the fundamental goal of an IT professional today. And, increasingly, the goal of many business professionals. Find ways to apply data so that you can increase the work the firm does for customers. That's kind of the simple thread we're trying to pull here. Data, put to work, superior customer experience. Now at the centerpiece of this simple prescription is an enormous amount of complexity. A lot of decisions have to be made, because most businesses are not organized around their data. Most businesses don't institutionalize the way they engage customers or perform their work based on what their data assets can provide. Most businesses are built around the hardware, at least if you're an IT person; they're built around the hardware assets or maybe even the application assets. But increasingly it's become incumbent on CIOs and IT leaders to recognize that the central value of the business, at least that they work with, is the data and how that data performs work for the business. So that leads to the second question. Given the enormity of data in the future of digital business, we have to ask the question, "Well, what role is advanced analytics playing to keep us on track as we think about, ultimately, driving forward for a digital business?" Now we draw this picture out to customers to try to explain the things that they'll have to do to become an increasingly digital business.
And it starts with this idea that a digital business transformation requires investment in new capabilities, new business capabilities that foster the role that digital assets can play within the business, that simplify making decisions about where to put people and how to institutionalize work, and that ultimately help sustain the value of the data within the business over time. And a way to think about it is that any digital business has to establish the capabilities to better capture data and create catalysts from data. Now what do we mean by that? We mean basically that data is a catalyst for action. Data can actually be the source of value if you're a media company, for example. But in most businesses data is a catalyst: the next best action, a better prediction, a superior forecast, a faster, simpler, and less expensive report for compliance purposes. Data is a catalyst. So we capture it and we translate it into a catalyst that then can actually guide action. That's the simple set of capabilities that we have to deploy here. Capturing data, turning it into the catalysts that then have consequential impacts in front of customers, provides superior experience and better business. Now if we try to map those prescriptions for business capabilities onto industry buzzwords, here's what we end up with. Capture data: well, that's the centerpiece of what the industrial internet of things is about, or the internet of things, if we're talking mainly about small devices in a consumer world. Capturing data is essential, and IIoT is going to be crucial to that effort, as well as mobile computing and other types of things. We sometimes like to talk about it as the internet of things and people. Big data and analytics should be properly thought of as helping businesses turn those streams of information into models and insights that can lead to action. So that's what the whole purpose of big data analytics is.
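The capture-to-catalyst loop described above can be sketched in a few lines. The event shapes and next-best-action rules here are invented purely for illustration; real systems would use far richer models.

```python
# Toy illustration of "capture data -> create a catalyst": raw events are
# reduced to a per-customer model, and the model emits a next-best-action.
from collections import Counter

def capture(events):
    """'Capture' step: reduce raw events to a per-customer behavior model."""
    model = {}
    for e in events:
        model.setdefault(e["customer"], Counter())[e["action"]] += 1
    return model

def next_best_action(model, customer):
    """'Catalyst' step: turn the model into a concrete recommended action.
    Thresholds and actions are hypothetical."""
    profile = model.get(customer, Counter())
    if profile["abandon_cart"] >= 2:
        return "send_discount_offer"
    if profile["viewed_product"] >= 3:
        return "recommend_similar_items"
    return "no_action"

events = [
    {"customer": "c1", "action": "viewed_product"},
    {"customer": "c1", "action": "abandon_cart"},
    {"customer": "c1", "action": "abandon_cart"},
    {"customer": "c2", "action": "viewed_product"},
]
model = capture(events)
```

The design point is the separation: capture runs continuously and cheaply, while the catalyst step is where business judgment (the rules) actually lives.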
It's not to just capture more data and store more data, it's about using that data, which comes from a lot of different locations, and turning it into catalysts, sources of value within the business. And the final one is branded customer experience. At the end of the day, what we're talking about is how we're going to use digital technology to better engage our customers, better engage our partners, better engage our markets, and better engage our employees. And increasingly, as customers demonstrate a preference for greater utilization of digital technology in their lives, the whole notion of a branded experience is going to be tied back to how well we provide these essential digital capabilities to our customers in our markets. So analytics plays an incredibly important role here, because we've always been pretty good at capturing data and we've always, we're getting better I guess I should say, at utilizing insights from that data that could be gleaned on an episodic basis and turning that into some insight for a customer. Usually really smart people in sales or marketing or manufacturing or product management play that role. But what we're talking about is operationalizing: turning data into value for customers on a continuous, ongoing basis. And analytics is crucial for that; analytics is also crucial to ensure that we can stay on track as we effect these transformations and transitions. Now I want to draw your attention, obviously, to an important piece as we go forward here. And that is this notion of how we capture that data so that it is appropriately prepped and set up so that we can create value from analytics. And that's going to be the basis of the third point that I'm going to talk about. Why is hybrid cloud monitoring emerging as a crucial transformation tool? Now monitoring has been around for a long time. We've been monitoring individual assets to ensure we get greater efficiency and utilization. CA's been a master of that for 30, 35 years.
Increasingly though, we need to think about how systems come together in a lot of different ways to increase what we call the plasticity of the infrastructure. The ability of the infrastructure to not only scale but to reconfigure itself in response to the crucial new work that digital businesses have to perform. So how's that going to play out? It's become very popular within the industry to talk about how data is going to move to the cloud. And that's certainly going to happen. There's going to be a lot of data that ends up in the cloud. But as we think about the realities of moving data, data is not just an ephemeral thing. Data has real physical characteristics, real legal implications. And ultimately intellectual property is increasingly rendered in the form of data. And so we have to be very careful how we think about data being moved across the enterprise into any number of different locations. It's one of the most strategic decisions that a board of directors is going to make. How do we handle and take care of our data assets? Now I want to focus just on one element of that. Hopefully provide a simple proof point to make this argument. And that is, if we looked at how data is generated, for example, in an Edge setting. Say we looked at the cost of moving data from a wind farm. A relatively small straightforward wind farm with a number of different sensors. What does it cost to move that data to the cloud? And that's provided here. If we think about the real costs of data, the cost of moving data from an Edge situation, even in a relatively simple example, back to the cloud can be dramatic. Hundreds of thousands of dollars. Limitations based on latencies, concerns about traversing borders that have legal jurisdictions, and obviously also, as I said, the intellectual property realities. 
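The move-versus-process-in-place comparison is ultimately simple arithmetic. Here is a hedged sketch where every number (sensor count, data rates, per-GB transfer price) is an assumption rather than the talk's actual wind-farm figures; the point is the shape of the math.

```python
# Rough move-vs-process-in-place cost comparison. All figures are invented
# assumptions, not the wind-farm numbers referenced in the talk.
sensors = 500                   # assumed sensors on the wind farm
gb_per_sensor_per_day = 2.0     # assumed telemetry volume per sensor
transfer_per_gb = 0.09          # assumed WAN/cloud transfer cost ($/GB)

daily_gb = sensors * gb_per_sensor_per_day

# Option A: ship all raw data to the cloud.
yearly_transfer_cost = daily_gb * transfer_per_gb * 365

# Option B: process at the edge and ship only derived results,
# assumed here to be ~1% of the raw volume.
reduction = 0.01
yearly_local_cost = daily_gb * reduction * transfer_per_gb * 365

savings = yearly_transfer_cost - yearly_local_cost
```

Even before latency and jurisdiction concerns, the transfer bill alone scales linearly with raw volume, which is why shrinking data at the edge dominates the comparison.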
But the bottom line here is that it shows that it's going to be much cheaper to process the data in place, process the data close to where the action needs to be taken, than to move it all to the cloud. And we think that's going to become a regular feature of how we think about setting up infrastructure in business in the future. Increasingly, it's not going to be about moving data to the cloud only; we're going to have additional options about moving cloud and cloud services to the data. Increasingly this is going to be the tack that businesses are going to take. It's about finding ways to move that sense of control, that notion of quality of service, and that flexibility in how we provision infrastructure, so that the cloud experience comes to where the event needs to take place. That, going forward, will be the centerpiece of a lot of technology decision making. It doesn't mean we're not going to move data to the cloud; it just means that we're going to be smart about when we do it, how we do it, and understanding when it makes more sense to move the cloud, or the cloud set of services, closer to the event so that we can process it in place. Now this is a really crucial concern because it suggests there's going to be a greater distribution of data and not a greater centralization of data. And you can probably see where I'm going with this. Greater distribution of data ultimately means that there are going to be a lot more things that require visibility into their performance, visibility into how they work. If it was all going to be in one place, then we could let someone else actually handle a lot of those questions about what's going on, how is it working. But as our businesses become more digital and our data assets become more central to how we provide customer experience, it means that the resources that we use to generate value out of those assets have to be managed and monitored appropriately.
Now we have done a lot of work around this, and what our research pretty strongly shows is that over the next 10 years, we're going to see three things happen. First off, we're going to see a lot of investment in public cloud options, both in the form of SaaS as well as infrastructure as a service. So that will continue. There's no question that we're going to see some of the big public cloud suppliers become more important. But our expectation also is that we will see significant net new investment in what we call true private cloud: the idea of moving those cloud services on premises so that we can support local events that need high-quality data and that kind of capability. The second thing I want to point out here is that while we do expect to see significant net new efficiencies in how we run all these resources, if we look at the cost of operational labor over the course of the next decade, we do expect to see the cost go down by around 7%. So we will see greater productivity in the world of IT labor. But it's not going to crash like many people predict. And one of the reasons it's not going to crash is because of the incredible net new growth of digital assets. But the third thing to note here is that we are not going to see the type of massive dumping of traditional infrastructure that many people predict. There are too many assets, too much value already in place in a lot of systems, and instead what we're going to see is a blending of all of these different capabilities in a rational way so that the business can achieve the digital outcomes that it seeks. The challenge over the course of the next decade, though, is going to be to find ways to make all these different resources a feature of our technology plan, a feature of how we run our business.
Historically we've tended to think about these in silos, and the monitoring challenge that we put in place was to better generate efficiencies out of an individual asset. Well, as we go forward, increasingly we need to think about not how one resource works, but how all these resources work. It's time for business to think about the internet not as something that's external, but as the basis for their computing. The internet is a computer. How we slice it up for our business is a statement about how we're going to build a set of distributed capabilities but weave them together so that we have a set of resources that can, in fact, reflect the business needs and support business requirements. And monitoring becomes crucial to that, because as we move forward the goal needs to be to enfranchise, federate a lot of these distributed resources into a working, coherent statement of how computing serves our business. And that's going to require an approach that is much more focused on how things come together and how things can be brought into a coherent whole, as opposed to just the efficiency of any single tool or any single device. That's where digital business has to go: how can we bring all of these resources together into a coherent whole that supports our business needs. And the goal of the next generation of monitoring is to make that possible. Okay, so as we think about what we've talked about, we basically made a couple of points here. The first, when we talked about what is digital business: data is the digital business asset. What we're trying to do here is use data to improve the effectiveness of the outcomes that we seek for customers. Digital business elevates IT but forces real and material changes. The second point that I made is how advanced analytics are helping. Well, analytics turns data into business catalysts that ultimately guide and shape customer experience. Crucial point.
And the last point that I want to make is, when we think about cloud monitoring, remember that as we move forward in the digital world, as you make choices, your brand fails when your infrastructure fails. So as a consequence, for those of you who are in the midst of thinking about the future role that monitoring is going to play in your world, choose your suppliers carefully. It's not about having a tool for a device; it's about how monitoring can bring a lot of different resources into a coherent picture to ensure that your business is able to process, compute, store, and effect dramatic improvements to customer experience across the entire infrastructure asset. And the last thought that I'll leave you with is that CA Tech has been one of the companies at the vanguard of thinking about how this is going to work over the next decade in the industry.

Published Date : Aug 22 2017

