Steve Mullaney, CEO, Aviatrix | AWS re:Invent 2022


 

(upbeat music) >> You got it, it's theCUBE. We are in Vegas. This is the Cube's live coverage day one of the full event coverage of AWS reInvent '22 from the Venetian Expo Center. Lisa Martin here with Dave Vellante. We love being in Vegas, Dave. >> Well, you know, this is where Super Cloud sort of was born. >> It is. >> Last year, just about a year ago. Steve Mullaney, CEO of of Aviatrix, you know, kind of helped us think it through. And we got some fun stories around. It's happening, but... >> It is happening. We're going to be talking about Super Cloud guys. >> I guess I just did the intro, Steve Mullaney >> You did my intro, don't do it again. >> Sorry I stole that from you, yeah. >> Steve Mullaney, joined just once again, one of our alumni. Steve, great to have you back on the program. >> Thanks for having me back. >> Dave: It's happening. >> It is happening. >> Dave: We talked about a year ago. Net Studio was right there. >> That was two years. Was that year ago, that was a year ago. >> Dave: It was last year. >> Yeah, I leaned over >> What's happening? >> so it's happening. It's happening. You know what, the thing I noticed what's happening now is the maturity of the cloud, right? So, if you think about this whole journey to cloud that has been, what, AWS 12 years. But really over the last few years is when enterprises have really kind of joined that journey. And three or four years ago, and this is why I came out of retirement and went to Aviatrix, was they all said, okay, now we're going to do cloud. You fast forward now three, four years from now, all of a sudden those five-year plans of evacuating the data center, they got one year left, two year left, and they're going, oh crap, we don't have five years anymore. We're, now the maturity's starting to say, we're starting to put more apps into the cloud. We're starting to put business critical apps like SAP into the cloud. This is not just like the low-hanging fruit anymore. So what's happening now is the business criticality, the scale, the maturity. And they're all now starting to hit a lot of limits that have been put into the CSPs that you never used to hit when you didn't have business critical and you didn't have that scale. They were always there. The rocks were always there. Just it was, you never hit 'em. People are starting to hit 'em now. So what's happening now is people are realizing, and I'm going to jump the gun, you asked me for my bumper sticker. The bumper sticker for Aviatrix is, "Good enough is no longer good enough." Now it's funny, it came in a keynote today, but what we see from our customers is it's time to upgrade the native constructs of networking and network security to be enterprise-grade now. It's no longer good enough to just use the native constructs because of a lack of visibility, the lack of controls, the lack of troubleshooting capabilities, all these things. "I now need enterprise grade networking." >> Let me ask you a question 'cause you got a good historical perspective on the industry. When you think about when Maritz was running VMWare. He was like any app, he said basically we're building a software mainframe. And they kind of did that, right? But then they, you know, hit the issue with scale, right? And they can't replicate the cloud. Are there things that we can draw from that experience and apply that to the cloud? What's the same, what's different? >> Oh yeah. So, 1992, do you remember what happened in 1992? I do this, weird German software company called SAP >> Yeah, R3. 
announced a release as R/3. Which was their first three-tier client-server application of SAP. Before that it ran on mainframes, TCP/IP. Remember that Protocol War? Guess what happened post-1992, everybody goes up like this. Infrastructure completely changes. Cisco, EMC, you name it, builds out these PCE client-server architectures. The WAN changes, MPLS, the campus, everything's home running back to that data center running SAP. That was the last 30 years ago. Great transformation of SAP. They've did it again. It's called S/4Hana. And now it's running and people are switching to S/4Hana and they're moving to the cloud. It's just starting. And that is going to alter how you build infrastructure. And so when you have that, being able to troubleshoot in hours versus minutes is a big deal. This is business critical, millions of dollars. This is not fun and games. So again, back to my, what was good enough for the last three or four years for enterprises no longer good enough, now I'm running business critical apps like SAP, and it's going to completely change infrastructure. That's happening in the cloud right now. And that's obviously a significant seismic shift, but what are some of the barriers that customers have been able to eliminate in order to get there? Or is it just good enough isn't good enough anymore? >> Barriers in terms of, well, I mean >> Lisa: The adoption. Yeah well, I mean, I think it's all the things that they go to cloud is, you know, the complexity, really, it's the agility, right? So the barrier that they have to get over is how do I keep the developer happy because the developer went to the cloud in the first place, why? Swipe the credit card because IT wasn't doing their job, 'cause every time I asked them for something, they said no. So I went around 'em. We need that. That's what they have to overcome in the move to the cloud. That is the obstacle is how do I deliver that visibility, that control, the enterprise, great functionality, but yet give the developer what they want. Because the minute I stop giving them that swipe the card operational model, what do you think they're going to do? They're going to go around me again and I can't, and the enterprise can't have that. >> That's a cultural shift. >> That's the main barrier they've got to overcome. >> Let me ask you another question. Is what we think of as mission critical, the definition changing? I mean, you mentioned SAP, obviously that's mission critical for operations, but you're also seeing new applications being developed in the cloud. >> I would say anything that's, I call business critical, same thing, but it's, business critical is internal to me, like SAP, but also anything customer-facing. That's business critical to me. If that app goes down or it has a problem, I'm not collecting revenue. So, you know, back 30 years ago, we didn't have a lot of customer-facing apps, right? It really was just SAP. I mean there wasn't a heck of a lot of cust- There were customer-facing things. But you didn't have all the digitalization that we have now, like the digital economy, where that's where the real explosion has come, is you think about all the customer-facing applications. And now every enterprise is what? A technology, digital company with a customer-facing and you're trying to get closer and closer to who? The consumer. >> Yeah, self-service. >> Self-service, B2C, everybody wants to do that. Get out of the middle man. And those are business critical applications for people. 
>> So what's needed under the covers to make all this happen? Give us a little double click on where you guys fit. >> You need consistent architecture. Obviously not just for one cloud, but for any cloud. But even within one cloud, forget multicloud, it gets worst with multicloud. You need a consistent architecture, right? That is automated, that is as code. I can't have the human involved. These are all, this is the API generation, you've got to be able to use automation, Terraform. And all the way from the application development platform you know, through Jenkins and all other software, through CICD pipeline and Terraform, when you, when that developer says, I want infrastructure, it has to go build that infrastructure in real time. And then when it says, I don't need it anymore it's got to take it away. And you cannot have a human involved in that process. That's what's completely changed. And that's what's giving the agility. And that's kind of a cloud model, right? Use software. >> Well, okay, so isn't that what serverless does, right? >> That's part of it. Absolutely. >> But I might still want control sometimes over the runtime if I'm running those mission critical applications. Everything in enterprise is a heterogeneous thing. It's like people, people say, well there's going to, the people going to repatriate back to on-prem, they are not repatriating back to on-prem. >> We were just talking about that, I'm like- >> Steve: It's not going to happen, right? >> It's a myth, it's a myth. >> And there's things that maybe shouldn't have ever gone into the cloud, I get that. Look, do people still have mainframes? Of course. There's certain things that you just, doesn't make sense to move to the new generation. There were things, certain applications that are very static, they weren't dynamic. You know what, keeping it on-prem it's, probably makes sense. So some of those things maybe will go back, but they never should have gone. But we are not repatriating ever, you know, that's not going to happen. >> No I agree. I mean, you know, there was an interesting paper by Andreessen, >> Yeah. >> But, I mean- >> Steve: Yeah it was a little self-serving for some company that need more funding, yeah. You look at the numbers. >> Steve: Yeah. >> It tells the story. It's just not happening. >> No. And the reason is, it's that agility, right? And so that's what people, I would say that what you need to do is, and in order to get that agility, you have to have that consistency. You have to have automation, you have to get these people out of the way. You have to use software, right? So it's that you have that swipe the card operational model for the developers. They don't want to hear the word no. >> Lisa: Right. >> What do you think is going to happen with AWS? Because we heard, I don't know if you heard Selipsky's keynote this morning, but you've probably heard the hallway talk. >> Steve: I did, yeah. >> Okay. You did. So, you know, connecting the dots, you know doubling down on all the primitives, that we expected. We kind of expected more of the higher level stuff, which really didn't see much of that, a little bit. >> Steve: Yeah. So, you know, there's a whole thing about, okay, does the cloud get commoditized? Does it not? I think the secret weapon's the ecosystem, right? Because they're able to sell through with guys like you. Make great margins on that. >> Steve: Yeah, well, yeah. >> What are your thoughts though on the future of AWS? >> IAS is going to get commoditized. 
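A minimal sketch of the self-service, no-human-in-the-loop provisioning model Mullaney describes above, where a developer request flowing through the CI/CD pipeline creates infrastructure and tears it down again. In practice this is typically expressed in Terraform; the Python/boto3 version below, including the CIDR block and tag values, is an illustrative assumption rather than Aviatrix's actual tooling.

```python
# Illustrative only: a developer-facing "give me a network / take it away" flow,
# the kind of step a CI/CD pipeline (e.g., Jenkins plus Terraform) would run with
# no human approval in the loop. All names and values here are made up.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def provision_dev_network(cidr: str = "10.42.0.0/16") -> str:
    """Create an isolated VPC for a developer request and return its ID."""
    vpc_id = ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]
    ec2.create_tags(Resources=[vpc_id],
                    Tags=[{"Key": "owner", "Value": "dev-self-service"}])
    return vpc_id

def teardown_dev_network(vpc_id: str) -> None:
    """Delete the VPC as soon as the developer signals it is no longer needed."""
    ec2.delete_vpc(VpcId=vpc_id)
```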
So this is the fallacy that a lot of the CSPs have, is they thought that they were going to commoditize enterprise. It never happens that way. What's going to happen is infrastructure as a service, the lower level, which is why you see all the CSPs talking about what? Oracle Cloud, industry cloud. >> Well, sure, absolutely, yeah. >> We got to get to the apps, we got to get to SAP, we got to get to all that, because that's not going to get commoditized, right. But all the infrastructural service where AWS is king that is going to get commoditized, absolutely. >> Okay, so, but historically, you know Cisco's still got 60% plus gross margins. EMC always had good margin. How pure is the lone survivor in Flash? They got 70% gross margins. So infrastructure actually has always been a pretty good business. >> Yeah that's true. But it's a hell of a lot easier, particularly with people like Aviatrix and others that are building these common architectural things that create simplicity and abstract the way the complexities of underneath such that we allow your network to run an AWS, Azure, Google, Oracle, whatever, exactly the same. So it makes it a hell of a lot easier >> Dave: Super cloud. >> to go move. >> But I want to tap your brain because you have a good perspective of this because servers used to be a great margin business too on-prem and now it's not. It's a low margin business 'cause all the margin went to Intel. >> Yeah. But the cloud guys, you know, AWS in particular, makes a ton of dough on servers, so, or compute. So it's going to be interesting to see over time if that gets com- that's why they're going so hard after silicon. >> I think if they can, I think if you can capture the workload. So AWS and everyone else, as another example, this SAP, they call that a gravity workload. You know what gravity workload is? It's a black hole. It drags everything else with it. If you get SAP or Oracle or a mainframe app, it ain't going anywhere. And then what's going to happen is all your other apps are going to follow it. So that's what they're all going to fight for, is type of app. >> You said something earlier about, forget multicloud, for a moment, but, that idea of the super cloud, this abstraction layer, I mean, is that a real business value for customers other than, oh I got all these clouds, I need 'em to work together. You know, from your perspective from Aviatrix perspective, is it an opportunity for you to build on top of that? Or are you just looking at, look, I'm going to do really good work in AWS, in Azure? Now we're making the same experience. >> I hear this every single day from our customers is they look and they say, good enough isn't good enough. I've now hit the point, I'm hitting route limitations. I'm hitting, I'm doing things manually, and that's fine when I don't have that many applications or I don't have mission critical. The dogs are eating the dog food, we're going into the cloud and they're looking and then saying this is not an operational model for me. I've hit the point where I can't keep doing this, I can't throw bodies at this, I need software. And that's the opportunity for us, is they look and they say, I'm doing it in one cloud, but, and there's zero chance I'm going to be able to figure that out in the two or three other clouds. Every enterprise I talk to says multicloud is inevitable. Whether they're in it now, they all know they're going to go, because it's the business units that demand it. 
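As a rough illustration of the "one consistent architecture across AWS, Azure, Google, Oracle" point made above: define the intent once and hand it to per-cloud drivers so the operational model stays identical everywhere. The class and method names below are invented for this sketch and are not Aviatrix's actual API.

```python
# Hypothetical sketch: one network-policy definition, many cloud back ends.
from dataclasses import dataclass
from typing import Protocol, List

@dataclass
class NetworkPolicy:
    name: str
    cidr: str
    allow_ports: tuple

class CloudDriver(Protocol):
    def apply(self, policy: NetworkPolicy) -> None: ...

class AwsDriver:
    def apply(self, policy: NetworkPolicy) -> None:
        print(f"[aws]   applying {policy.name} to VPCs in {policy.cidr}")

class AzureDriver:
    def apply(self, policy: NetworkPolicy) -> None:
        print(f"[azure] applying {policy.name} to VNets in {policy.cidr}")

def rollout(policy: NetworkPolicy, drivers: List[CloudDriver]) -> None:
    """Push the same declarative policy to every cloud, one driver per provider."""
    for driver in drivers:
        driver.apply(policy)

rollout(NetworkPolicy("web-tier", "10.0.0.0/8", (443,)), [AwsDriver(), AzureDriver()])
```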
It's not the IT teams that demand it, it's the line of business that says, I like GCP for this reason. >> The driver's functionality that they're getting. >> It's the app teams that say, I have this service and GCP's better at it than AWS. >> Yeah, so it's not so much a cost game or the end all coffee mug, right? >> No, no. >> Google does this better than Microsoft, or better than- >> If you asked an IT person, they would rather not have multicloud. They actually tried to fight it. No, why would you want to support four clouds when you could support one right? That's insane. >> Dave and Lisa: Right. If they didn't have a choice and, and so it, the decision was made without them, and actually they weren't even notified until day before. They said, oh, good news, we're going to GCP tomorrow. Well, why wasn't I notified? Well, we're notifying you now. >> Yeah, you would've said, no. >> Steve: This is cloud bottle, let's go. >> Super cloud again. Did you see the Berkeley paper, sky computing I think they call it? Down at Berkeley, yep Dave Linthicum from Deloitte. He's talking about, I think he calls it meta cloud. It's happening. >> Yeah, yeah, yeah. >> It's happening. >> No, and because customers, customers want that. They... >> And talk about some customer example or two that you think really articulates the value of why it's happening and the outcomes that it's generating. >> I mean, I was just talking to Lamb Weston last night. So we had a reception, Lamb Weston, huge, frozen potatoes. They serve like, I dunno, some ungodly percentage of all the french fries to all the fast food. It's unbelievable what they do. Do you know, they have special chemicals they put on the french fries. So when you get your DoorDash, they stay crispy longer. They've invented that patented it. But anyway, it's all these businesses you've never heard of and they do all the, and again, they're moving to SAP or they're actually SAP in the cloud, they're one of the first ones. They did it through Accenture. They're pulling it back off from Accenture. They're not happy with the service they're getting. They're going to use us for their networking and network security because they're going to get that visibility and control back. And they're going to repatriate it back from a managed service and bring it back and run it in-house. And the SAP basis engineers want it to happen because they see the visibility and control that the infrastructure guy's going to get because of us, which leads to, all they care about is uptime and performance. That's it. And they're going to say the infrastructure team's going to lead to better uptime and better performance if it's running on Aviatrix. >> And business performance and uptime, business critical >> That is the business. That is the business. >> It is. So what are some of the things next coming down the pike from Aviatrix? Any secret sauce you can share? >> Lot of secrets. So, two secrets. One, the next thing people really want to do, embedded network security into the network. We've kind of talked about this. You're going to be seeing some things from us. Where does network security belong? In the network. Embedded in the fabric of the network, not as this dumb device called the next-gen firewall that you steer traffic to. It has to be into the fabric of what we do, what we call airspace. You're going to see us talk about that. And then the next thing, back to the maturity of the cloud, as they build out the core, guess what they're doing? 
It's this thing called edge, Dave, right? And guess what they're going to do? It's not about connecting the cloud to the edge to the cloud with dumb things like SD-WAN, right? Or SaaS. It's actually the other way around. Go into the cloud, turn around, look out at the edge and say, how do I extend the cloud out to the edge, and make it look like a VPC. That's what people are doing. Why, 'cause I want the operational model. I want all the things that I can do in the cloud out at the edge. And everyone knows it's been in networking. I've been in networking for 37 years. He who wins the core does what? Wins the edge, 'cause that's what happens. You do it first in the core and then you want one architecture, one common architecture, one consistent way of doing everything. And that's going to go out to the edge and it's going to look like a VPC from an operational model. >> And Amazon's going to support that, no doubt. >> Yeah, I mean every, you know, every, and then it's just how do you want to go do that? And us as the networking and network security provider, we're getting dragged to the edge by our customer. Because you're my networking provider. And that means, end to end. And they're trying to drag us into on-prem too, yeah. >> Lot's going on, you're going to have to come back- >> Because they want one networking vendor. >> But wait, and you say what? >> We will never do like switches and any of the keep Arista, the Cisco, and all that kind of stuff. But we will start sucking in net flow. We will start doing, from an operational perspective, we will integrate a lot of the things that are happening in on-prem into our- >> No halfway house. >> Copilot. >> No halfway house, no two architectures. But you'll take the data in. >> You want one architecture. >> Yeah. >> Yeah, totally. >> Right play. >> Amazing stuff. >> And he who wins the core, guess what's more strategic to them? What's more strategic on-prem or cloud? Cloud. >> It flipped three years ago. >> Dave: Yeah. >> So he who wins in the clouds going to win everywhere. >> Got it, We'll keep our eyes on that. >> Steve: Cause and effect. >> Thank you so much for joining us. We've got your bumper sticker already. It's been a great pleasure having you on the program. You got to come back, there's so, we've- >> You posting the bumper sticker somewhere? >> Lisa: It's going to be our Instagram. >> Oh really, okay. >> And an Instagram sto- This is new for you guys. Always coming up with new ideas. >> Raising the bar. >> It is, it is. >> Me advance, I mean, come on. >> I love it. >> All right, for our guest Steve Mullaney and Dave Vellante, I'm Lisa Martin. You're watching theCUBE, the leader in live enterprise and emerging tech coverage.

Published Date : Nov 29 2022

Justin Emerson, Pure Storage | SuperComputing 22


 

(soft music) >> Hello, fellow hardware nerds and welcome back to Dallas Texas where we're reporting live from Supercomputing 2022. My name is Savannah Peterson, joined with the John Furrier on my left. >> Looking good today. >> Thank you, John, so are you. It's been a great show so far. >> We've had more hosts, more guests coming than ever before. >> I know. >> Amazing, super- >> We've got a whole thing going on. >> It's been a super computing performance. >> It, wow. And, we'll see how many times we can say super on this segment. Speaking of super things, I am in a very unique position right now. I am a flanked on both sides by people who have been doing content on theCUBE for 12 years. Yes, you heard me right, our next guest was on theCUBE 12 years ago, the third event, was that right, John? >> Man: First ever VM World. >> Yeah, the first ever VM World, third event theCUBE ever did. We are about to have a lot of fun. Please join me in welcoming Justin Emerson of Pure Storage. Justin, welcome back. >> It's a pleasure to be here. It's been too long, you never call, you don't write. (Savannah laughs) >> Great to see you. >> Yeah, likewise. >> How fun is this? Has the set evolved? Is everything looking good? >> I mean, I can barely remember what happened last week, so. (everyone laughs) >> Well, I remember lot's changed that VM world. You know, Paul Moritz was the CEO if you remember at that time. His actual vision actually happened but not the way, for VMware, but the industry, the cloud, he called the software mainframe. We were kind of riffing- >> It was quite the decade. >> Unbelievable where we are now, how we got here, but not where we're going to be. And you're with Pure Storage now which we've been, as you know, covering as well. Where's the connection into the supercomputing? Obviously storage performance, big part of this show. >> Right, right. >> What's the take? >> Well, I think, first of all it's great to be back at events in person. We were talking before we went on, and it's been so great to be back at live events now. It's been such a drought over the last several years, but yeah, yeah. So I'm very glad that we're doing in person events again. For Pure, this is an incredibly important show. You know, the product that I work with, with FlashBlade is you know, one of our key areas is specifically in this high performance computing, AI machine learning kind of space. And so we're really glad to be here. We've met a lot of customers, met a lot of other folks, had a lot of really great conversations. So it's been a really great show for me. And also just seeing all the really amazing stuff that's around here, I mean, if you want to find, you know, see what all the most cutting edge data center stuff that's going to be coming down the pipe, this is the place to do it. >> So one of the big themes of the show for us and probably, well, big theme of your life, is balancing power efficiency. You have a product in this category, Direct Flash. Can you tell us a little bit more about that? >> Yeah, so Pure as a storage company, right, what do we do differently from everybody else? And if I had to pick one thing, right, I would talk about, it's, you know, as the name implies, we're an all, we're purely flash, we're an all flash company. We've always been, don't plan to be anything else. And part of that innovation with Direct Flash is the idea of rather than treating a solid state disc as like a hard drive, right? 
Treat it as it actually is, treat it like who it really is and that's a very different kind of thing. And so Direct Flash is all about bringing native Flash interfaces to our product portfolio. And what's really exciting for me as a FlashBlade person, is now that's also part of our FlashBlade S portfolio, which just launched in June. And so the benefits of that are our myriad. But, you know, talking about efficiency, the biggest difference is that, you know, we can use like 90% less DRAM in our drives, which you know, everything uses, everything that you put in a drive uses power, it adds cost and all those things and so that really gives us an efficiency edge over everybody else and at a show like this, where, I mean, you walk the aisles and there's there's people doing liquid cooling and so much immersion stuff, and the reason they're doing that is because power is just increasing everywhere, right? So if you can figure out how do we use less power in some areas means you can shift that budget to other places. So if you can talk to a customer and say, well, if I could shrink your power budget for storage by two thirds or even, save you two-thirds of power, how many more accelerators, how many more CPUs, how much more work could you actually get done? So really exciting. >> I mean, less power consumption, more power and compute. >> Right. >> Kind of power center. So talk about the AI implications, where the use cases are. What are you seeing here? A lot of simulations, a lot of students, again, dorm room to the boardroom we've been saying here on theCUBE this is a great broad area, where's the action in the ML and the AI for you guys? >> So I think, not necessarily storage related but I think that right now there's this enormous explosion of custom silicon around AI machine learning which I as a, you said welcome hardware nerds at the beginning and I was like, ah, my people. >> We're all here, we're all here in Dallas. >> So wonderful. You know, as a hardware nerd we're talking about conferences, right? Who has ever attended hot chips and there's so much really amazing engineering work going on in the silicon space. It's probably the most exciting time for, CPU and accelerator, just innovation in, since the days before X 86 was the defacto standard, right? And you could go out and buy a different workstation with 16 different ISAs. That's really the most exciting thing, I walked past so many different places where you know, our booth is right next to Havana Labs with their gout accelerator, and they're doing this cute thing with one of the AI image generators in their booth, which is really cute. >> Woman: We're going to have to go check that out. >> Yeah, but that to me is like one of the more exciting things around like innovation at a, especially at a show like this where it's all about how do we move forward, the state of the art. >> What's different now than just a few years ago in terms of what's opening up the creativity for people to look at things that they could do with some of the scale that's different now. >> Yeah well, I mean, every time the state of the art moves forward what it means is, is that the entry level gets better, right? So if the high end is going faster, that means that the mid-range is going faster, and that means the entry level is going faster. So every time it pushes the boundary forward, it's a rising tide that floats all boats. 
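To put rough numbers on the power-reallocation argument Emerson makes a moment earlier (cut storage power by two-thirds and spend the savings on accelerators), here is a back-of-the-envelope calculation. Every figure is an invented assumption for illustration, not a Pure Storage number.

```python
# Hypothetical rack-power budget, purely illustrative.
rack_budget_kw = 30.0      # assumed total rack power budget
storage_share = 0.15       # assumed fraction of that budget spent on storage today
storage_savings = 2 / 3    # the "save two-thirds of storage power" scenario
gpu_kw = 0.7               # assumed draw of a single accelerator

freed_kw = rack_budget_kw * storage_share * storage_savings
extra_accelerators = int(freed_kw // gpu_kw)
print(f"Freed {freed_kw:.1f} kW -> room for roughly {extra_accelerators} more accelerators")
# Freed 3.0 kW -> room for roughly 4 more accelerators
```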
And so now, the kind of stuff that's possible to do, if you're a student in a dorm room or if you're an enterprise, the world, the possible just keeps expanding dramatically and expanding almost, you know, geometrically like the amount of data that we are, that we have, as a storage guy, I was coming back to data but the amount of data that we have and the amount of of compute that we have, and it's not just about the raw compute, but also the advances in all sorts of other things in terms of algorithms and transfer learning and all these other things. There's so much amazing work going on in this area and it's just kind of this Kay Green explosion of innovation in the area. >> I love that you touched on the user experience for the community, no matter the level that you're at. >> Yeah. >> And I, it's been something that's come up a lot here. Everyone wants to do more faster, always, but it's not just that, it's about making the experience and the point of entry into this industry more approachable and digestible for folks who may not be familiar, I mean we have every end of the ecosystem here, on the show floor, where does Pure Storage sit in the whole game? >> Right, so as a storage company, right? What AI is all about deriving insights from data, right? And so everyone remembers that magazine cover data's the new oil, right? And it's kind of like, okay, so what do you do with it? Well, how do you derive value from all of that data? And AI machine learning and all of this supercomputing stuff is about how do we take all this data? How do we innovate with it? And so if you want data to innovate with, you need storage. And so, you know, our philosophy is that how do we make the best storage platforms that we can using the best technology for our customers that enable them to do really amazing things with AI machine learning and we've got different products, but, you know at the show here, what we're specifically showing off is our new flashlight S product, which, you know, I know we've had Pure folks on theCUBE before talking about FlashBlade, but for viewers out there, FlashBlade is our our scale out unstructured data platform and AI and machine learning and supercomputing is all about unstructured data. It's about sensor data, it's about imaging, it's about, you know, photogrammetry, all this other kinds of amazing stuff. But, you got to land all that somewhere. You got to process that all somewhere. And so really high performance, high throughput, highly scalable storage solutions are really essential. It's an enabler for all of the amazing other kinds of engineering work that goes on at a place like Supercomputing. >> It's interesting you mentioned data's oil. Remember in 2010, that year, our first year of theCUBE, Hadoop World, Hadoop just started to come on the scene, which became, you know kind of went away and, but now you got, Spark and Databricks and Snowflake- >> Justin: And it didn't go away, it just changed, right? >> It just got refactored and right size, I think for what the people wanted it to be easy to use but there's more data coming. How is data driving innovation as you bring, as people see clearly the more data's coming? How is data driving innovation as you guys look at your products, your roadmap and your customer base? How is data driving innovation for your customers? >> Well, I think every customer who has been, you know collecting all of this data, right? Is trying to figure out, now what do I do with it? 
And a lot of times people collect data and then it will end up on, you know, lower slower tiers and then suddenly they want to do something with it. And it's like, well now what do I do, right? And so there's all these people that are reevaluating you know, we, when we developed FlashBlade we sort of made this bet that unstructured data was going to become the new tier one data. It used to be that we thought unstructured data, it was emails and home directories and all that stuff the kind of stuff that you didn't really need a really good DR plan on. It's like, ah, we could, now of course, as soon as email goes down, you realize how important email is. But, the perspectives that people had on- >> Yeah, exactly. (all laughing) >> The perspectives that people had on unstructured data and it's value to the business was very different and so now- >> Good bet, by the way. >> Yeah, thank you. So now unstructured data is considered, you know, where companies are going to derive their value from. So it's whether they use the data that they have to build better products whether it's they use the data they have to develop you know, improvements in processes. All those kinds of things are data driven. And so all of the new big advancements in industry and in business are all about how do I derive insights from data? And so machine learning and AI has something to do with that, but also, you know, it all comes back to having data that's available. And so, we're working very hard on building platforms that customers can use to enable all of this really- >> Yeah, it's interesting, Savannah, you know, the top three areas we're covering for reinventing all the hyperscale events is data. How does it drive innovation and then specialized solutions to make customers lives easier? >> Yeah. >> It's become a big category. How do you compose stuff and then obviously compute, more and more compute and services to make the performance goes. So those seem to be the three hot areas. So, okay, data's the new oil refineries. You've got good solutions. What specialized solutions do you see coming out because once people have all this data, they might have either large scale, maybe some edge use cases. Do you see specialized solutions emerging? I mean, obviously it's got DPU emerging which is great, but like, do you see anything else coming out at that people are- >> Like from a hardware standpoint. >> Or from a customer standpoint, making the customer's lives easier? So, I got a lot of data flowing in. >> Yeah. >> It's never stopping, it keeps powering in. >> Yeah. >> Are there things coming out that makes their life easier? Have you seen anything coming out? >> Yeah, I think where we are as an industry right now with all of this new technology is, we're really in this phase of the standards aren't quite there yet. Everybody is sort of like figuring out what works and what doesn't. You know, there was this big revolution in sort of software development, right? Where moving towards agile development and all that kind of stuff, right? The way people build software change fundamentally this is kind of like another wave like that. I like to tell people that AI and machine learning is just a different way of writing software. What is the output of a training scenario, right? It's a model and a model is just code. 
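A tiny illustration of the "output of a training scenario is a model, and a model is just code" point: after training, the artifact is a callable, serializable object that gets versioned and deployed like any other build output. scikit-learn is used here only as a stand-in; any framework makes the same point.

```python
from sklearn.linear_model import LogisticRegression
import pickle

X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]

model = LogisticRegression().fit(X, y)   # the "training scenario"
artifact = pickle.dumps(model)           # the pipeline's deployable output

restored = pickle.loads(artifact)
print(restored.predict([[2.5]]))         # behaves like any other function: input -> output
```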
And so I think that as all of these different, parts of the business figure out how do we leverage these technologies, what it is, is it's a different way of writing software and it's not necessarily going to replace traditional software development, but it's going to augment it, it's going to let you do other interesting things and so, where are things going? I think we're going to continue to start coalescing around what are the right ways to do things. Right now we talk about, you know, ML Ops and how development and the frameworks and all of this innovation. There's so much innovation, which means that the industry is moving so quickly that it's hard to settle on things like standards and, or at least best practices you know, at the very least. And that the best practices are changing every three months. Are they really best practices right? So I think, right, I think that as we progress and coalesce around kind of what are the right ways to do things that's really going to make customers' lives easier. Because, you know, today, if you're a software developer you know, we build a lot of software at Pure Storage right? And if you have people and developers who are familiar with how the process, how the factory functions, then their skills become portable and it becomes easier to onboard people and AI is still nothing like that right now. It's just so, so fast moving and it's so- >> Wild West kind of. >> It's not standardized. It's not industrialized, right? And so the next big frontier in all of this amazing stuff is how do we industrialize this and really make it easy to implement for organizations? >> Oil refineries, industrial Revolution. I mean, it's on that same trajectory. >> Yeah. >> Yeah, absolutely. >> Or industrial revolution. (John laughs) >> Well, we've talked a lot about the chaos and sort of we are very much at this early stage stepping way back and this can be your personal not Pure Storage opinion if you want. >> Okay. >> What in HPC or AIML I guess it all falls under the same umbrella, has you most excited? >> Ooh. >> So I feel like you're someone who sees a lot of different things. You've got a lot of customers, you're out talking to people. >> I think that there is a lot of advancement in the area of natural language processing and I think that, you know, we're starting to take things just like natural language processing and then turning them into vision processing and all these other, you know, I think the, the most exciting thing for me about AI is that there are a lot of people who are, you are looking to use these kinds of technologies to make technology more inclusive. And so- >> I love it. >> You know the ability for us to do things like automate captioning or the ability to automate descriptive, audio descriptions of video streams or things like that. I think that those are really,, I think they're really great in terms of bringing the benefits of technology to more people in an automated way because the challenge has always been bandwidth of how much a human can do. And because they were so difficult to automate and what AI's really allowing us to do is build systems whether that's text to speech or whether that's translation, or whether that's captioning or all these other things. I think the way that AI interfaces with humans is really the most interesting part. And I think the benefits that it can bring there because there's a lot of talk about all of the things that it does that people don't like or that they, that people are concerned about. 
But I think it's important to think about all the really great things that maybe don't necessarily personally impact you, but to the person who's not cited or to the person who you know is hearing impaired. You know, that's an enormously valuable thing. And the fact that those are becoming easier to do they're becoming better, the quality is getting better. I think those are really important for everybody. >> I love that you brought that up. I think it's a really important note to close on and you know, there's always the kind of terminator, dark side that we obsess over but that's actually not the truth. I mean, when we think about even just captioning it's a tool we use on theCUBE. It's, you know, we see it on our Instagram stories and everything else that opens the door for so many more people to be able to learn. >> Right? >> And the more we all learn, like you said the water level rises together and everything is magical. Justin, it has been a pleasure to have you on board. Last question, any more bourbon tasting today? >> Not that I'm aware of, but if you want to come by I'm sure we can find something somewhere. (all laughing) >> That's the spirit, that is the spirit of an innovator right there. Justin, thank you so much for joining us from Pure Storage. John Furrier, always a pleasure to interview with you. >> I'm glad I can contribute. >> Hey, hey, that's the understatement of the century. >> It's good to be back. >> Yeah. >> Hopefully I'll see you guys in, I'll see you guys in 2034. >> No. (all laughing) No, you've got the Pure Accelerate conference. We'll be there. >> That's right. >> We'll be there. >> Yeah, we have our Pure Accelerate conference next year and- >> Great. >> Yeah. >> I love that, I mean, feel free to, you know, hype that. That's awesome. >> Great company, great runs, stayed true to the mission from day one, all Flash, continue to innovate congratulations. >> Yep, thank you so much, it's pleasure being here. >> It's a fun ride, you are a joy to talk to and it's clear you're just as excited as we are about hardware, so thanks a lot Justin. >> My pleasure. >> And thank all of you for tuning in to this wonderfully nerdy hardware edition of theCUBE live from Dallas, Texas, where we're at, Supercomputing, my name's Savannah Peterson and I hope you have a wonderful night. (soft music)

Published Date : Nov 16 2022


Murli Thirumale, Portworx by Pure Storage | KubeCon + CloudNativeCon NA 2022


 

>>Good afternoon and welcome back to Detroit, Lisa Martin here with John Furrier. We are live day two of our coverage of Coan Cloud Native Con North America. John, we've had great conversations. Yeah. All day yesterday. Half a day today. So far we're talking all things, Well, not all things Kubernetes so much more than that. We also have to talk about storage and data management solutions for Kubernetes projects, cuz that's obviously critical. >>Yeah, I mean the big trend here is Kubernetes going mainstream has been for a while. The adopt is crossing over, it's crossing the CADs and with that you're seeing security concerns. You're seeing things being gaps being filled. But enterprise grade is really the, the, the story. It's going enterprise, that's managed services, that's professional service, that's basically making things work at scale. This next segment hits that part and we are gonna talk about it in grade length >>With one of our alumni. Moral morale to Molly is back DP and GM of Port Work's Peer Storage. Great to have you back really? >>Yeah, absolutely. Delightful >>To be here. So I was looking on the website, number one in Kubernetes storage. Three years in a row. Yep. Awesome. What's Coworks doing here at KU Con? >>Well, I'll tell you, we, our engineering crew has been so productive and hard at work that I almost can't decide what to kind of tell you. But I thought what, what, what I thought I would do is kind of tell you that we are in forefront of two major trends in the world of Kubernetes. Right? And the, the two trends that I see are one is as a service, so is trend number one. So it's not software eating the world anymore. That's, that's old, old, old news. It's as a service unifying the world. The world wants easy, We all are, you know, subscribers to things like Netflix. We've been using Salesforce or other HR functions. Everything is as a service. And in the world of Kubernetes, it's a sign of that maturity that John was talking about as a platform that now as a service is the big trend. >>And so headline number one, if you will, is that Port Works is leading in the data management world for Kubernetes by providing, we're going all in on easy on as a service. So everything we do, we are satisfying it, right? So if you think, if you think about, if you think about this, that, that there are really, most of the people who are consuming Kubernetes are people who are building platforms for their dev users. And dev users want self service. That's one of the advantages of, of, of Kubernetes. And the more it is service size and made as a service, the more ready to consume it is. And so we are announcing at the show that we have, you know, the basic Kubernetes data management as a service, ha d r as a service. We have backup as a service and we have database as a service. So these are the three major components of data. And all of those are being made available as a service. And in fact, we're offering and announcing at the show our backup as a service freemium version where you can get free forever a terabyte of, of, you know, stuff to do for Kubernetes for forever. >>Congratulations on the announcement. Totally. In line with what the market wants. Developers want Selfer, they wanna also want simplicity by the way they'll leave if they don't like the service. Correct. So that you, you know that before we get into some more specifics, I want Yeah. 
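A hedged sketch of what consuming "backup as a service" can look like from the developer side of a Kubernetes platform: submit a declarative request and let the platform team's service do the rest. The CRD group, kind, and fields below are placeholders invented for illustration, not the actual Portworx or PX-Backup API.

```python
# Hypothetical example: requesting a nightly volume backup as a custom resource.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

backup_request = {
    "apiVersion": "backup.example.io/v1",   # placeholder CRD, not a real Portworx API
    "kind": "VolumeBackup",
    "metadata": {"name": "orders-db-nightly", "namespace": "payments"},
    "spec": {"pvc": "orders-db-data", "schedule": "0 2 * * *", "retain": 7},
}

api.create_namespaced_custom_object(
    group="backup.example.io", version="v1",
    namespace="payments", plural="volumebackups", body=backup_request,
)
```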
Ask you on the industry and some of the point solutions you have, what, it's been two years since the acquisition with Pure Storage. Can you just give an update on how it's gone? Obviously as a service, you guys are hitting all your Marks, developers love it. Storage are big part of the game right now as well as these environments. Yeah. What's the update post acquisition two years. You had a great offering Stay right In >>Point Works. Yeah. So look, John, you're, you're, you're a veteran of the industry and have seen lots of acquisitions, right? And I've been acquired twice before myself. So, you know, there's, there's always best practices and poor practices in terms of acquisitions and I'm, you know, really delighted to say I think this, this acquisition has had some of the best practices. Let me just name a couple of them, right? One of them is just cultural fit, right? Cultural fit is great. Entrepreneurs, anybody, it's not just entrepreneurs. Everybody loves to work in a place they enjoy working with, with people that they, you know, thrive when they, when they interact with. And so the cultural fit with, with Pure is fantastic. The other one is the strategic intent that Pure had when they acquired us is still true. And so that goes a long way, you know, in terms of an investment profile, in terms of the ability to kind of leverage assets within the company. So Pure had kind of disrupted the world of storage using Flash and they wanted to disrupt higher up the stack using Kubernetes. And that's kind of been our role inside their strategy. And it's, it's still true. >>So culture, strategic intent. Yeah. Product market fit as well. You were, you weren't just an asset for customers or acquisition and then let the founders go through their next thing. You are part of their growth play. >>Absolutely. Right. The, the beauty of, of the kind of product market fit is, let's talk about the market is we have been always focused on the global two k and that is at the heart of, you know, purest 10,000 strong customer base, right? They have very strong presence in the, in the global two k. And we, we allow them to kind of go to those same folks with, with the offering. >>So satisfying everything that you do. What's for me as a business, whether I'm a financial services organization, I'm a hospital, I'm a retailer, what's in it for me >>As a customer? Yeah. So the, the what's in it for, for me is two things. It's speed and ease of use, which in a way are related. But, but, but you know, one is when something is provided as a service, it's much more consumable. It's instantly ready. It's like instant oatmeal, right? You just get it just ad hot water and it's there. Yep. So the world of of it has moved from owning large data centers, right? That used to be like 25 years ago and running those data centers better than everybody else to move to let me just consume a data center in the form of a cloud, right? So satisfying the cloud part of the data center. Now people are saying, well I expect that for software and services and I don't want it just from the public cloud, I want it from my own IT department. >>This is old news. And so the, the, the big news here is how fast Kubernetes has kind of moved everything. You know, you take a lot of these changes, Kubernetes is a poster child for things happening faster than the last wave. And in the last couple of years I would say that as a service model has really kind of thrived in the world of Kubernetes. And developers want to be able to get it fast. 
And the second thing is they want to be able to operate it fast. Self-service is the other benefit. Yeah. So speed and self-service are both benefits of, of >>This. Yeah. And, and the thing that's come up clearly in the cube, this is gonna be part of the headlines we'll probably end up getting a lot of highlights from telling my team to make a note of this, is that developers are gonna be be the, the business if you, if you take digital transformation to its conclusion, they're not a department that serves the business, they are the business that means Exactly. They have to be more productive. So developer productivity has been the top story. Yes. Security as a serves all these things. These are, these are examples to make developers more productive. But one of the things that came up and I wanna get your reaction to is, is that when you have disruption and, and the storage vision, you know what disruption it means. Cuz there's been a whole discussion around disruptive operations. When storage goes down, you have back m dr and failover. If there's a disruption that changes the nature of invisible infrastructure, developers want invisible infrastructure. That's the future steady state. So if there's a disruption in storage >>Yeah. It >>Can't affect the productivity and the tool chains and the workflows of developers. Yep. Right? So how do you guys look at that? Cuz you're a critical component. Storage is a service is a huge thing. Yeah. Storage has to, has to work seamlessly. And let's keep the developers out of the weeds. >>John. I think what, what what you put your finger on is another huge trend in the world of Kubernetes where at Cube Con, after all, which is really where, where all the leading practitioners both come and the leading vendors are. So here's the second trend that we are leading and, and actually I think it's happening not just with us, but with other, for folks in the industry. And that is, you know, the world of DevOps. Like DevOps has been such a catchphrase for all, all of us in the industry last five years. And it's been both a combination of cultural change as well as technology change. Here's what the latest is on the, in the world of DevOps. DevOps is now crystallized. It's not some kind of mysterious art form that you read about how people are practicing. DevOps is, it's broken into two, two things now. >>There is the platform part. So DevOps is now a bunch of platforms. And the other part of DevOps is a bunch of practices. So a little bit on both these, the platforms in the world of es there's only three platforms, right? There's the orchestration platforms, the, you know, eks, the open ships of the world and so on. There are the data management platforms, pro people like Port Works. And the third is security platforms, right? You know, Palo Alto Networks, others Aqua or all in this. So these are the three platforms and there are platform engineering teams now that many of our largest customers, some of the largest banks, the largest service providers, they're all operating as a ES platform engineering team. And then now developers, to your point, developers are in the practice of being able to use these platforms to launch new services. So the, the actual IT ops, the ops are run by developers now and they can do it on these platforms. And the platform engineering team provide that as an ease of use and they're there to troubleshoot when problems happen. So the idea of DevOps as a ops practice and a platform is the newest thing. 
>> Talk about a customer example that you think really articulates the value that Portworx and Pure Storage deliver from a data management perspective. >> Yeah, there are so many examples. One of the longest-running examples we have is a very, very large service provider that, you know, you all know and probably use, and they have been using us in the cable set-top box business. They get streams of data from cable boxes all over the world. They collect it all in a centralized system and run Elasticsearch and analytics on it. What happened is they couldn't keep up with this at that scale and depth, right, the speed of activity and the distributed nature of the activity. The only way to solve it was to use something like Kubernetes, managed with Spark, bringing all the data into deep silos of storage, which are all running not even on a SAN but on very deep terabytes and terabytes of storage. So all of this is orchestrated with the help of Portworx, and there's a platform engineering team. We are building that platform for them with some of these other components, which allows them to do analytics and make changes in real time. A huge setup. >> Yeah. Well, you guys have the right architecture. I love the vision. I love what you guys are doing. I think this is right in line with Pure. They've always been disruptors. I remember when we first interviewed the CEO when they started. They stayed on path. They didn't waver. EMC was the big player; they ended up taking their lunch and dinner as well, and they beat them in the marketplace. But now you've got this traction here. So I have to ask you, how's the business? What do the results look like? You're the GM of the cloud native business unit of a storage company that's transformed and transforming. >> Yeah, you know, it's interesting, we just hit the two-year anniversary, right, John? And so what we did was just step back, because we're running so hard, you just take a step back. And we've tripled the business in the two years since the acquisition compared to the two years before, and we were growing then as well, so that's quite a feat. And we've tripled the number of people, the amount of engineering investment we have; the go-to-market investments have been awesome. So business is going really well. Though, I will say, we're watching the market closely. You know, as a former CEO, you have to learn to read the tea leaves when you invest. And what I would say is we're proceeding with caution in the next two quarters. I view business transformation as not a cancelable activity. So that's the good news, right? Our customers are large. >> Right. >> They're not going to stop. Their hand was always going right on the dial; now they're putting their hand on the dial going, hey, what is happening? But my own sense of this is that people will continue to invest through it. The question is at what level. And I also think this is a six-month kind of watch for where we put the dial. So Q4 and Q1, I think, are where we have our watch-the-market sign.
But I have the highest confidence. >> What does your gut tell you? You're an entrepreneur. >> My gut says that we'll go through a little bit of a cautious investment period in the next six months, and after that I think we're going to be back full-in on the crazy growth that we've always had. We're going to grow, by the way, in the meantime. >> I think I'm more bullish. I think there's going to be some weeding out of some of the overinvestment from pre-COVID, pre-bubble, but I think tech's going to continue to grow. I don't see it stopping. >> Yeah. And the investment is going to be in these core platforms. See, back to the platform story: it's going to be in these core platforms and in unifying everything, let's consume it better, rather than let's go experiment with a whole bunch of things all over the map. So you'll see less experimentation and more of, let's harvest some of the investments we've made in the last couple of years. >> And actually be able to enable companies in any industry to truly be data companies. Because, absolutely, we talked about as a service; we all have these expectations that any service we want, we can get, with no delay, because patience has gone since the pandemic. >> So it is kind of tightening up the screws on what they've built, adding some polish to it, adding some more capability. Like I said, a combination of harvesting and new investing is what I think we're going to see. >> Yeah. What are some of the things that you're looking forward to? You talked about some of the growth things and the investment, but as we round out Q4 and head into a new year, what are you excited about? >> Yeah, so, you know, I mentioned our as-a-service kind of platform. The Global 2000 for us has been a set of customers who we co-create things with. And so one of the other sets of things that we are very excited about and announcing is, because we're deployed at scale, we have upgraded our backend. So we now have the ability to go to a million IOPS and more, for the right backends. And so Kubernetes is an add-on which will not slow down your core base infrastructure. The second thing is we've added a bunch of capability on the disaster recovery and business continuity front. You know, we always had metro-distance DR, we had long-distance DR; we've added a near-sync DR. So now we can provide disaster recovery and business continuity for metro distances, across continents, and across the planet. That's a major change that we've made. The third thing is we've added the capability for file, block and object. So now, by adding object, we're really a complete solution. So it is really that maturity of the business that you start seeing as enterprises move to embracing a platform approach and deploying it much more widely. You talked about the early majority, right? And so what they require is more enterprise-class capability, and those are all the things that we've been adding, and we're really looking forward to it. >> Well, it sounds like tremendous evolution and maturation of Portworx in the two years since it's been with Pure Storage. You talked about the cultural alignment, great stuff that you're achieving. Congratulations on that. >> Great stuff ahead, and having fun. Let's not forget that; life's too short not to. >> You're right. Thank you.
We will definitely, as always on theCUBE, keep our eyes on this space. Murli, it's been great to have you back on the program. Thank you for joining John and me. >> Thank you so much. It's a pleasure. >> For our guest and John Furrier, I'm Lisa Martin, here live in Detroit with theCUBE at KubeCon + CloudNativeCon '22. We'll be back after a short break.

Published Date : Oct 28 2022



Eric Herzog, Infinidat | CUBEConversation


 

>> Hey everyone, welcome to this Cube Conversation. I'm your host Lisa Martin, and I have the pleasure of welcoming back our most prolific guest in theCUBE's history, the CMO of Infinidat, Eric Herzog. Eric, it's great to see you. Welcome back. >> Lisa, it's great to be here. Love being on theCUBE. I think this might be number 55 or 56. Been doing them a long time with theCUBE. You guys are great. >> You have, and we always recognize you lately with the Hawaiian shirts. It's your brand, the Eric Herzog brand. We love it. But I like the pin, the Infinidat pin. On brand. >> Yeah. Oh, gotta be on brand. >> Exactly. So talk about the current IT landscape. So much change we've seen in the last couple of years. Specifically, what are some of the big challenges that you're hearing from enterprise customers and cloud service providers? What are some of those major things on their minds? >> So there are a couple of things. First of all, obviously with the rocky economy, and even before COVID, just for storage in particular, CIOs hate storage. I've been doing this now since 1986, and I have never, ever met a CIO, at any company I've been with, and I've been with four of the biggest storage companies on this planet, never met a CIO who used to be a storage guy. So they know they need it, but boy, they really don't like it. And the storage admins have to manage more and more storage. Exabytes and exabytes; what a storage admin has to do is just ballooning. Then you have COVID, and is it a recession, is it growth? And then clearly what's happened in the last year with what's going on in Europe, and is it a recession, the inflation. So they're always looking at, how do we cut money on storage yet still get what we need for our applications, workloads, and use cases? So that's definitely the biggest, the first topic. >> So you've never met a CIO that was a storage admin, or a fan, but as you point out, they need it. And we've seen needs changing in customer landscapes, especially as the threat landscape has changed so dramatically in the last couple of years. Ransomware, you've said it before, I say it too: it's no longer if, it's when, it's how often, it's the frequency. We've got to be able to recover. Backups are being targeted. Talk to me about, in that landscape, some of the evolutions of customer challenges, and maybe those CIOs going, we've got to make sure that our storage data is protected. >> So it's starting to change. However, historically with the CIO, and then when they started hiring CISOs or security directors, whatever they had depending on the company size, it was very much about protecting the edge, if you will, the moat and the wall of the castle. Then it was the network in between, so keep the streets inside the castle clean. Then it was tracking down the bad guy, because if they did get over the wall, well, if I remember correctly, the sheriff of Nottingham never really caught Robin Hood. So the problem is the dwell time, where the ransomware and malware is hidden on storage, which could be as much as 200 days. So I think they're starting to realize at the security level now, forget the guys on the storage side, the security guys, the CISO, the CIO, are starting to realize that if you're going to have a comprehensive cybersecurity strategy, it must include storage. And that is new. >> Well, that's promising then. That's new. I mean, obviously promising given the challenges and the circumstances.
So then, from a storage perspective, for customers in this multi-cloud, hybrid cloud environment, you talked about the edge, cloud, on-prem. What are some of the key things from a storage perspective that customers have to achieve these days to be secure, as data volumes continue to grow and spread? >> So what we've done is implement, on both primary storage and secondary storage, a technology called InfiniSafe. InfiniSafe has the four legs of the storage cybersecurity stool. First of all is creating an air gap; in this case a logical air gap, which can be local or remote. We create an immutable snapshot, which means it can't be changed, it can't be altered. We have a fenced forensic environment to check out the storage, because you don't want to recover malware and ransomware that's hidden in there. You could be making immutable snapshots of actual malware and ransomware and never know you're doing it, right? So you have to check it out. Then you need to do a rapid recovery. The most important thing, if you have an attack, is how fast you can be up and going with recovery. So we have actually instituted now a number of cyber storage security guarantees. We will guarantee the SLAs: one, the snapshot is absolutely immutable, so they know that what they're getting is what they were supposed to be getting. And then also we are guaranteeing recovery times on primary storage. We're guaranteeing recovery of under one minute; we'll make the snapshot available in under one minute, and on secondary storage in under 20 minutes. So those are things you've got to look for from a security perspective. And then the other thing is you've got to practice. In my world, ransomware, malware, a cyber attack is basically a disaster. So yes, you've got the hurricane, yes, you've got the flood, yes, you've got the earthquake, yes, you've got the fire in the building, yes, you've got whatever it may be. But if you don't practice malware and ransomware recoveries and protection, then it might as well be a hurricane or earthquake. It will take your data. >> It will take your data. And the number of customers that pay the ransom is pretty high, isn't it? And they're not necessarily able to recover their data. So it's a huge risk. >> So if you think about it, the government documented that last year roughly $6 trillion was spent either protecting against ransomware and malware or paying ransomware attacks. And there have been several famous ones. There was one in Korea, a ransom of 72 million; it was one of Korea's largest companies. And those are only the ones that make the news. Most of them don't make the news, right? >> So talk to me then, speaking of making the news, nobody wants to do that. We know every industry is vulnerable to this. Some of the ones that might be more vulnerable: healthcare, government, public sector, education. I think the Los Angeles Unified School District was just hit as well in September. >> They were. >> Talk to me about how Infinidat is helping customers really dial down the risk when the threat actors are becoming more and more sophisticated. >> Well, there are a couple of things. First of all, our InfiniSafe software comes free on our main products. So we have a product called InfiniGuard for secondary storage, and it comes for free on that. And then our primary storage product is called the InfiniBox; it also comes for free there. So they don't have to use it, but we embed it.
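As a side note for readers newer to these terms, the sketch below is one conceptual way to think about the immutable, retention-locked snapshots in the four-legs description above: once created, a snapshot can be read but not altered, and it cannot be deleted until its retention period expires. This is a generic illustration only, not Infinidat's implementation or API, and the 30-day retention window is an assumption made up for the example.

```python
# Conceptual model of an immutable, retention-locked snapshot. Purely illustrative;
# real systems enforce this in the storage layer, not in application code.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass(frozen=True)  # frozen: fields cannot be reassigned after creation
class ImmutableSnapshot:
    name: str
    data_ref: str                              # points at the point-in-time data, never rewritten
    created: datetime = field(default_factory=datetime.utcnow)
    retention: timedelta = timedelta(days=30)  # assumed retention window for the example

    def can_delete(self, now: Optional[datetime] = None) -> bool:
        """Deletion requests are refused until the retention lock expires."""
        now = now or datetime.utcnow()
        return now >= self.created + self.retention

snap = ImmutableSnapshot(name="daily-0300", data_ref="volume-42@2022-10-12T03:00Z")
print(snap.can_delete())  # False until the 30-day lock has passed
```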
And then we have reference architectures that we give them; our SEs, our solutions architects and our technical advisors are all up to speed on why they should do it and how they should do it. We have a number of customers doing it. You know, we're heavily concentrated in the global Fortune 2000; for example, we publicly announced that 26% of the Fortune 50 use our technology, even though we're a small company. So we go to extra lengths to, A, be educated on our own front with our own teams, and then, B, make sure they portray that to the end users and our channel partners. But the end users don't pay a dime for the software that does what I just described. It's free; it's included at no charge when you get your InfiniBox or your InfiniGuard. >> That's pretty differentiating from a competitive standpoint, I would guess. >> It is. And also the guarantee. So for example, on primary storage, whether you put your Oracle or your SAP or your Mongo or your SQL or your highly transactional workloads, your business finance workloads, all your business-critical stuff: we are the first and only storage company that offers a primary-storage guarantee on cyber storage resilience. And we offer two of them on primary storage. No other vendor offers a guarantee on primary storage, which we do. We were the first, and right now, as we sit here in the middle of October, we are still the only vendor that offers anything on primary storage in terms of a guaranteed SLA for cyber storage resilience. >> Let's talk about those guarantees. Walk me through what you just announced. There's been a lot of productivity at Infinidat in 2022, a lot of things that you've announced, so unpack some of the things you're announcing. Talk to me specifically about those guarantees and what's in it for me as a customer. It sounds pretty obvious, but I'd love to hear it from you. >> Okay, so we've done really three different types of guarantees. The first one is we have a hundred percent availability guarantee on our primary storage, and we've actually had that since 2019. So it's a hundred percent availability, guaranteed no downtime, which matters for our customer base, heavily concentrated in the global Fortune 2000, large government enterprises, big universities, and even smaller companies; we do a lot of business with CSPs and MSPs. In fact, at the Flash Memory Summit our InfiniBox SSA all-flash was named the best product for hyperscaler deployment; hyperscaler basically means cloud service provider. So they need a hundred percent availability, and we have a guarantee on that. The second guarantee we have is a performance guarantee. We'll do an analysis, we look at all their workloads, and then we will guarantee in writing what the performance should be, based on which of our products they want to buy, our InfiniBox or InfiniBox SSA, which is all-flash. Then the third one is all about cyber resilience. We have two on our InfiniBox and InfiniBox SSA for primary storage: one is the immutability of the snapshot, and immutable means you can't erase the data, right, can't tamper with it. The second one is on the recovery time, which is under a minute. We just announced in the middle of October that we are doing a similar cyber storage resilience guarantee on our InfiniGuard secondary product, which is designed for backup, recovery, et cetera.
We will also offer the immutable snapshot guarantee, and also one on the recoverability of that data in under 20 minutes. In fact, we just did a demo at our live launch earlier this week, and we demoed 20 petabytes of Veeam backup data recovered in 12 minutes. >> Twelve minutes? >> Twenty petabytes in 12 minutes, yes. >> That's massive. That's massively differentiating. But that's essential for customers, because in terms of backups and protecting the data, it's all about recovery. >> And once they've had the attack, it's how fast you get back online, right? That's what happens if they can't stop the attack, can't stop the threat, and it happens. They need to get that back as fast as they can. So we have the speed of recovery on primary storage, the first in the industry, and we have speed on the backup side, and we'll do the same thing for backup data set recovery as well. >> Talk to me about the what's-in-it-for-me for the cloud service providers. Obviously the ones that you work with are competing with the hyperscalers. How do the guarantees and the differentiators that Infinidat is bringing to market help those cloud SPs dial up their competitiveness against the big cheeses? >> Well, what we do is we provide that underlying infrastructure. First of all, we only sell things that are petabyte in scale; that's all we sell. So for example, on our InfiniGuard product, the raw capacity is over four petabytes, and the effective capacity, because you do data reduction, is over 85 petabytes. On our newest announced product, our primary storage product, we can now do up to 17 petabytes of effective capacity in a single rack. So the value to the service provider is they can save on slots, power and floor space. A greener data center, right? Which, by the way, is not just about environmentals, because guess what, it also translates into operational expense. >> Exactly. CapEx, OpEx. >> With a lot of these very large systems that we offer, you can consolidate multiple products from our competitors. So for example, with one of the competitors, we had a deal last quarter that consolidated 18 competitive arrays into one of ours. So talk about saving, not just on all of the operational expense, including operational manpower, but dramatically on the CapEx. In fact, one of our Fortune 500 customers in the telco space over the last five years has told us that on CapEx alone we've saved them $104 million by consolidating smaller technology into our larger systems. And one of the key things we do is everything is automated. We call it autonomous automation, using AI-based technology. Once you install it, and we've got several public references who have said this, "I haven't touched this thing in three or four years." It automatically configures itself. It automatically adjusts to changes in performance and new apps; when you point a new app at it, it adjusts automatically. So in the old days the storage admin would optimize performance for a new application. We don't do that; we do it automatically and autonomously, and the admin doesn't even click a button. We just sense there are new applications, and we automate and configure ourselves without the admin having to do anything. So that's about saving operational expense as well as operational manpower. >> Absolutely. One of the things that was ringing in my ear was workforce productivity, and obviously those storage admins being able to focus on more strategic projects.
I can't believe the CIOs aren't coming around yet, but you said there's a change, there's a wave coming. If we think about the what's-in-it-for-me as a customer, the positive business outcomes that I'm hearing are lower TCO and greener IT, which is key; so many customers that we talk to are focused on sustainability and becoming greener, especially with an on-prem footprint, and workforce productivity. Talk about some of the other key business outcomes that you're helping customers achieve and how it helps them be more competitive. >> Sure. So we've got a couple of different things. First of all, storage can't go down. When the storage goes down, everyone gets blamed. When an app goes down, no one really thinks about it; it's always the storage guy's fault. So you want to be a hundred percent available, and today's businesses, and I'd actually argue it's been this way for 20 years, are 24 by 7 by 365. So that's one thing that we deliver. The second thing is performance. We have public references who talk about their SAP workload that used to take two hours and now takes 20 minutes. We have another customer that was doing SAP queries; they improved their performance three times. Not 3%, three times, so 300% better performance, just by using our storage. They didn't touch SAP, they didn't touch the servers. All they did was put our storage in there. So performance relates basically to applications, workloads and use cases, and to productivity beyond IT. Think of the productivity of the supply chain guys, the logistics guys, the shipping guys, the finance guys, right, all these applications that run today's enterprises. So we can accelerate all of that. And then clearly the cyber threat. That is a huge issue, and every CIO is concerned about the cyber threat. In fact, it was interesting, Fortune magazine did a survey of CEOs last May, and the number one concern, at 66% in that May survey, was cybersecurity. So this is not just a CIO thing; this is a CEO thing and a board-level thing. >> I was going to say, it's at the board level that the cybersecurity threats are so real, they're so common. No one wants to be the next headline, like Colonial Pipeline, right, or the school districts or whatnot, and everybody is at risk. So then, what you're enabling with what you've just announced, all the guarantees on the SLAs, the massively fast recovery times, which is critical in cyber recovery, obviously resilience is key there. Modern data protection, it sounds like to me. How do you define that, and what are customers looking for with respect to modern cyber resilience versus data protection? >> Yeah, so we've got normal data protection because we work with all the backup vendors. Our InfiniGuard is what's known as a purpose-built backup appliance, so it allows you to back up at a much faster rate. And we work with all the big backup vendors: IBM Spectrum Protect, Veritas, Veeam, Commvault, Oracle RMAN, anybody who does backup. So that's more about the regular side, the traditional backup. But the other part of modern data protection is infusing that with cyber resilience, because cyber resilience is a new thing from a storage guy's perspective; it hasn't been around a long time. Many of our competitors have almost nothing. One or two of our competitors have something pretty robust, but they don't guarantee it the way we guarantee it. So they're pretty good at it.
But the fact that we're willing to put our money where our mouth is, we think, shows we stand above, and most of the other guys in the storage industry are just starting to get on the bandwagon of having cyber resilience. >> So that changes what you do from data protection. What we'd call modern data protection is a combination of traditional backup, recovery, et cetera, now with this infusion of cybersecurity and cyber resilience into the storage environment. And then, of course, we've also added it on primary storage as well. So whether it's primary storage or backup and archive storage, we make sure you have the right cyber resilience to make it, if you will, modern data protection, different from the old grandfather-father-son backup to tape, or however you used to do it. We're well beyond that now, adding this cyber resilience aspect. >> From a cyber resilience perspective, ransomware, malware, cyber attacks, that's a disaster, right? But traditional disaster recovery tools aren't really built to pull back that data as quickly as it sounds like Infinidat is able to facilitate. >> Yeah. So one of the things we do, in our reference architectures and written documentation as well as when we do the training, is we tell the customers: you need to practice. If you practice when there's a fire, a flood, a hurricane, an earthquake or whatever the natural disaster is, then you need to practice malware and ransomware too. And because our recovery is so rapid, and in the case of our InfiniGuard the fenced environment to do the testing is actually embedded in it. With several of our competitors, if you want the fenced environment, you have to buy a second product; with us it's all embedded in the one item. So, A, that makes it more effective from a CapEx and OpEx perspective, but it also makes it easier. So we recommend that they do the practice recoveries monthly. Now, whether they do it or not is a separate issue, but at least that's what we're recommending: you should be doing this on a monthly basis, just like you would practice for a disaster like a hurricane or fire or a flood or an earthquake. You need to be practicing. And I think people are starting to hear it, but they still think more about, you know, the flood, or about... >> The hurricane. >> Yeah, that's what they think about. They're not yet thinking about cybersecurity as really a disaster model. And it is. >> Absolutely, it is. Is the theme of cyber resilience, as you said, a new concept that a lot of folks are talking about and applying differently, going to help dial up those folks being much more prepared for that type of cyber disaster? >> Well, we've made it so it's automated. Once you set up the immutable snapshots, it just does its thing. You set it and forget it. We create the logical air gap; once you do it, same thing, set it and forget it. The fenced forensic environment is easy to deploy; you do have to configure it once, and then obviously the recovery is almost instantaneous. It's under a minute guaranteed on primary storage and under 20 minutes on secondary. Like I told you, when we did our launch this week, we did 20 petabytes of Veeam backup data in 12 minutes. >> So that's pretty incredible. That's a lot of data to have recovered in 12 minutes.
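To give a rough sense of the scale behind that figure, here is a quick back-of-the-envelope calculation. It assumes decimal units and treats "recovered" as data made available in that window, so it is an illustration rather than a benchmark of actual data movement.

```python
# Back-of-the-envelope: what "20 PB of Veeam backup data in 12 minutes" implies.
# Decimal units assumed (1 PB = 1,000 TB); illustration only, not a measured rate.
petabytes = 20
minutes = 12

terabytes_per_second = (petabytes * 1000) / (minutes * 60)
print(f"Implied effective recovery rate: ~{terabytes_per_second:.1f} TB/s")  # roughly 27.8 TB/s
```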
So the more automated we make it, and that's what our real forte is, this autonomous automation, automating as much as possible and making it easy to configure when you do have to configure, that's what differentiates what we do, from our perspective. But overall in the storage industry, it's the recognition, finally, by the CISOs and the CIOs that, wait a second, maybe storage might be an essential part of my corporate cybersecurity strategy, which it has not been historically. >> But you're seeing that change. >> We're starting to see that change. >> Excellent. So talk to me a little bit, before we wrap here, about the go-to-market. Where can folks get their hands on the updates to InfiniGuard, InfiniSafe and InfiniBox? >> So all of these are available right now, either through our teams or through our channel partners globally. We do about 80% of our business globally through the channel. So whether you talk to us or talk to our channel partners, we're there to help. And again, we put our money where our mouth is with those guarantees, to make sure we stand behind our products. >> That's awesome. Eric, thank you so much for joining me on the program. Congratulations on the launch. The year of productivity just continues for Infinidat is basically what I'm hearing. You're really going the extra mile for customers to help them ensure that when the inevitable cyber attacks happen, their complete storage environment on-prem will be protected and, more importantly, recoverable very quickly. We appreciate your insights and your input. >> Great. Absolutely love being on theCUBE. Thank you very much for having us. >> Of course. It's great to have you back. We appreciate it. For Eric Herzog, I'm Lisa Martin. You're watching this Cube Conversation live from Palo Alto.
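Picking up on the practice-your-recoveries advice in the interview above, below is a minimal sketch of how a team might script a periodic drill and compare the elapsed time against the SLA figures quoted (under a minute on primary storage, under 20 minutes on secondary). The function names, the simulated restore step, and the tier labels are hypothetical placeholders, not Infinidat or vendor APIs.

```python
# Hypothetical recovery-drill harness. restore_from_snapshot() stands in for
# whatever actually triggers a recovery in a given environment; nothing here is
# a vendor API, it only illustrates the drill discipline described above.
import time

SLA_SECONDS = {"primary": 60, "secondary": 20 * 60}  # figures quoted in the interview

def restore_from_snapshot(tier: str) -> None:
    """Placeholder for the real restore step (an assumption, not a product call)."""
    time.sleep(2)  # simulate some work so the sketch runs end to end

def run_drill(tier: str) -> bool:
    start = time.monotonic()
    restore_from_snapshot(tier)
    elapsed = time.monotonic() - start
    passed = elapsed <= SLA_SECONDS[tier]
    print(f"{tier}: recovered in {elapsed:.1f}s (SLA {SLA_SECONDS[tier]}s) -> {'PASS' if passed else 'FAIL'}")
    return passed

if __name__ == "__main__":
    # Run this regularly, like any other disaster drill, rather than waiting for an attack.
    results = [run_drill(tier) for tier in ("primary", "secondary")]
    print("Drill", "passed" if all(results) else "needs attention")
```

The point is not the timing code itself but the habit: rehearse the recovery path on a schedule, so the first real ransomware recovery is not also the first attempt.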

Published Date : Oct 12 2022



Jason Montgomery, Mantium & Ryan Sevey, Mantium | Amazon re:MARS 2022


 

>>Okay, welcome back everyone. theCUBE's coverage here in Las Vegas for Amazon re:MARS: machine learning, automation, robotics, and space. John Furrier, host of theCUBE. Got a great set of guests here talking about AI: Jason Montgomery, CTO and co-founder of Mantium, and Ryan Sevey, CEO and founder. Guys, thanks for coming on. We were just chatting, lost my train of thought, 'cause we were chatting about something else, your history with DataRobot and your backgrounds as entrepreneurs. Welcome to theCUBE. >>Thanks. Thanks for having us. >>So first, before we get into the conversation, tell me about the company. You guys have a history together, multiple startups, multiple exits. What are you guys working on? Obviously AI is hot here as part of the show. The M in MARS is machine learning, which we all know is the basis for AI. What's the story? >>Yeah, really, we're here for two of the letters in MARS. We're here for the machine learning and the automation part. So at the high level, Mantium is a no-code AI application development platform. Basically anybody can log in and start making AI applications. It could be anything from texting it with the Twilio integration to tell you that you're doing great or that you need to exercise more, to integrating with Zendesk to get support tickets classified. >>So Jason, we were talking too, before we came on camera, about the cloud and how you can spin up resources. The data world is coming together, and I like to see two flash points: what I call the 2010 big data era that began and then failed, Hadoop crashed and burned. Yeah. Then out of the woodwork came the DataRobots and the Databricks and the Snowflakes. >>Databricks, Snowflake. >>And now you have that world coming back at scale. So we're now seeing a huge era of, I need to stand up infrastructure and platform to do all this heavy lifting I don't have time to do. Right. That sounds like what you guys are doing. Is that kind of the case? >>That's absolutely correct. Yeah. Typically you would have to hire a whole team. It would take you months to sort of get the infrastructure automation in place, the DevOps pipelines together. And to do the automation to spin up, spin down, scale up, scale down requires a lot of special expertise with, you know, Kubernetes. Yeah. And a lot of the other data pipelines and a lot of the AWS technologies. So we automate a lot of that. >>If DevOps did what they did, infrastructure as code, yeah, data as code, this is kind of like that. It's not data ops per se. Is there a category? How do you see this? 'Cause you could say data ops, but that's also DevOps. It's a lot going on. Oh yeah. It's not just AIOps, right? There's a lot more. What would you call this? >>It's a good question. I don't know if we've quite come up with the name. >>It's not data ops. It's not. >>We call it AI process automation. >>AIPA instead of RPA. >>What RPA promised to be. Yes. >>Exactly. But what's the challenge? The number one problem is, I would say, not so much undifferentiated heavy lifting; it's a lot of heavy lifting, that's for sure. What's involved? What's the consequence of not going this way? If I want to do it myself, can you take me through the pros and cons, the scale and scope of doing it without you guys? >>Yeah. Historically you needed to curate all your data, bring it together and have some sort of data lake or something like that.
And then you had to do really a lot of feature engineering and a lot of other sort of data science on the back end and automate the whole thing and deploy it and get it out there. It's a, it's a pretty rigorous and, and challenging problem that, you know, we there's a lot of automation platforms for, but they typically focus on data scientists with these large language models we're using they're pre-trained. So you've sort of taken out that whole first step of all that data collection to start out and you can basically start prototyping almost instantly because they've already got like 6 billion parameters, 10 billion parameters in them. They understand the human language really well. And a lot of other problems. I dunno if you have anything you wanna add to that, Ryan, but >>Yeah, I think the other part is we deal with a lot of organizations that don't have big it teams. Yeah. And it would be impossible quite frankly, for them to ever do something like deploy text, track as an example. Yeah. They're just not gonna do it, but now they can come to us. They know the problem they want solved. They know that they have all these invoices as an example and they wanna run it through a text track. And now with us they can just drag and drop and say, yeah, we want tech extract. Then we wanted to go through this. This is what we >>Want. Expertise is a huge problem. And the fact that it's changing too, right? Yeah. Put that out there. You guys say, you know, cybersecurity challenges. We guys do have a background on that. So you know, all the cutting edge. So this just seems to be this it, I hate to say transformation. Cause I not the word I'm looking for, I'd say stuck in the mud kind of scenario where they can't, they have to get bigger, faster. Yeah. And the scale is bigger and they don't have the people to do it. So you're seeing the rise of managed service. You mentioned Kubernetes, right? I know this young 21 year old kid, he's got a great business. He runs a managed service. Yep. Just for Kubernetes. Why? Because no, one's there to stand up the clusters. >>Yeah. >>It's a big gap. >>So this, you have these sets of services coming in now, where, where do you guys fit into that conversation? If I'm the customer? My problem is what, what is my, what is my problem that I need you guys for? What does it look like to describe my problem? >>Typically you actually, you, you kind of know that your employees are spending a lot of time, a lot of hours. So I'll just give you a real example. We have a customer that they were spending 60 hours a week just reviewing these accounts, payable, invoices, 60 hours a week on that. And they knew there had to be a better way. So manual review manual, like when we got their data, they were showing us these invoices and they had to have their people circle the total on the invoice, highlight the customer name, the >>Person who quit the next day. Right? >>No like they, they, Hey, you know, they had four people doing this, I think. And the point is, is they come to us and we say, well, you know, AI can, can just basically using something like text track can just do this. And then we can enrich those outputs from text track with the AI. So that's where the transformers come in. And when we showed them that and got them up and running in about 30 minutes, they were mind blown. Yeah. And now this is a company that doesn't have a big it department. So the >>Kind, and they had the ability to quantify the problem >>They knew. 
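The Textract step described here amounts to only a few lines of code. Below is a minimal sketch, assuming the standard boto3 AnalyzeExpense response shape; the bucket name, object key, and field filtering are illustrative placeholders rather than Mantium's actual pipeline, and the transformer-based enrichment they layer on top is not shown.

```python
# Minimal sketch of the Textract step: pull the total, vendor, and date off a
# scanned invoice so nobody has to circle them by hand. Bucket/key are
# placeholders; the AI enrichment described above is not shown.
import boto3

textract = boto3.client("textract", region_name="us-east-1")

def extract_invoice_summary(bucket: str, key: str) -> dict:
    """Run AnalyzeExpense on one invoice stored in S3 and keep a few key fields."""
    response = textract.analyze_expense(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    wanted = {"TOTAL", "VENDOR_NAME", "INVOICE_RECEIPT_DATE"}
    fields = {}
    for doc in response["ExpenseDocuments"]:
        for field in doc.get("SummaryFields", []):
            field_type = field.get("Type", {}).get("Text", "")
            if field_type in wanted:
                fields[field_type] = field.get("ValueDetection", {}).get("Text", "")
    return fields

if __name__ == "__main__":
    print(extract_invoice_summary("example-invoices", "2022/06/invoice-0001.pdf"))
    # e.g. {'TOTAL': '$12,409.00', 'VENDOR_NAME': 'Acme Corp', 'INVOICE_RECEIPT_DATE': '06/01/2022'}
```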
And, and in this case it was actually a business user. It was not a technical >>In is our she consequence technical it's hours. She consequences that's wasted. Manual, labor wasted. >>Exactly. Yeah. And, and to their point, it was look, we have way more high, valuable tasks that our people could be doing yeah. Than doing this AP thing. It takes 60 hours. And I think that's really important to remember about AI. What're I don't think it's gonna automate away people's jobs. Yeah. What it's going to do is it's going to free us up to focus on what really matters and focus on the high value stuff. And that's what people should >>Be doing. I know it's a cliche. I'm gonna say it again. Cause I keep saying, cause I keep saying for people to listen, the bank teller argument always was the big thing. Oh yeah. They're gonna get killed by the ATM machine. No, they're opening up more branches. That's right. That's right. So it's like, come on. People let's get, get over that. So I, I definitely agree with that. Then the question, next question is what's your secret sauce? I'm the customer I'm gonna like that value proposition. You make something go away. It's a pain relief. Then there's the growth side. Okay. You can solve from problems. Now I want this, the, the vitamin you got aspirin. And I want the vitamin. What's the growth angle for you guys with your customers. What's the big learnings. Once they get the beach head with problem solving. >>I think it, it, it it's the big one is let's say that we start with the account payable thing because it's so our platform's so approachable. They go in and then they start tinkering with the initial, we'll call it a template. So they might say, Hey, you know what, actually, in this edge case, I'm gonna play with this. And not only do I want it to go to our accounting system, but if it's this edge case, I want it to email me. So they'll just drag and drop an email block into our canvas. And now they're making it >>Their own. There is the no code, low code's situation. They're essentially building a notification engine under the covers. They have no idea what they're doing. That's >>Right. They get the, they just know that, Hey, you know what? When, when like the amount's over $10,000, I want an email. They know that's what they want. They don't, they don't know that's the notification engine. Of >>Course that's value email. Exactly. I get what I wanted. All right. So tell me about the secret sauce. What's under the covers. What's the big, big, big scale, valuable, valuable, secret sauce. >>I would say part of it. And, and honestly, the reason that we're able to do this now is transformer architecture. When the transformer papers came out and then of course the attention is all you need paper, those kind of unlocked it and made this all possible. Beyond that. I think the other secret sauce we've been doing this a long time. >>So we kind of, we know we're in the paid points. We went to those band points. Cause we weren't data scientists or ML people. >>Yeah. >>Yeah. You, you walked the snow and no shoes on in the winter. That's right. These kids now got boots on. They're all happy. You've installed machines. You've loaded OSS on, on top of rack switches. Yeah. I mean, it's unbelievable how awesome it's right now to be a developer and now a business user's doing the low code. Yep. If you have the system architecture set up, so back to the data engineering side, you guys had the experience got you here. This is a big discussion right now. 
We're having in, in, on the cube and many conversations like the server market, you had that go away through Amazon and Google was one of the first, obviously the board, but the idea that servers could be everywhere. So the SRE role came out the site reliability engineer, right. Which was one guy or gal and zillions of servers. Now you're seeing the same kind of role with data engineering. And then there's not a lot of people that fit the requirement of being a data engineer. It's like, yeah, it's very unique. Cause you're dealing with a system architecture, not data science. So start to see the role of this, this, this new persona, because they're taking on all the manual challenges of doing that. You guys are kind of replaced that I think. Well, do you agree with it about the data engineer? First of all? >>I think, yeah. Well and it's different cuz there's the older data engineer and then there's sort of the newer cloud aware one who knows how to use all the cloud technologies. And so when you're trying, we've tried to hire some of those and it's like, okay, you're really familiar with old database technology, but can you orchestrate that in a serverless environment with a lot of AWS technology for instance. And it's, and that's hard though. They don't, they don't, there's not a lot of people who know that space, >>So there's no real curriculum out there. That's gonna teach you how to handle, you know, ETL. And also like I got I'm on stream data from this source. Right. I'm using sequel I'm I got put all together. >>Yeah. So it's yeah, it's a lot of just not >>Data science. It's >>Figure that out. So its a large language models too. We don't have to worry about some of the data there too. It's it's already, you know, codified in the model. And then as we collect data, as people use our platform, they can then curate data. They want to annotate or enrich the model with so that it works better as it goes. So we're kind of curating, collecting the data as it's used. So as it evolves, it just gets better. >>Well, you guys obviously have a lot of experience together and congratulations on the venture. Thank you. What's going on here at re Mars. Why are you here? What's the pitch. What's the story. Where's your, you got two letters. You got the, you got the M for the machine learning and AI and you got the, a for automation. What's the ecosystem here for you? What are you doing? >>Well, I mean, I think you, you kind of said it right. We're here because the machine learning and the automation part, >>But >>More, more widely than that. I mean we work very, very closely with Amazon on a number of front things like text track, transcribe Alexa, basically all these AWS services are just integrations within our system. So you might want to hook up your AI to an Alexa so that you could say, Hey Alexa, tell me updates about my LinkedIn feed. I don't know, whatever, whatever your hearts content >>Is. Well what about this cube transcription? >>Yeah, exactly. A hundred percent. >>Yeah. We could do that. You know, feed all this in there and then we could do summarization of everything >>Here, >>Q and a extraction >>And say, Hey, these guys are >>Technicals. Yeah, >>There you go. No, they mentioned Kubernetes. We didn't say serverless chef puppet. Those are words straight, you know, and no linguistics matters right into that's a service that no one's ever gonna build. >>Well, and actually on that point, really interesting. 
We work with some healthcare companies and when you're basically, when people call in and they call into the insurance, they have a question about their, what like is this gonna be covered? And what they want to key in on are things like I just went to my doctor and got a cancer diagnosis. So the, the, the relevant thing here is they just got this diagnosis. And why is that important? Well, because if you just got a diagnosis, they want to start a certain triage to make you successful with your treatments. Because obviously there's an >>Incentive to do time. That time series matters and, and data exactly. And machine learning reacts to it. But also it could be fed back old data. It used to be time series to store it. Yeah. But now you could reuse it to see how to make the machine learning better. Are you guys doing anything, anything around that, how to make that machine learning smarter, look doing look backs or maybe not the right word, but because you have data, I might as well look back at it's happened. >>So part of, part of our platform and part of what we do is as people use these applications, to your point, there's lots of data that's getting generated, but we capture all that. And that becomes now a labeled data set within our platform. And you can take that label data set and do something called fine tuning, which just makes the underlying model more and more yours. It's proprietary. The more you do it. And it's more accurate. Usually the more you do it. >>So yeah, we keep all that. I wanna ask your reaction on this is a good point. The competitive advantage in the intellectual property is gonna be the workflows. And so the data is the IP. If this refinement happens, that becomes intellectual property. Yeah. That's kind of not software. It's the data modeling. It's the data itself is worth something. Are you guys seeing that? >>Yeah. And actually how we position the company is man team is a control plane and you retain ownership of the data plane. So it is your intellectual property. Yeah. It's in your system, it's in your AWS environment. >>That's not what everyone else is doing. Everyone wants to be the control plane and the data plan. We >>Don't wanna own your data. We don't, it's a compliance and security nightmare. Yeah. >>Let's be, Real's the question. What do you optimize for? Great. And I think that's a fair, a fair bet. Given the fact that clients want to be more agile with their data anyway, and the more restrictions you put on them, why would that this only gets you in trouble? Yeah. I could see that being a and plus lock. In's gonna be a huge factor. Yeah. I think this is coming fast and no one's talking about it in the press, but everyone's like run to silos, be a silo and that's not how data works. No. So the question is how do you create siloing of data for say domain specific applications while maintaining a horizontally scalable data plan or control plan that seems to be kind of disconnected everyone to lock in their data. What do you guys think about that? This industry transition we're in now because it seems people are reverting back to fourth grade, right. And to, you know, back to silos. >>Yeah. I think, well, I think the companies probably want their silo of data, their IP. And so as they refine their models and, and we give them the ability to deploy it in their own stage maker and their own VPC, they, they retain and own it. They can actually get rid of us and they still have that model. 
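The capture-then-fine-tune loop described here can be pictured as little more than turning approved application traffic into a training file. A rough sketch under assumed field names; Mantium's internal record format and the fine-tuning API itself are not specified in the conversation, so everything below is illustrative.

```python
# Illustrative only: convert captured, human-approved prompt/response pairs
# from a deployed AI app into a JSONL file of the kind most fine-tuning APIs
# accept. Field names and file layout are assumptions, not Mantium's format.
import json
from pathlib import Path

def build_finetune_file(captured_records, out_path):
    """Write curated (prompt, completion) pairs to a JSONL fine-tuning file."""
    kept = 0
    with Path(out_path).open("w", encoding="utf-8") as f:
        for rec in captured_records:
            if not rec.get("approved"):        # keep only reviewer-annotated examples
                continue
            f.write(json.dumps({"prompt": rec["input_text"],
                                "completion": rec["label"]}) + "\n")
            kept += 1
    return kept

if __name__ == "__main__":
    demo = [
        {"input_text": "Ticket: 'My invoice total is wrong'", "label": "billing_dispute", "approved": True},
        {"input_text": "Ticket: 'Reset my password please'",  "label": "account_access", "approved": False},
    ]
    print(build_finetune_file(demo, "finetune_examples.jsonl"), "examples written")
```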
Now they may have to build, you know, a lot of pipelines and other technology to support it. But well, >>Your lock in is usability. Exactly. And value. Yeah. Value proposition is the lock in bingo. That's not counterintuitive. Exactly. Yeah. You say, Hey, more value. How do I wanna get rid of it? Valuable. I'll pay for it. Right. As long as you have multiple value, step up. And that's what cloud does. I mean, think that's the thing about cloud. That's gonna make all this work. In my opinion, the value enablement is much higher. Yeah. So good business model. Anything else here at the show that you observed that you like, that you think people would be interested in? What's the most important story coming out of the, the holistic, if you zoom up and look at re Mars, what's, what's coming out of the vibe. >>You know, one thing that I think about a lot is we're, you know, we have Artis here, humanity hopefully soon gonna be going to Mars. And I think that's really, really exciting. And I also think when we go to Mars, we're probably not gonna send a bunch of software engineers up there. >>Right. So like robots will do break fix now. So, you know, we're good. It's gone. So services are gonna be easy. >>Yeah. But I, oh, >>I left that device back at earth. I just think that's not gonna be good. Just >>Replicated it in one. I think there's like an eight >>Minute, the first monopoly on next day delivery in space. >>They'll just have a spaceship that sends out drones to Barss. Yeah. But I think that when we start going back to the moon and we go to Mars, people are gonna think, Hey, I need this application now to solve this problem that I didn't anticipate having. And in science fiction, we kind of saw this with like how, right? Like you had this AI on this computer or this, on this spaceship that could do all this stuff. We need that. And I haven't seen that here yet. >>No, it's not >>Here yet. And >>It's right now I think getting the hardware right first. Yep. But we did a lot of reporting on this with the D O D and the tactile edge, you know, military applications. It's a fundamental, I won't say it's a tech, religious argument. Like, do you believe in agile realtime data or do you believe in democratizing multi-vendor, you know, capability? I think, I think the interesting needs to sort itself out because sometimes multi vendor multi-cloud might not work for an application that needs this database or this application at the edge. >>Right. >>You know, so if you're in space, the back haul, it matters. >>It really does. Yeah. >>Yeah. Not a good time to go back and get that highly available data. You mean highly, is it highly available or there's two terms highly available, which means real time and available. Yeah. Available means it's on a dis, right? >>Yeah. >>So that's a big challenge. Well guys, thanks for coming on. Plug for the company. What are you guys up to? How much funding do you have? How old are you staff hiring? What's some of the details. >>We're about 45 people right now. We are a globally distributed team. So we hire every like from every country, pretty much we are fully remote. So if you're looking for that, hit us up, definitely always look for engineers, looking for more data scientists. We're very, very well funded as well. And yeah. So >>You guys headquarters out, you guys headquartered. >>So a lot of us live in Columbus, Ohio that's technically HQ, but like I said, we we're in pretty much every continent except in Antarctica. So >>You're for all virtual. >>Yeah. 
A hundred percent virtual, a hundred percent. >>Got it. Well, congratulations, and I'd love to hear that Datadog story another time. >>Or DataRobot. >>Yeah, I mean DataRobot, sorry. Let's not get it all confused. >>Datadog, the data company. >>Well, thanks for coming on, congratulations on your success, and thanks for sharing. Yeah. >>Thanks for having us. >>Pleasure to be here. It's theCUBE here at re:MARS. I'm John Furrier, your host. Thanks for watching. More coming back after this short break.
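The "over $10,000, I want an email" rule that comes up earlier in this conversation is, underneath the drag-and-drop block, a threshold check plus a notification call. A hedged sketch using only the Python standard library; the addresses, SMTP relay, and field names are placeholders, and Mantium's actual notification engine is not shown.

```python
# Toy version of the no-code rule described above: if an extracted invoice
# total crosses a threshold, send an email. Addresses and relay are placeholders.
import smtplib
from email.message import EmailMessage

ALERT_THRESHOLD = 10_000.00

def maybe_alert(invoice: dict, smtp_host: str = "localhost") -> bool:
    """Email the AP team when an invoice total exceeds the threshold."""
    total = float(invoice["total"].replace("$", "").replace(",", ""))
    if total <= ALERT_THRESHOLD:
        return False
    msg = EmailMessage()
    msg["Subject"] = f"Invoice over threshold: {invoice['vendor']} ({invoice['total']})"
    msg["From"] = "ap-bot@example.com"
    msg["To"] = "ap-team@example.com"
    msg.set_content(f"Please review this invoice before it posts:\n{invoice}")
    with smtplib.SMTP(smtp_host) as server:
        server.send_message(msg)
    return True

if __name__ == "__main__":
    print(maybe_alert({"vendor": "Acme Corp", "total": "$12,409.00"}))
```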

Published Date : Jun 23 2022


Eric Herzog, Infinidat | CUBE Conversation April 2022


 

(upbeat music) >> Lately Infinidat has been on a bit of a Super cycle of product announcements. Adding features, capabilities, and innovations to its core platform that are applied across its growing install base. CEO, Phil Bollinger has brought in new management and really emphasized a strong and consistent cadence of product releases, a hallmark of successful storage companies. And one of those new executives is a CMO with a proven product chops, who seems to bring an energy and an acceleration of product output, wherever he lands. Eric Herzog joins us on "theCUBE". Hey, man. Great to see you. Awesome to have you again. >> Dave. Thank you. And of course, for "theCUBE", of course, I had to put on a Hawaiian shirt as always. >> They're back. All right, I love it.(laughs) Watch out for those Hawaiian shirt police, Eric. (both laughing) All right. I want to have you start by. Maybe you can make some comments on the portfolio over the past year. You heard my intro, InfiniBox is the core, the InfiniBox SSA, which announced last year. InfiniGuard you made some substantial updates in February of this year. Real focus on cyber resilience, which we're going to talk about with Infinidat. Give us the overview. >> Sure. Well, what we've got is it started really 11 years ago with the InfiniBox. High end enterprise solution, hybrid oriented really incredible magic fairy dust around the software and all the software technology. So for example, the Neural Cache technology, which has multiple patents on it, allowed the original InfiniBox to outperform probably 85% of the All-Flash Arrays in the industry. And it still does that today. We also of course, had our real, incredible ease-of-use the whole point of the way it was configured and set up from the beginning, which we continued to make sure we do is if you will a set it and forget it model. For example, When you install, you don't create lungs and raid groups and volumes it automatically and autonomously configures. And when you add new solutions, AKA additional applications or additional servers and point it at the InfiniBox. It automatically, again in autonomously, adjust to those new applications learning what it needs to configure everything. So you're not setting cash size and Q depth, or Stripes size, anything you would performance to you don't have to do any of that. So that entire set of software is on the InfiniBox. The InfiniBox SSA II, which we're of course launching today and then inside of the InfiniGuard platform, there's a actually an InfiniBox. So the commonality of snapshots replication, ease of use. All of that is identical across the platform of all-flash array, hybrid array and purpose-built backup secondary storage and no other vendor has that breadth of product that has the same exact software. Some make a similar GUI, but we're talking literally the same exact software. So once you learn it, all three platforms, even if you don't have them, you could easily buy one of the other platforms that you don't have yet. And once you've got it, you already know how to use it. 'Cause you've had one platform to start as an example. So really easy to use from a customer perspective. >> So ever since I've been following the storage business, which has been a long time now, three things that customers want. They want something that is rock solid, dirt cheap and super fast. So performance is something that you guys have always emphasized. I've had some really interesting discussions over the years with Infinidat folks. 
How do you get performance? If you're using this kind of architecture, it's been quite amazing. But how does this launch extend or affect performance? Why the focus on performance from your standpoint? >> Well, we've done a number of different things to bolster the performance. We've already been industry-leading performance again. The regular InfiniBox outperforms 80, 85% of the All-Flash Arrays. Then, when the announcement of the InfiniBox SSA our first all-flash a year ago, we took that now to the highest demanding workloads and applications in the industry. So what did it add to the super high end Oracle app or SAP or some custom app that someone's created with Mongo or Cassandra. We can absolutely meet the performance between either the InfiniBox or the InfiniBox all-flash with the InfiniBox SSA. However, we've decided to extend the performance even farther. So we added a whole bunch of new CPU cores into our tri part configuration. So we don't have two array controllers like many companies do. We actually have three everything's in threes, which gives us the capability of having our 100% availability guarantee. So we've extended that now we've optimized. We put a additional InfiniBand interconnects between the controllers, we've added the CPU core, we've taken if you will the InfiniBox operating system, Neural Cache and everything else we've had. And what we have done is we have optimized that to take advantage of all those additional cores. This has led us to increase performance in all aspects, IOPS bandwidth and in fact in latency. In latency we now are at 35 mikes of latency. Real world, not a hero number, but real-world on an array. And when you look end to end, if I Mr. Oracle, or SAP sitting in the server and I'll look across that bridge, of course the sand and over to the other building the storage building that entire traversing can be as fast as a 100 microseconds of latency across the entire configuration, not just the storage. >> Yeah. I think that's best in class for an external array. Well, so what's the spectrum you can now hit with the performance ranges. Can you hit all the aspects of the market with the two InfiniBoxes, your original, and then the SSA? >> Yes, even with the original SSA. In fact, we've had one of our end users, who's been first InfiniBox customer, then InfiniBox SSA actually has been running for the last two months. A better version of the SSA II. So they've had a better version and this customer's running high end Oracle rack configurations. So they decided, you know what? We're not going to run storage benchmarks. We're going to run only Oracle benchmarks. And in every benchmark IOPS, latency and bandwidth oriented, we outperformed the next nearest competition. So for example, 57% faster in IOPS, 58% faster in bandwidth and on the latency side using real-world Oracle apps, we were three times better performance on the latency aspect, which of course for a high end high performance workload, that's heavily transactional. Latency is the most important, but when you look across all three of those aspects dramatically outperform. And by the way, that was a beta unit that didn't of course have final code on it yet. So incredible performance angle with the InfiniBox SSA II. >> So I mean you earlier, you were talking about the ease of use. You don't have to provision lungs and all that sort of nonsense, and you've always emphasized ease-of-use. Can you double click on that a little bit? How do you think about that capability? 
And I'm really interested in why you think it's different from other vendors? >> Well, we make sure that, for example, when you install you don't have to do anything, you have to rack and stack, yes and cable. And of course, point the servers at the storage, but the storage just basically comes up. In fact, we have a customer and it's a public reference that bought a couple units many years ago and they said they were up and going in about two hours. So how many high-end enterprise storage array can be up and going in two hours? Almost I mean, basically nobody about us. So we wanted to make sure that we maintain that when we have customers, one of our big plays, particularly helping with CapEx and OpEx is because we are so performant. We can consolidate, we have a large customer in Europe that took 57 arrays from one of our competitors and consolidate it to five of the original InfiniBox. 57 to 5. They saved about $25 million in capital expense and they're saving about a million and a half a year in operational expense. But the whole point was as they kept adding more and more servers that were connected to those competitive arrays and pointing them at the InfiniBox, there's no performance tuning. Again, that's all ease-of-use, not only saving on operational expense, but obviously as we know, the headcount for storage admins is way down from its peak, which was probably in 2007. Yet every admin is managing what 25 to 50 times the amount of storage between 2007 and 2022. So the reality is the easier it is to use. Not only does of course the CIO love it because both the two of us together probably been storage, doing storage now for close to 80 years would be my guess I've been doing it for 40. You're a little younger. So maybe we're at 75 to 78. Have you ever met a CIO used to be a storage admin ever? >> No. >> And I can't think of one either so guess what? The easier it is to use the CIOs know that they need storage. They don't like it. They're all these days are all software guys. There used to be some mainframe guys in the old days, but they're long gone too. It's all about software. So when you say, not only can we help reduce your CapEx at OpEx, but the operational manpower to run the storage, we can dramatically reduce that because of our ease-of-use that they get and ease-of-use has been a theme on the software side ever since the Mac came out. I mean, Windows used to be a dog. Now it's easy to use and you know, every time the Linux distribution come out, someone's got something that's easier and easier to use. So, the fact that the storage is easy to use, you can turn that directly into, we can help you save on operational manpower and OPEX and CIOs. Again, none of which ever met are storage guys. They love that message. Of course the admins do too 'cause they're managing 25 to 50 times more storage than they had to manage back in 2007. So the easier it is for them at the tactical level, the storage admin, the storage manager, it's a huge deal. And we've made sure we've maintained that as you've added the SSA, as we brought up the InfiniGuard, as we've continue to push new feature function. We always make it easy to use. >> Yeah. Kind of a follow up on that. Just focus on software. I mean, I would think every storage company today, every modern storage company is going to have more software engineers than hardware engineers. And I think Infinidat obviously is no different. You got a strong set of software, it's across the portfolio. It's all included kind of thing. 
I wonder if you could talk about your software approach and how that is different from your competitors? >> Sure, so we started out 11 years ago when in Infinidat first got started. That was all about commodity hardware. So while some people will use custom this and custom that, yeah and I having worked at two of the biggest storage companies in the world before I came here. Yes, I know it's heavily software, but our percentage of hardware engines, softwares is even less hardware engineering than our competitors have. So we've had that model, which is why this whole what we call the set it and forget it mantra of ease-of-use is critical. We make sure that we've expanded that. For example, we're announcing today, our InfiniOps focus and Infini Ops all software allows us to do AIOps both inside of our storage system with our InfiniVerse and InfiniMetrics packages. They're easy to use. They come pre-installed and they manage capacity performance. We also now have heavy integration with AI, what I'll call data center, AIOps vendors, Vetana ServiceNow, VMware and others. And in that case, we make sure that we expose all of our information out to those AIOps data center apps so that they can report on the storage level. So we've made sure we do that. We have incredible support for the Ansible framework again, which is not only a software statement, but an ease-of-use statement as well. So for the Ansible framework, which is trying to allow an even simpler methodology for infrastructure deployment in companies. We support that extensively and we added some new features. Some more, if you will, what I'll say are more scripts, but they're not really scripts that Ansible hides all that. And we added more of that, whether that be configuration installations, that a DevOps guy, which of course just had all the storage guys listening to this video, have a heart attack, but the DevOps guy could actually configure storage. And I guess for my storage buddies, they can do it without messing up your storage. And that's what Ansible delivers. So between our AIOps focus and what we're doing with InfiniOps, that extends of course this ease-of-use model that we've had and includes that. And all this again, including we already talked about a little bit cyber resilience Dave, within InfiniSafe. All this is included when you buy it. So we don't piecemeal, which is you get this and then we try to upcharge you for that. We have the incredible pricing that delivers this CapEx and an OpEx. Not just for the array, but for the associated software that goes with it, whether that be Neural Cache, the ease-of-use, the InfiniOps, InfiniSafes. You get all of that package together in the way we deploy from a business now perspective, ease of doing business. You don't cut POS for all kinds of pieces. You cut APO and you just get all the pieces on the one PO when we deliver it. >> I was talking yesterday to a VC and we were chatting about AI And of course, everybody's chasing AI. It's a lot of investments go in there, but the reality is, AI is like containers. It's just getting absorbed into virtually every thing. And of course, last year you guys made a pretty robust splash into AIOps. And then with this launch, you're extending that pretty substantially. Tell us a little bit more about the InfiniOps announcement news. >> So the InfiniOps includes our existing in the box framework InfiniVerse and what we do there, by the way, InfiniVerse has the capability with the telemetry feed. 
That's how we were able to demo that, at our demo today and also at our demo for our channel partner pre-briefing: again, a hundred microseconds of latency across the entire configuration, not just a hundred microseconds of latency on storage, which, by the way, several of our competitors talk about a hundred microseconds of latency as their quote hero number. We're talking about a hundred microseconds of latency from the application through the server, through the SAN and out to the storage. Now that is incredible. But the monitoring for that is part of the InfiniOps packaging, okay. We support again, with DevOps, all the integration that we do, making it easy for the DevOps team, such as with Ansible. Making sure, for the data center people, with our integration with things like VMware and ServiceNow, the data center people who are obviously often not the storage-centric person can also be managing the entire data center. And whether that is conversing with the storage admin on, we need this or that, or whether they're doing it themselves, again, all that is part of our InfiniOps framework, and we include things like the Ansible support as part of that. So InfiniOps is sort of an overarching theme, and that overarching theme extends to AIOps inside of the storage system, AIOps across the data center, and even integration with, I'll say, something that's not even considered an infrastructure play, but something like Ansible, which is clearly a Red Hat, software-oriented framework that incorporates storage systems and servers or networks in the capability of having DevOps people manage them. And quite honestly, have the DevOps people manage them without screwing them up or losing data or losing configuration, which of course the server guys, the network guys and the storage guys hate when the DevOps guys play with it. But that integration with Ansible is part of our InfiniOps strategy. >> Now let's shift gears a little bit and talk about cyber crime. I mean, it's a topic that we've been on for a long time. I've personally been writing about it now for the last few years. Periodically with my colleagues from ETR, we hit that pretty hard. It's top of mind, and now the House just approved what's called the Better Cybercrime Metrics Act. It was a bipartisan push. I mean, the vote was like 377 to 48, and the Senate approved this bill last year. Once President Biden signs it, the law is going to be put into effect, and you and many others have been active in this space, Infinidat. You announced cyber resilience on your purpose-built backup appliance and secondary storage solution, InfiniGuard, with the launch of InfiniSafe. What are you doing for primary storage from InfiniBox around cyber resilience? >> So the goal, between the InfiniGuard on secondary storage and the InfiniBox and the InfiniBox SSA II that we're launching now, is that the InfiniSafe for InfiniBox will work on the original InfiniBox. It's a software-only thing, so there's no extra hardware needed. So it's a software-only play. So if you have an InfiniBox today, when you upgrade to the latest software, you can have the InfiniSafe reference architecture available to you. And the idea is to support the four key legs of the cybersecurity table from a storage perspective. When you look at it from a storage perspective, there's really four key things that the CISO and the CIO look for. First is immutable snapshot technology. An immutable snapshot can't be deleted, right? You can schedule it, you can do all kinds of different things, but the point is you can't get rid of it.
Second thing, of course, is an air gap. And there's two types of air gap: logical air gap, which is what we provide, and physical. The main physical air gapping would be either to tape or, of course, to what's left of the optical storage market. But we've got a nice logical air gap, and we can even do that logical air gapping remotely, since most customers often buy multiple arrays for disaster recovery purposes. We can then put that air gap not just locally, but we can put the air gap, of course, remotely, which is a critical differentiator for the InfiniBox: a remote logical air gap. Many other players have logical; we're logical local, but we're going remote. And then of course the third aspect is a fenced forensic environment. That fenced forensic environment needs to be easily set up, so you can determine a known good copy for a restoration after you've had a cyber incident. And then lastly is rapid recovery. And we really pride ourselves on this. When you go to our most recent launch in February of the InfiniGuard with InfiniSafe, we were able to demo live a recovery taking 12 minutes and 12 seconds of 1.5 petabytes of backup data from Veeam. Now that could have been any backup data: Commvault, IBM Spectrum Protect, Veritas. We happened to show it with Veeam, but in 12 minutes and 12 seconds. Now on the primary storage side, it depends on whether you're going to try to recover locally or do it from a remote copy, but if it's local, we're looking at something that's going to be 1 to 2 minutes recovery, because of the way we do our snapshot technology; we just need to rebuild the metadata tree and boom, you can recover. So that's a real differentiator. But those are the four things that a CISO and a CIO look for from a storage vendor: this immutable snapshot capability, the air gapping capability, the fenced environment capability, and of course this near instantaneous recovery, which we have proven out well with the InfiniGuard on the secondary data sets and backup data sets. And now with the InfiniBox SSA II and our InfiniBox platform, we can make that recovery on primary storage even faster than what we have been able to show customers with the InfiniGuard on the secondary and backup data sets. >> Yeah. I love the four layer cake. I just want to clarify something on the air gap if I could, so you got.
>> Yeah, they would be physically separated, but when you're not going to tape because it's fully removable or optical, then the security analysts consider that type of air gap, a logical air gap, even though it's physically at a remote. >> I understand, you spent a lot of time with the channel as well. I know, and they must be all over this. They must really be climbing on to the whole cyber resiliency. What do you say, do they set up? Like a lot of the guys, doing managed services as well? I'm just curious. Are there separate processes for the air gap piece than there are for the mainstream production environment or is it sort of blended together? How are they approaching that? >> So on the InfiniGuard product line, it's blended together, okay. On the InfiniBox with our InfiniSafe reference architecture, you do need to have an extra server where you create an scuzzy private VLAN and with that private VLAN, you set up your fenced forensic environment. So it's a slightly more complicated. The InfiniGuard is a 100% automated. On the InfiniBox we will be pushing that in the future and we will continue to have releases on InfiniSafe and making more and more automated. But the air gaping and the fence reference now are as a reference architecture configuration. Not with click on a gooey in the InfiniGuard case are original InfiniSafe. All you do is click on some windows and it just goes does. And we're not there yet, but we will be there in the future. But it's such a top of mind topic, as you probably see. Last year, Fortune did a survey of the Fortune 500 CEOs and the number one cited threat at 66% by the way was cybersecurity. So one of the key things store storage vendors do not just us, but all storage vendors is need to convince the CISO that storage is a critical component of a comprehensive cybersecurity strategy. And by having these four things, the rapid recovery, the fenced forensic environment, the air gaping technology and the immutable snapshots. You've got all of the checkbox items that a CISO needs to see to make sure. That said many CISOs still even today stood on real to a comprehensive cybersecurity strategy and that's something that the storage industry in general needs to work on with the security community from a partner perspective. The value is they can sell a full package, so they can go to their end user and say, look, here's what we have for edge protection. Here's what we've got to track the bad guide down once something's happened or to alert you that something's happened by having tools like IBM's, Q Radar and competitive tools to that product line. That can traverse the servers and the software infrastructure, and try to locate malware, ransomware akin to the way all of us have Norton or something like Norton on our laptop that is trolling constantly for viruses. So that's sort of software and then of course storage. And those are the elements that you really need to have an overall cybersecurity strategy. Right now many companies have not realized that storage is critical. When you think about it. When you talk to people in security industry, and I know you do from original insertion intrusion to solution is 287 days. Well guess what if the data sets thereafter, whether it be secondary InfiniGuard or primary within InfiniBox, if they're going to trap those things and they're going to take it. They might have trapped those few data sets at day 50, even though you don't even launch the attack until day 200. 
So it's a big deal of why storage is so critical and why CISOs and CIOs need to make sure they include it day one. >> It's where the data lives, okay. Eric. Wow.. A lot of topics we discovered. I love the agile sort of cadence. I presume you're not done for the year. Look forward to having you back and thanks so much for coming on today. >> Great. Thanks you, Dave. We of course love being on "theCUBE". Thanks again. And thanks for all the nice things about Infinidat. You've been saying thank you. >> Okay. Yeah, thank you for watching this cube conversation. This is Dave Vellante and we'll see you next time. (upbeat music)

Published Date : Apr 27 2022


Manish Agarwal and Darren Williams, Cisco


 

>>Mhm. >>With me now are Manish Agarwal, senior director of product management for HyperFlex at Cisco, @Flash4All on Twitter, love that, and Darren Williams, the director of business development and sales for Cisco HyperFlex, @MrHyperFlex on Twitter. Thanks, guys. Hey, we're going to talk about some news in HyperFlex and what role it plays in accelerating the hybrid cloud journey. Gentlemen, welcome to theCUBE. Good to see you. >>Thanks, David. >>Thanks. Hi. >>Darren, let's start with you. So for hybrid cloud you gotta have an on-prem connection, right? So you've got to have basically a private cloud. What are your thoughts on that? >>Yeah, we agree. You can't have a hybrid cloud without that private element. And you've got to have a strong foundation in terms of how you set up the whole benefit of the cloud model you're building; in terms of what you want to try and get back from the cloud, you need a strong foundation. Hyperconvergence provides that. We see more and more customers requiring a private cloud, and they're building it with hyperconvergence, in particular HyperFlex, to make all that work. They need a good, strong cloud operations model to be able to connect both the private and the public, and that's where we look at Intersight. We've got a solution around that, to be able to connect that around a SaaS offering that looks at simplified operations, gives them optimisation and also automation to bring both private and public together in that hybrid world. >>Darren, let's stay with you for a minute. When you talk to your customers, what are they thinking these days when it comes to implementing hyperconverged infrastructure in both the enterprise and at the edge? What are they trying to achieve?
What our customers want to do is get the benefit of the data centre at the edge without a big investment. They don't want to compromise on performance, and they want that simplicity in both management and deployment. We've seen analyst recommendations around what their readers are telling them in terms of how management and deployment are key for IT operations teams, and how much they're actually saving by deploying at the edge and taking the burden away when they deploy hyperconvergence. As I said, there's the savings element to it as well. And again, not always, but there are obviously studies around public cloud being quite expensive over time for the wrong workloads, so by bringing those back, people can make savings; we again have customers that have made 50% savings over three years compared to their public cloud usage. So I'd say those are the key things customers are looking for. >> Great. Thank you for that, Darren. Manish, we have some hard news. You've been working a lot on evolving the HyperFlex line. What's the big news that you've just announced? >> Yeah, thanks, Dave. There are several things that we are announcing today. The first one is a new offer called HyperFlex Express. These are Cisco Intersight-led and Cisco Intersight-managed HyperFlex configurations that we feel are the fastest path to hybrid cloud. The second is we're expanding our server portfolio by adding support for HX on AMD rack UCS servers. And the third is a new capability that we're introducing that we're calling local containerized witness, and let me take a minute to explain what this is. This is a very nifty capability to optimise for edge environments, and it leverages Cisco's ubiquitous presence with the networking products that we have in environments worldwide. The smallest HyperFlex configuration we have is a two-node configuration, which is primarily used in edge environments; think of a back room in a department store or an oil rig, or it might even be a smaller data centre somewhere on the globe. For these two-node configurations, there is always a need for a third entity; the industry term for that is either a witness or an arbitrator. We have that for HyperFlex as well. The problem customers face is where to host this witness. It cannot be on the cluster, because it's the job of the witness, when the infrastructure is going down, to arbitrate which node gets to survive, so it needs to be outside of the cluster. But finding infrastructure to actually host it is a problem, especially in edge environments, which are resource-constrained. So what we've done is taken that witness, converted it into a container form factor, and then qualified a very large slew of Cisco networking products, right from ISR and ASR routers to Nexus and Catalyst switches, industrial routers, even a Raspberry Pi, that can host this witness, eliminating the need for you to find yet another piece of infrastructure or do any care and feeding of it. You can host it on something that already exists in the environment. So those are the three things that we're announcing today.
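To make the two-node witness problem concrete, the toy sketch below shows the quorum arithmetic involved: without a third vote, a partitioned two-node cluster has no safe way to pick a survivor. This is plain Python, purely illustrative of the concept, not Cisco's implementation.

```python
# Toy illustration of why a 2-node cluster needs an external witness (arbitrator).
# Each surviving side of a partition counts the votes it can still reach; only a
# side holding a strict majority of all votes may keep serving data.

def has_quorum(votes_reachable: int, total_votes: int) -> bool:
    """A partition may keep running only with a strict majority of votes."""
    return votes_reachable > total_votes // 2

# Without a witness: 2 nodes, 2 votes. A partition splits them 1 vote each.
print(has_quorum(1, 2))   # False -> both sides must stop; no survivor can be chosen

# With a containerized witness hosted on a switch, router or Raspberry Pi: 3 votes.
# The node that can still reach the witness holds 2 of 3 votes and keeps serving.
print(has_quorum(2, 3))   # True  -> that side survives
print(has_quorum(1, 3))   # False -> the isolated node fences itself
```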
>> So I want to ask you about HyperFlex Express. Obviously the whole demand and supply chain is out of whack; global supply chain issues are in the news, and everybody's dealing with it. Can you expand on that a little bit more? Can HyperFlex Express help customers respond to some of these issues? >> Yeah, indeed. The primary motivation for HyperFlex Express was indeed an idea one of the folks on my team had, which was to build a set of HyperFlex configurations that would have a shorter lead time. But as we were brainstorming, we were able to tag on multiple other things and make sure there is something in it for customers, for sales, as well as for our partners. For example, for customers we've been able to dramatically simplify the configuration and the install for HyperFlex Express. These are still HyperFlex configurations, and at the end of it you get a HyperFlex cluster, but the path to that cluster is much, much simplified. Second, we've added flexibility: these are data centre configurations, but you can deploy them with or without fabric interconnects, meaning you can deploy with your existing top of rack. We've also set a very attractive price point for these, and of course they will have better lead times, because we made sure we are using components for which we have a clear line of sight from a supply perspective. For partners and sales, this represents a high-velocity sales motion, a faster turnaround time, and a frictionless sales motion for our distributors. These are distributor-friendly configurations which they would find very easy to stock, and with a quick turnaround time this would be very attractive for the distis as well. >> It's interesting, Manish. I'm looking at some fresh survey data: more than 70% of the customers that were surveyed in this ETR survey, which I mentioned at the top, more than 70% said they had difficulty procuring server hardware, and networking was also a huge problem. So that's encouraging. What about AMD, Manish? That's new for HyperFlex. What's that going to give customers that they couldn't get before? >> Yeah, so in the short time that we've had UCS AMD rack support, we've had several record-breaking benchmark results that we've published. So it's a powerful platform with a lot of performance in it, and HyperFlex, the differentiator that we've had from day one, is that it has the industry-leading storage performance. With this we're going to get the fastest compute together with the fastest storage, and we are hoping that will unlock an unprecedented level of performance and efficiency, but also unlock several new workloads that were previously locked out from the hyperconverged experience. >> Yeah, cool. So, Darren, can you give us an idea as to how HyperFlex is doing in the field? >> Sure, absolutely. Both Manish and I have been involved right from the start, even before it was called HyperFlex, and we've had a great journey; it's very exciting to see where we're taking the technology. We have over 5,000 customers worldwide, and we're currently growing faster year over year than the market. The majority of our customers are repeat buyers, which is always a good sign, coming back once they've proved the technology and are comfortable with it. They repeat buy to expand capacity, to put more workloads on, and to use different use cases.
And from a customer-satisfaction perspective the numbers are very good, so it's a really good endorsement of the technology. We get used across all verticals and segments, to house mission-critical applications as well as traditional virtual server infrastructures, and we are the lifeblood of our customers around those mission-critical applications. Take one example, and I apologise to the worldwide audience, but this one resonates with the American audience: the Super Bowl. SoFi Stadium, which housed the Super Bowl, actually has Cisco HyperFlex running all the management services for the entire stadium, for digital signage and 4K video distribution, and it's completely cashless. If that were to break during the Super Bowl, it would have been a big news article, but it ran perfectly. In the design of the solution we were able to collapse down nearly 200 servers into a few nodes across a few racks, and have 100 to 120 virtual machines running the whole stadium without missing a heartbeat. And that is mission critical: running the Super Bowl and not being on the front of the press afterwards for the wrong reasons. That's a win for us. So we're really happy with HyperFlex, where it's going, what it's doing, and some of the use cases we're getting involved in; very, very excited. >> Come on, Darren, it's the Super Bowl, the NFL. That's international now. And, you know, the NFL... >> The NFL, yes... >> ...is invading London. Of course, I see the picture of the real football over your shoulder. But the last question, for Manish, is: give us a little roadmap. What does the future hold for HyperFlex? >> Yeah, so as Darren said, both Darren and I have been involved with HyperFlex since the beginning, but I think the best is yet to come. There are three main pillars for HyperFlex. One is Intersight, which is central to our strategy. It provides a lot of customer benefit from single-pane-of-glass management, but we're going to take it beyond the lifecycle management and element management for HyperFlex that is integrated into Intersight today. We're going to start delivering customer value on the dimension of AIOps, because Intersight really provides us an ideal platform to gather stats from all the clusters across the globe, do AI and ML and some predictive analysis with that, and return it back as customer-valued, actionable insights. So that is one. The second is you'll see us expand the HyperFlex portfolio, going beyond UCS to third-party server platforms and to newer UCS server platforms as well. The highlight there, one that I'm really excited about and think has a lot of potential in terms of the number of customers we can help, is HX on X-Series. X-Series is another thing we're announcing a bunch of capabilities on in this particular launch, and HX on X-Series we'll have by the end of this calendar year. That should combine the flexibility of X-Series, which can host a multitude of workloads, with the simplicity of HyperFlex, and we're hoping that will bring a lot of benefits to new workloads that were locked out previously. And then the last pillar is the HyperFlex data platform. This is the heart of the offering today.
The HyperFlex data platform itself is a unique distributed architecture, which is primarily where we get our record-breaking performance from. You'll see it get faster, more scalable and more resilient, and we'll optimise it for containerised workloads, meaning it will get container-granular management capabilities and be optimised for public cloud. So those are some of the things the team is busy working on, and we should see them come to fruition. I'm hoping we'll be back at this forum, maybe before the end of the year, talking about some of these new capabilities. >> That's great. Thank you very much for that. Okay, guys, we've got to leave it there. And you know, Manish was talking about HX on X-Series; that's huge, customers are going to love that, and it's a great transition, because in a moment I'll be back with Vikas Ratna and Jim Leach and we're going to dig into X-Series. Some real serious engineering went into this platform, and we're going to explore what it all means. You're watching Simplifying Hybrid Cloud on theCUBE, your leader in enterprise tech coverage.

Published Date : Mar 11 2022

SUMMARY :

Love that on Twitter And Deron Williams, the director of business development and sales for Cisco Mister So for hybrid cloud you gotta have on Prem from the cloud, you need a strong foundation. and and at the edge? They need that consistency all the way through. on evolving the hyper flex line. Uh, the networking, um, you know, products that we have are in the news. Second is that we've added in flexibility where you can now deploy these, More than 70% of the are hoping that will basically unlock, you know, a unprecedented Uh, so, Darren, can and not be on the front of the press afterwards for the wrong reasons. And, you know, the NFL It's What's the future hold for hyper flex? We'll have that by the end of this calendar year, and that should unlock hybrid cloud on the cube, your leader in enterprise tech coverage.

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
DavidPERSON

0.99+

Manish AgarwalPERSON

0.99+

50%QUANTITY

0.99+

DarrenPERSON

0.99+

80%QUANTITY

0.99+

Deron WilliamsPERSON

0.99+

CiscoORGANIZATION

0.99+

CiscosORGANIZATION

0.99+

Hyper FlexORGANIZATION

0.99+

twoQUANTITY

0.99+

SecondQUANTITY

0.99+

Super BowlEVENT

0.99+

thirdQUANTITY

0.99+

More than 70%QUANTITY

0.99+

secondQUANTITY

0.99+

bothQUANTITY

0.99+

DarynPERSON

0.99+

oneQUANTITY

0.98+

OneQUANTITY

0.98+

over 5000 customersQUANTITY

0.98+

three thingsQUANTITY

0.98+

todayDATE

0.98+

one exampleQUANTITY

0.98+

Darren WilliamsPERSON

0.98+

TwitterORGANIZATION

0.98+

X seriesTITLE

0.97+

three tierQUANTITY

0.97+

Jim LeachPERSON

0.97+

hyper Flex ExpressCOMMERCIAL_ITEM

0.97+

hyper flex expressTITLE

0.96+

Cisco InterORGANIZATION

0.96+

hyper flexORGANIZATION

0.96+

Manisha AMGORGANIZATION

0.96+

hyper flexORGANIZATION

0.95+

first oneQUANTITY

0.95+

Four KQUANTITY

0.94+

S. R. S R.LOCATION

0.94+

hyper FlexORGANIZATION

0.93+

third entityQUANTITY

0.91+

hyper FlexCOMMERCIAL_ITEM

0.91+

Vikas RatnaORGANIZATION

0.91+

InterstateORGANIZATION

0.91+

FlashORGANIZATION

0.9+

nearly 200 serversQUANTITY

0.9+

IraqLOCATION

0.9+

LondonLOCATION

0.9+

end of this calendar yearDATE

0.9+

three main pillarsQUANTITY

0.89+

SaasORGANIZATION

0.89+

single paneQUANTITY

0.89+

100 120 virtual machinesQUANTITY

0.88+

Super Bowl NFLEVENT

0.87+

HXORGANIZATION

0.87+

Day oneQUANTITY

0.85+

X SeriesTITLE

0.83+

AmericanOTHER

0.82+

NFLEVENT

0.81+

ExpressCOMMERCIAL_ITEM

0.8+

Hyper FlexCOMMERCIAL_ITEM

0.78+

Manish AgarwalORGANIZATION

0.76+

fourQUANTITY

0.75+

Cisco Mister Hyper flexORGANIZATION

0.74+

Ajay Singh, Pure Storage | CUBEconversation


 

(upbeat music) >> The Cloud essentially turned the data center into an API and ushered in the era of programmable infrastructure. No longer do we think about deploying infrastructure in rigid silos with a hardened outer shell; rather, infrastructure has to facilitate digital business strategies. What this means is putting data at the core of your organization, irrespective of its physical location. It also means infrastructure generally, and storage specifically, must be accessed as sets of services that can be discovered, deployed, managed, secured, and governed in a DevOps model, or OpsDev, if you prefer. Now, this has specific implications as to how vendor product strategies will evolve and how they'll meet modern data requirements. Welcome to this Cube conversation, everybody. This is Dave Vellante, and with me to discuss these sea changes is Ajay Singh, the Chief Product Officer of Pure Storage. Ajay, welcome. >> Thank you, David, glad to be on. >> Yeah, great to have you. So let's talk about your role at Pure. I think you're the first CPO; what's the vision there? >> That's right, I just joined Pure about eight months ago from VMware as the chief product officer, and you're right, I'm the first chief product officer at Pure. At VMware I ran the Cloud management business unit, which was a lot about automation and infrastructure as code. And it's just great to join Pure, which has a phenomenal all-flash product set; I kind of call it the iPhone of flash storage, super easy to use. How do we take that same ease of use, which is at the heart of a Cloud operating principle, and take it up a level to really deliver a modern data experience, which includes infrastructure and storage as code, but then beyond that, modern operations and then modern data services? So I'm super excited to be at Pure. And the vision, if you may, at the end of the day, is to provide, leveraging this modern data experience, a connected and effortless data experience which allows customers to ultimately focus on what matters for them, their business, by really leveraging, managing and winning with their data. Because ultimately data is the new oil, if you may, and if you can mine it and get insights from it, you can really drive a competitive edge in the digital transformation ahead, and that's what we intend to help our customers do. >> So you joined earlier this year, I guess in the middle of the pandemic. I'm interested in your first 100 days, what that was like and what key milestones you set, and now you're into your second 100-plus days; how's that all going? What can you share with us? It's interesting timing, because you came in with experience from VMware and then had to apply that to the product organization. So tell us about that first 100 days and the mission now. >> Absolutely. So as we talked about, the vision around the modern data experience has three components to it: modernizing the infrastructure, and really, kudos to the team, a ton of work has gone into modernizing the infrastructure, I'll briefly talk to that; then modernizing the operations, I'll talk to that as well; and then of course, down the pike, modernizing data services.
So if you think about it, from modernizing the infrastructure: if you think about Pure for a minute, Pure is the first company that took flash to mainstream, essentially bringing what we call consumer simplicity to enterprise storage. The manual for the product is the front and back of a business card; that's it, you plug it in, boom, it's up and running, and then you get proactive, AI-driven support. So that was the heart of Pure. Now, if you think about Pure again, what's unique about Pure is that a lot of our competition has dealt with flash at the SSD level, because, guess what, all this software was built for hard drives, and so if I can treat NAND as a solid state drive, an SSD, then my software will easily work on it. But with Pure, because we started with flash, we really went straight to the NAND level, as opposed to the SSD layer, and what that does is give you greater efficiency, greater reliability and greater performance compared to an SSD, because you can optimize at the chip level as opposed to at the SSD module level. That's one big advantage that Pure has going for itself. And if you look at the physics in the industry for a minute, there's recent data put out by Wikibon early this year effectively showing that by the year 2026, flash, on a dollar-per-terabyte basis, just on the economics of the semiconductor versus the hard disk, is going to be cheaper than hard disk. So this big inflection point is slowly but surely coming that's going to disrupt the hard disk industry; already the high end has been taken over by flash, but hybrid is next, and then even the long tail is coming. And to that extent our lead, if you may, is the introduction of QLC NAND; QLC NAND our competition is barely introducing, and we've been at it for a while. Just recently this year, in my first 100 days, we introduced the FlashArray//C C40 and C60 drives, which really start to open up our ability to go after the hybrid storage market in a big way; it opens up a big new market for us. So great work there by the team. Also, at the heart of it, on the NAND side we have FlashArray, which is a scale-up, latency-centric architecture, and FlashBlade, which is a scale-out, throughput-centric architecture, all operating with NAND. What that does is allow us to cover both structured and unstructured data, tier one apps and tier two apps, so pretty broad data coverage on that journey to the all-flash data center. Slowly but surely we're heading there, to the all-flash data center, based on the NAND economics we just talked about, and we've done a bunch of releases. The team has also done a bunch of things around introducing NVMe over Fabrics, the kind of thing that you'd expect them to do. A lot of recognition in the industry for the team as well, from the likes of TrustRadius; Gartner named FlashArray a Gartner Peer Insights Customers' Choice in primary storage, and in the MQ we were the leader. So a lot of kudos and recognition coming to the team as a result. FlashBlade just hit a billion dollars in cumulative revenue, a leader by far in the unstructured data, fast file and object marketplace. And then of course there's all the work we're doing around ESG, environmental, social and governance, around reducing carbon footprint, reducing waste, and our whole notion of evergreen and non-disruptive upgrades. We did a lot of work in that area as well, where we announced that over 2,700 customers have actually done non-disruptive upgrades with the technology.
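As a rough, back-of-the-envelope illustration of the flash-versus-disk crossover cited above, the small sketch below projects two price curves forward. The starting prices and annual decline rates are assumptions chosen purely for illustration, not Wikibon's or Pure's figures.

```python
# Illustration only: assumed $/TB starting points and assumed annual price declines,
# not analyst or vendor figures. Finds the first year flash $/TB falls below HDD $/TB.

flash_cost, hdd_cost = 80.0, 20.0        # assumed $/TB in the starting year
flash_decline, hdd_decline = 0.30, 0.05  # assumed annual declines (30% vs 5%)

year = 2021
while flash_cost >= hdd_cost:
    year += 1
    flash_cost *= 1 - flash_decline
    hdd_cost *= 1 - hdd_decline

print(f"Crossover year under these assumptions: {year} "
      f"(flash ${flash_cost:.2f}/TB vs HDD ${hdd_cost:.2f}/TB)")
```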
>> Yeah, a lot to unpack there. And a lot of this, sometimes people say, oh, it's the plumbing, but the plumbing is actually very important too, 'cause we're in a major inflection point when we go from spinning disk to NAND. And it's all about volumes; you're seeing this all over the industry now, you see your old boss, Pat Gelsinger, dealing with this at Intel. And it's all about consumer volumes, in my view anyway, because thanks to Steve Jobs, NAND volumes are enormous, and there are, what, two hard disk drive makers left on the planet? I don't know, maybe there's two and a half. But those volumes drive costs down, and so you're on that curve, and you can debate as to when it's going to happen, but it's not an if, it's a when. Let me shift gears a little bit, because Cloud, as I was saying, has ushered in this API economy, this as-a-service model, and a lot of infrastructure companies have responded. How are you thinking at Pure about the as-a-service model for your customers? What's the strategy, how is it evolving, and how does it differentiate from the competition? >> Absolutely, a great question. It kind of segues into the second part of the modern data experience, which is how do you modernize the operations. And that's where automation and as-a-service come in, because ultimately the Cloud has validated the as-a-service model, right? People are looking for outcomes; they care less about how you get there, they just want the outcome, and the as-a-service model actually delivers those outcomes. This whole notion of infrastructure as code is the start of it. Imagine if my infrastructure, for a developer, is just a line of code in a Git repository, in a program that goes through a CI/CD process and automatically gets configured and set up, and fits in with Terraform, Ansible, all the different automation frameworks. So what we've done is we've gone down the path of really building out what I think is modern operations, with this ability to have storage as code. In addition, modern operations is not just storage as code; we've also recently introduced comprehensive ransomware protection, that's part of modern operations, with all the threat you hear about in the news around ransomware. We introduced what we call SafeMode snapshots, which allow you to recover in literally seconds when you have a ransomware attack. We also have, in modern operations, Pure1, which is maybe the leader in AI-driven support to prevent downtime; we actually call you 80% of the time and fix the problems without you knowing about it. That's what modern operations is all about. And then modern operations also says, okay, you've got flash on the on-prem side, but maybe you're also using flash in the public Cloud; how can I have a seamless multi-cloud experience? Our Cloud Block Store, which we've introduced on Amazon AWS and on Azure, allows one to do that. And then finally, for modern applications, this whole notion of infrastructure as code, as a service, software-driven storage on Kubernetes infrastructure enables one to deliver a great automation framework that reduces the labor required to manage the storage infrastructure and delivers it as code.
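To make the "storage as code" idea just described concrete, here is a minimal sketch using the standard Kubernetes Python client to request a volume declaratively rather than through a storage ticket. The namespace and the "px-fast" storage class name are placeholders for illustration, not a documented Pure or Portworx configuration.

```python
# Minimal "storage as code" sketch: declare a persistent volume claim through the
# Kubernetes API. The storage class name "px-fast" is a placeholder; in a real
# cluster it would map to whatever class the platform team has defined.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod
core_v1 = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="analytics-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="px-fast",                 # placeholder class name
        resources=client.V1ResourceRequirements(
            requests={"storage": "100Gi"}
        ),
    ),
)

core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PVC submitted; the provisioner now creates and binds the volume.")
```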
And kudos to Charlie and the Pure Storage team, before my time, for the acquisition of Portworx. Portworx today truly delivers storage as code, orchestrated entirely through Kubernetes, in a multi-cloud, hybrid situation. It can run on EKS, GKE, OpenShift, Rancher, Tanzu, and it was recently named the leader by GigaOm for enterprise Kubernetes storage; we are really proud of that asset. And then finally, the last piece is Pure as-a-Service. That's also all outcome-oriented: SLAs. What matters is you sign up for SLAs, and then you get those SLAs, which is very different from our competition. Our competition tends to be a lot more about financial engineering, hey, you can buy it OpEx versus CapEx, but you get the same thing, with a lot of professional services. We've really got, I'd say, a couple of years' lead on actually delivering and managing to the SLAs with SRE engineers. So a lot of great work there. We also recently introduced FlashStack as a service with Cisco; again, as a service, a validation of that. And then finally, we also recently did an announcement with Equinix, where we are a key part of their bare metal as a service offering, again pushing the as-a-service strategy. So yes, this is big for us; that's where the puck is skating, with enterprises, even on-prem, wanting to consume things in the Cloud operating model, and that's where we're putting a lot of effort. >> I see. So your contention is it's not just this CapEx-to-OpEx shift; that was the big thing for CFOs during the economic downturn of 2007, 2008, the financial crisis, so that's kind of yesterday's news. What you're saying is you're creating a Cloud-like operating model, as I was saying up front, irrespective of physical location. And I see that as your challenge, the industry's challenge: if I'm going to effect a digital transformation, I don't want to deal with the Cloud primitives; I want you to hide the underlying complexity of that Cloud, I want to deal with higher-level problems. But that brings me to digital transformation, which is kind of the now initiative, or I even sometimes call it the mandate. There's not a one-size-fits-all for digital transformation, but I'm interested in your thoughts on the must-take steps, the universal steps that everybody needs to think about in a digital transformation journey. >> Yeah, so ultimately digital transformation is all about how companies gain a competitive edge in this new digital world, where companies and the competition are changing the game. So you want to make sure that you can rapidly try new things, fail fast, innovate and invest; speed is of the essence, and agility and the Cloud operating model enable that agility. And what we're doing is not only driving agility in a multi-cloud data infrastructure and data operations fashion, we're also taking it a step further: we are on the journey to deliver modern data services. Imagine, on Pure on-prem infrastructure, along with the different public Clouds you're working with, on Kubernetes infrastructure, you could with a few clicks run Kafka as a service, TensorFlow as a service, Mongo as a service. So I, as a technology team, can truly become a service provider, and not just an on-prem service provider but a multi-cloud service provider.
Such that these services can be used to analyze the data that you have, not only your data but partner data and third-party public data, and you can marry those different data sets and analyze them to deliver new insights that ultimately give you a competitive edge in the digital transformation. So you can see data plays a big role there. The data is what generates those insights; your ability to match that data with partner data and public data, with the analysis services ready to go, is how you get the insights. You can really start to separate yourself from your competition and get on the leaderboard a decade from now, when this digital transformation settles down. >> All right, so bring us home, Ajay. Summarize: what does a modern data strategy look like, and how does it fit into a digital business or a digital organization? >> So look, at the end of the day, data and analysis both play a big role in the digital transformation, and it really comes down to how I leverage this data, my data, partner data, public data, to really get that edge. And that links back to our vision: how do we provide that connected and effortless modern data experience that allows our customers to focus on their business and get the edge in the digital transformation, by easily leveraging, managing and winning with their data? That's the heart of where Pure is headed. >> Ajay Singh, thanks so much for coming inside theCUBE and sharing your vision. >> Thank you, Dave, it was a real pleasure. >> And thank you for watching this Cube conversation. This is Dave Vellante, and we'll see you next time. (upbeat music)

Published Date : Aug 18 2021

SUMMARY :

in the era of programmable Yeah, great to have you, And the vision, if you the pandemic you came in in kind of the unstructured data, And a lot of this sometimes and the address of this model, right? of 2007, 2008, the economic crisis, the data that you have, And that's the heart of and sharing your vision. was a real pleasure. And thank you for watching

SENTIMENT ANALYSIS :

ENTITIES

EntityCategoryConfidence
Dave VellantePERSON

0.99+

DavidPERSON

0.99+

DavePERSON

0.99+

Ajay SinghPERSON

0.99+

CharliePERSON

0.99+

AmazonORGANIZATION

0.99+

Pat GelsingerPERSON

0.99+

AjayPERSON

0.99+

Steve JobsPERSON

0.99+

80%QUANTITY

0.99+

AWSORGANIZATION

0.99+

PureORGANIZATION

0.99+

TrustRadiusORGANIZATION

0.99+

VMwareORGANIZATION

0.99+

2008DATE

0.99+

2007DATE

0.99+

firstQUANTITY

0.99+

CapExORGANIZATION

0.99+

AquaponicsORGANIZATION

0.99+

PortworxORGANIZATION

0.99+

yesterdayDATE

0.99+

IntelORGANIZATION

0.99+

GartnerORGANIZATION

0.99+

OPEXORGANIZATION

0.99+

MartinPERSON

0.99+

iPhoneCOMMERCIAL_ITEM

0.99+

bothQUANTITY

0.99+

100 plus daysQUANTITY

0.99+

Pure StorageORGANIZATION

0.99+

second partQUANTITY

0.99+

over 2,700 customersQUANTITY

0.99+

WikibonORGANIZATION

0.98+

secondQUANTITY

0.98+

first 100 daysQUANTITY

0.98+

billion dollarsQUANTITY

0.98+

this yearDATE

0.97+

KubernetesTITLE

0.97+

CiscoORGANIZATION

0.96+

two and a halfQUANTITY

0.96+

oneQUANTITY

0.96+

MongoORGANIZATION

0.96+

TansuORGANIZATION

0.95+

AzureORGANIZATION

0.95+

early this yearDATE

0.94+

earlier this yearDATE

0.94+

100 daysQUANTITY

0.94+

FlashRayORGANIZATION

0.93+

first companyQUANTITY

0.93+

tier two appsQUANTITY

0.93+

C60COMMERCIAL_ITEM

0.92+

pandemicEVENT

0.92+

OpenShiftORGANIZATION

0.91+

SLSTITLE

0.91+

2026DATE

0.91+

CartonORGANIZATION

0.91+

three componentsQUANTITY

0.9+

todayDATE

0.88+

CloudTITLE

0.88+

a minuteQUANTITY

0.87+

SREORGANIZATION

0.86+

Cloud blockTITLE

0.86+

two hard disk driveQUANTITY

0.86+

EKSORGANIZATION

0.85+

KubernetesORGANIZATION

0.82+

about eight months agoDATE

0.82+

AnsiblesORGANIZATION

0.8+

GKEORGANIZATION

0.79+

KakfaORGANIZATION

0.79+

a decadeDATE

0.77+

tier one appsQUANTITY

0.76+

Peer InsightsTITLE

0.75+

GitTITLE

0.75+

TensorFlowORGANIZATION

0.71+

one big advantageQUANTITY

0.7+

Dr. Eng Lim Goh, HPE | HPE Discover 2021


 

>> Please welcome back to HPE Discover 2021, theCUBE's virtual coverage, continuous coverage of HPE's annual customer event. My name is Dave Vellante, and we're going to dive into the intersection of high-performance computing, data and AI with Dr. Eng Lim Goh, who is the senior vice president and CTO for AI at Hewlett Packard Enterprise. Dr. Goh, great to see you again. Welcome back to theCUBE. >> Hello Dave, great to talk to you again. >> You might remember last year we talked a lot about swarm intelligence and how AI is evolving. Of course, you hosted the day two keynote here at Discover, and you talked about thriving in the age of insights and how to craft a data-centric strategy, and you addressed some of the biggest problems I think organizations face with data: that data is plentiful, but insights are harder to come by. You really dug into some great examples in retail, banking, medicine, health care and media. But stepping back a little bit, zooming out on Discover '21, what do you make of the event so far, and what are some of your big takeaways? >> Well, you started with an insightful question, right? Data is everywhere, but we lack the insight. That's also a main reason why Antonio, on day one, focused and talked about the fact that we are now in the age of insight, and how to thrive in this new age. What I then did on the day two keynote, following Antonio, was to talk about the challenges that we need to overcome in order to thrive in this new age. >> So maybe we could talk a little bit about some of the things that you took away; I'm specifically interested in some of the barriers to achieving insights when customers are drowning in data. What do you hear from customers, and what do we take away from some of the examples you talked about today? >> A very pertinent question, Dave. There are two challenges I spoke about that we need to overcome in order to thrive in this new age. The first one is the current challenge, and that current challenge is, as stated, the barriers to insight when we are awash with data. So that's the statement: what are the barriers to insight when we are awash in data, and how do we overcome them? In the keynote I spoke about three main things, three main areas that we see from customers. The first barrier is that, with many of our customers, data is siloed. In a big corporation you've got data siloed by sales, finance, engineering, manufacturing, supply chain and so on, and there's a major effort ongoing in many corporations to build a federation layer above all those silos, so that when you build applications above it, they can be more intelligent; they can have access to all the different silos of data to get better intelligence, and more intelligent applications get built. So that was the first barrier to insight when we are awash with data. The second barrier we see amongst our customers is that data is raw and dispersed when stored, and it's tough to get value out of it. In that case I used the example of the May 6, 2010 event, where the stock market dropped a trillion dollars in tens of minutes.
Those of us who are financially attuned know about this incident, but this is not the only incident; there are many of them out there, and for that particular May 6 event it took a long time to get insight. For months we had no insight as to what happened and why it happened. There were many other incidents like this, and the regulators were looking for that one rule that could mitigate many of them. One of our customers decided to take the hard road and go with the tough data, because the data is raw and dispersed. They went into all the different feeds of financial transaction information, took the tough road and analyzed that data; it took a long time to assemble. And they discovered that there was quote stuffing: people were sending a lot of trades in and then cancelling them almost immediately, in order to manipulate the market. And why didn't we see it immediately? Well, the reason is that the processed reports everybody sees had a rule in them that said all trades of less than 100 shares don't need to be reported. So what people did was send a lot of trades of less than 100 shares, to fly under the radar and do this manipulation. So here is the second barrier: data can be raw and dispersed, and sometimes you just have to take the hard road to get insight; this is one great example. And then the last barrier has to do with the fact that sometimes, when you start a project to get answers and insight, you realize that all the data is around you, but you don't seem to find the right data to get what you need. Here we have three quick examples of customers. One was a great example, where they were trying to build a machine language translator between two languages. To do that, they needed hundreds of millions of word pairs, of one language compared with the corresponding words in the other. They asked, how am I going to get all these word pairs? Someone creative thought of a willing source, and it turned out to be the United Nations. So sometimes you think you don't have the right data with you, but there might be another source, and a willing one, that could give you that data. The second one, an interesting one, has to do with the fact that sometimes you may just have to generate that data. We had an autonomous car customer that collects all this data from their cars: massive amounts of data, lots of sensors collecting lots of data. But sometimes they don't have the data they need, even after collection. For example, they may have collected data with a car in fine weather, and collected the car driving on the highway in rain and also in snow, but never had the opportunity to collect data with the car in hail, because that's a rare occurrence. So instead of waiting for a time when the car can drive in hail, they built a simulation, taking the data the car collected in snow and simulating hail. So these are some examples of customers working to overcome barriers. You have barriers associated with the fact that data is siloed, so they federate it; you have barriers associated with data that's tough to get at, so they just take the hard road; and sometimes, thirdly, you just have to be creative to get the right data you need.
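To make the "take the hard road through the raw data" example above concrete, a sketch along the following lines is one way such a pattern could be surfaced: flag floods of small orders that are placed and cancelled almost immediately and that sit below the old 100-share reporting cut-off. The file, column names and thresholds are hypothetical, chosen for illustration; this is not the actual analysis described in the interview.

```python
# Hypothetical sketch: surface possible quote stuffing in a raw order feed.
# File, column names and thresholds are illustrative, not a real exchange schema.
import pandas as pd

orders = pd.read_csv("raw_order_feed.csv", parse_dates=["placed_at", "cancelled_at"])

# Orders small enough to fall under the old 100-share reporting cut-off
small = orders[orders["shares"] < 100].copy()

# How long each order lived before being cancelled; under 50 ms counts as fleeting
small["lifetime_ms"] = (
    (small["cancelled_at"] - small["placed_at"]).dt.total_seconds() * 1000
)
fleeting = small[small["lifetime_ms"] < 50]

# Per-symbol rate of fleeting sub-100-share orders, bucketed per second
rate = (
    fleeting.set_index("placed_at")
    .groupby("symbol")
    .resample("1s")
    .size()
)
print(rate[rate > 500])  # toy threshold for "suspiciously many" per second
```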
>> Wow, I'll tell you, I have about a hundred questions based on what you just said. That's a great example, the flash crash; in fact, Michael Lewis wrote about this in his book Flash Boys, and essentially it was high-frequency traders trying to front-run the market, sending in small block trades trying to get on the front end of it. And they chalked it up to a glitch; like you said, for months nobody really knew what it was. So technology got us into this problem. I guess my question is, can technology help us get out of the problem? And that maybe is where AI fits in. >> Yes, yes. In fact, a lot of analytics went into going back to the raw data, which is highly dispersed from different sources, and assembling it to see if you can find a material trend. You can see lots of trends; if humans look at things, we tend to see patterns in clouds, right? So sometimes you need to apply statistical analysis and math to be sure that what the model is seeing is real, and that requires work. That's one area. The second area is that there are times when you just need to go through that tough approach to find the answer. The issue that comes to mind is that humans put in the rules to decide what goes into a report that everybody sees, and in this case that was before the change in the rules. By the way, after the discovery, the authorities changed the rules, and all trades, of any size, now have to be reported. But the rule, as I said earlier, was that trades under 100 shares need not be reported. So sometimes you just have to understand that reports were designed by humans, and for understandable reasons; they probably didn't want to put everything in there, so that people could still read the reports in a reasonable amount of time. But we need to understand that the rules for the reports we read were put in by humans, and as such, there are times you just need to go back to the raw data. >> I want to ask... >> It's going to be tough. >> Yeah. So I want to ask a question about AI; it's obviously in your title and it's something you know a lot about, and I want to make a statement, you tell me if it's on point or off point. It seems that most of the AI going on in the enterprise is modeling, data science applied to troves of data, but there's also a lot of AI going on in consumer, whether it's fingerprint technology or facial recognition or natural language processing. A two-part question: will the consumer market, as has so often happened, inform the enterprise? That's the first part. And then, will there be a shift from modeling, if you will, to more, you mentioned autonomous vehicles, more AI inferencing in real time, especially with the edge? Can you help us understand that better? >> Yeah, it's a great question. There are three stages, just to simplify; it's probably more sophisticated than that, but let's simplify to three stages of building an AI system that ultimately can make a prediction, or assist you in decision making, to reach an outcome. So you start with the data, massive amounts of data, and you have to decide what to feed the machine with.
So you feed the machine with this massive chunk of data, and the machine starts to evolve a model based on all the data it is seeing. It evolves to the point where, using a test set of data that you have kept aside separately, for which you know the answer, you test the model after you've trained it with all that data, to see whether its prediction accuracy is high enough. Once you are satisfied with it, you then deploy the model to make the decisions, and that's the inference. So a lot of the time, depending on what we are focusing on, we in data science are working hard on assembling the right data to feed the machine with; that's the data preparation and organization work. After which you build your models; you have to pick the right models for the decisions and predictions you want to make. You pick the right models and then you start feeding the data to them. Sometimes you pick one model and the prediction isn't that robust; it is good, but it is not consistent. What you do then is try another model, so sometimes you just keep trying different models until you get the kind that gives you robust decision making and prediction. After it is tested well and QA'ed, you take that model and deploy it at the edge. And at the edge you are essentially just looking at new data, applying it to the model that you have trained, and that model gives you a prediction or decision. So it is these three stages. But more and more, your question reminds me, as the edge becomes more and more powerful, people are asking: can you also do learning at the edge? That's the reason why we spoke about swarm learning the last time, learning at the edge as a swarm, because maybe individually the devices may not have enough power to do so, but as a swarm they may. >> Is that learning from the edge, or learning at the edge, in other words? >> Yes. >> Yeah, I understand the question, yeah. >> That's a great question. The quick answer is learning at the edge, and also from the edge, but the main goal is to learn at the edge, so that you don't have to move the data that the edge sees back to the cloud or the core to do the learning; that is one of the main reasons why you want to learn at the edge, so that you don't need to send all that data back and assemble it from all the different edge devices at the cloud side to do the learning. With swarm learning you can keep the data at the edge and learn at that point.
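As a toy illustration of the swarm idea just described, learning at the edge where only model parameters, never raw data, leave each site, here is a minimal federated-averaging sketch in plain NumPy. It is conceptual only, under simplified assumptions, and is not HPE's Swarm Learning implementation.

```python
# Conceptual federated/swarm averaging sketch (NumPy only). Each edge site fits a
# tiny linear model on its own local data; only the learned weights leave the site,
# and the shared "swarm" model is the average of those weights.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # underlying relationship all sites observe

def local_train(n_samples: int) -> np.ndarray:
    """One edge site: generate private data, return least-squares weights only."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w                      # raw X and y never leave the site

# Three edge sites train independently, then the swarm averages their weights.
site_weights = [local_train(n) for n in (200, 500, 300)]
swarm_model = np.mean(site_weights, axis=0)

print("per-site weights:", [w.round(3) for w in site_weights])
print("swarm-averaged model:", swarm_model.round(3))
```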
>> And then maybe only selectively send data back; the autonomous vehicle example you gave is great, because maybe they're only persisting data from inclement weather, or when a deer runs across the front, and then they send that smaller data set back, and maybe that's where the modelling is done, but the rest can be done at the edge. It's a new world that's coming. Let me ask you a question: is there a limit to what data should be collected, and how it should be collected? >> That's a great question again; wow, today is full of these insightful questions. That actually touches on the second challenge: what we need to do to thrive in this new age of insight. The second challenge is our future challenge: what do we do for our future? And in there, the statement we make is that we have to focus on collecting data strategically for the future of our enterprise. Within that, I talk about what to collect, when to organize it as you collect it, and where your data will be, going forward, that you are collecting from. So what, when and where. For the what, what data to collect, that was the question you asked. It's a question that different industries have to ask themselves, because it will vary. Let me give you an example; you used the autonomous car example, let me use that. We have this customer collecting massive amounts of data, we're talking about 10 petabytes a day, from a fleet of their cars. And these are not production autonomous cars; these are training autonomous cars, collecting data so they can train and eventually deploy commercial cars. So these data collection cars, as a fleet, collect 10 petabytes a day, and when they came to us to build a storage system to store all of that data, they realized they cannot afford to store all of it. Now here comes the dilemma: after I've spent so much effort building all these cars and sensors and collecting data, I now have to decide what to delete. That's a dilemma. In working with them on this process of trimming down what they collect, I'm constantly reminded of the sixties and seventies: in the sixties and seventies we called a large part of our DNA junk DNA. Today we realize that a large part of what we called junk has valuable function; they are not genes, but they regulate the function of genes. So what's junk yesterday could be valuable today, and what's junk today could be valuable tomorrow. So there's this tension going on, between deciding you cannot afford to store everything you can get your hands on and, on the other hand, the worry that you ignore the wrong ones. You can see this tension in our customers, and it depends on the industry. In health care, they say, I have no choice, I want it all. One very insightful point brought up by one health care provider that really touched me was: of course we care a lot about the people we are caring for, but we also care for the people we are not caring for. How do we find them? And therefore they did not just need to collect the data they have from their patients; they also needed to reach out to outside data, so that they can figure out who they are not caring for. So they want it all. So I ask them, what do you do about funding, if you want it all? They say they have no choice but to figure out a way to fund it, and perhaps monetization of what they have now is the way to do that. Of course, they also come back to us, rightfully, that we then have to work out a way to help them build that system. So that's health care. And if you go to other industries like banking, they say they can't afford to keep it all, but they are regulated; like healthcare, they are regulated as to privacy and such.
So there are many examples of different industries having different needs and different approaches to what they collect, but there is this constant tension between perhaps deciding not to fund storing all that you can store and, on the other hand, the worry that if you decide not to store some of it, some of it becomes highly valuable in the future. >> We can make some assumptions about the future, can't we? I mean, we know there's going to be a lot more data than we've ever seen before; we know that. Notwithstanding supply constraints on things like NAND, we know the price of storage is going to continue to decline. We also know, and not a lot of people are really talking about this, that the processing power, even if you say Moore's Law is dead, okay, it's waning, when you combine the CPUs and NPUs and GPUs and accelerators and so forth, is actually increasing. And so when you think about these use cases at the edge, you're going to have much more processing power, you're going to have cheaper storage, and it's going to be less expensive processing. So as an AI practitioner, what can you do with that?
HB is as a leading provider of compute how do you take advantage of that? I mean we're going, we're, I know its future, but you must be thinking about that and participating in those markets. I know today you are, you have, you know, edge line and other products. But there's, it seems to me that it's, it's not the general purpose that we've known in the past. It's a new type of specialized computing. How are you thinking about participating in that >>opportunity for the customers? The world will have to have a balance right? Where today the default? Well, the more common mode is to collect the data from the edge and train at uh at some centralized location or a number of centralized location um going forward. Given the proliferation of the edge devices, we'll need a balance. We need both. We need capability at the cloud side. Right? And it has to be hybrid and then we need capability on the edge side. Yeah. That they want to build systems that that on one hand, uh is uh edge adapted, right? Meaning the environmentally adapted because the edge different. They are on a lot of times. On the outside. Uh They need to be packaging adapted and also power adapted, right? Because typically many of these devices are battery power. Right? Um, so you have to build systems that adapt to it. But at the same time they must not be custom. That's my belief. They must be using standard processes and standard operating system so that they can run a rich set of applications. So yes. Um that's that's also the insightful for that Antonio announced in 2018 Uh the next four years from 2018, right $4 billion dollars invested to strengthen our edge portfolio. Edge product lines, Right. Edge solutions. >>I can doctor go, I could go on for hours with you. You're you're just such a great guest. Let's close. What are you most excited about in the future? Of of of it. Certainly H. P. E. But the industry in general. >>Yeah. I think the excitement is uh the customers, right? The diversity of customers and and the diversity in a way they have approached their different problems with data strategy. So the excitement is around data strategy, right? Just like you know uh you know, the the statement made was was so was profound, right? Um And Antonio said we are in the age of insight powered by data. That's the first line, right. Uh The line that comes after that is as such were becoming more and more data centric with data, the currency. Now the next step is even more profound. That is um You know, we are going as far as saying that you know um data should not be treated as cost anymore. No. Right. But instead as an investment in a new asset class called data with value on our balance sheet, this is a this is a step change right? In thinking that is going to change the way we look at data, the way we value it. So that's a statement that this is the exciting thing because because for for me, a city of Ai right uh machine is only as intelligent as the data you feed it with data is a source of the machine learning to be intelligent. So, so that's that's why when when people start to value data, right? And and and say that it is an investment when we collect it, it is very positive for AI because an AI system gets intelligent, get more intelligence because it has a huge amounts of data and the diversity of data. So it would be great if the community values values data. Well, >>you certainly see it in the valuations of many companies these days. 
Um and I think increasingly you see it on the income statement, you know, data products and people monetizing data services and maybe eventually you'll see it in the in the balance. You know, Doug Laney, when he was a gardener group wrote a book about this and a lot of people are thinking about it. That's a big change, isn't it? Dr >>yeah. Question is is the process and methods evaluation right. But I believe we'll get there, we need to get started and then we'll get there. Believe >>doctor goes on >>pleasure. And yeah. And then the Yeah, I will well benefit greatly from it. >>Oh yeah, no doubt people will better understand how to align you know, some of these technology investments, Doctor goes great to see you again. Thanks so much for coming back in the cube. It's been a real pleasure. >>Yes. A system. It's only as smart as the data you feed it with. >>Excellent. We'll leave it there, thank you for spending some time with us and keep it right there for more great interviews from HP discover 21 this is Dave Volonte for the cube. The leader in enterprise tech coverage right back

Published Date : Jun 23 2021

Dr Eng Lim Goh, High Performance Computing & AI | HPE Discover 2021


 

>> Welcome back to HPE Discover 2021, theCube's virtual coverage, continuous coverage of HPE's annual customer event. My name is Dave Vellante and we're going to dive into the intersection of high performance computing, data and AI with Dr. Eng Lim Goh, who is the Senior Vice President and CTO for AI at Hewlett Packard Enterprise. Dr. Goh, great to see you again. Welcome back to theCube. >> Hello Dave, great to talk to you again. >> You might remember last year we talked a lot about swarm intelligence and how AI is evolving. Of course you hosted the Day 2 keynotes here at Discover. You talked about thriving in the age of insights and how to craft a data-centric strategy, and you addressed, you know, some of the biggest problems I think organizations face with data. That is, data is plentiful but insights are harder to come by. And you really dug into some great examples in retail, banking, and medicine and healthcare and media. But stepping back a little bit, we'll zoom out on Discover '21: what do you make of the event so far, and some of your big takeaways? >> Mm, well, you started with the insightful question, right? Data is everywhere, but we lack the insight. That's also part of the reason, the main reason, why, you know, Antonio on Day 1 focused and talked about the fact that we are now in the age of insight, right? And how to thrive in this new age. What I then did on the Day 2 keynote following Antonio is to talk about the challenges that we need to overcome in order to thrive in this new age. >> So maybe we could talk a little bit about some of the things that you took away. I'm specifically interested in some of the barriers to achieving insights when, you know, customers are drowning in data. What do you hear from customers? What were your takeaways from some of the ones you talked about today? >> Oh, very pertinent question, Dave. You know, the two challenges I spoke about that we need to overcome in order to thrive in this new age: the first one is the current challenge, and that current challenge is, you know, stated as barriers to insight when we are awash with data. So that's a statement, right? How to overcome those barriers. What are the barriers to insight when we are awash in data? In the Day 2 keynote I spoke about three main things, three main areas that we see from customers. The first barrier is that with many of our customers, data is siloed. All right? You know, like in a big corporation, you've got data siloed by sales, finance, engineering, manufacturing, supply chain and so on. And there's a major effort ongoing in many corporations to build a federation layer above all those silos, so that when you build applications above, they can be more intelligent. They can have access to all the different silos of data to get better intelligence and more intelligent applications built. So that was the first barrier we spoke about, barriers to insight when we are awash with data. The second barrier that we see amongst our customers is that data is raw and dispersed when it is stored, and, you know, it's tough to get value out of it. Right? In that case I used the example of the May 6, 2010 event where the stock market dropped a trillion dollars in tens of minutes.
We all know, those who are financially attuned know about this incident. But this is not the only incident, there are many of them out there. And for that particular May 6 event, you know, it took a long time to get insight, months. For months we had no insight as to what happened, why it happened, right? And there were many other incidences like this, and the regulators were looking for that one rule that could mitigate many of these incidences. One of our customers decided to take the hard road, go with the tough data, right? Because data is raw and dispersed. So they went into all the different feeds of financial transaction information, took the tough road and analyzed that data, which took a long time to assemble, and they discovered that there was quote stuffing, right? That people were sending a lot of trades in and then cancelling them almost immediately, to manipulate the market. And why didn't we see it immediately? Well, the reason is the processed reports that everybody sees had a rule in there that says all trades of less than 100 shares don't need to be reported. And so what people did was send a lot of less-than-100-share trades to fly under the radar to do this manipulation. So here is the second barrier, right? Data could be raw and dispersed. Sometimes you just have to take the hard road to get insight, and this is one great example. And then the last barrier has to do with, sometimes when you start a project to get answers and insight, you realize that all the data is around you, but you don't seem to find the right data to get what you need. You don't seem to get the right ones. Here we have three quick examples of customers. One was a great example, right, where they were trying to build a language translator, a machine language translator between two languages. But to do that, they need to get hundreds of millions of word pairs, you know, of one language compared with the corresponding hundreds of millions of the other. They say, where are we going to get all these word pairs? Someone creative thought of a willing source, and a huge one: it was the United Nations, you see. So sometimes you think you don't have the right data with you, but there might be another source, and a willing one, that could give you that data, right? The second one has to do with, sometimes you may just have to generate that data, an interesting one. We had an autonomous car customer that collects all this data from their cars, right, massive amounts of data, lots of sensors, collecting lots of data. And, you know, sometimes they don't have the data they need even after collection. For example, they may have collected the data with a car in fine weather, and collected the car driving on the highway in rain and also in snow, but never had the opportunity to collect the car in hail, because that's a rare occurrence. So instead of waiting for a time when the car can drive in hail, they build a simulation by having the car collect in snow and simulating hail. So these are some of the examples where we have customers working to overcome barriers, right? You have barriers associated with the fact that data is siloed, so federate it; barriers associated with data that's tough to get at, where they just took the hard road, right?
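A minimal sketch of the kind of raw-feed analysis described above, flagging bursts of sub-100-share orders that are cancelled almost immediately, might look like the following. The column names, thresholds, and DataFrame layout are assumptions for illustration only, not details from the actual investigation.

```python
# Hypothetical sketch: flag possible quote stuffing in a raw order-event feed.
# Assumes a DataFrame with columns: order_id, sender, shares, event ('new' or 'cancel'), ts (timestamp).
import pandas as pd

def flag_quote_stuffing(events: pd.DataFrame,
                        max_shares: int = 100,
                        max_lifetime_ms: int = 50,
                        min_bursts_per_sec: int = 500) -> pd.DataFrame:
    """Return senders whose small orders are cancelled almost immediately at a high rate."""
    news = events[events["event"] == "new"].set_index("order_id")
    cancels = events[events["event"] == "cancel"].set_index("order_id")

    # Join each new order to its cancel (if any) and compute how long it lived.
    paired = news.join(cancels[["ts"]], rsuffix="_cancel", how="inner")
    paired["lifetime_ms"] = (paired["ts_cancel"] - paired["ts"]).dt.total_seconds() * 1000

    # Keep only the sub-100-share orders that vanished within the threshold.
    fleeting = paired[(paired["shares"] < max_shares) &
                      (paired["lifetime_ms"] <= max_lifetime_ms)]

    # Count such orders per sender per second; a sustained burst is a red flag.
    bursts = (fleeting.groupby(["sender", pd.Grouper(key="ts", freq="1s")])
                      .size()
                      .rename("fleeting_orders_per_sec")
                      .reset_index())
    return bursts[bursts["fleeting_orders_per_sec"] >= min_bursts_per_sec]
```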
And sometimes, thirdly, you just have to be creative to get the right data you need. >> Wow, I tell you, I have about 100 questions based on what you just said. There's a great example, the flash crash. In fact, Michael Lewis wrote about this in his book, "Flash Boys," and essentially it was high frequency traders trying to front-run the market, sending in small block trades trying to get on the front end of it. And they chalked it up to a glitch; like you said, for months nobody really knew what it was. So technology got us into this problem. I guess my question is, can technology help us get out of the problem? And that maybe is where AI fits in. >> Yes, yes. In fact, a lot of analytics work went in to go back to the raw data that is highly dispersed from different sources, assemble it, and see if you can find a material trend. You can see lots of trends, right? Like, you know, when humans look at things we tend to see patterns in clouds. So sometimes you need to apply statistical analysis, math, to be sure that what the model is seeing is real. And that required work. That's one area. The second area is, you know, there are times when you just need to go through that tough approach to find the answer. Now, the issue that comes to mind is that humans put in the rules to decide what goes into a report that everybody sees, in this case before the change in the rules. By the way, after the discovery, the authorities changed the rules, and all trades of any size now have to be reported. But the rule was applied, as I said earlier, that trades under 100 shares need not be reported. So sometimes you just have to understand that reports were decided by humans, and for understandable reasons. I mean, they probably wanted, for various reasons, not to put everything in there so that people could still read it in a reasonable amount of time. But we need to understand that rules were put in by humans for the reports we read. And as such, there are times you just need to go back to the raw data. >> I want to ask-- >> Albeit that it's going to be tough. >> Yeah. So I want to ask a question about AI. Obviously it's in your title and it's something you know a lot about. I want to make a statement, and you tell me if it's on point or off point. It seems that most of the AI going on in the enterprise is modeling, data science applied to troves of data. But there's also a lot of AI going on in consumer, whether it's, you know, fingerprint technology or facial recognition or natural language processing. So a two-part question: will the consumer market, as it has so often, sort of inform the enterprise, that's the first part; and then will there be a shift from sort of modeling, if you will, to more, you mentioned autonomous vehicles, more AI inferencing in real time, especially with the edge? Can you help us understand that better? >> Yeah, it's a great question. There are three stages, to just simplify, I mean, you know, it's probably more sophisticated than that, but let's simplify: three stages to building an AI system that ultimately can make a prediction, or assist you in decision making, have an outcome. So you start with the data, massive amounts of data, that you have to decide what to feed the machine with.
So you feed the machine with this massive chunk of data and the machine starts to evolve a model based on all the data it is seeing. It evolves to the point that, using a test set of data that you have separately kept aside, that you know the answer for, you then test the model, after you've trained it with all that data, to see whether its prediction accuracy is high enough. And once you are satisfied with it, you then deploy the model to make the decision, and that's the inference. So a lot of times, depending on what we are focusing on, we in data science are working hard on assembling the right data to feed the machine with; that's the data preparation and organization work. And then after that you build your models; you have to pick the right models for the decisions and predictions you want to make. You pick the right models and then you start feeding the data in. Sometimes you pick one model and the prediction isn't that robust; it is good, but it is not consistent. Now what you do is try another model, so sometimes you just keep trying different models until you get the right kind, one that gives you good, robust decision making and prediction. After which, if it is tested well and QA'ed, you would then take that model and deploy it at the edge. And then at the edge it is essentially just looking at new data, applying it to the model you've trained, and that model will give you a prediction or decision. So it is these three stages. But more and more, you know, your question reminds me that more and more people are thinking, as the edge becomes more and more powerful, can you also do learning at the edge? That's the reason why we spoke about swarm learning the last time, learning at the edge as a swarm, because maybe individually they may not have enough power to do so, but as a swarm they may. >> Is that learning from the edge, or learning at the edge? In other words-- >> Yes, yeah, that's a great question. That's a great question. So the quick answer is learning at the edge, and also from the edge. But the main goal is to learn at the edge so that you don't have to move the data that the edge sees back to the cloud or the core to do the learning, because that is one of the main reasons why you want to learn at the edge: so that you don't need to send all that data back and assemble it from all the different edge devices at the cloud side to do the learning. With swarm learning, you can keep the data at the edge and learn at that point. >> And then maybe only selectively send. The autonomous vehicle example you gave is great, because maybe they're only persisting data that is inclement weather, or when a deer runs across the front, and then maybe they send that smaller data set back, and maybe that's where the modeling is done. But the rest can be done at the edge. It's a new world that's coming. Let me ask you a question: is there a limit to what data should be collected and how it should be collected? >> That's a great question again. You know, wow, today is full of these insightful questions. That actually touches on the second challenge: how do we thrive in this new age of insight? The second challenge is, you know, our future challenge, right?
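The three-stage flow described above (assemble the data, train and test candidate models until one is robust, then deploy the chosen model for inference) can be sketched minimally as follows. The dataset and the two candidate models are arbitrary stand-ins for illustration, not anything specific to the systems discussed in the interview.

```python
# Minimal sketch of the three stages: data -> model selection and testing -> deployed inference.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
import joblib

# Stage 1: assemble the data and keep aside a test set whose answers we already know.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Stage 2: train a model and test it; if it is not robust enough, try another model.
candidates = [
    make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    RandomForestClassifier(n_estimators=200, random_state=0),
]
best_model, best_acc = None, 0.0
for model in candidates:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    if acc > best_acc:
        best_model, best_acc = model, acc
print(f"selected {type(best_model).__name__} with test accuracy {best_acc:.3f}")

# Stage 3: deploy the trained model (here just serialized to disk) and use it for
# inference on new data, the way a model pushed to an edge device would be used.
joblib.dump(best_model, "model.joblib")
deployed = joblib.load("model.joblib")
prediction = deployed.predict(X_test[:1])   # a new observation stands in for edge data
```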
What do we do for our future? And in there, the statement we make is: we have to focus on collecting data strategically for the future of our enterprise. Within that, I talk about what to collect, when to organize it as you collect, and then where your data will be, going forward, that you are collecting from. So: what, when, and where. For the what, what data to collect, that was the question you asked. It's a question that different industries have to ask themselves, because it will vary. Let me use the autonomous car example you brought up. You have this customer collecting massive amounts of data, you know, we're talking about 10 petabytes a day from the fleet of their cars. And these are not production autonomous cars, right, these are training autonomous cars collecting data so they can train and eventually deploy commercial cars. So these data collection cars, as a fleet, collect 10 petabytes a day. And when it came to us building a storage system to store all of that data, they realized they didn't want to afford to store all of it. Now here comes the dilemma, right? After I spent so much effort building all these cars and sensors and collecting data, I now have to decide what to delete. That's a dilemma. Now, in working with them on this process of trimming down what they collected, you know, I'm constantly reminded of the sixties and seventies. In the sixties and seventies we called a large part of our DNA junk DNA. Today we realize that a large part of what we called junk has function, valuable function. They are not genes, but they regulate the function of genes, you know. So what's junk yesterday could be valuable today, and what's junk today could be valuable tomorrow, right? So there's this tension going on between deciding you don't want to afford to store everything you can get your hands on, and on the other hand, you know, worrying that you ignore the wrong ones. You can see this tension in our customers, and it depends on the industry. In healthcare, they say, I have no choice, I want it all. One very insightful point brought up by one healthcare provider that really touched me was, you know, of course we care a lot, we care a lot about the people we are caring for, right? But we also care for the people we are not caring for. How do we find them? And therefore they did not just need to collect the data that they have from their patients, they also need to reach out to outside data so that they can figure out who they are not caring for. So they want it all. So I asked them, so what do you do about funding, if you want it all? They say they have no choice but to figure out a way to fund it, and perhaps monetization of what they have now is the way to come around and fund that. Of course, they also come back to us, rightfully, that, you know, we then have to work out a way to help them build that system. So that's healthcare. And if you go to other industries like banking, they say they can afford to keep it all, but they are regulated; same as healthcare, they are regulated as to privacy and such like. So many examples, different industries having different needs and different approaches to what they collect.
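The trade-off just described, not being able to afford to keep everything while worrying about throwing away tomorrow's signal, is often handled with an explicit retention policy at ingest time. The sketch below is a toy illustration with made-up event names and rates, not the customer's actual policy.

```python
# Hypothetical retention policy: keep every rare-event sample, downsample the routine ones.
import random

KEEP_RARE_ALWAYS = {"hail", "deer_crossing", "sensor_fault"}   # assumed rare events
COMMON_KEEP_RATE = 0.01                                         # keep 1% of routine frames

def should_retain(sample_label: str) -> bool:
    """Decide whether a collected sample is stored or dropped at ingest time."""
    if sample_label in KEEP_RARE_ALWAYS:
        return True                      # today's "junk" may be tomorrow's signal, so keep all of it
    return random.random() < COMMON_KEEP_RATE

# Example: a stream of mostly routine driving frames with the occasional rare event.
stream = ["clear_highway"] * 995 + ["hail"] * 2 + ["deer_crossing"] * 3
retained = [label for label in stream if should_retain(label)]
print(f"retained {len(retained)} of {len(stream)} samples")
```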
But there is this constant tension between, on the one hand, perhaps deciding not to fund all that you can store, and on the other hand, you know, if you decide not to store some of it, what if that some becomes highly valuable in the future, right? >> We can make some assumptions about the future, can't we? I mean, we know there's going to be a lot more data than we've ever seen before. We know, well, notwithstanding supply constraints on things like NAND, that the prices of storage are going to continue to decline. We also know, and not a lot of people are really talking about this, but the processing power, everybody says Moore's Law is dead, okay, it's waning, but the processing power when you combine the CPUs and NPUs and GPUs and accelerators and so forth actually is increasing. And so when you think about these use cases at the edge, you're going to have much more processing power, you're going to have cheaper storage, and it's going to be less expensive processing. And so as an AI practitioner, what can you do with that? >> Yeah, again, another insightful question that we touched on in our keynote, and that goes to the where: where will your data be? We have one estimate that says that by next year there will be 55 billion connected devices out there. 55 billion. What's the population of the world? On the order of 10 billion. But this is 55 billion, right? And many of them, most of them, can collect data. So what do you do? The amount of data that's going to come in is going to far exceed our drop in storage costs and our increasing compute power. So what's the answer? The answer must be, knowing that even the drop in price and the increase in bandwidth will be overwhelmed, it will overwhelm 5G, given 55 billion of them collecting. So the answer must be a balance, because you may not be able to afford to bring all that data from the 55 billion devices back to a central core, or a bunch of central cores. Firstly bandwidth: even with 5G and SD-WAN it will still be too expensive given the number of devices out there. And even with storage costs dropping, it will still be too expensive to try and store it all. So the answer must be, at least to mitigate the problem somewhat, to leave a lot of the data out there, and only send back the pertinent pieces, as you said before. But then if you did that, how are we going to do machine learning at the core and the cloud side if you don't have all the data? You want rich data to train with. Sometimes you want a mix of the positive type of data and the negative type of data, so you can train the machine in a more balanced way. So the answer must be, eventually, as we move forward with this huge number of devices at the edge, to do machine learning at the edge. Today we don't have enough power, right? The edge typically is characterized by lower energy capability and therefore lower compute power. But soon, you know, even with lower energy they can do more, with compute power improving in energy efficiency. So, learning at the edge: today we do inference at the edge. So we take data, build a model, deploy it, and you do inference at the edge; that's what we do today.
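A minimal sketch of the pattern just described, running inference on-device, keeping most data at the edge, and sending back only the pertinent samples, might look like the following. The model, the confidence threshold, and the upload call are placeholders, not a real device API.

```python
# Hypothetical edge loop: infer locally, keep most data at the edge,
# upload only the samples the model is unsure about (the "pertinent ones").
import random

CONFIDENCE_THRESHOLD = 0.80   # below this, the sample is worth sending back for retraining

def predict_with_confidence(sample):
    """Placeholder for the deployed model; returns (label, confidence)."""
    return "ok", random.uniform(0.5, 1.0)

def upload_for_retraining(sample, label, confidence):
    """Placeholder for the bandwidth-limited link back to the core or cloud."""
    print(f"uploading uncertain sample: label={label} confidence={confidence:.2f}")

def edge_loop(sensor_stream):
    for sample in sensor_stream:
        label, confidence = predict_with_confidence(sample)
        if confidence < CONFIDENCE_THRESHOLD:
            upload_for_retraining(sample, label, confidence)
        # otherwise the sample stays on the device (or is summarized and discarded locally)

edge_loop(range(10))   # stand-in for a stream of sensor readings
```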
But more and more, I believe, given the massive amount of data at the edge, you have to start doing machine learning at the edge. And if and when you don't have enough power, then you aggregate multiple devices' compute power into a swarm and learn as a swarm. >> Interesting. So now, of course, if I were sitting as a fly on the wall in an HPE board meeting, I'd say, okay, HPE is a leading provider of compute, how do you take advantage of that? I mean, I know it's future, but you must be thinking about that and participating in those markets. I know today you have, you know, Edgeline and other products. But it seems to me that it's not the general purpose computing that we've known in the past, it's a new type of specialized computing. How are you thinking about participating in that opportunity for your customers? >> The world will have to have a balance, right? Where today the default, well, the more common mode, is to collect the data from the edge and train at some centralized location or a number of centralized locations. Going forward, given the proliferation of the edge devices, we'll need a balance. We need both. We need capability at the cloud side, and it has to be hybrid, and then we need capability on the edge side. We want to build systems that on one hand are edge-adapted, meaning environmentally adapted, because the edge is different, they're a lot of times on the outside. They need to be packaging-adapted and also power-adapted, because typically many of these devices are battery-powered. So you have to build systems that adapt to it, but at the same time they must not be custom. That's my belief. They must be using standard processors and standard operating systems so that they can run a rich set of applications. So yes, that's also the insight behind what Antonio announced in 2018: for the next four years from 2018, $4 billion invested to strengthen our edge portfolio, edge product lines, edge solutions. >> I tell you, Dr. Goh, I could go on for hours with you. You're just such a great guest. Let's close: what are you most excited about in the future of IT? Certainly HPE, but the industry in general. >> Yeah, I think the excitement is the customers, right? The diversity of customers, and the diversity in the way they have approached their different problems with data strategy. So the excitement is around data strategy. Just like, you know, the statement made was so profound: Antonio said we are in the age of insight powered by data. That's the first line. The line that comes after that is: as such, we are becoming more and more data-centric, with data the currency. Now the next step is even more profound. That is, you know, we are going as far as saying that data should not be treated as cost anymore, no, but instead as an investment in a new asset class called data, with value on our balance sheet. This is a step change in thinking that is going to change the way we look at data, the way we value it. So that's a statement, and this is the exciting thing, because for me, a CTO for AI, a machine is only as intelligent as the data you feed it with. Data is the source of the machine learning to be intelligent. So that's why, when people start to value data, and say that it is an investment when we collect it,
it is very positive for AI, because an AI system gets more intelligent when it has huge amounts of data and a diversity of data. So it would be great if the community values data. >> Well, you certainly see it in the valuations of many companies these days. And I think increasingly you see it on the income statement, you know, data products and people monetizing data services, and maybe eventually you'll see it in the balance sheet. You know, Doug Laney, when he was at Gartner Group, wrote a book about this and a lot of people are thinking about it. That's a big change, isn't it, Dr. Goh? >> Yeah, the question is the process and methods of valuation, right? But I believe we'll get there. We need to get started and then we'll get there, I believe. >> Dr. Goh, it's always a pleasure. >> Yes, and then the AI will benefit greatly from it. >> Oh yeah, no doubt. People will better understand how to align, you know, some of these technology investments. Dr. Goh, great to see you again. Thanks so much for coming back in the cube. It's been a real pleasure. >> Yes, a system is only as smart as the data you feed it with. >> Excellent. We'll leave it there. Thank you for spending some time with us, and keep it right there for more great interviews from HPE Discover '21. This is Dave Vellante for theCube, the leader in enterprise tech coverage. We'll be right back.

Published Date : Jun 17 2021


Dr Eng Lim Goh, Vice President, CTO, High Performance Computing & AI


 

(upbeat music) >> Welcome back to HPE Discover 2021, theCube's virtual coverage, continuous coverage of HPE's annual customer event. My name is Dave Vellante and we're going to dive into the intersection of high-performance computing, data and AI with Dr. Eng Lim Goh who's a Senior Vice President and CTO for AI at Hewlett Packard Enterprise. Dr. Goh, great to see you again. Welcome back to theCube. >> Hey, hello, Dave. Great to talk to you again. >> You might remember last year we talked a lot about swarm intelligence and how AI is evolving. Of course you hosted the Day 2 keynotes here at Discover. And you talked about thriving in the age of insights and how to craft a data-centric strategy and you addressed some of the biggest problems I think organizations face with data. And that's, you got to look, data is plentiful, but insights, they're harder to come by and you really dug into some great examples in retail, banking, and medicine and healthcare and media. But stepping back a little bit we'll zoom out on Discover '21, you know, what do you make of the events so far and some of your big takeaways? >> Hmm, well, you started with the insightful question. Data is everywhere then but we lack the insight. That's also part of the reason why that's a main reason why, Antonio on Day 1 focused and talked about that, the fact that we are in the now in the age of insight and how to thrive in this new age. What I then did on the Day 2 keynote following Antonio is to talk about the challenges that we need to overcome in order to thrive in this new age. >> So maybe we could talk a little bit about some of the things that you took away in terms of, I'm specifically interested in some of the barriers to achieving insights when you know customers are drowning in data. What do you hear from customers? What were your takeaway from some of the ones you talked about today? >> Very pertinent question, Dave. You know, the two challenges I spoke about how to, that we need to overcome in order to thrive in this new age, the first one is the current challenge. And that current challenge is, you know state of this, you know, barriers to insight, when we are awash with data. So that's a statement. How to overcome those barriers. One of the barriers to insight when we are awash in data, in the Day 2 keynote, I spoke about three main things, three main areas that receive from customers. The first one, the first barrier is with many of our customers, data is siloed. You know, like in a big corporation, you've got data siloed by sales, finance, engineering, manufacturing, and so on supply chain and so on. And there's a major effort ongoing in many corporations to build a Federation layer above all those silos so that when you build applications above they can be more intelligent. They can have access to all the different silos of data to get better intelligence and more intelligent applications built. So that was the first barrier we spoke about, you know, barriers to insight when we are awash with data. The second barrier is that we see amongst our customers is that data is raw and disperse when they are stored. And it's tough to get to value out of them. In that case I use the example of the May 6, 2010 event where the stock market dropped a trillion dollars in tens of minutes. We all know those who are financially attuned with, know about this incident. But that this is not the only incident. There are many of them out there. 
And for that particular May 6, event, you know it took a long time to get insight, months, yeah, before we, for months we had no insight as to what happened, why it happened. And there were many other incidences like this and the regulators were looking for that one rule that could mitigate many of these incidences. One of our customers decided to take the hard road to go with the tough data. Because data is raw and dispersed. So they went into all the different feeds of financial transaction information, took the tough, you know, took a tough road and analyze that data took a long time to assemble. And he discovered that there was quote stuffing. That people were sending a lot of trades in and then canceling them almost immediately. You have to manipulate the market. And why didn't we see it immediately? Well, the reason is the process reports that everybody sees had the rule in there that says all trades less than 100 shares don't need to report in there. And so what people did was sending a lot of less than 100 shares trades to fly under the radar to do this manipulation. So here is, here the second barrier. Data could be raw and disperse. Sometimes it's just have to take the hard road and to get insight. And this is one great example. And then the last barrier has to do with sometimes when you start a project to get insight, to get answers and insight, you realize that all the data's around you, but you don't seem to find the right ones to get what you need. You don't seem to get the right ones, yeah. Here we have three quick examples of customers. One was a great example where they were trying to build a language translator a machine language translator between two languages. But in order to do that they need to get hundreds of millions of word pairs of one language compare with the corresponding other hundreds of millions of them. They say, "Where I'm going to get all these word pairs?" Someone creative thought of a willing source and huge source, it was a United Nations. You see, so sometimes you think you don't have the right data with you, but there might be another source and a willing one that could give you that data. The second one has to do with, there was the, sometimes you may just have to generate that data. Interesting one. We had an autonomous car customer that collects all these data from their cars. Massive amounts of data, lots of sensors, collect lots of data. And, you know, but sometimes they don't have the data they need even after collection. For example, they may have collected the data with a car in fine weather and collected the car driving on this highway in rain and also in snow. But never had the opportunity to collect the car in hail because that's a rare occurrence. So instead of waiting for a time where the car can drive in hail, they build a simulation by having the car collected in snow and simulated hail. So these are some of the examples where we have customers working to overcome barriers. You have barriers that is associated with the fact, that data silo, if federated barriers associated with data that's tough to get at. They just took the hard road. And sometimes thirdly, you just have to be creative to get the right data you need. >> Wow, I tell you, I have about 100 questions based on what you just said. And as a great example, the flash crash in fact Michael Lewis wrote about this in his book, the "Flash Boys" and essentially. 
It was high frequency traders trying to front run the market and sending in small block trades trying to get sort of front ended. So that's, and they chalked it up to a glitch. Like you said, for months, nobody really knew what it was. So technology got us into this problem. Can I guess my question is can technology help us get get out of the problem? And that maybe is where AI fits in. >> Yes. Yes. In fact, a lot of analytics work went in to go back to the raw data that is highly dispersed from different sources, assemble them to see if you can find a material trend. You can see lots of trends. Like, no, we, if humans at things we tend to see patterns in clouds. So sometimes you need to apply statistical analysis, math to be sure that what the model is seeing is real. And that required work. That's one area. The second area is, you know, when this, there are times when you just need to go through that tough approach to find the answer. Now, the issue comes to mind now is that humans put in the rules to decide what goes into a report that everybody sees. And in this case before the change in the rules. By the way, after the discovery, the authorities changed the rules and all shares all trades of different, any sizes it has to be reported. Not, yeah. But the rule was applied to to say earlier that shares under 100, trades under 100 shares need not be reported. So sometimes you just have to understand that reports were decided by humans and for understandable reasons. I mean, they probably didn't, wanted for various reasons not to put everything in there so that people could still read it in a reasonable amount of time. But we need to understand that rules were being put in by humans for the reports we read. And as such there are times we just need to go back to the raw data. >> I want to ask you-- Or be it that it's going to be tough there. >> Yeah, so I want to ask you a question about AI as obviously it's in your title and it's something you know a lot about and I'm going to make a statement. You tell me if it's on point or off point. Seems that most of the AI going on in the enterprise is modeling data science applied to troves of data. But there's also a lot of AI going on in consumer, whether it's fingerprint technology or facial recognition or natural language processing. Will, to two-part question, will the consumer market, let's say as it has so often in the enterprise sort of inform us is sort of first part. And then will there be a shift from sort of modeling, if you will, to more, you mentioned autonomous vehicles more AI inferencing in real-time, especially with the Edge. I think you can help us understand that better. >> Yeah, this is a great question. There are three stages to just simplify, I mean, you know, it's probably more sophisticated than that, but let's just simplify there're three stages to building an AI system that ultimately can predict, make a prediction. Or to assist you in decision-making, have an outcome. So you start with the data, massive amounts of data that you have to decide what to feed the machine with. So you feed the machine with this massive chunk of data. And the machine starts to evolve a model based on all the data is seeing it starts to evolve. To a point that using a test set of data that you have separately kept a site that you know the answer for. Then you test the model, you know after you're trained it with all that data to see whether his prediction accuracy is high enough. 
And once you are satisfied with it, you then deploy the model to make the decision and that's the inference. So a lot of times depending on what we are focusing on. We in data science are we working hard on assembling the right data to feed the machine with? That's the data preparation organization work. And then after which you build your models you have to pick the right models for the decisions and prediction you wanted to make. You pick the right models and then you start feeding the data with it. Sometimes you pick one model and a prediction isn't that a robust, it is good, but then it is not consistent. Now what you do is you try another model. So sometimes you just keep trying different models until you get the right kind, yeah, that gives you a good robust decision-making and prediction. Now, after which, if it's tested well, Q8 you will then take that model and deploy it at the Edge, yeah. And then at the Edge is essentially just looking at new data applying it to the model that you have trained and then that model will give you a prediction or a decision. So it is these three stages, yeah. But more and more, your question reminds me that more and more people are thinking as the Edge become more and more powerful, can you also do learning at the Edge? That's the reason why we spoke about swarm learning the last time, learning at the Edge as a swarm. Because maybe individually they may not have enough power to do so, but as a swarm, they may. >> Is that learning from the Edge or learning at the Edge. In other words, is it-- >> Yes. >> Yeah, you don't understand my question, yeah. >> That's a great question. That's a great question. So answer is learning at the Edge, and also from the Edge, but the main goal, the goal is to learn at the Edge so that you don't have to move the data that Edge sees first back to the Cloud or the call to do the learning. Because that would be the reason, one of the main reasons why you want to learn at the Edge. So that you don't need to have to send all that data back and assemble it back from all the different Edge devices assemble it back to the Cloud side to do the learning. With swarm learning, you can learn it and keep the data at the Edge and learn at that point, yeah. >> And then maybe only selectively send the autonomous vehicle example you gave is great 'cause maybe they're, you know, there may be only persisting. They're not persisting data that is an inclement weather, or when a deer runs across the front and then maybe they do that and then they send that smaller data set back and maybe that's where it's modeling done but the rest can be done at the Edge. It's a new world that's coming to, let me ask you a question. Is there a limit to what data should be collected and how it should be collected? >> That's a great question again, yeah, well, today full of these insightful questions that actually touches on the second challenge. How do we, to in order to thrive in this new age of insight. The second challenge is our future challenge. What do we do for our future? And in there is the statement we make is we have to focus on collecting data strategically for the future of our enterprise. And within that, I talk about what to collect, and when to organize it when you collect, and then where will your data be going forward that you are collecting from? So what, when, and where. For the what data, for what data to collect that was the question you asked. It's a question that different industries have to ask themselves because it will vary. 
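On the swarm-learning point above, the essence of learning at the edge as a swarm is that each device trains on its own local data and only model parameters, never the raw data, are exchanged and averaged. The sketch below is a generic federated-averaging toy in plain NumPy; it is not HPE's Swarm Learning implementation, only an illustration of the idea, and the data and model are synthetic.

```python
# Toy federated averaging: each edge node fits a linear model on local data,
# then only the model weights (never the raw data) are averaged into a shared model.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])

def local_train(w_global, X, y, lr=0.05, epochs=20):
    """A few local gradient steps on this node's private data."""
    w = w_global.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three edge nodes, each with its own private dataset that never leaves the device.
nodes = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    nodes.append((X, y))

w_global = np.zeros(3)
for round_num in range(10):
    # Each node trains locally, starting from the current shared weights.
    local_weights = [local_train(w_global, X, y) for X, y in nodes]
    # Only the weights travel; the swarm averages them into the next shared model.
    w_global = np.mean(local_weights, axis=0)

print("learned weights:", np.round(w_global, 2), "true weights:", true_w)
```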
Let me give you the, you use the autonomous car example. Let me use that and you have this customer collecting massive amounts of data. You know, we talking about 10 petabytes a day from a fleet of their cars and these are not production autonomous cars. These are training autonomous cars, collecting data so they can train and eventually deploy a commercial cars. Also these data collection cars, they collect 10 as a fleet of them collect 10 petabytes a day. And then when it came to us, building a storage system to store all of that data they realize they don't want to afford to store all of it. Now here comes the dilemma. What should I, after I spent so much effort building all this cars and sensors and collecting data, I've now decide what to delete. That's a dilemma. Now in working with them on this process of trimming down what they collected. I'm constantly reminded of the 60s and 70s. To remind myself 60s and 70s, we call a large part of our DNA, junk DNA. Today we realized that a large part of that, what we call junk has function has valuable function. They are not genes but they regulate the function of genes. So what's junk in yesterday could be valuable today, or what's junk today could be valuable tomorrow. So there's this tension going on between you deciding not wanting to afford to store everything that you can get your hands on. But on the other hand, you know you worry, you ignore the wrong ones. You can see this tension in our customers. And then it depends on industry here. In healthcare they say, I have no choice. I want it all, why? One very insightful point brought up by one healthcare provider that really touched me was you know, we are not, we don't only care. Of course we care a lot. We care a lot about the people we are caring for. But we also care for the people we are not caring for. How do we find them? And therefore, they did not just need to collect data that they have with, from their patients they also need to reach out to outside data so that they can figure out who they are not caring for. So they want it all. So I asked them, "So what do you do with funding if you want it all?" They say they have no choice but they'll figure out a way to fund it and perhaps monetization of what they have now is the way to come around and fund that. Of course, they also come back to us, rightfully that you know, we have to then work out a way to to help them build a system. So that healthcare. And if you go to other industries like banking, they say they can afford to keep them all. But they are regulated same like healthcare. They are regulated as to privacy and such like. So many examples, different industries having different needs but different approaches to how, what they collect. But there is this constant tension between you perhaps deciding not wanting to fund all of that, all that you can store. But on the other hand you know, if you kind of don't want to afford it and decide not to store some, maybe those some become highly valuable in the future. You worry. >> Well, we can make some assumptions about the future, can't we? I mean we know there's going to be a lot more data than we've ever seen before, we know that. We know, well not withstanding supply constraints and things like NAND. We know the price of storage is going to continue to decline. We also know and not a lot of people are really talking about this but the processing power, everybody says, Moore's Law is dead. 
Okay, it's waning but the processing power when you combine the CPUs and NPUs, and GPUs and accelerators and so forth, actually is increasing. And so when you think about these use cases at the Edge you're going to have much more processing power. You're going to have cheaper storage and it's going to be less expensive processing. And so as an AI practitioner, what can you do with that? >> Yeah, it's a highly, again another insightful question that we touched on, on our keynote and that goes up to the why, I'll do the where. Where will your data be? We have one estimate that says that by next year, there will be 55 billion connected devices out there. 55 billion. What's the population of the world? Well, off the order of 10 billion, but this thing is 55 billion. And many of them, most of them can collect data. So what do you do? So the amount of data that's going to come in is going to way exceed our drop in storage costs our increasing compute power. So what's the answer? The answer must be knowing that we don't and even a drop in price and increase in bandwidth, it will overwhelm the 5G, it'll will overwhelm 5G, given the amount of 55 billion of them collecting. So the answer must be that there needs to be a balance between you needing to bring all that data from the 55 billion devices of the data back out to a central, as a bunch of central cost because you may not be able to afford to do that. Firstly bandwidth, even with 5G and as the, when you still be too expensive given the number of devices out there. You know given storage costs dropping it'll still be too expensive to try and install them all. So the answer must be to start at least to mitigate the problem to some leave most a lot of the data out there. And only send back the pertinent ones, as you said before. But then if you did that then, how are we going to do machine learning at the core and the Cloud side, if you don't have all the data you want rich data to train with. Sometimes you want to a mix of the positive type data, and the negative type data. So you can train the machine in a more balanced way. So the answer must be you eventually, as we move forward with these huge number of devices are at the Edge to do machine learning at the Edge. Today we don't even have power. The Edge typically is characterized by a lower energy capability and therefore, lower compute power. But soon, you know, even with low energy, they can do more with compute power, improving in energy efficiency. So learning at the Edge today we do inference at the Edge. So we data, model, deploy and you do inference at age. That's what we do today. But more and more, I believe given a massive amount of data at the Edge you have to have to start doing machine learning at the Edge. And if when you don't have enough power then you aggregate multiple devices' compute power into a swarm and learn as a swarm. >> Oh, interesting, so now of course, if I were sitting in a flyer flying the wall on HPE Board meeting I said, "Okay, HPE is a leading provider of compute." How do you take advantage that? I mean, we're going, I know it's future but you must be thinking about that and participating in those markets. I know today you are, you have, you know, Edge line and other products, but there's, it seems to me that it's not the general purpose that we've known in the past. It's a new type of specialized computing. How are you thinking about participating in that opportunity for your customers? >> The wall will have to have a balance. 
Where today the default, well, the more common mode is to collect the data from the Edge and train at some centralized location or number of centralized location. Going forward, given the proliferation of the Edge devices, we'll need a balance, we need both. We need capability at the Cloud side. And it has to be hybrid. And then we need capability on the Edge side. Yeah that we need to build systems that on one hand is Edge-adapted. Meaning they environmentally-adapted because the Edge differently are on it. A lot of times on the outside, they need to be packaging-adapted and also power-adapted. Because typically many of these devices are battery-powered. So you have to build systems that adapts to it. But at the same time, they must not be custom. That's my belief. They must be using standard processes and standard operating system so that they can run a rich set of applications. So yes, that's also the insightful for that. Antonio announced in 2018 for the next four years from 2018, $4 billion invested to strengthen our Edge portfolio our Edge product lines, Edge solutions. >> Dr. Goh, I could go on for hours with you. You're just such a great guest. Let's close. What are you most excited about in the future of certainly HPE, but the industry in general? >> Yeah, I think the excitement is the customers. The diversity of customers and the diversity in the way they have approached their different problems with data strategy. So the excitement is around data strategy. Just like, you know, the statement made for us was so, was profound. And Antonio said we are in the age of insight powered by data. That's the first line. The line that comes after that is as such we are becoming more and more data-centric with data the currency. Now the next step is even more profound. That is, you know, we are going as far as saying that data should not be treated as cost anymore, no. But instead, as an investment in a new asset class called data with value on our balance sheet. This is a step change in thinking that is going to change the way we look at data, the way we value it. So that's a statement. So this is the exciting thing, because for me a CTO of AI, a machine is only as intelligent as the data you feed it with. Data is a source of the machine learning to be intelligent. So that's why when the people start to value data and say that it is an investment when we collect it it is very positive for AI because an AI system gets intelligent, get more intelligence because it has huge amounts of data and a diversity of data. So it'd be great if the community values data. >> Well, are you certainly see it in the valuations of many companies these days? And I think increasingly you see it on the income statement, you know data products and people monetizing data services, and yeah, maybe eventually you'll see it in the balance sheet, I know. Doug Laney when he was at Gartner Group wrote a book about this and a lot of people are thinking about it. That's a big change, isn't it? Dr. Goh. >> Yeah, yeah, yeah. Your question is the process and methods in valuation. But I believe we'll get there. We need to get started and then we'll get there, I believe, yeah. >> Dr. Goh it's always my pleasure. >> And then the AI will benefit greatly from it. >> Oh yeah, no doubt. People will better understand how to align some of these technology investments. Dr. Goh, great to see you again. Thanks so much for coming back in theCube. It's been a real pleasure. >> Yes, a system is only as smart as the data you feed it with. 
(both chuckling) >> Well, excellent, we'll leave it there. Thank you for spending some time with us so keep it right there for more great interviews from HPE Discover '21. This is Dave Vellante for theCube, the leader in enterprise tech coverage. We'll be right back (upbeat music)

Published Date : Jun 10 2021

SUMMARY :

Dr. Goh, great to see you again. Great to talk to you again. and you addressed some and how to thrive in this new age. of the ones you talked about today? One of the barriers to insight And as a great example, the flash crash is that humans put in the rules to decide that it's going to be tough there. and it's something you know a lot about And the machine starts to evolve a model Is that learning from the Yeah, you don't So that you don't need to have but the rest can be done at the Edge. But on the other hand you know, And so when you think about and the Cloud side, if you I know today you are, you So you have to build about in the future as the data you feed it with. And I think increasingly you Your question is the process And then the AI will Dr. Goh, great to see you again. as the data you feed it with. Thank you for spending some time with us

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Michael Lewis | PERSON | 0.99+
Doug Laney | PERSON | 0.99+
Dave | PERSON | 0.99+
Antonio | PERSON | 0.99+
2018 | DATE | 0.99+
10 billion | QUANTITY | 0.99+
$4 billion | QUANTITY | 0.99+
second challenge | QUANTITY | 0.99+
55 billion | QUANTITY | 0.99+
two languages | QUANTITY | 0.99+
two challenges | QUANTITY | 0.99+
May 6 | DATE | 0.99+
Flash Boys | TITLE | 0.99+
two-part | QUANTITY | 0.99+
55 billion | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
Gartner Group | ORGANIZATION | 0.99+
second area | QUANTITY | 0.99+
Today | DATE | 0.99+
last year | DATE | 0.99+
less than 100 shares | QUANTITY | 0.99+
hundreds of millions | QUANTITY | 0.99+
first line | QUANTITY | 0.99+
One | QUANTITY | 0.99+
HPE | ORGANIZATION | 0.99+
today | DATE | 0.99+
second barrier | QUANTITY | 0.99+
May 6, 2010 | DATE | 0.99+
10 | QUANTITY | 0.99+
first barrier | QUANTITY | 0.99+
both | QUANTITY | 0.99+
less than 100 share | QUANTITY | 0.99+
Dr. | PERSON | 0.99+
one model | QUANTITY | 0.99+
tens of minutes | QUANTITY | 0.98+
one area | QUANTITY | 0.98+
one language | QUANTITY | 0.98+
Edge | ORGANIZATION | 0.98+
three stages | QUANTITY | 0.98+
yesterday | DATE | 0.98+
first part | QUANTITY | 0.98+
one rule | QUANTITY | 0.98+
Goh | PERSON | 0.98+
Firstly | QUANTITY | 0.98+
first one | QUANTITY | 0.97+
United Nations | ORGANIZATION | 0.97+
first | QUANTITY | 0.97+
one | QUANTITY | 0.97+
first barrier | QUANTITY | 0.97+
Hewlett Packard Enterprise | ORGANIZATION | 0.96+
about 100 questions | QUANTITY | 0.96+
10 petabytes a day | QUANTITY | 0.95+
Day 2 | QUANTITY | 0.94+
Eng Lim Goh | PERSON | 0.94+
Day 1 | QUANTITY | 0.93+
under 100 | QUANTITY | 0.92+
Dr | PERSON | 0.92+
one estimate | QUANTITY | 0.91+

Dr Eng Lim Goh, Vice President, CTO, High Performance Computing & AI


 

(upbeat music) >> Welcome back to HPE Discover 2021, theCUBE's virtual coverage, continuous coverage of HPE's Annual Customer Event. My name is Dave Vellante, and we're going to dive into the intersection of high-performance computing, data and AI with Doctor Eng Lim Goh, who's a Senior Vice President and CTO for AI at Hewlett Packard Enterprise. Doctor Goh, great to see you again. Welcome back to theCUBE. >> Hello, Dave, great to talk to you again. >> You might remember last year we talked a lot about Swarm intelligence and how AI is evolving. Of course, you hosted the Day 2 Keynotes here at Discover. And you talked about thriving in the age of insights, and how to craft a data-centric strategy. And you addressed some of the biggest problems, I think organizations face with data. That's, you've got a, data is plentiful, but insights, they're harder to come by. >> Yeah. >> And you really dug into some great examples in retail, banking, in medicine, healthcare and media. But stepping back a little bit we zoomed out on Discover '21. What do you make of the events so far and some of your big takeaways? >> Hmm, well, we started with the insightful question, right, yeah? Data is everywhere then, but we lack the insight. That's also part of the reason why, that's a main reason why Antonio on day one focused and talked about the fact that we are in the now in the age of insight, right? And how to try thrive in that age, in this new age? What I then did on a Day 2 Keynote following Antonio is to talk about the challenges that we need to overcome in order to thrive in this new age. >> So, maybe we could talk a little bit about some of the things that you took away in terms of, I'm specifically interested in some of the barriers to achieving insights. You know customers are drowning in data. What do you hear from customers? What were your takeaway from some of the ones you talked about today? >> Oh, very pertinent question, Dave. You know the two challenges I spoke about, that we need to overcome in order to thrive in this new age. The first one is the current challenge. And that current challenge is, you know, stated is now barriers to insight, when we are awash with data. So that's a statement on how do you overcome those barriers? What are the barriers to insight when we are awash in data? In the Day 2 Keynote, I spoke about three main things. Three main areas that we receive from customers. The first one, the first barrier is in many, with many of our customers, data is siloed, all right. You know, like in a big corporation, you've got data siloed by sales, finance, engineering, manufacturing and so on supply chain and so on. And there's a major effort ongoing in many corporations to build a federation layer above all those silos so that when you build applications above, they can be more intelligent. They can have access to all the different silos of data to get better intelligence and more intelligent applications built. So that was the first barrier we spoke about, you know? Barriers to insight when we are awash with data. The second barrier is that we see amongst our customers is that data is raw and disperse when they are stored. And you know, it's tough to get at, to tough to get a value out of them, right? And in that case, I use the example of, you know, the May 6, 2010 event where the stock market dropped a trillion dollars in terms of minutes. We all know those who are financially attuned with know about this incident but that this is not the only incident. 
There are many of them out there. And for that particular May 6 event, you know, it took a long time to get insight. Months, yeah. For months we had no insight as to what happened, or why it happened. Right, and there were many other incidents like this, and the regulators were looking for that one rule that could mitigate many of these incidents. One of our customers decided to take the hard road and go after the tough data, right? Because data is raw and dispersed. So they went into all the different feeds of financial transaction information, took the tough road, and analyzing that data took a long time to assemble. And they discovered that there was quote stuffing, right? That people were sending a lot of trades in and then canceling them almost immediately, to manipulate the market. And why didn't we see it immediately? Well, the reason is that in the processed reports that everybody sees, there's a rule that says all trades of less than a hundred shares don't need to be reported. And so what people did was send a lot of less-than-a-hundred-share trades to fly under the radar and do this manipulation. So here is the second barrier, right? Data can be raw and dispersed. Sometimes you just have to take the hard road to get insight. And this is one great example. And then the last barrier has to do with, sometimes when you start a project to get answers and insight, you realize that all the data is around you, but you don't seem to find the right data to get what you need. You don't seem to get the right ones, yeah? Here we have three quick examples of customers. One was a great example, right? Where they were trying to build a language translator, a machine language translator, between two languages, right? Now to do that, they need to get hundreds of millions of word pairs, you know, of one language compared with the corresponding other. Hundreds of millions of them. They said, well, how am I going to get all these word pairs? Someone creative thought of a willing source, and a huge one: it was the United Nations. You see? So sometimes you think you don't have the right data with you, but there might be another source, and a willing one, that could give you that data, right? The second one has to do with, sometimes you may just have to generate that data. Interesting one: we had an autonomous car customer that collects all this data from their cars, right? Massive amounts of data, lots of sensors collecting lots of data. But sometimes they don't have the data they need even after collection. For example, they may have collected the data with a car in fine weather, and collected the car driving on the highway in rain and also in snow, but never had the opportunity to collect the car in hail, because that's a rare occurrence. So instead of waiting for a time when the car can drive in hail, they built a simulation by taking the data collected in snow and simulating hail. So these are some of the examples where we have customers working to overcome barriers, right? You have barriers associated with data silos: they federated them. Barriers associated with data that's tough to get at: they just took the hard road, right? And sometimes, thirdly, you just have to be creative to get the right data you need. >> Wow! I tell you, I have about a hundred questions based on what you just said, you know? (Dave chuckles) And as a great example, the Flash Crash.
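A rough, hypothetical sketch of the kind of analysis described above: scan the raw order feed for sub-100-share orders that are cancelled almost immediately, and count them per symbol. The column names, event values, and thresholds here are assumptions for illustration, not the customer's actual pipeline.

```python
import pandas as pd

def flag_quote_stuffing(events: pd.DataFrame,
                        max_lifetime_ms: float = 50.0,
                        min_fast_cancels: int = 100) -> pd.Series:
    """Count sub-100-share orders per symbol that were cancelled within max_lifetime_ms."""
    small = events[events["shares"] < 100]                      # trades under the reporting threshold
    per_order = small.pivot_table(index=["symbol", "order_id"],
                                  columns="event",              # assumed values: "new", "cancel"
                                  values="timestamp",
                                  aggfunc="first")
    lifetime_ms = (per_order["cancel"] - per_order["new"]).dt.total_seconds() * 1000
    fast = lifetime_ms[lifetime_ms < max_lifetime_ms]           # cancelled almost immediately
    counts = fast.groupby("symbol").size()                      # group by the "symbol" index level
    return counts[counts >= min_fast_cancels].sort_values(ascending=False)

# Hypothetical usage, given a DataFrame of raw order events:
# suspects = flag_quote_stuffing(order_events)
# print(suspects.head())
```

As Dr. Goh notes, the hard part was not this final query but assembling the raw, dispersed feeds in the first place.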
In fact, Michael Lewis, wrote about this in his book, the Flash Boys. And essentially, right, it was high frequency traders trying to front run the market and sending into small block trades (Dave chuckles) trying to get sort of front ended. So that's, and they chalked it up to a glitch. Like you said, for months, nobody really knew what it was. So technology got us into this problem. (Dave chuckles) I guess my question is can technology help us get out of the problem? And that maybe is where AI fits in? >> Yes, yes. In fact, a lot of analytics work went in to go back to the raw data that is highly dispersed from different sources, right? Assembled them to see if you can find a material trend, right? You can see lots of trends, right? Like, no, we, if humans look at things that we tend to see patterns in Clouds, right? So sometimes you need to apply statistical analysis math to be sure that what the model is seeing is real, right? And that required, well, that's one area. The second area is you know, when this, there are times when you just need to go through that tough approach to find the answer. Now, the issue comes to mind now is that humans put in the rules to decide what goes into a report that everybody sees. Now, in this case, before the change in the rules, right? But by the way, after the discovery, the authorities changed the rules and all shares, all trades of different any sizes it has to be reported. >> Right. >> Right, yeah? But the rule was applied, you know, I say earlier that shares under a hundred, trades under a hundred shares need not be reported. So, sometimes you just have to understand that reports were decided by humans and for understandable reasons. I mean, they probably didn't wanted a various reasons not to put everything in there. So that people could still read it in a reasonable amount of time. But we need to understand that rules were being put in by humans for the reports we read. And as such, there are times we just need to go back to the raw data. >> I want to ask you... >> Oh, it could be, that it's going to be tough, yeah. >> Yeah, I want to ask you a question about AI as obviously it's in your title and it's something you know a lot about but. And I'm going to make a statement, you tell me if it's on point or off point. So seems that most of the AI going on in the enterprise is modeling data science applied to, you know, troves of data. But there's also a lot of AI going on in consumer. Whether it's, you know, fingerprint technology or facial recognition or natural language processing. Well, two part question will the consumer market, as it has so often in the enterprise sort of inform us is sort of first part. And then, there'll be a shift from sort of modeling if you will to more, you mentioned the autonomous vehicles, more AI inferencing in real time, especially with the Edge. Could you help us understand that better? >> Yeah, this is a great question, right? There are three stages to just simplify. I mean, you know, it's probably more sophisticated than that. But let's just simplify that three stages, right? To building an AI system that ultimately can predict, make a prediction, right? Or to assist you in decision-making. I have an outcome. So you start with the data, massive amounts of data that you have to decide what to feed the machine with. So you feed the machine with this massive chunk of data, and the machine starts to evolve a model based on all the data it's seeing. It starts to evolve, right? 
To a point that using a test set of data that you have separately kept aside that you know the answer for. Then you test the model, you know? After you've trained it with all that data to see whether its prediction accuracy is high enough. And once you are satisfied with it, you then deploy the model to make the decision. And that's the inference, right? So a lot of times, depending on what we are focusing on, we in data science are, are we working hard on assembling the right data to feed the machine with? That's the data preparation organization work. And then after which you build your models you have to pick the right models for the decisions and prediction you need to make. You pick the right models. And then you start feeding the data with it. Sometimes you pick one model and a prediction isn't that robust. It is good, but then it is not consistent, right? Now what you do is you try another model. So sometimes it gets keep trying different models until you get the right kind, yeah? That gives you a good robust decision-making and prediction. Now, after which, if it's tested well, QA, you will then take that model and deploy it at the Edge. Yeah, and then at the Edge is essentially just looking at new data, applying it to the model that you have trained. And then that model will give you a prediction or a decision, right? So it is these three stages, yeah. But more and more, your question reminds me that more and more people are thinking as the Edge become more and more powerful. Can you also do learning at the Edge? >> Right. >> That's the reason why we spoke about Swarm Learning the last time. Learning at the Edge as a Swarm, right? Because maybe individually, they may not have enough power to do so. But as a Swarm, they may. >> Is that learning from the Edge or learning at the Edge? In other words, is that... >> Yes. >> Yeah. You do understand my question. >> Yes. >> Yeah. (Dave chuckles) >> That's a great question. That's a great question, right? So the quick answer is learning at the Edge, right? And also from the Edge, but the main goal, right? The goal is to learn at the Edge so that you don't have to move the data that Edge sees first back to the Cloud or the Call to do the learning. Because that would be the reason, one of the main reasons why you want to learn at the Edge. Right? So that you don't need to have to send all that data back and assemble it back from all the different Edge devices. Assemble it back to the Cloud Site to do the learning, right? Some on you can learn it and keep the data at the Edge and learn at that point, yeah. >> And then maybe only selectively send. >> Yeah. >> The autonomous vehicle, example you gave is great. 'Cause maybe they're, you know, there may be only persisting. They're not persisting data that is an inclement weather, or when a deer runs across the front. And then maybe they do that and then they send that smaller data setback and maybe that's where it's modeling done but the rest can be done at the Edge. It's a new world that's coming through. Let me ask you a question. Is there a limit to what data should be collected and how it should be collected? >> That's a great question again, yeah. Well, today full of these insightful questions. (Dr. Eng chuckles) That actually touches on the the second challenge, right? How do we, in order to thrive in this new age of insight? The second challenge is our future challenge, right? What do we do for our future? 
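A minimal sketch of the three stages Dr. Goh walked through above, train on prepared data, validate against a test set kept aside, then deploy the frozen model for inference at the edge, using generic scikit-learn purely for illustration; the synthetic dataset and the accuracy threshold are assumptions, not HPE's workflow.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
import joblib

# Stage 1: feed the machine with prepared data and let it evolve a model.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)  # stand-in data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

# Stage 2: test with the data kept aside; only deploy if prediction accuracy is high enough.
if accuracy_score(y_test, model.predict(X_test)) >= 0.95:       # threshold is an assumption
    joblib.dump(model, "model.joblib")                          # frozen artifact shipped to the edge

# Stage 3: at the edge, load the frozen model and run inference on newly arriving data.
edge_model = joblib.load("model.joblib")
predictions = edge_model.predict(X_test[:10])                   # stands in for new edge data
```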
And in there is the statement we make is we have to focus on collecting data strategically for the future of our enterprise. And within that, I talked about what to collect, right? When to organize it when you collect? And then where will your data be going forward that you are collecting from? So what, when, and where? For what data to collect? That was the question you asked, it's a question that different industries have to ask themselves because it will vary, right? Let me give you the, you use the autonomous car example. Let me use that. And we do have this customer collecting massive amounts of data. You know, we're talking about 10 petabytes a day from a fleet of their cars. And these are not production autonomous cars, right? These are training autonomous cars, collecting data so they can train and eventually deploy commercial cars, right? Also this data collection cars, they collect 10, as a fleet of them collect 10 petabytes a day. And then when they came to us, building a storage system you know, to store all of that data, they realized they don't want to afford to store all of it. Now here comes the dilemma, right? What should I, after I spent so much effort building all this cars and sensors and collecting data, I've now decide what to delete. That's a dilemma, right? Now in working with them on this process of trimming down what they collected, you know, I'm constantly reminded of the 60s and 70s, right? To remind myself 60s and 70s, we called a large part of our DNA, junk DNA. >> Yeah. (Dave chuckles) >> Ah! Today, we realized that a large part of that what we call junk has function as valuable function. They are not genes but they regulate the function of genes. You know? So what's junk in yesterday could be valuable today. Or what's junk today could be valuable tomorrow, right? So, there's this tension going on, right? Between you deciding not wanting to afford to store everything that you can get your hands on. But on the other hand, you worry, you ignore the wrong ones, right? You can see this tension in our customers, right? And then it depends on industry here, right? In healthcare they say, I have no choice. I want it all, right? Oh, one very insightful point brought up by one healthcare provider that really touched me was you know, we don't only care. Of course we care a lot. We care a lot about the people we are caring for, right? But who also care for the people we are not caring for? How do we find them? >> Uh-huh. >> Right, and that definitely, they did not just need to collect data that they have with from their patients. They also need to reach out, right? To outside data so that they can figure out who they are not caring for, right? So they want it all. So I asked them, so what do you do with funding if you want it all? They say they have no choice but to figure out a way to fund it and perhaps monetization of what they have now is the way to come around and fund that. Of course, they also come back to us rightfully, that you know we have to then work out a way to help them build a system, you know? So that's healthcare, right? And if you go to other industries like banking, they say they can afford to keep them all. >> Yeah. >> But they are regulated, seemed like healthcare, they are regulated as to privacy and such like. So many examples different industries having different needs but different approaches to what they collect. But there is this constant tension between you perhaps deciding not wanting to fund all of that, all that you can install, right? 
But on the other hand, you know if you kind of don't want to afford it and decide not to start some. Maybe those some become highly valuable in the future, right? (Dr. Eng chuckles) You worry. >> Well, we can make some assumptions about the future. Can't we? I mean, we know there's going to be a lot more data than we've ever seen before. We know that. We know, well, not withstanding supply constraints and things like NAND. We know the prices of storage is going to continue to decline. We also know and not a lot of people are really talking about this, but the processing power, but the says, Moore's law is dead. Okay, it's waning, but the processing power when you combine the CPUs and NPUs, and GPUs and accelerators and so forth actually is increasing. And so when you think about these use cases at the Edge you're going to have much more processing power. You're going to have cheaper storage and it's going to be less expensive processing. And so as an AI practitioner, what can you do with that? >> Yeah, it's a highly, again, another insightful question that we touched on our Keynote. And that goes up to the why, uh, to the where? Where will your data be? Right? We have one estimate that says that by next year there will be 55 billion connected devices out there, right? 55 billion, right? What's the population of the world? Well, of the other 10 billion? But this thing is 55 billion. (Dave chuckles) Right? And many of them, most of them can collect data. So what do you do? Right? So the amount of data that's going to come in, it's going to way exceed, right? Drop in storage costs are increasing compute power. >> Right. >> Right. So what's the answer, right? So the answer must be knowing that we don't, and even a drop in price and increase in bandwidth, it will overwhelm the, 5G, it will overwhelm 5G, right? Given the amount of 55 billion of them collecting. So the answer must be that there needs to be a balance between you needing to bring all of that data from the 55 billion devices of the data back to a central, as a bunch of central cost. Because you may not be able to afford to do that. Firstly bandwidth, even with 5G and as the, when you'll still be too expensive given the number of devices out there. You know given storage costs dropping is still be too expensive to try and install them all. So the answer must be to start, at least to mitigate from to, some leave most a lot of the data out there, right? And only send back the pertinent ones, as you said before. But then if you did that then how are we going to do machine learning at the Core and the Cloud Site, if you don't have all the data? You want rich data to train with, right? Sometimes you want to mix up the positive type data and the negative type data. So you can train the machine in a more balanced way. So the answer must be eventually, right? As we move forward with these huge number of devices all at the Edge to do machine learning at the Edge. Today we don't even have power, right? The Edge typically is characterized by a lower energy capability and therefore lower compute power. But soon, you know? Even with low energy, they can do more with compute power improving in energy efficiency, right? So learning at the Edge, today we do inference at the Edge. So we data, model, deploy and you do inference there is. That's what we do today. But more and more, I believe given a massive amount of data at the Edge, you have to start doing machine learning at the Edge. 
And when you don't have enough power, then you aggregate multiple devices' compute power into a Swarm and learn as a Swarm, yeah. >> Oh, interesting. So now of course, if I were a fly on the wall at the HPE board meeting, I'd say, okay, HPE is a leading provider of compute. How do you take advantage of that? I mean, I know it's in the future, but you must be thinking about that and participating in those markets. I know today you have, you know, Edgeline and other products. But it seems to me that it's not the general-purpose computing we've known in the past. It's a new type of specialized computing. How are you thinking about participating in that opportunity for the customers? >> Hmm, the world will have to have a balance, right? Where today the default, well, the more common mode, is to collect the data from the Edge and train at some centralized location, or a number of centralized locations. Going forward, given the proliferation of Edge devices, we'll need a balance, we need both. We need capability on the Cloud side, right? And it has to be hybrid. And then we need capability on the Edge side, so we need to build systems that, on one hand, are Edge-adapted, right? Meaning environmentally adapted, because the Edge environment is different, a lot of times on the outside. They need to be packaging-adapted and also power-adapted, right? Because typically many of these devices are battery-powered, right? So you have to build systems that adapt to that. But at the same time, they must not be custom. That's my belief. They must use standard processors and a standard operating system so that they can run a rich set of applications. So yes, that's also the insight behind what Antonio announced in 2018: for the next four years from 2018, right? $4 billion invested to strengthen our Edge portfolio. >> Uh-huh. >> Edge product lines. >> Right. >> Uh-huh, Edge solutions. >> I could, Doctor Goh, I could go on for hours with you. You're just such a great guest. Let's close. What are you most excited about in the future of, certainly HPE, but the industry in general? >> Yeah, I think the excitement is the customers, right? The diversity of customers and the diversity in the way they have approached different problems of data strategy. So the excitement is around data strategy, right? The statement made for us was profound, right? Antonio said we are in the age of insight, powered by data. That's the first line, right? The line that comes after that is: as such, we are becoming more and more data-centric, with data the currency. Now the next step is even more profound. That is, you know, we are going as far as saying that data should not be treated as cost anymore, no, right? But instead as an investment in a new asset class called data, with value on our balance sheet. This is a step change, right? A step change in thinking that is going to change the way we look at data, the way we value it. So that's a statement. (Dr. Eng chuckles) This is the exciting thing, because for me, a CTO of AI, right? A machine is only as intelligent as the data you feed it with. Data is the source of the machine learning to be intelligent, right? (Dr. Eng chuckles) So that's why, when people start to value data, right? And say that it is an investment when we collect it, it is very positive for AI. Because an AI system gets more intelligent because it has huge amounts of data and a diversity of data. >> Yeah.
>> So it'd be great, if the community values data. >> Well, you certainly see it in the valuations of many companies these days. And I think increasingly you see it on the income statement. You know data products and people monetizing data services. And yeah, maybe eventually you'll see it in the balance sheet. I know Doug Laney, when he was at Gartner Group, wrote a book about this and a lot of people are thinking about it. That's a big change, isn't it? >> Yeah, yeah. >> Dr. Goh... (Dave chuckles) >> The question is the process and methods in valuation. Right? >> Yeah, right. >> But I believe we will get there. We need to get started. And then we'll get there. I believe, yeah. >> Doctor Goh, it's always my pleasure. >> And then the AI will benefit greatly from it. >> Oh, yeah, no doubt. People will better understand how to align, you know some of these technology investments. Dr. Goh, great to see you again. Thanks so much for coming back in theCUBE. It's been a real pleasure. >> Yes, a system is only as smart as the data you feed it with. (Dave chuckles) (Dr. Eng laughs) >> Excellent. We'll leave it there. Thank you for spending some time with us and keep it right there for more great interviews from HPE Discover 21. This is Dave Vellante for theCUBE, the leader in Enterprise Tech Coverage. We'll be right back. (upbeat music)

Published Date : Jun 8 2021

SUMMARY :

Doctor Goh, great to see you again. great to talk to you again. And you talked about thriving And you really dug in the age of insight, right? of the ones you talked about today? to get what you need. And as a great example, the Flash Crash. is that humans put in the rules to decide But the rule was applied, you know, that it's going to be tough, yeah. So seems that most of the AI and the machine starts to evolve a model they may not have enough power to do so. Is that learning from the Edge You do understand my question. or the Call to do the learning. but the rest can be done at the Edge. When to organize it when you collect? But on the other hand, to help them build a system, you know? all that you can install, right? And so when you think about So what do you do? of the data back to a central, in that opportunity for the customers? And it has to be hybrid. about in the future of, as the data you feed it with. if the community values data. And I think increasingly you The question is the process We need to get started. And then the AI will Dr. Goh, great to see you again. as smart as the data Thank you for spending some time with us

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Michael Lewis | PERSON | 0.99+
Doug Laney | PERSON | 0.99+
Dave | PERSON | 0.99+
2018 | DATE | 0.99+
$4 billion | QUANTITY | 0.99+
Antonio | PERSON | 0.99+
two languages | QUANTITY | 0.99+
10 billion | QUANTITY | 0.99+
55 billion | QUANTITY | 0.99+
two challenges | QUANTITY | 0.99+
second challenge | QUANTITY | 0.99+
55 billion | QUANTITY | 0.99+
HPE | ORGANIZATION | 0.99+
last year | DATE | 0.99+
Gartner Group | ORGANIZATION | 0.99+
first line | QUANTITY | 0.99+
10 | QUANTITY | 0.99+
second area | QUANTITY | 0.99+
both | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
Hundreds of millions | QUANTITY | 0.99+
Today | DATE | 0.99+
today | DATE | 0.99+
second barrier | QUANTITY | 0.99+
two part | QUANTITY | 0.99+
May 6, 2010 | DATE | 0.99+
One | QUANTITY | 0.99+
Edge | ORGANIZATION | 0.99+
first barrier | QUANTITY | 0.99+
less than a hundred shares | QUANTITY | 0.99+
next year | DATE | 0.98+
Eng | PERSON | 0.98+
yesterday | DATE | 0.98+
first part | QUANTITY | 0.98+
May 6 | DATE | 0.98+
United Nations | ORGANIZATION | 0.98+
theCUBE | ORGANIZATION | 0.98+
one area | QUANTITY | 0.98+
one model | QUANTITY | 0.98+
first one | QUANTITY | 0.98+
Hewlett Packard Enterprise | ORGANIZATION | 0.98+
Dr. | PERSON | 0.97+
less than a hundred shares | QUANTITY | 0.97+
three stages | QUANTITY | 0.97+
one rule | QUANTITY | 0.97+
Three main areas | QUANTITY | 0.97+
Flash Boys | TITLE | 0.97+
one language | QUANTITY | 0.97+
one | QUANTITY | 0.96+
10 petabytes a day | QUANTITY | 0.96+
Flash Crash | TITLE | 0.95+
under a hundred | QUANTITY | 0.95+
Firstly | QUANTITY | 0.95+

Murli Thirumale, Pure Storage | CUBE Conversations, May 2021


 

(bright upbeat music) >> Hey, welcome to theCUBE's coverage of Pure Accelerate 2021. I'm Lisa Martin, please stay welcoming back one of our alumni Murli Thirumale is here, the VP & GM of the Cloud Native Business Unit at Pure Storage, Murli, welcome back. >> Lisa, it's great to be back at theCUBE, looking forward to discussion. >> Likewise, so it's been about six months or so since the Portworx acquisition by Pure Storage, give us a lay of the land, what's been going on? What are some of the successes, early wins, and some of the lessons that you've learned? >> Yeah, this is my third time being in Cloud, being a serial entrepreneur. So I've seen this movie before, and I have to say that this is really a lot of good anticipation followed by actually a lot of good stuff that has happened since, so it's been really a great ride so far. And when, let me start with the beginning, what the fundamental goal of the acquisition were, right? The couple of major goals, and then I can talk about how that integration is going. Really, I think from our viewpoint, from the Portworx viewpoint, the goal of the acquisition, from our view, was really to help turbocharge in our growth, we had really a very, very good product that was well accepted and established at customers, doing well as far as industry acceptance was concerned. And frankly, we had some great reference customers and some great installs expanding pretty well. Our issue was really how fast can we turbocharge that growth because as everybody knows, for a startup, the expensive part of an expansion is really on the go-to-market and sales side. And frankly, the timing for this was critical for us because the market had moved from the Kubernetes' market, has moved from sort of the innovator stage to the early majority stage. So from the Pure side, I think this made a lot of sense for them, because they have been looking for how they can expand their subscription models, how they can move to add more value from the array based business that there really have been a wonderful disruptor and to add more value up the stack, and that was the premise of the acquisition. One of the things that I paid a lot of attention to, as anybody does in acquisitions, is not just the strategy but really to understand if there was a culture fit between the teams, because a lot of the times acquisitions don't work because of the poor culture fit. So now let me kind of fast forward little bit and say, "Hey, what we know looking back in about six to eight months into it, how has it turning out so far?" And things have been just absolutely wonderful. Let me actually start with the culture fit, because that often is ignored and is one of the most important parts, right? The resonance in the culture between the two companies is just off the charts, right? It actually starts with what I would call a dramatic kind of customer first orientation, it's something we always had at Portworx. I always used to tell our customers with a startup you end up kind of, you buy the product, but you get the team, right? That's what happens with early stage startups, but Pure is sort of the same way, they are very focused on customer. So the customer focus is a very very useful thing that pulls us together. The second thing that's been really heartwarming to see has been really the focus on product excellence. Pure made it's dramatic entry into the market using Flash, and being the best Flash-based solution, and now they've expanded into many, many different areas. 
And Portworx also had a focus on product excellence, and so that has kind of moved the needle forward for both of us. And then I think the third thing is really a focus on the team winning, and not just an individual, right? And look, in these COVID times, this has been a tough year for everybody, I think it's, to some extent, even as we onboard new people, it's the culture of the team, the ability to bring new people onboard, and buy the culture, and make progress, all of that is really a function of how well the team is, 'we' is greater than 'me' type of a model, and I think that both these three values of customer first, high focus on product excellence, and the value in the team, including the resellers and the customers as part of the team, has really been the cornerstone, I think, of our success in the integration. >> That's outstanding because, like you said, this is not your first rodeo launching, coming out of stealth and launching and getting acquired, but doing so during one of the most challenging times in the last 100 years in our history while aligning cultures, I think that says a lot about the leadership on the Portworx side and the Pure side. >> I have to say, right? This is one of those amazing things, many people now that having been acquired can say this, really, most of the diligence, the transactions, all of that were done over Zoom, right? So, and then of course, everything since then is we're still in Zoom paradise. And so I think it really is a testament to the modern tools and stuff that we have that enable that. Now, let me talk a little bit about the content of what has happened, right? So strategically, I think the three areas that I think we've had huge synergy and seeing the benefits are first and foremost on the product side. A little later, I'd like to talk a little bit about some of the announcements we're making, but essentially, Pure had this outstanding core storage infrastructure product, well-known in the industry, very much Flash-oriented, part of the whole all Flash era now. And Portworx really came in with the idea of driving Kubernetes and Cloud Native workloads, which are really the majority of modern workloads. And what we found since then is that the integration of having really a more complete stack, which is really centered around what used to be an IT infrastructure of purchase, and what is in fact, for Kubernetes, a more DevOps oriented purchase. And that kind of a combination of being able to provide that combo in one package is something that we've been working very hard on in the last six months. And I'll mention some of the announcements, but we have a number of integrations with FlashArray and FlashBlade and other Pure products that we're able to highlight. So product integration for sure has been an area of some focus, but against a lot of progress. The second one is really customer synergy. I kind of described to our team when we got acquired, I said it's, for us, it's, being acquired by Pure is like strapping a rocket ship to ourselves as a small company, because we now have access to a huge customer footprint. Pure has over 8,000 customers, hugely amazingly high, almost unbelievable NPS score with customers, one of the best in the IT industry. And I think we are finding that with the deployment of containers becoming more ubiquitous, right? 80, 90% of customers in the enterprise are adopting Kubernetes and Containers. 
And therefore these 8,000 customers are a big huge target, they got a big target sign for both of us to be able to leverage. And so we've had a number of things that we're doing to address and use the Pure sales team to get access to them. The Pure channel of course is also part of that, Pure is 100% channel organization, which is great. So I think the synergy on the customer side with being able to have a solution that works for infrastructure and for DevOps has been a big area. In this day and age, Kubernetes is an area, for many of your listeners who are very, very familiar with Kubernetes, customers struggle, not just with day zero, but day one, day two, day three, right? It's how do you put it in production. And support, and integrating, and the use of Kubernetes and containers, putting that stack together is a big area. So support is a big area of pain for customers, and it's an area that, again, for a Portworx viewpoint, now we've expanded our footprint with a great support organization that we can bring to bear 24 by seven around the globe. Portworx is running on a lot of mission critical applications in big industries like finance and retail, and these types of things, really, support is a big area. And then the last thing I will just say is the use cases are usually synergistic, right? And we'll talk a little bit more about use cases as we go along here, but really there's legacy apps, right? In an interesting way, there's 80% of, IT spending is still on legacy apps, if you will, in that stack. However, 80% of all the new applications are being deployed on this modern app stack, right? >> Right. >> With all these open-source type of products and technologies. And most of that stack, most of the modern app stack is containerized. The 80, 85% of those applications really are where customers have chosen containers and Kubernetes as the as the mechanism to deliver those apps. And therefore Pure products like FlashBlade were very, very focused with fast recovery for these kind of modern apps, which are the stack of AI, and personalization, and all the modern digital apps. And I think those things can align well with the Portworx offering. So really around the areas of culture, customers, product synergy, support, and finally use cases, are all kind of been areas of huge progress for us. >> It also seems to me that the Portworx acquisition gives Pure a foray, a new buying center with respect to DevOps, talk to me a little bit about that as an opportunity for Pure. >> Yeah, the modern world is one where the enterprise itself has segmented into whole lot of new areas of spending and infrastructure ownership, right? And in the old days it used to be the network, storage, compute, and apps, sort of the old model of the world. And of course the app model has moved on, and then certainly there's a lot of different ways, web apps, the three tier apps, and the web apps, and so on. But the infrastructure world has morphed really into a bunch of other sub-segments, and some of it is still traditional hardware, but then even that is being cloudified, right? Because a lot of companies like Pure have taken their hardware array offerings and are offering that as a cloud-like offering where you can purchase it as a service, and in fact, Pure is offering a set of solutions called Evergreen that allow you to not even, you're just under subscription, you get your hardware refresh bundled in, very, very innovative. 
So you have now new buying centers coming in, in addition to the old traditional IT, there is sort of this whole, what used to be in the old ways called middleware, now has kind of morphed into this DevSecOps set of folks, right? Which is DevOps it's ITOps, and even security is a big part of that, the CISO Organization has that kind of segment. And so these buying centers often have new budgets, right? It turns out that, for example, to contrast, the Portworx budget really comes from entirely different budget, right? Our top two budget sources are usually CIO initiatives, they're not from the traditional storage budget, it comes from things like move to cloud or business transformation. And those set of folks, that set of customers, is really born in a different era, so to speak. You know, Lisa, they come, and I come from the old world, so I would say that I'm kind of more of an oldie, hopefully a Goldie, but an oldie. These folks are born in the post-DevOps, post-cloud, post-open-source world, right? They are used to brand new tools, get-ops, the way that everything's run on the cloud, it's on demand. So what we bring to Pure is really the ability to take their initiatives, which were around infrastructure, and cloudifying infrastructure to now adding two layers on top of that, right? So what Portworx adds to Pure is the access to the new automation layer of middleware. Kubernetes is nothing but really an automation of model for containers and for infrastructure now. And then the third layer is on top of us, is what I would call SaaS, the SaaSified layer, and as a service layer. And so we bring the opportunity to get those SaaS-like budgets, the DevOps budgets, and the DevOps and the SaaS kind of buyers, and together the business has very different models to it. In addition to not just a different technologies, the buying behavior is different, it's based on a consumption model, it's a subscription business. So it really is a change for new budgets, new buyers, and new financial models, which is a subscription model, which as you know, is valued much more highly by Wall Street nowadays compared to say some of the older hardware models. >> Well, Murli, when we talk about storage, we talk about data or the modern data experience. The more and more data that's being produced, the more value potentially there is for organizations, I think we saw, we learned several lessons in the last year, and one of them is that being able to glean insights from data in real-time or near real-time is, for many businesses, no longer a nice to have, it's really table stakes, it was for survival of getting through COVID, it is now in terms of identification of new business models, but it elevates the data conversation up to the C-suite, the board going, "Is our data protected? Is it secure? Can we access it?" And, "How do we deliver a modern data experience to our customers and to our internal employees?" So with that modern data experience, and maybe the elevation about conversation lengths, talk to me about some of the things that you're announcing at Accelerate with respect to Portworx. >> Yeah, so there are two sets of announcements. To be honest actually, this is a pretty exciting time for us, we're in theCUBE Cone time and the Accelerate time. And so let me kind of draw a circle around both those sets of announcements, if you will, right? So let's start perhaps with just the sets of things that we are announcing at Accelerate, right? This is kind of the first things that are coming up right now. 
And I'll tell you, there are some very, very exciting things that we're doing. So the majority of the announcements are centered around a release that we call 2.8. Portworx has been in the market now for well over five years with a product that has been well deployed in very large Global 2000 enterprises. So of the three or four major announcements, one of them is what I was talking about earlier, the integration of true Kubernetes applications running on Pure Storage. So we have a Cloud Native implementation of Portworx running on FlashArray and FlashBlade, where essentially, when users provision a container volume through Portworx, the storage volumes are magically created on FlashArray and FlashBlade, right? The idea is that a DevOps engineer can deploy storage as code by provisioning volumes using Kubernetes, without having to go issue a trouble ticket or a service ticket for a Pure array. Portworx essentially acts as a layer between Kubernetes and the Pure array, and we allow configuration of the storage volumes on the Pure array directly. So essentially now on FlashArray, these volumes receive the full suite of Portworx storage management features, including Kubernetes DR, backup, security, auto scaling, and migration. So that is the first version of this integration, right? The second one is a personal favorite of mine, it's very, very exciting, right? When we came into Pure, we discovered that Pure already had this software solution called Pure as-a-Service, essentially a Pure1 service that allowed for continuous call-home, log, and diagnostic information, really an awesome window for customers to be able to see what their array utilization is like, complete observability, end-to-end, on capacity and what's coming up, and it allowed for proactive addressing of outages or issues, being able to kind of see them before they happen. The good news now is Portworx is integrated with Pure1, and so now customers have a unified observability stack for their Kubernetes applications using Portworx, FlashArray, and FlashBlade in the Pure1 portal. So we are in the Pure1 portal now, really providing end-to-end troubleshooting of issues and deployment, so very, very exciting, something that I think is a major step forward, right? >> Absolutely, well that single pane of glass is critical for management, so many companies waste a lot of time and resources managing disparate, disconnected systems. And again, the last year has taught so many businesses that there wasn't time, because there's going to be somebody right behind you that's going to be faster and more nimble, and has that single pane of glass, unified view, to be able to make better decisions. Last question, really, before we wrap here. >> Yeah. >> I can hear your momentum, I can feel your momentum through Zoom here. Talk to me about what's next, 'cause I know that when the acquisition happened, about six months or so ago, you said, "This is a small step in the Portworx journey." So what's ahead? >> Lisa, great question. I could state 10 things, but let me kind of step up a little bit to the 10,000-foot level, right? In one sense, I think no company gets to declare victory in this ongoing battle, and we're just getting started. But if I had to say, what are some of the major themes that we have been part of and have been able to make happen, in addition to take advantage of?
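As a rough illustration of the "storage as code" flow Murli described above, where a DevOps engineer asks Kubernetes for a volume and the Portworx integration provisions the backing storage on the array with no service ticket, here is a sketch using the Kubernetes Python client. The StorageClass name, namespace, and size are invented for the example and would depend on how the Portworx storage classes are actually configured.

```python
from kubernetes import client, config

config.load_kube_config()                          # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

# Request a 100 Gi volume from an assumed Portworx-backed StorageClass;
# the array-side volume is created by the provisioner, not by a storage admin.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="px-flasharray",        # hypothetical class name
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
core.create_namespaced_persistent_volume_claim(namespace="payments", body=pvc)
```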
Pure obviously took advantage of the Flash wave, and they moved to all Flash, that's been a major disruption, with Pure being the lead. For Portworx, it has been really the move to containers and data management in an automated form, right? Kubernetes has become not just a container orchestrator looking northbound, but looking southbound it is orchestrating infrastructure, and we are in the throes of that revolution. But if you think about it, the other thing that's happening is all of this is in the service of, if you're a CIO, you're in the service of lines of business asking for a way to run their applications in a multicloud way, run their applications faster. And that is really the as-a-service revolution, and it feels a little silly to talk about it this late in the Cloud era, but the reality is it's just beginning, right? The as-a-service revolution dramatically changed the IaaS business, the infrastructure business. But if you look at it, data as a service is what our customers are doing, so our customers are taking Pure hardware, Portworx software, and then they are building them into a platform as a service, things like databases as a service. And what we are doing, you will see some announcements from us in the second half of this year, terribly exciting, I just can't wait for it, where we're going to be moving forward to allow our customers to more quickly get to data services at the push of a button, so to speak, right? So- >> Excellent. >> The idea of database as a service, messaging as a service, search as a service, streaming as a service, and then finally some ML kind of AI as a service, these five categories of data services are what you should be expecting to see from Portworx and Pure going forward in the next half. >> Big potential there to really kick the door wide open on the total addressable market. Well, Murli, it's been great to have you on the program, I can't wait to have you on next 'cause I know that there's so much more, like I said, I can feel your momentum through our virtual experience here. Thank you so much for joining us and giving us the lay of the land of what's been happening with the Portworx acquisition and all of the momentum and excitement that is about to come, we appreciate your time. >> Thank you, Lisa. Cheers to a great reduced-COVID second half of the year. >> Oh, cheers to that. >> Yeah cheers, thanks. >> From Murli Thirumale, I'm Lisa Martin, you're watching theCUBE's coverage of Pure Accelerate. (bright upbeat music)

Published Date : May 13 2021

SUMMARY :

of the Cloud Native Business Lisa, it's great to be back at theCUBE, and so that has kind of moved the needle on the Portworx side and the Pure side. of the announcements, most of the modern app the Portworx acquisition is really the ability to and maybe the elevation This is kind of the first things And again, the last year has taught us step in the Portworx journey." advantage of the Flash wave, forward in the next half. and all of the momentum and excitement COVID second half of the year. coverage of Pure Accelerate.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Lisa | PERSON | 0.99+
Pure Storage | ORGANIZATION | 0.99+
Portworx | ORGANIZATION | 0.99+
Murli Thirumale | PERSON | 0.99+
80% | QUANTITY | 0.99+
May 2021 | DATE | 0.99+
two companies | QUANTITY | 0.99+
three | QUANTITY | 0.99+
Murli Thirumale | PERSON | 0.99+
100% | QUANTITY | 0.99+
Murli | PERSON | 0.99+
24 | QUANTITY | 0.99+
third layer | QUANTITY | 0.99+
10 things | QUANTITY | 0.99+
two layers | QUANTITY | 0.99+
Portworx | TITLE | 0.99+
last year | DATE | 0.99+
FlashArray | TITLE | 0.99+
8,000 customers | QUANTITY | 0.99+
both | QUANTITY | 0.99+
over 8,000 customers | QUANTITY | 0.99+
80, 85% | QUANTITY | 0.99+
Accelerate | ORGANIZATION | 0.99+
10,000 foot | QUANTITY | 0.99+
third time | QUANTITY | 0.99+
two sets | QUANTITY | 0.98+
one | QUANTITY | 0.98+
second thing | QUANTITY | 0.98+
first version | QUANTITY | 0.98+
third thing | QUANTITY | 0.98+
one package | QUANTITY | 0.98+
One | QUANTITY | 0.98+
first | QUANTITY | 0.98+
Flash | TITLE | 0.98+
second one | QUANTITY | 0.97+
three tier | QUANTITY | 0.97+
PureArray | TITLE | 0.97+
one sense | QUANTITY | 0.97+
theCUBE | ORGANIZATION | 0.97+
four major announcements | QUANTITY | 0.97+
Pure1 | TITLE | 0.97+
Pure | ORGANIZATION | 0.97+
80, 90% | QUANTITY | 0.97+
Zoom | ORGANIZATION | 0.97+
Flash wave | EVENT | 0.96+
FlashBlade | TITLE | 0.96+
five categories | QUANTITY | 0.96+
first things | QUANTITY | 0.96+
Kubernetes | TITLE | 0.96+
Pure | TITLE | 0.95+
three areas | QUANTITY | 0.95+
three values | QUANTITY | 0.95+
seven | QUANTITY | 0.95+

CB Bohn, Principal Data Engineer, Microfocus | The Convergence of File and Object


 

>> Announcer: From around the globe it's theCUBE. Presenting the Convergence of File and Object brought to you by Pure Storage. >> Okay now we're going to get the customer perspective on object and we'll talk about the convergence of file and object, but really focusing on the object pieces this is a content program that's being made possible by Pure Storage and it's co-created with theCUBE. Christopher CB Bohn is here. He's a lead architect for MicroFocus the enterprise data warehouse and principal data engineer at MicroFocus. CB welcome good to see you. >> Thanks Dave good to be here. >> So tell us more about your role at Microfocus it's a pan Microfocus role because we know the company is a multi-national software firm it acquired the software assets of HP of course including Vertica tell us where you fit. >> Yeah so Microfocus is you know, it's like I can says it's wide, worldwide company that it sells a lot of software products all over the place to governments and so forth. And it also grows often by acquiring other companies. So there is there the problem of integrating new companies and their data. And so what's happened over the years is that they've had a number of different discreet data systems so you've got this data spread all over the place and they've never been able to get a full complete introspection on the entire business because of that. So my role was come in, design a central data repository and an enterprise data warehouse, that all reporting could be generated against. And so that's what we're doing and we selected Vertica as the EDW system and Pure Storage FlashBlade as the communal repository. >> Okay so you obviously had experience with with Vertica in your previous role, so it's not like you were starting from scratch, but paint a picture of what life was like before you embarked on this sort of consolidated approach to your data warehouse. Was it just dispared data all over the place? A lot of M and A going on, where did the data live? >> CB: So >> Right so again the data is all over the place including under people's desks and just dedicated you know their own private SQL servers, It, a lot of data in a Microfocus is one on SQL server, which has pros and cons. Cause that's a great transactional database but it's not really good for analytics in my opinion. So but a lot of stuff was running on that, they had one Vertica instance that was doing some select reporting. Wasn't a very powerful system and it was what they call Vertica enterprise mode where it had dedicated nodes which had the compute and storage in the same locus on each server okay. So Vertica Eon mode is a whole new world because it separates compute from storage. Okay and at first was implemented in AWS so that you could spin up you know different numbers of compute nodes and they all share the same communal storage. But there has been a demand for that kind of capability, but in an on-prem situation. Okay so Pure storage was the first vendor to come along and have an S3 emulation that was actually workable. And so Vertica worked with Pure Storage to make that all happen and that's what we're using. >> Yeah I know back when back from where we used to do face-to-face, we would be at you know Pure Accelerate, Vertica was always there it stopped by the booth, see what they're doing so tight integration there. And you mentioned Eon mode and the ability to scale, storage and compute independently. 
And so, I think Vertica is the only one, I know they were the first, I'm not sure anybody else does that both for cloud and on-prem. But so, how are you using Eon mode? Are you both in AWS and on-prem, are you exclusively cloud? Maybe you could describe that a little bit. >> Right, so there are a number of internal rules at MicroFocus: AWS is not approved for their business processes, at least not all of them. They really wanted to be on-prem, and all the transactional systems are on-prem. And so we wanted to have the analytics, the OLAP stuff, close to the OLTP stuff, right? So that's why they're co-located very close to each other. And what's nice about this situation is that these S3 objects, it's an S3 object store on the Pure FlashBlade, we could copy those over to AWS if we needed to and spin up a version of Vertica there and keep going. It's like a tertiary DR strategy, because we actually are setting up a second FlashBlade Vertica system geo-located elsewhere for backup. And we can get into it if you want to talk about how the latest version of the Pure software for the FlashBlade allows synchronization across network boundaries of those FlashBlades, which is really nice, because if, you know, a giant sinkhole opens up under our colo facility and we lose that thing, then we just have to switch the DNS and we're back in business with the DR site. And then the third option is we could copy those objects over to AWS and be up and running there. So we're feeling pretty confident about being able to weather whatever comes along. >> Yeah, I'm actually very interested in that conversation, but before we go there: you mentioned you're going to have the OLAP close to the OLTP. Was that for latency reasons, data movement reasons, security, all of the above? >> Yeah, it's really all of the above, because we are operating under the same subnet. So to gain access to that data you'd have to be within that VPN environment; we didn't want to be going out over the public internet. And just for latency reasons also, we have a lot of data and we're continually doing ETL processes into Vertica from our production transactional databases. >> Right, so they've got to be proximate. So I'm interested in, you're using the Pure FlashBlade as an object store; most people think, oh, object, simple but slow. Not the case for you, is that right? >> Not the case at all. >> Why is that? >> This thing is ripping. Well, you have to understand how Vertica stores data. It stores data in what they call storage containers, and those are immutable on disk, whether it's on AWS or an Enterprise mode Vertica. If you do an update or delete, it actually has to go and retrieve that storage container from disk, destroy it, and rebuild it, okay? Which is why you want to avoid updates and deletes with Vertica, because the way it gets its speed is by sorting, ordering, and encoding the data on disk so it can read it really fast. But if you do an operation where you're deleting or updating a record in the middle of that, then you've got to rebuild that entire thing. So that actually matches up really well with S3 object storage, because it's kind of the same way, objects get destroyed and rebuilt too. So that matches up very well with Vertica, and we were able to design the system so that it's append-only.
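To make the append-only pattern concrete, here's a rough sketch of the load side, assuming the open-source vertica-python client; the schema, table names, and connection details are invented for illustration, not taken from MicroFocus's environment.

```python
# Sketch of an append-only load: rather than updating a row in place
# (which would force Vertica to rewrite a storage container), insert a
# new version of the record with a fresh timestamp.
import vertica_python

# Placeholder connection details; autocommit avoids an explicit commit call.
conn_info = {
    "host": "vertica.example.internal",
    "port": 5433,
    "user": "dbadmin",
    "password": "********",
    "database": "edw",
    "autocommit": True,
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # Instead of: UPDATE staging.customers SET status = 'active' WHERE id = 123
    # append a new version row; downstream logic picks the newest timestamp.
    cur.execute(
        "INSERT INTO staging.customers (id, status, updated_at) "
        "VALUES (123, 'active', CURRENT_TIMESTAMP)"
    )
```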
Now, we had some reports that we were running in SQL Server, okay, which were taking seven days. So we moved those to Vertica from SQL Server, and we rewrote the queries, which had been written in T-SQL with a bunch of loops and so forth, and we were able to get, this is amazing, it went from seven days to two seconds to generate this report. That has tremendous value to the company, because it used to take this long cycle of seven days to get a new introspection into what they call the knowledge base, and now all of a sudden it's almost on demand, two seconds to generate it. That's great, and that's because of the way the data is stored. And the S3 you asked about, oh, you know, it's slow? Well, not in that context. Because what happens with Vertica Eon mode is that when you set up your compute nodes, they have local storage also, which is called the depot. It's kind of a cache, okay? So the data will be drawn from the FlashBlade and cached locally. And it was thought, when they designed that, that it would cut down on the latency. But it turns out that if you have your compute nodes close, meaning minimal hops to the FlashBlade, you can actually tell Vertica, don't even bother caching that stuff, just read it directly on the fly from the FlashBlade, and the performance is still really good. It depends on your situation. But I know, for example, a major telecom company that uses the same topology we're talking about here did the same thing. They just dropped the cache, because the FlashBlade was able to deliver the data fast enough. >> So you're talking about speed-of-light issues, and the overhead of switching infrastructure is eliminated, and so as a result you can go directly to the storage array? >> That's correct, yeah. It's fast enough that it's almost as if it's local to the compute node. But every situation is different depending on your needs. If you've got a few tables that are heavily used, then yeah, put them in the cache, because that'll probably be a little bit faster. But if you have a lot of ad hoc queries going on, you may exceed the storage of the local cache, and then you're better off having it just read directly from the FlashBlade. >> Got it. So it's... >> Okay. >> ...an append-only approach. So you're not... >> Right. >> ...overwriting a record. But then do you have to automatically re-index, and is that the intelligence of the system? How does that work? >> Oh, this is where we did a little bit of magic. It's not really magic, but I'll tell you what it is. (Dave laughing) Vertica does not have indexes; they don't exist. Instead, as I told you earlier, it gets its speed by sorting, ordering, and encoding the data on disk. So when you've got an append-only situation, the natural question is, well, if I have a unique record, let's say ID one-two-three, what happens if I append a new version of it? Well, the way Vertica operates is that there's a thing called a projection, which is actually like a materialized columnar data store. And you can have what they call a top-K projection, which says: only put into this projection the records that meet a certain condition. So there's a field that we like to call a discriminator field, which is usually the latest update timestamp.
So let's say we have record one-two-three, and it has yesterday's date, and that's the latest version. Now a new version comes in. At load time, Vertica looks at that, then it looks in the projection and says, does this exist already? If it doesn't, it adds it. If it does, then the newer one now goes into that projection, okay? And so what you end up having is a projection that is the latest snapshot of the data, which is like, oh, that's the reality of what the table is today. But inherent in that is that you now have a table that has all the change history of those records, which is awesome. >> Yeah. >> Because you often want to go back and revisit, you know, what happened. >> But that materialized view is the most current, and the system knows that, at least (murmuring). >> Right, so we then create views that draw off of that projection, so that our users don't have to worry about any of that. They just select from this view, and they're getting the latest, greatest snapshot of what the reality of the data is right now. But if they want to go back and say, well, how did this data look two days ago? That's an easy query for them to do also. So they get the best of both worlds. >> So could you just plug any flash array into your system and achieve the same results, or is there anything really unique about Pure? >> Yeah, well, they're the only ones that have really dialed in the S3 object format, because I don't think AWS actually publishes every last detail of that S3 spec. So there was a certain amount of reverse engineering they had to do, I think, but they got it right. Maybe a year and a half ago or so they were at, like, 99%, but now they've worked with the Vertica people to make sure that the object format was true to what it should be. So it works such that Vertica doesn't care whether it's on AWS or on Pure FlashBlade, because Pure did a really good job of dialing in that format. It just knows S3; it doesn't care where it's going, it just works. >> So essentially vendor R&D abstracted that complexity, so you didn't have to rewrite the application, is that right? >> Right. So when Vertica ships its software, you don't get a specific version for Pure or for AWS; it's all in one package, and then when you configure it, it knows, okay, I'm just pointed at this port on the Pure Storage FlashBlade, and it just works. >> CB, what does your data team look like? How is it evolving? You know, a lot of customers I talk to complain that they struggle to get value out of the data and they don't have the expertise. What does your team look like? Is it changing, or did the pandemic change things at all? I wonder if you could bring us up to date on that. >> Yeah, in some ways MicroFocus has an advantage in that it's such a widely dispersed, across-the-world company. You know, it's headquartered in the UK, but I deal with people all over; I'm in the Bay Area, and we have people in Mexico, Romania, India. >> Okay, enough. >> All over the place, yeah, all over the place. So when this started, it was actually a bigger project; it got scaled back, to the point where it was almost going to be cut. But then we said, well, let's try to do almost a skunkworks type of thing with reduced staff. And so we're just, like, a handful; you could count the number of key people on this on one hand.
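Pulling together the projection-and-view pattern CB walked through a moment ago, here's a hedged sketch of what that might look like. The LIMIT ... OVER form follows Vertica's documented top-K projection syntax, but the schema and object names are invented, and vertica-python is assumed as the client.

```python
# Hedged sketch: an append-only base table, a top-K projection that keeps
# only the newest row per key (discriminated by updated_at), and a view
# that end users query for the "current snapshot". Names are illustrative.
import vertica_python

conn_info = {
    "host": "vertica.example.internal",
    "port": 5433,
    "user": "dbadmin",
    "password": "********",
    "database": "edw",
    "autocommit": True,
}

STATEMENTS = [
    # Base table keeps every appended version, i.e. the full change history.
    """CREATE TABLE IF NOT EXISTS staging.customers (
           id         INT,
           status     VARCHAR(32),
           updated_at TIMESTAMP
       )""",
    # Top-K projection: one row per id, the one with the latest timestamp.
    """CREATE PROJECTION staging.customers_latest
       AS SELECT id, status, updated_at
          FROM staging.customers
          LIMIT 1 OVER (PARTITION BY id ORDER BY updated_at DESC)""",
    # View for users; the matching LIMIT ... OVER form lets Vertica satisfy
    # it from the top-K projection while hiding the mechanics.
    """CREATE OR REPLACE VIEW staging.customers_current
       AS SELECT id, status, updated_at
          FROM staging.customers
          LIMIT 1 OVER (PARTITION BY id ORDER BY updated_at DESC)""",
]

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    for stmt in STATEMENTS:
        cur.execute(stmt)
```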
But we got it all together, and it's been a dramatic transformation for the company. Now there's approval and admiration from the highest echelons of this company: hey, this is really providing value. And the company is starting to get views into their business that they didn't have before. >> That's awesome. I mean, I've watched MicroFocus for years. To me, part of their DNA is private equity; I mean, they're sharp investors, they do great M&A. >> Yeah. >> They know how to drive value, and they're doing modern M&A. You know, we've seen what they did with SUSE, obviously driving value out of Vertica, and they've got some really sharp financial people there. So they must have loved the skunkworks, fast ROI, you know, small denominator, big numerator. (laughing) >> Well, I think that in this case, smaller is better when you're doing development. You know, it's a too-many-cooks type of thing, and if you've got people who know what they're doing... You know, I've got a lot of experience with Vertica, I've been on the advisory board for Vertica for a long time. >> Right. >> And I was able to learn from people who had already done it. We're like the second or third company to do a Pure FlashBlade Vertica installation, but some of the best companies that have already done it are members of the advisory board also. So I learned from the best, and we were able to get this thing up and running quickly, and we've got a handful of other key people who know how to write SQL and so forth to get this up and running quickly. >> Yeah, so, I mean, look, Pure is a fit. I sound like a fanboy, but Pure is all about simplicity, and so is object. So that means you don't have to worry about wrangling storage and worrying about LUNs and all that other nonsense and file names, but... >> I have been burned by hardware in the past, you know, where, okay, they build to a price and so they cheap out on stuff like fans or other components, and those fail and the whole thing goes down. But this hardware is super good quality, and so I'm happy with the quality of what we're getting. >> So, CB, last question. What's next for you? Where do you want to take this initiative? >> Well, we're in the process now of... so I designed a system to combine the best of the Kimball approach to data warehousing and the Inmon approach, okay? And what we do is we bring over all the data we've got and we put it into a pristine staging layer. Like I said, because it's append-only, it's essentially a log of all the transactions that are happening in this company, just as they appear, okay? And then, from the Kimball side of things, we're designing the data marts now. So that's what the end users actually interact with. We're examining the transactional systems to say, how are these business objects created, what's the logic there, and we're recreating those logical models in Vertica. We've done a handful of them so far, and it's working out really well. So going forward, we've got a lot of work to do, to create just about every object that the company needs. >> CB, you're an awesome guest, really always a pleasure talking to you. >> Thank you. >> Congratulations, and good luck going forward. Stay safe. >> Thank you, you too, Dave. >> All right, thank you. And thank you for watching the Convergence of File and Object. This is Dave Vellante for theCUBE. (soft music)

Published Date : Apr 28 2021


DV Pure Storage 208


 

>> Thank you, sir. All right, you ready to roll? >> Ready. >> All right, we'll go ahead and go in five, four, three, two. >> Okay, let's summarize the convergence of file and object. First, I want to thank our guests, Matt Burr, Scott Sinclair, Garrett Belsner, and CB Bohn. I'm your host, Dave Vellante, and please allow me to briefly share some of the key takeaways from today's program. So first, as Scott Sinclair of ESG stated: surprise, surprise, data's growing. And Matt Burr helped us understand the growth of unstructured data. I mean, estimates indicate that the vast majority of data will be considered unstructured by mid-decade, 80% or so, and unstructured data is growing very, very rapidly. Now, of course, your definition of unstructured data may vary across a wide spectrum. There's video, there's audio, there's documents, there's spreadsheets, there's chat. These are generally considered unstructured data, but of course they all have some type of structure to them. Perhaps it's not as strict as a relational database, but there's certainly metadata and certain structure to the types of use cases I just mentioned. Now, the key to what Pure is promoting is this idea of unified fast file and object, U-F-F-O. Look, object is great: it's inexpensive, it's simple, but historically it's been less performant, so good for archiving or cheap-and-deep types of examples. Organizations often use file for higher-performance workloads, and let's face it, most of the world's data lives in file formats. What Pure is doing is bringing together file and object by, for example, supporting multiple protocols, i.e., NFS, SMB, and S3. S3, of course, has really given new life to object over the past decade. Now, the key here is to essentially enable customers to have the best of both worlds, not having to trade off performance for object simplicity. And a key discussion point that we've had in the program has been the impact of Flash on the long, slow death of spinning disk. Look, hard disk drives had a great run, but HDD volumes peaked in 2010, and Flash, as you well know, has seen tremendous volume growth thanks to the consumption of Flash in mobile devices and then, of course, its application in the enterprise. And as that volume just keeps growing and growing and growing, the price declines of Flash are coming down faster than those of HDD. So the writing's on the wall; it's just a matter of time. Flash is riding down that cost curve very, very aggressively, and HDD has essentially become a managed-decline business. Now, by bringing Flash to object as part of the FlashBlade portfolio and allowing for multiple protocols, Pure hopes to eliminate the dissonance between file and object and simplify the choice. In other words, let the workload decide. If you have data in a file format, no problem: Pure can still bring the benefits of the simplicity of object at scale to the table. So again, let the workload inform what the right strategy is, not the technical infrastructure. Now, Pure, of course, is not alone. There are others supporting this multi-protocol strategy. And so we asked Matt Burr, why Pure, what's so special about you? And not surprisingly, in addition to the product innovation, he went right to Pure's business model advantages, for example, with its Evergreen support model, which was very disruptive in the marketplace.
You know, frankly, Pure's entire business disrupted the traditional disk array model, which was fundamentally flawed. Pure forced the industry to respond, and when it achieved escape velocity and went public, the entire industry had to react. And a big part of the Pure value prop, in addition to the business model innovation we just discussed, is simplicity. Pure's keep-it-simple approach coincided perfectly with the ascendancy of cloud, where technology organizations needed cloud-like simplicity for certain workloads that were never going to move into the cloud; they were going to stay on-prem. Now, I'm going to come back to this, but allow me to bring in another concept that Garrett and CB really highlighted, and that is the complexity of the data pipeline. And what do I mean by that, and why is this important? So Scott Sinclair articulated, or he implied, that the big challenge is organizations are data-full, but insights are scarce: a lot of data, not as many insights, and it takes too much time to get to those insights. So we heard from our guests that the complexity of the data pipeline was a barrier to getting to faster insights. Now, CB Bohn shared how he streamlined his data architecture using Vertica's Eon mode, which allowed him to scale compute independently of storage, and that brought critical flexibility and improved economics at scale. And FlashBlade, of course, was the backend storage for his data warehouse efforts. Now, the reason I think this is so important is that organizations are struggling to get insights from data, and the complexity associated with the data pipeline and data lifecycles is, let's face it, overwhelming organizations. And the answer to this problem is a much longer and different discussion than unifying object and file; I could spend all day talking about that. But let's focus narrowly on the part of the issue that is related to file and object. So the situation here is that the technology has not been serving the business the way it should. Rather, the formula is twisted in the world of data, big data, and data architectures: the data team is mired in complex technical issues that impact the time to insights. Now, part of the answer is to abstract the underlying infrastructure complexity and create a layer with which the business can interact, one that accelerates instead of impedes innovation. And unifying file and object is a simple example of this, where the business team is not blocked by infrastructure nuance, like: does this data reside in the file or object format? Can I get to it quickly and inexpensively in a logical way, or is the infrastructure in a stovepipe and blocking me? So if you think about the prevailing sentiment of how the cloud is evolving to incorporate on-premises workloads, hybrid configurations that work across clouds, and now out to the edge, this idea of an abstraction layer that essentially hides the underlying infrastructure is a trend we're going to see evolve this decade. Now, is UFFO the be-all end-all answer to solving all of our data pipeline challenges? No, of course not. But by bringing the simplicity and economics of object together with the ubiquity and performance of file, UFFO makes it a lot easier. It simplifies life for organizations that are evolving into digital businesses, which, by the way, is every business.
So we see this as an evolutionary trend that further simplifies the underlying technology infrastructure and does a better job supporting the data flows for organizations, so they don't have to spend so much time worrying about technology details that add little value to the business. Okay, so thanks for watching the Convergence of File and Object, and thanks to Pure Storage for making this program possible. This is Dave Vellante for theCUBE. We'll see you next time.

Published Date : Feb 8 2021


Michael Sotnick, Pure Storage & Rob Czarnecki, AWS Outposts | AWS re:Invent 2020 Partner Network Day


 

>>from >>around the globe. It's the Cube with digital coverage of AWS reinvent 2020. Special coverage sponsored by AWS Global Partner Network. >>Hi. Welcome to the Cube. Virtual and our coverage of AWS reinvent 2020 with special coverage of a PM partner experience. I'm John for your host. We are the Cube. Virtual. We can't be there in person with a remote. And our two next guests are We have pure storage. Michael Slotnick, VP of Worldwide Alliances, Pure storage. And Robert Czarnecki, principal product manager for a U. S. Outposts. Welcome to the Cube. >>Wonderful to be here. Great to see you. And thanks for having us, >>Michael. Great to see you pure. You guys had some great Momenta, um, earnings and some announcements. You guys have some new news? We're here. Reinvent all part of a W s and outpost. I want to get into it right away. Uh, talk about the relationship with AWS. I know you guys have some hot news. Just came out in late November. We're here in the event. All the talk is about new higher level services. Hybrid edge. What do you guys doing? What's the story? >>Yeah, Look, I gotta tell you the partnership with AWS is a very high profile and strategic partnership for pure storage. We've worked hard with our cloud block store for AWS, which is an extensive bility solution for pure flash array and into a W s. I think the big news and one of things that we're most proud of is the recent establishment of pure being service ready and outpost ready. And the first and Onley on Prem storage solution and were shoulder to shoulder with AWS is a W s takes outpost into the data center. Now they're going after key workloads that were well known for. And we're very excited Thio, partner with AWS in that regard, >>you know, congratulations to pure. We've been following you guys from the beginning since inception since it was founded startup. And now I'll see growing public company on the next level kind of growth plan. You guys were early on all this stuff with with with flash with software and cloud. So it's paying off. Rob, I wanna get toe Outpost because this was probably most controversial announcements I've ever covered at reinvent for the past eight years. It really was the first sign that Andy was saying, You know what? We're working backwards from the customers and they all are talking Hybrid. We're gonna have Outpost. Give us the update. What kind of workloads and verticals are seeing Success without post? Now that that's part of the portfolio, How does it all working out? Give us the update on the workloads in the verticals. >>Absolutely. Although I have to say I'd call it more exciting than controversial. We're so excited about the opportunities that outpost opened for our customers. And, you know, customers have been asking us for years. How can we bring AWS services to our data centers? And we thought about it for a long time. And until until we define the outpost service, we really I thought we could do better. And what outpost does it lets us take those services that customers are familiar with? It lets us bring it to their data center and and one of the really bright spots over the past year has just been how many different industries and market segments have shown interest. Outpost right. You could have customers, for example, with data residency needs, those that have to do local data processing. Uh, maybe have Leighton see needs on a specific workload that needs to run near their end users. We're just folks trying to modernize their data center, and that's a journey. 
That transformation takes time, right? So Outposts works for all of those customers. And one of the things that's really become clear to us is that to enable the success we think Outposts can have, we need to meet customers where they are. And one of the fantastic things about the Outposts Ready program is many of those customers are using Pure, and they have Pure hardware. We sent an Outpost over to the Pure lab recently, and I have to tell you, a picture of those two racks next to each other looks really good. >> You know, I want to kind of walk back my controversial comment. I meant it in the sense that that's when cloud really got big into the enterprise and you have to deal with hybrid. So I do think it's exciting, because the edge is a big theme here. Can you just share real quick, before I get into some of the Pure questions, on this edge piece and the hybrid: what's the customer need? When you talk to customers, I know you guys really work backwards from the customer. What are their needs? What causes them to look at Outposts as part of their hybrid? What's the key consideration? >> Yeah, so there are a couple of different needs, John, right? One, for example, is we have Regions and Local Zones across the globe, but we're not everywhere, and there are data residency regulations that are becoming increasingly common. So customers come to us and say, look, I really need to run, for example, a financial services workload, and it needs to be in Thailand, and we don't have a Region or Local Zone in Thailand. But we can get them an Outpost to the places where they need to be, right? So that requirement to keep data in place, whether it's by regulation or by a contractual agreement, that's a big driver. The other piece is there's a tremendous amount of interest and top-down executive sponsorship across enterprise customers to transform their operations, to modernize their digital approach. But when they actually look at their estate, they do see an awful lot of hardware, and that's a hard challenge, to plan that migration. When you can bring an Outpost right into that data center, it really makes it much easier, because AWS is right there. There could be a monolithic architecture that doesn't lend itself well to having part of the workload running in the Region and part of the workload running in their data center. But with an Outpost, they can extend AWS to their data center, and that just makes it so much easier for them to get started on their digital transformation. >> Michael, this is the key trend. You guys saw it early: cloud operations on premises. It becomes cloudified at the point when you have DevOps on premises and then pure cloud for bursting, all that stuff. And now you've got the edge exploding as well with growth and opportunity. What causes the customer to get the Pure option on Outposts? What's the angle for you guys? Obviously storage, you've got data, and there may be no Region nearby, and certainly an Outpost stores data, and that's a requirement for a lot of, certainly, global customers and needs. What's the Pure angle on this? >> Yeah, I appreciate that, and I appreciate Rob's comments around what AWS sees in the wild in terms of Pure's footprint and the market share that we've established as a company, over 11 years in business and over eight years of shipping product. You know, what I would tell you is one of the things that a lot of people miss is the simplicity and the consistency that are characteristic, you know, very much of the AWS experience and equally of the Pure experience, and that's really powerful. So we were successful in putting Pure into workloads for all the reasons that Rob talked about, right: data gravity, the regulatory issues, just application architecture and its inability to move to the public cloud. Our predictability, our simplicity, our consistency really match what customers are getting with other workloads they had in AWS. And so with AWS Outposts, that's really bringing to the customer that single pane of glass to manage their entire environment. And so we saw that and we made the three-year investment in Outposts. As Rob said, we have our solution in Pure's data center; it's set up and running today with a solution built on FlashBlade, which is our unstructured data solution, delivering fantastic performance results in AI and ML workloads. We see the same opportunity within backup and disaster recovery workloads and into analytics, and then equally the opportunity to build, you know, FlashArray and our other storage solutions, and to build architectures with Outposts in our data center and bring them to market. >> Real quick, just to follow up on that: what use cases are you seeing that are most successful with Outposts, and in general, how do you guys get your customers to integrate with the rest of their environment? Because now this operating environment is not just public cloud, it's cloud on premises and everything else. >> Yeah, you know, what's cool is, and Rob hit right on it, the wide range of industries and the wide range of use cases and workloads that are finding themselves attracted to the Outposts offering. And so, without a doubt, there's going to be what people would immediately expect: AI and ML workloads, and the importance of having high-performance storage and a high-performance Outposts environment as close to the center of those solutions as possible. But it doesn't stop there. Traditional virtualized database workloads that, for reasons of application architecture, aren't candidates to move to AWS's public cloud offering are a great fit for Outposts, and those are workloads that we've always traditionally been successful with in the market, and we see a great opportunity to, you know, build on that success as an Outposts partner. >> Rob, I gotta ask you: last re:Invent, when we were in person, when we had real life back then, I was at the replay party hanging out, and this guy comes up to me, I don't even know who he was, obviously a big-time engineer over there, opens his hand up and shows me this little processor, and I'm like... and I go to take a picture and he was freaking out, don't take a picture. It was the big processor, the big, uh... I think it was the big monster. And it was just so small. You see the innovation in hardware, you guys have done a lot there, that's cool. I'd like to get your thoughts on where the future is going, because you've got great hardware innovation, but you've got the higher-level services with containers. I know you guys took your time; containers are super important because that's going to deal with that. So how do you look at that?
You got the innovation in the hardware check containers. How does that all fit in? Because you guys have been making a lot of investments in some of these cloud native projects. What's your position on that? >>You know, it's all part of one common story, John right customers that they want an easy path to delivering impact for their business. Right. And, you know, you've heard us speak a lot over the past few years about how we're really seeing these two different types of customers. We have those customers that really loved to get those foundational core building blocks and stitch them together in a creative way. But then you have more and more customers that they wanna. They wanna operate at a different level, and and that's okay. We want to support both of them. We want to give both of them all the tools that they need. Thio spend their time and put their resource is towards what differentiates their business and just be able to give them support at whatever level they need on the infrastructure side. And it's fantastic that are combination of investments in hardware and services. And now, with Outpost, we can bring those investments even closer to the customer. If you really think about it that way, the possibilities become limitless. >>Yeah, it's not like the simplicity asked, but it was pretty beautiful to the way it looks. It looks nice. Michael. Gotta ask you on your side. A couple of big announcements over that we've been following from pure looking back. You already had the periods of service announcement you bought the port Works was acquisition. Yeah, that's container management. Across the data center, including outposts you got pure is a service is pure. Is the service working with outpost and how and if so, how and what's the consumption model for customers there. >>Yeah, thanks so much, John. And appreciate you following us the way that you do it. Zits meaningful and appreciate it. Listen, you know, I think the customers have made it clear and in AWS is, you know, kind of led the way in terms of the consumption and experience expectations that customers have. It's got to be consumable. They've got to pay for what they use. It's got to be outcome oriented and and we're doing that with pure is a service. And so I think we saw that early and have invested in pure is a service for our customers. And, you know, we look at the way we acquired outposts as ah customer and a partner of AWS aan dat is exactly the same way customers can consume pure. You know, all of our solutions in a, you know, use what you need, pay for what you use, um, environment. And, you know, one of the exciting things about AWS partnership is its wide ranging and one of the things that AWS has done, I think world class is marketplace. And so we're excited to share with this audience, you know, really? On the back of just recent announcement that, pure is the service is available within the AWS marketplace. And so you think about the, you know, simplicity and the consistency that pure and AWS delivered to the market. AWS customers demand that they get that in the marketplace, and and we're proud to have our offerings there. And Port Works has been in the marketplace and and will continue to be showcased from a container management standpoint. So as those workloads increasingly become, you know, the cloud native you know, Dev Ops, Containerized workloads. We've got a solution and to end to support >>that great job. Great insight. Congratulations to pure good moves as making some good moves. 
Rob, I want to just get to the final word here on Outposts. Everyone loves this product; it's getting a lot of attention. It really puts the cloud operating model firmly in the on-premise world for Amazon, and it opens up a lot of good conversations, business opportunities, and technical integrations all around you. So what's your message to the ecosystem out there for Outposts? What's the word? How do I work with you guys? How do I get involved? What are some of the opportunities? What's your position? How do you talk to the ecosystem? >> Yeah, you know, John, I think the best way to frame it is we're just getting started. We've got our first year in the books. We've seen so many promising signals from customers, had so many interesting conversations that just weren't possible without Outposts. And, you know, working with partners like Pure and expanding our Outposts Ready program is just the beginning, right? We launched back in September, we've seen another meaningful set of partners come out here at re:Invent, and we're going to continue to double down on both the Outposts business, but specifically on working with our partners. I think the key to unlocking the magic of Outposts is meeting customers where they are. And those customers are using our partners, and there's no reason that it shouldn't just work when they move their partner-based workload from their existing infrastructure right over to the Outpost. >> All right, I'll leave it there. Michael Sotnick, the VP of Worldwide Alliances at Pure Storage, congratulations. Great innovation strategy; it's easy to do alliances when you've got a great product and technology, congratulations. Rob Czarnecki, principal product manager for Outposts, we'll be speaking more to you throughout the next couple of weeks here at re:Invent Virtual. Thanks for coming, I appreciate it. >> Thank you. >> Thank you. >> Okay, this is theCUBE Virtual. We are theCUBE Virtual. We wish we could be there in person this year, but it's a virtual event. Over three weeks there will be lots of coverage. I'm John Furrier, your host. Thanks for watching.

Published Date : Dec 3 2020


Eric Herzog, IBM | VMworld 2020


 

>> Announcer: From around the globe, it's theCUBE. With digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Welcome back, I'm Stu Miniman. This is theCUBE's coverage of VMworld 2020 of course, happening virtually. And there are certain people that we talk to every year at theCUBE, and this guest, I believe, has been on theCUBE at VMworld more than any others. It's actually not Pat Gelsinger, Eric Herzog. He is the chief marketing officer and vice president of global storage channels at IBM. Eric, Mr. Zoginstor, welcome back to theCUBE, nice to see you. >> Thank you very much, Stu. IBM always enjoys hanging with you, John, and Dave. And again, glad to be here, although not in person this time at VMworld 2020 virtual. Thanks again for having IBM. >> Alright, so, you know, some things are the same, others, very different. Of course, Eric, IBM, a long, long partner of VMware's. Why don't you set up for us a little bit, you know, 2020, the major engagements, what's new with IBM and VMware? >> So, a couple of things, first of all, we have made our Spectrum Virtualize software, software defined block storage work in virtual machines, both in AWS and IBM Cloud. So we started with IBM Cloud and then earlier this year with AWS. So now we have two different cloud platforms where our Spectrum Virtualize software sits in a VM at the cloud provider. The other thing we've done, of course, is V7 support. In fact, I've done several VMUGs. And in fact, my session at VMworld is going to talk about both our support for V7 but also what we're doing with containers, CSI, Kubernetes overall, and how we can support that in a virtual VMware environment, and also we're doing with traditional ESX and VMware configurations as well. And of course, out to the cloud, as I just talked about. >> Yeah, that discussion of hybrid cloud, Eric, is one that we've been hearing from IBM for a long time. And VMware has had that message, but their cloud solutions have really matured. They've got a whole group going deep on cloud native. The Amazon solutions have been something that they've been partnering, making sure that, you know, data protection, it can span between, you know, the traditional data center environment where VMware is so dominant, and the public clouds. You're giving a session on some of those hybrid cloud solutions, so share with us a little bit, you know, where do the visions completely agree? What's some of the differences between what IBM is doing and maybe what people are hearing from VMware? >> Well, first of all, our solutions don't always require VMware to be installed. So for example, if you're doing it in a container environment, for example, with Red Hat OpenShift, that works slightly different. Not that you can't run Red Hat products inside of a virtual machine, which you can, but in this case, I'm talking Red Hat native. We also of course do VMware native and support what VMware has announced with their Kubernetes based solutions that they've been talking about since VMworld last year, obviously when Pat made some big announcements onstage about what they were doing in the container space. So we've been following that along as well. So from that perspective, we have agreement on a virtual machine perspective and of course, what VMware is doing with the container space. But then also a slightly different one when we're doing Red Hat OpenShift as a native configuration, without having a virtual machine involved in that configuration. 
So those are both the commonalities and the differences that we're doing with VMware in a hybrid cloud configuration. >> Yeah. Eric, you and I both have some of those scars from making sure that storage works in a virtual environment. It took us about a decade to get things to really work at the VM level. Containers, it's been about five years, it feels like we've made faster progress to make sure that we can have stateful environments, we can tie up with storage, but give us a little bit of a look back as to what we've learned and how we've made sure that containerized, Kubernetes environments, you know, work well with storage for customers today. >> Well, I think there's a couple of things. First of all, I think all the storage vendors learn from VMware. And then the expansion of virtual environments beyond VMware to other virtual environments as well. So I think all the storage vendors, including IBM learned through that process, okay, when the next thing comes, which of course in this case happens to be containers, both in a VMware environment, but in an open environment with the Kubernetes management framework, that you need to be able to support it. So for example, we have done several different things. We support persistent volumes in file block and object store. And we started with that almost three years ago on the block side, then we added the file side and now the object storage side. We also can back up data that's in those containers, which is an important feature, right? I am sitting there and I've got data now and persistent volume, but I got to back it up as well. So we've announced support for container based backup either with Red Hat OpenShift or in a generic Kubernetes environment, because we're realistic at IBM. We know that you have to exist in the software infrastructure milieu, and that includes VMware and competitors of VMware. It includes Red Hat OpenShift, but also competitors to Red Hat. And we've made sure that we support whatever the end user needs. So if they're going with Red Hat, great. If they're going with a generic container environment, great. If they're going to use VMware's container solutions, great. And on the virtualization engines, the same thing. We started with VMware, but also have added other virtualization engines. So you think the storage community as a whole and IBM in particular has learned, we need to be ready day one. And like I said, three years ago, we already had persistent volume support for block store. It's still the dominant storage and we had that three years ago. So for us, that would be really, I guess, two years from what you've talked about when containers started to take off. And within two years we had something going that was working at the end user level. Our sales team could sell our business partners. As you know, many of the business partners are really rallying around containers, whether it be Red Hat or in what I'll call a more generic environment as well. They're seeing the forest through the trees. I do think when you look at it from an end user perspective, though, you're going to see all three. So, particularly in the Global Fortune 1000, you're going to see Red Hat environments, generic Kubernetes environments, VMware environments, just like you often see in some instances, heterogeneous virtualization environments, and you're still going to see bare metal. So I think it's going to vary by application workload and use case. 
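Eric's point about persistent volumes is easiest to see from the workload side: in Kubernetes or OpenShift, a container asks for storage by creating a PersistentVolumeClaim against a storage class backed by the vendor's CSI driver. Here's a minimal sketch using the official Kubernetes Python client; the storage class name is a placeholder, not a specific IBM driver class.

```python
# Hedged sketch: requesting block storage for a container workload via a
# PersistentVolumeClaim, using the Kubernetes Python client. The storage
# class name is hypothetical; substitute whatever class your CSI driver
# (block, file, or object gateway) exposes.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="block-storage-class",   # placeholder name
        resources=client.V1ResourceRequirements(
            requests={"storage": "20Gi"}
        ),
    ),
)

core_v1 = client.CoreV1Api()
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
```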
And I think all, I'd say midsize enterprise up, let's say, $5 billion company and up, probably will have at least two, if not all three of those environments, container, virtual machine, and bare metal. So we need to make sure that at IBM we support all those environments to keep those customers happy. >> Yeah, well, Eric, I think anybody, everybody in the industry knows, IBM can span those environments, you know, support through generations. And very much knows that everything in IT tends to be additive. You mentioned customers, Eric, you talk to a lot of customers. So bring us inside, give us a couple examples if you would, how are they dealing with this transition? For years we've been talking about, you know, enabling developers, having them be tied more tightly with what the enterprise is doing. So what are you seeing from some of your customers today? >> Well, I think the key thing is they'd like to use data reuse. So, in this case, think of a backup, a snap or replica dataset, which is real world data, and being able to use that and reuse that. And now the storage guys want to make sure they know who's, if you will, checked it out. We do that with our Spectrum Copy Data Management. You also have, of course, integration with the Ansible framework, which IBM supports, in fact, we'll be announcing some additional support for more features in Ansible coming at the end of October. We'll be doing a large launch, very heavily on containers. Containers and primary storage, containers in hybrid cloud environments, containers in big data and AI environments, and containers in the modern data protection and cyber resiliency space as well. So we'll be talking about some additional support in this case about Ansible as well. So you want to make sure, one of the key things, I think, if you're a storage guy, if I'm the VP of infrastructure, or I'm the CIO, even if I'm not a storage person, in fact, if you think about it, I'm almost 70 now. I have never, ever, ever, ever met a CIO who used to be a storage guy, ever. Whether I, I've been with big companies, I was at EMC, I was at Seagate Maxtor, I've been at IBM actually twice. I've also done seven startups, as you guys know at theCUBE. I have never, ever met a CIO who used to be a storage person. Ever, in all those years. So, what appeals to them is, how do I let the dev guys and the test guys use that storage? At the same time, they're smart enough to know that the software guys and the test guys could actually screw up the storage, lose the data, or if they don't lose the data, cost them hundreds of thousands to millions of dollars because they did something wrong and they have to reconfigure all the storage solutions. So you want to make sure that the CIO is comfortable, that the dev and the test teams can use that storage properly. It's a part of what Ansible's about. You want to make sure that you've got tight integration. So for example, we announced a container native version of our Spectrum Discover software, which gives you comprehensive metadata, cataloging and indexing. Not only for IBM's scale-out file, Spectrum Scale, not only for IBM object storage, IBM cloud object storage, but also for Amazon S3 and also for NetApp filers and also for EMC Isilon. And it's a container native. So you want to make sure in that case, we have an API. So the AI software guys, or the big data software guys could interface with that API to Spectrum Discover, let them do all the work. 
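The Spectrum Discover integration Eric describes is API-driven, so the data science side never has to touch the storage directly. The actual REST interface varies by release, so the host, path, and payload below are purely hypothetical placeholders meant only to show the interaction pattern, not to document the real API.

```python
# Illustrative only: querying a metadata catalog service over REST.
# The URL, endpoint path, and query fields are invented placeholders and
# do not describe the actual Spectrum Discover API.
import requests

CATALOG_URL = "https://discover.example.internal"   # placeholder host
TOKEN = "..."                                        # auth token obtained out of band

query = {
    "filter": "datasource = 'object-store' AND size > 1000000000",  # invented fields
    "fields": ["path", "owner", "mtime"],
    "limit": 100,
}

resp = requests.post(
    f"{CATALOG_URL}/api/search",    # hypothetical endpoint
    json=query,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for record in resp.json().get("results", []):
    print(record)
```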
And we're talking about a piece of software that can traverse billions of objects in two seconds, billions of them. And is ideal to use in solutions that are hundreds of petabytes, up into multiple exabytes. So it's a great way that by having that API where the CIO is confident that the software guys can use the API, not mess up the storage because you know, the storage guys and the data scientists can configure Spectrum Discover and then save it as templates and run an AI workload every Monday, and then run a big data workload every Tuesday, and then Wednesday run a different AI workload and Thursday run a different big data. And so once they've set that up, everything is automated. And CIOs love automation, and they really are sensitive. Although they're all software guys, they are sensitive to software guys messing up the storage 'cause it could cost them money, right? So that's their concern. We make it easy. >> Absolutely, Eric, you know, it'd be lovely to say that storage is just invisible, I don't need to think about it, but when something goes wrong, you need those experts to be able to dig in. You spent some time talking about automation, so critically important. How about the management layer? You know, you think back, for years it was, vCenter would be the place that everything can plug in. You could have more generalists using it. The HCI waves were people kind of getting away from being storage specialists. Today VMware has, of course vCenter's their main estate, but they have Tanzu. On the IBM and Red Hat side, you know, this year you announced the Advanced Cluster Management. What's that management landscape look like? How does the storage get away from managing some of the bits and bytes and, you know, just embrace more of that automation that you talked about? >> So in the case of IBM, we make sure we can support both. We need to appeal to the storage nerd, the storage geek if you will. The same time to a more generalist environment, whether it be an infrastructure manager, whether it be some of the software guys. So for example, we support, obviously vCenter. We're going to be supporting all of the elements that are going to happen in a container environment that VMware is doing. We have hot integration and big time integration with Red Hat's management framework, both with Ansible, but also in the container space as well. We're announcing some things that are coming again at the end of October in the container space about how we interface with the Red Hat management schema. And so you don't always have to have the storage expert manage the storage. You can have the Red Hat administrator, or in some cases, the DevOps guys do it. So we're making sure that we can cover both sides of the fence. Some companies, this just my personal belief, that as containers become commonplace while the software guys are going to want to still control it, there eventually will be a Red Hat/container admin, just like all the big companies today have VMware admins. They all do. Or virtualization admins that cover VMware and VMware's competitors such as Hyper-V. They have specialized admins to run that. And you would argue, VMware is very easy to use, why aren't the software guys playing with it? 'Cause guess what? Those VMs are sitting on servers containing both apps and data. And if the software guy comes in to do something, messes it up, so what have of the big entities done? They've created basically a virtualization admin layer. 
I think that over time, either the virtualization admins become virtualization/container admins, or if it's a big enough for both estates, there'll be container admins at the Global Fortune 500, and they'll also be virtualization admins. And then the software guys, the devOps guys will interface with that. There will always be a level of management framework. Which is why we integrate, for example, with vCenter, what we're doing with Red Hat, what we do with generic Kubernetes, to make sure that we can integrate there. So we'll make sure that we cover all areas because a number of our customers are very large, but some of our customers are very small. In fact, we have a company that's in the software development space for autonomous driving. They have over a hundred petabytes of IBM Spectrum Scale in a container environment. So that's a small company that's gone all containers, at the same time, we have a bunch of course, Global Fortune 1000s where IBM plays exceedingly well that have our products. And they've got some stuff sitting in VMware, some such sitting in generic Kubernetes, some stuff sitting in Red Hat OpenShift and some stuff still in bare metal. And in some cases they don't want their software people to touch it, in other cases, these big accounts, they want their software people empowered. So we're going to make sure we could support both and both management frameworks. Traditional storage management framework with each one of our products and also management frameworks for virtualization, which we've already been doing. And now management frame first with container. We'll make sure we can cover all three of those bases 'cause that's what the big entities will want. And then in the smaller names, you'll have to see who wins out. I mean, they may still use three in a small company, you really don't know, so you want to make sure you've got everything covered. And it's very easy for us to do this integration because of things we've already historically done, particularly with the virtualization environment. So yes, the interstices of the integration are different, but we know here's kind of the process to do the interconnectivity between a storage management framework and a generic management framework, in, originally of course, vCenter, and now doing it for the container world as well. So at least we've learned best practices and now we're just tweaking those best practices in the difference between a container world and a virtualization world. >> Eric, VMworld is one of the biggest times of the year, where we all get together. I know how busy you are going to the show, meeting with customers, meeting with partners, you know, walking the hallways. You're one of the people that traveled more than I did pre-COVID. You know, you're always at the partner shows and meeting with people. Give us a little insight as to how you're making sure that, partners and customers, those conversations are still happening. We understand everything over video can be a little bit challenging, but, what are you seeing here in 2020? How's everybody doing? >> Well, so, a couple of things. First of all, I already did two partner meetings today. (laughs) And I have an end user meeting, two end user meetings tomorrow. So what we've done at IBM is make sure we do a couple things. One, short and to the point, okay? We have automated tools to actually show, drawing, just like the infamous walk up to the whiteboard in a face to face meeting, we've got that. 
We've also now tried to make sure everybody isn't being overly inundated with WebEx. And by the way, there's already a lot of WebEx anyway. I can think of a meeting I had with a telco, one of the Fortune 300, and this was actually right before Thanksgiving. I was in their office in San Jose, but they had guys in Texas and guys on the East Coast all on. So we're still over WebEx, but it also was a two and a half hour meeting, actually almost a three hour meeting. And both myself and our Flash CTO went up to the whiteboard, which you could then see over WebEx 'cause they had a camera showing up onto the whiteboard. So now you have to take that and use integrated tools. But since people are now, I would argue, over WebEx, there is a different feel to doing the WebEx than when you're doing it face to face. We have to fly somewhere, or they have to fly somewhere, or we even have to drive somewhere, so in between meetings, if you're going to do four customer calls, Stu, as you know, I travel all over the world. So I was in Sweden actually right before COVID. And in one day, the day after we had a launch, we launched our new Flash System products in February on the 11th, on February 12th, I was still in Stockholm and I had two partner meetings and two end user meetings. But the sales guy was driving me around. So in between the meetings, you'd be in the car for 20 minutes or half an hour. It feels different when you do WebEx after WebEx after WebEx with basically no break. So you have to be sensitive to that when you're talking to your partners, sensitive to that when you're talking to the customers, sensitive when you're talking to the analysts, such as you guys, sensitive when you're talking to the press and all your various constituents. So what we've been doing at IBM, really since the COVID thing got started, is coming up with some best practices so we don't overtax the end users and overtax our channel partners. >> Yeah, Eric, the joke I had on that is we're all following the Bill Belichick model now, no days off, just meeting, meeting, meeting every day, you can stack them up, right? You used to enjoy those downtimes in between where you could catch up on a call, do some things. I had to carve out some time to make sure I get to that stack of books that normally I would read in the airports or on flights, you know. I do enjoy reading a book every now and again, so. Final thing, I guess, Eric. Here at VMworld 2020, you know, give us final takeaways that you want your customers to have when it comes to IBM and VMware. >> So a couple of things. A, we are tightly integrated and have been tightly integrated with what they've been doing in their traditional virtualization environment. As they move to containers we'll be tightly integrated with them as well, as well as other container platforms, not just from IBM with Red Hat, but again, generic Kubernetes environments with open source container configurations that don't use IBM Red Hat and don't use VMware. So we want to make sure that we span that. In traditional VMware environments, like with Version 7 that came out, we make sure we support it. In fact, VMware just announced support for NVMe over Fibre Channel. Well, we've been shipping NVMe over Fibre Channel for just under two years now. It'll be almost two years, well, it will be two years in October. So we're sitting here in September, it's almost been two years since we've been shipping that.
But they haven't supported it, so now of course we actually, as part of our launch, I pre say something, as part of our launch, the last week of October at IBM's TechU it'll be on October 27th, you can join for free. You don't need to attend TechU, we'll have a free registration page. So just follow Zoginstor or look at my LinkedIns 'cause I'll be posting shortly when we have the link, but we'll be talking about things that we're doing around V7, with support for VMware's announcement of NVMe over Fibre Channel, even though we've had it for two years coming next month. But they're announcing support, so we're doing that as well. So all of those sort of checkbox items, we'll continue to do as they push forward into the container world. IBM will be there right with them as well because we know it's a very large world and we need to support everybody. We support VMware. We supported their competitors in the virtualization space 'cause some customers have, in fact, some customers have both. They've got VMware and maybe one other of the virtualization elements. Usually VMware is the dominant of course, but if they've got even a little bit of it, we need to make sure our storage works with it. We're going to do the same thing in the container world. So we will continue to push forward with VMware. It's a tight relationship, not just with IBM Storage, but with the server group, clearly with the cloud team. So we need to make sure that IBM as a company stays very close to VMware, as well as, obviously, what we're doing with Red Hat. And IBM Storage makes sure we will do both. I like to say that IBM Storage is a Switzerland of the storage industry. We work with everyone. We work with all these infrastructure players from the software world. And even with our competitors, our Spectrum Virtualized software that comes on our Flash Systems Array supports over 550 different storage arrays that are not IBM's. Delivering enterprise-class data services, such as snapshot, replication data, at rest encryption, migration, all those features, but you can buy the software and use it with our competitors' storage array. So at IBM we've made a practice of making sure that we're very inclusive with our software business across the whole company and in storage in particular with things like Spectrum Virtualize, with what we've done with our backup products, of course we backup everybody's stuff, not just ours. We're making sure we do the same thing in the virtualization environment. Particularly with VMware and where they're going into the container world and what we're doing with our own, obviously sister division, Red Hat, but even in a generic Kubernetes environment. Everyone's not going to buy Red Hat or VMware. There are people going to do Kubernetes industry standard, they're going to use that, if you will, open source container environment with Kubernetes on top and not use VMware and not use Red Hat. We're going to make sure if they do it, what I'll call generically, if they use Red Hat, if they use VMware or some combo, we will support all of it and that's very important for us at VMworld to make sure everyone is aware that while we may own Red Hat, we have a very strong, powerful connection to VMware and going to continue to do that in the future as well. >> Eric Herzog, thanks so much for joining us. Always a pleasure catching up with you. >> Thank you very much. We love being with theCUBE, you guys do great work at every show and one of these days I'll see you again and we'll have a beer. In person. 
>> Absolutely. So, definitely, Dave Vellante and John Furrier send their best, I'm Stu Miniman, and thank you as always for watching theCUBE. (relaxed electronic music)

Published Date : Sep 29 2020


Charlie Giancarlo, Pure Storage | CUBE Conversation, June 2020


 

>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. (intense music) >> Hi, everybody, this is Dave Vellante in theCUBE, and as you know, I've been doing a CEO series, and welcome to the isolation economy. We're here at theCUBE's remote studio, and really pleased to have Charlie Giancarlo, who is the CEO of PureStorage. Charlie, I wish we were face-to-face at Pure Accelerate, but this'll have to do. Thanks for coming on. >> You know, Dave, it's always fun to be face-to-face with you. At Pure Accelerate when we do it in person is great fun, but we do what we have to do, and actually, this has been a great event for us, so appreciate you coming on air with me. >> Yeah, and we're going to chat about that, but I want to start off with this meme that's been going around the internet. I was going to use the wrecking ball. I don't know if you've seen that. It's got the people, the executives in the office building saying, "Eh, digital transformation; "not in my lifetime," complacency, and then this big wrecking ball, the COVID-19. You've probably seen it, but as you can see here, somebody created a survey, Who's leading the digital transformation at your company? The CEO, the CTO, or of course circled is COVID-19, and so we've seen that, right? You had no choice but to be a digital company. >> Well, there's that, and there's also the fact that the CEOs who've been wanting to push a digital transformation against a team that wants to stick with the status quo, it gives the CEO now, and even within our own company in Pure, to drive towards that digital transformation when people didn't really take up the mantle. So no, it's a great opportunity for digital transformation, and of course, the companies that have been doing it all along have been getting ahead during this crisis, and the ones that haven't are having some real trouble. And you and I have had some really interesting conversations. Again, that's, I think, the thing I miss most, not only having you in theCUBE, but the side conversations at the cocktail parties, et cetera. And we've talked about IP, and China, and the history of the US, and all kinds of interesting things there, but one of the things I want to put forth, and I know you guys, Kix especially, has done a lot of work on Tech For Good, but the narrative pre-COVID, PC I guess we'd call it, was really a lot of vitriol toward big tech especially, but you know what? That tech lash... Without tech, where would we be right now? >> Well, just think about it, right? Where would we be without videoconferencing, without the internet, right? We'd be sheltered in place with literally nothing to do, and all business would stop, and of course many businesses that require in-person have, but thank God you can still get goods at your home. You can still get food, you can still get all these things that today is enabled by technology. We've seen this ourselves, in terms of having to make emergency shipments during our first quarter to critical infrastructure to keep things going. It's been quite a quarter. I was saying to my team recently that we had just gotten everyone together in February for our sales kickoff for the year, and it felt like a full year since I had seen them all. >> Well, I had interviewed, I think, is it Mike Fitzgerald, your head of supply chain. >> Yes. >> In March, and he was saying, "No. "We have no disruptions. 
"We're delivering for clients," and we certainly saw that in your results in the quarter. >> Yeah, no, we're very fortunate, but we had been planning for doing our normal business continuity disaster planning, and actually, once we saw COVID in Asia in January we started exercising all those muscles, including pre-shipping product around to depos around the world in case transportation got clogged, which it in fact did. So we were well-prepared, but we're also, I think, very fortunate in terms of the fact that we had a very distributed supply chain. >> Yeah, I mean you guys obviously did a good job. You saw in Dell's earnings they held pretty firm. HPE, on the other hand, really saw some disruption, so congratulations to you and the team on that. So as we think about exiting this isolation economy, we've done work that shows about 44% of CIOs see a U-shaped recovery, but it's very fragmented. It varies by industry. It varies by how digital the organizations are. Are they able to provide physical distancing? How essential are these organizations? And so I'm sure you're seeing that in your customer base as well. How are you thinking about exiting this isolation economy? >> Well, I've certainly resisted trying to predict a U- or a V-shape, because I think there are many more unknowns than there are knowns, and in particular, we don't know if there's a second wave. If there is a second wave, is it going to be more or less lethal than the first wave? And as you know, maybe some of your audience knows, I contracted COVID in March. So I've done a lot of reading on not just COVID, but also on the Spanish flu of 1918-1919. It's going to take a while before this settles down, and we don't know what it's going to look like the rest of the year or next year. So a lot of the recovery is going to depend on that. What we can do, however, is make sure that we're prepared to work from home, work in the office, that we make sure that our team out in the field is well-placed to be able to support our customers in the environment, and the way that we're incenting our overall team now has less to do with the macro than it does with our specific segment, and what I mean by that is we're incenting our team to continue to build market share, and to continue to outperform our competition as we go forward, and also on our customer satisfaction figure, which you know is our Net Promoter Score, which is the highest in the industry. So that's how we're incenting our team. >> Yeah, and we're going to talk about that, and by the way, yes, I did know, and it's great to see you healthy, and I'd be remiss if I didn't also express my condolences, Matt, the loss of Matt Danziger, your head of IR, terrible tragedy. Of course Matt had some roots in Boston, went to school in Maine. >> Yeah. >> Loved Cape Cod, and so really sad loss, I'm sure, for all of the Puritans. >> It's affected us all very personally, because Matt was just an incredible team member, a great friend, and so young and vital. When someone that young dies for almost unexplainable reasons. It turned out to be a congenital heart condition that nobody knew about, but it just breaks... It just breaks everyone's heart, so thank you for your condolences. I appreciate it. >> You're welcome. Okay, so let's get into the earnings a little bit. I want to just pull up one of the charts that shows roughly, I have approximately Q1 because some companies like NetApp, Dell, HPE, are sort of staggered, but the latest results you saw IBM growing at 19%. 
Now we know that was mainframe-driven in a very easy compare. Pure plus 12, and then everybody else in the negative. Dell, minus five, so actually doing pretty well relative to NetApp and HPE, who, as I said, had some challenges with deliveries. But let's talk about your quarter. You continue to be the one sort of shining star in the storage business. Let's get into it. What are your big takeaways that you want us to know about? >> Well, of course I'd rather see everybody in the black, right, everybody in the positive, but we continue to take market share and continue to grow 20 to 30% faster than the rest of the industry combined, and it's quarter after quarter. It's not just a peak in one quarter and then behind in another quarter. Every quarter we're ahead of the rest of the industry, and I think the reasoning is really quite straightforward. We're the one company that invests in storage as if it's high technology. You do hear quite often, and even among some customers, that storage is commoditized, and all of our competitors invest in it, or don't invest in it, as if it's a commoditized market. Our view is quite straightforward. The science and the engineering of computing and data centers continues to evolve, continues to advance, has to advance if we continue down this path of becoming more of a digital economy. As we all know, processors advance in speed and capability. Networking advances in terms of speed and capability. Well, data storage is a third of data center spend, and if it doesn't continue to advance at the same pace or faster than everything else, it becomes a major bottleneck. We've been the innovator. If you look at a number of different studies, year after year, now over six or seven years, we are the leader in innovation in the data storage market, and we're being rewarded for that by penetrating more and more of the customer base. >> All right, let's talk about that. And you mentioned in your keynote at Accelerate that you guys spend more on R&D as a percentage of revenue than anybody, and so I want to throw out some stats. I'm sorry, folks, I don't have a slide on this. HPE spends about 1.8 billion a year on R&D, about 6% of revenues. IBM, I've reported on IBM and how it's spending the last 10 years, spent a huge amount on dividends and stock buybacks, and they spent six billion perpetually on R&D, which is now 8% of revenue. Dell at five billion. Of course Dell used to spend well under a billion before the EMC acquisition. That's about 6% of revenue. And NetApp, 800 million, much higher. They're a pure play, about 13%. Pure spends 430 million last year on R&D, which is over 30% of revenue on R&D, to your point. >> Yeah, yeah, well, as I said, we treat it like it's high technology, which it is, right? If you're not spending at an appropriate level you're going to fall behind, and so we continue to advance. I will say that you mentioned big numbers by the other players, but I was part of a big organization as well with a huge R&D budget, but what matters is what percent of the revenue of a specific area are you spending, right? You mentioned Dell and VMware. A very large fraction of their spend is on VMware. Great product and great company, but very little is being spent in the area of storage. >> Well, and the same thing's true for IBM, and I've made this point. In fact, I made this point about Snowflake last week in my breaking analysis. How is Snowflake able to compete with all these big whales? And the same thing for you guys. 
Every dime you spend on R&D goes to making your storage products better for your customers. Your go-to-market, same thing. Your partner ecosystem, same thing, and so you're the much more focused play. >> Right, well I think it boils down to one very simple thing, right? Most of our competitors are, you might call them one-stop shops, so the shopping mall of IT gear, right? The Best Buy, if you will, of information technology. We're really the sole best of breed player in data storage, right, and if you're a company that wants two vendors, you might choose one that's a one-stop shop. If you have the one-stop shop, the next one you want is a best of breed player, right? And we fill that role for our customers. >> Look it, this business is a technology business, and technology and innovation is driven by research and development, period, the end. But I want to ask you, so the storage business generally, look, you're kind of the one-eyed man in the land of the blind here. I mean the storage business has been somewhat on the back burner. In part it's your fault because you put so much flash into the data center, gave so much headroom that organizations didn't have to buy spindles anymore to get to performance, the cloud has also been a factor. But look, last decade was a better decade for storage than the previous decade when you look at the exits that you guys had and escape velocity, Nutanix, if you can kind of put them in there, too. Much larger than say the Compellents or 3PARs. They didn't make it to a billion. So my question is storage businesses, is it going to come back as a growth business? Like you said, you wish everybody were in the black here. >> Right, well a lot of what's being measured, of course, is enterprise on-prem storage, right? If we add on-prem and cloud, it actually continues to be a big growth business, because data is not shrinking. In fact, data is still growing faster than the price reduction of the media underneath, right, so it's still growing. And as you know, more recently we've introduced what we call Pure as-a-Service and Cloud Block Store. So now we have our same software, which we call Purity, that runs on our on-prem arrays, also running on AWS, and currently in beta on Azure. So from our point of view this is a... First of all, it's a big market, about $30 to $40 billion total. If you add in cloud, it's another $10 to $15 billion, which is a new opportunity for us. Last year we were about 1.65 billion. We're still less than, as you know, less than 10% of the overall market. So the opportunity for us to grow is just tremendous out there, and whether or not total storage grows, for us it's less important right now than the market share that we pick up. >> Right, okay, so I want to stay on that for a minute and talk about... I love talking about the competition. So what I'm showing here with this kind of wheel slide is data from our data partner ETR, and they go out every quarter. They have a very simple methodology. It's like Net Promoter Score, and it's very consistent. They say relative to last year, are you adopting the platform, that's the lime green, and so this is Pure's data. Are you increasing spend by 6% or more? That's the 32%, the forest green. Is spending going to be flat? Is it going to decrease by more than 6%? That's the 9%. And then are you replacing the platform, 2%. Now this was taken at the height of the US lockdown. This last survey. >> Wow. 
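(For readers not familiar with the metric, the Net Score arithmetic Dave is walking through reduces to a simple sum: the share of customers adopting or increasing spend minus the share decreasing or replacing. The formula is inferred from his description, and the adoption and flat shares, which he does not quote, are placeholders chosen only to be consistent with the roughly 40% figure cited next.)

```python
# Net Score as described in the conversation: spenders up minus spenders down.
# Inputs are percentages of respondents; the 'flat' bucket does not move the score.
def net_score(adoption, increase, flat, decrease, replacing):
    return (adoption + increase) - (decrease + replacing)

# Quoted buckets for Pure: increase 32%, decrease 9%, replacing 2%.
# Adoption and flat are not quoted here, so these values are placeholders.
pure = dict(adoption=19, increase=32, flat=38, decrease=9, replacing=2)
print(net_score(**pure))  # 40, in line with the ~40% Net Score mentioned next
```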
>> So you can see the vast majority of customers are either keeping spending the same, or they're spending more. >> Yeah. >> So that's very, very strong. And I want to just bring up another data point, which is we like to plot that Net Score here on the vertical axis, and then what we call market share. It's not like IDC market share, but it's pervasiveness in the survey. And you can see here, to your point, Pure is really the only, and I've cited the other vendors on the right hand, that box there, you're the only company in the green with a 40% Net Score, and you can see everybody else is well below the line in the red, but to your point, you got a long way to go in terms of gaining market share. >> Exactly, right, and the reason... I think the reason why you're seeing that is really our fundamental and basic value is that our product and our company is easy to do business with and easy to operate, and it's such a pleasure to use versus the competition that customers really appreciate the product and the company. We do have a Net Promoter Score of over 80, which I think you'd be hard-pressed to find another company in any industry with Net Promoter Scores that high. >> Yeah, so I want to stay on the R&D thing for a minute, because you guys bet the company from day one on simplicity, and that's really where you put a lot of effort. So the cloud is vital here, and I want to get your perspective on it. You mentioned your Cloud Block Store, which I like that, it's native to AWS. I think you're adding other platforms. I think you're adding Azure as well, and I'm sure you'll do Google. >> Azure, Azure's in beta, yes. >> Yeah, Google's just a matter of time. Alibaba, you'll get them all, but the key here is that you're taking advantage of the native services, and let's take AWS as an example. You're using EC2, and high priority instances of EC2, as an example, to essentially improve block storage on Amazon. Amazon loves it because it sells Compute. Maybe the storage guys in Amazon don't love it so much, but it's all about the customer, and so the native cloud services are critical. I'm sure you're going to do the same thing for Azure and other clouds, and that takes a lot of investment, but I heard George Kurian today addressing some analysts, talking about they're the only company doing kind of that cloud native approach. Where are you placing your bets? How much of it is cloud versus kind of on-prem, if you will? >> Yeah, well... So first of all, an increasing fraction is cloud, as you might imagine, right? We started off with a few dozen developers, and now we're at many more than that. Of course the majority of our revenue still comes from on-prem, but the value is the following in our case, which is that we literally have the same software operating, from a customer and from a application standpoint. It is the same software operating on-prem as in the cloud, which means that the customer doesn't have to refactor their application to move it into the cloud, and we're the one vendor that's focused on block. What NetApp is doing is great, but it's a file-based system. It's really designed for smaller workloads and low performance workloads. Our system's designed for high performance enterprise workloads, Tier 1 workloads in the cloud. To say that they're both cloud sort of washes over the fact that they're almost going after two completely separate markets. >> Well, I think it's interesting that you're both really emphasizing cloud native, which I think is very important. 
I think that some of the others have some catching up to do in that regard, and again, that takes a big investment in not just wrapping your stack, and shoving it in the cloud, and hosting it in the cloud. You're actually taking advantage of the local services. >> Well, I mean one thing I'll mention was Amazon gave us an award, which they give to very few vendors. It's called the Well-Architected AWS Award, because we've designed it not to operate, let's say, in a virtualized environment on AWS. We really make use of the native AWS EC2 services. It is designed like a web service on EC2. >> And the reason why this is so important is just, again, to share with our audience is because when you start talking about multi-cloud and hybrid cloud, you want the same exact experience on-prem as you do in the cloud, whether it's hybrid or across clouds, and the key is if you're using cloud native services, you have the most efficient, the highest performance, lowest latency, and lowest cost solution. That is going to be... That's going to be a determinate of the winner. >> Yes, I believe so. Customers don't want to be doing... Be working with software that is going to change, fundamentally change and cause them to have to refactor their applications. If it's not designed natively to the cloud, then when Amazon upgrades it may cause a real problem with the software or with the environment, and so customers don't want that. They want to know they're cloud native. >> Well, your task over the next 10 years is something. Look it, it's very challenging to grow a company the size of Pure, period, but let's face it, you guys caught EMC off-guard. You were driving a truck through the Symmetrics base and the VNX base. Not that that was easy. (chuckling) And they certainly didn't make it easy for ya. But now we've got this sort of next chapter, and I want to talk a little bit about this. You guys call it the Modern Data Experience. You laid it out last Accelerate, kind of your vision. You talked about it more at this year's Accelerate. I wonder if you could tell us the key takeaways from your conference this year. >> Right, the key takeaway... So let me talk about both. I'll start with Modern Data Experience and then key takeaways from this Accelerate. So Modern Data Experience, for those that are not yet familiar with it, is the idea that an on-prem experience would look very similar, if not identical, to a cloud experience. That is to say that applications and orchestrators just use APIs to be able to call upon and have delivered the storage environment that they want to see instantaneously over a high speed network. The amazing thing about storage, even today, is that it's highly mechanical, it's highly hardware-oriented to where if you have a new application and you want storage, you actually have to buy an array and connect it. It's physical. Where we want to be is just like in the cloud. If you have a new application and you want storage or you want data services, you just write a few APIs in your application and it's delivered immediately and automatically, and that's what we're delivering on-prem with the Modern Data Experience. What we're also doing, though, is extending that to the cloud, and with Cloud Block Store as part of this, with that set of interfaces and management system exactly the same as on-prem, you now have that cloud experience across all the clouds without having to refactor applications in one or the other. So that's our Modern Data Experience. That's the vision that drives us. 
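(As a hedged illustration of "storage delivered by an API call" rather than by racking hardware, here is the same pattern expressed against AWS's own block-storage interface via boto3. It only demonstrates the idea Giancarlo describes; it says nothing about how Cloud Block Store is implemented internally.)

```python
# Illustration only: an application or orchestrator asks for a block volume
# programmatically and gets it back in seconds, no physical array involved.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,            # GiB
    VolumeType="gp3",
)
print(volume["VolumeId"], volume["State"])

# The Modern Data Experience vision applies the same pattern on-prem:
# request data services through an API and have them delivered immediately.
```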
We've delivered more and more against it starting at the last Accelerate, but even more now. Part of this is being able to deliver storage that is flexible and able to be delivered by API. On this Accelerate we delivered our Purity 6.0 for Flash Array, which adds not only greater resiliency characteristics, but now file for the first time in a Flash Array environment, and so now the same Flash Array can deliver both file and block. Which is a unified experience, but all delivered by API and simple to operate. We've also delivered, more recently, Flash Array 3.0... I'm sorry, Purity 3.0 on FlashBlade that delivers the ability for FlashBlade now to have very high resiliency characteristics, and to be able to even better deliver the ability to restore applications when there's been a failure of their data systems very, very rapidly, something that we call Rapid Restore. So these are huge benefits. And the last one I'll mention, Pure as-a-Service allows a customer today to be able to contract for storage as a service on-prem and in the cloud with one unified subscription. So they only pay for what they use. They only pay for what they use when they use it, and they only pay for it, regardless of where it's used, on-prem or in the cloud, and it's a true subscription model. It's owned and operated by Pure, but the customer gets the benefit of only paying for what they use, regardless of where they use it. >> Awesome, thanks for that run through. And a couple other notes that I had, I mean you obviously talked about the support for the work from home and remote capabilities. Automation came up a lot. >> Yep. >> You and I, I said, we have these great conversations, and one of the ones I would have with you if we were having a drink somewhere would be if you look at productivity stats in US and Europe, they're declining-- >> Yes. >> Pretty dramatically. And if you think about the grand challenges we have, the global challenges, whether it's pandemics, or healthcare, or feeding people, et cetera, we're not going to be able to meet those challenges without automation. I mean people, for years, have been afraid of automation. "Oh, we're going to lose jobs." We don't have enough people to solve all these problems, and so I think that's behind us, right-- >> Yeah, I agree. >> The fear of automation. So that came up. Yeah, go ahead, please. >> I once met with Alan Greenspan. You may remember him. >> Of course. >> This is after he was the chairman, and he said, "Look, I've studied the economies now "for the last 100 years, "and the fact of the matter is "that wealth follows productivity." The more productive you are as a society, that means the greater the wealth that exists for every individual, right? The standard of living follows productivity, and without productivity there's no wealth creation for society. So to your point, yeah, if we don't become more productive, more efficient, people don't live better, right? >> Yeah, I knew you'd have some good thoughts on that, and of course, speaking of Greenspan, we're seeing a little bit of rational exuberance maybe in the market. (chuckling) Pretty amazing. But you also talked about containers, and persisting containers, and Kubernetes, the importance of Kubernetes. That seems to be a big trend that you guys are hopping on as well. >> You bet. It is the wave of the future. Now, like all waves of the future, it's going to take time. 
Containers work entirely differently from VMs and from machines in terms of how they utilize resources inside a data center environment, and they are extraordinarily dynamic. They require the ability to build up, tear down connections to storage, and create storage, and spin it down at very, very rapid rates, and again, it's all API-driven. It's all responsive, not to human operators, but it's got to be responsive to the application itself and to the orchestration environment. And again, I'll go back to what we talked about with our Modern Data Experience. It's exactly the kind of experience that our customers want to be able to be that responsive to this new environment. >> My last question is from John Furrier. He asked me, "Hey, Charlie knows a lot about networking." We were talking about multi-cloud. Obviously cross-cloud networks are going to become increasingly important. People are trying to get rid of their MPLS networks, really moving to an SD-WAN environment. Your thoughts on the evolution of networking over the next decade. >> Well, I'll tell you. I'm a big believer that even SD-WANs, over time, are going to become obsolete. Another way to phrase it is the new private network is the internet. I mean look at it now. What does SD-WAN mean when nobody's in the local office, right? No one's in the remote office; they're all at home. And so now we need to think about the fact... Sometimes it's called Zero Trust. I don't like that term. Nobody wants to talk about zero anything. What it really is about is that there is no internal network anymore. The fact of the matter is even for... Let's say I'm inside my own company's network. Well, do they trust my machine? Maybe not. They may trust me but not my machine, and so what we need to have is going to a cloud model where all communication to all servers goes through a giant, call it a firewall or a proxy service, where everything is cleaned before it's delivered. People, individuals only get, and applications, only get access to the applications that they're authorized to use, not to a network, because once they're in the network they can get anywhere. So they should only get access to the applications they're able to use. So my personal opinion is the internet is the future private network, and that requires a very different methodology for authentication for security and so forth, and if we think that we protect ourselves now by firewalls, we have to rethink that. >> Great perspectives. And by the way, you're seeing more than glimpses of that. You look at Zscaler's results recently, and that's kind of the security cloud, and I'm glad you mentioned that you don't like that sort of Zero Trust. You guys, even today, talked about near zero RPO. That's an honest statement-- >> Right. >> Because there's no such thing as zero RPO. (chuckling) >> Right, yeah. >> Charlie, great to have you on. Thanks so much for coming back in theCUBE. Great to see you again. >> Dave, always a pleasure. Thank you so much, and hopefully next time in person. >> I hope so. All right, and thank you for watching, everybody. This is Dave Vellante for theCUBE, and we'll see you next time. (smooth music)

Published Date : Jun 16 2020


BA: Most CIOs Expect a U Shaped COVID Recovery


 

(upbeat music) >> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube Conversation. >> As we've been reporting, the COVID-19 pandemic has created a bifurcated IT spending picture. And over the last several weeks, we've reported both on the macro and we've even come at it from a vendor and a sector view. I mean, for example, we've reported on some of the companies that have really continued to thrive, we look at the NASDAQ and it's near an all-time high. Companies like Okta and CrowdStrike, we've reported on Snowflake, UiPath. The sectors: RPA, some of the analytic databases around AI, maybe even to a lesser extent cloud, which still has a lot of tailwinds relative to some of those on-prem infrastructure plays. Even companies like Cisco, bifurcated in and of themselves, where you see one side of the house, the work-from-home stuff, doing quite well, but maybe some of the traditional networking not as much. Well, now what if you flip that to really try to understand what's going on with the shape of the recovery, which is the main narrative right now. Is it a V shape? Is it a U shape? What do people expect? And now to understand that, you really have to look at different industries because different industries are going to come back at a different pace. With me again is Sagar Kadakia, who's the Director of Research at ETR. Sagar, you guys are all over this, as usual with timely information, it's great to see you again. Hope all is well in New York City. >> Thanks so much David, it's a pleasure to be back on again. >> Yeah, so where are we in the cycle? You've done a great job and very timely, ETR was the first to really put out data on the COVID impact with the survey that ran from mid-March to mid-April. And now everybody's attention, Sagar, is focused on, okay, we've started to come back, stores are starting to open, people are beginning to go out again and everybody wants to know what the shape of the recovery looks like. So, where are we actually in that research cycle for you guys? >> Yeah, no problem. So, like you said, in that kind of March, April timeframe, we really wanted to go out there and get an idea of what the budget impacts were going to be as it relates to IT because of COVID-19, right? So, we kind of ended off there around a decline of 5%. And coming into the year, the consensus was a growth of 4% or 5%, right? So, we saw about a 900 or 1,000 basis point swing to the negative side. And the other (murmurs) topic we covered in March and April was which sectors and vendors were going to benefit as a result of work-from-home. And so, now as we kind of fast forward to the research cycle, as we kind of go more into May and into the summer, rather than asking those exact same questions again, because it's just been maybe 40 or 50 days, we really want to (murmurs) on the recovery type as well as kind of more emerging private vendors, right? We wanted to understand what's going to be the impact on these vendors that typically rely on larger conferences, more in-person meetings, because these are younger technologies. There's not a lot of information about them. And so, last Thursday we launched our biannual emerging technology study. It covers roughly 300 private emerging technologies across maybe 60 sectors of technology. And in tandem, we've launched a COVID Flash Poll, right? What we wanted to do was kind of twofold.
One, to really understand from CIOs the recovery type they had in mind, as well as whether they were seeing any kind of permanent changes in their IT stacks and IT spend because of COVID-19. And so, if we kind of look at the first chart here, and kind of get more into that first question around recovery type, what we asked CIOs in this kind of COVID Flash Poll, again, we did it last Thursday, was: what type of recovery are you expecting? Is it V-shaped, so kind of a brief decline, maybe one quarter, and then you're going to start seeing growth into the second half of 2020? Is it U-shaped? So two to three quarters of a decline or deceleration in revenue, and you're kind of forecasting that growth in revenue as an organization to come back in 2021. Is it L-shaped, right? So, maybe three, four or five quarters of a decline or deceleration, and very minimal to moderate growth. Or none of the above: your organization is actually benefiting from COVID-19, as we've seen in so many reports. So, those are kind of the options that we gave CIOs and you kind of see them in the first chart here. >> Well, interesting. And this is a flash survey, 700 CIOs approximately. And the interesting thing I really want to point out here is, the COVID pandemic, it didn't suppress all companies, and the return is not going to be a rising tide that lifts all ships. You really got to do your research. You have to understand the different sectors, really try to peel back the onion skin and understand why there is certain momentum, how certain organizations are accommodating the work from home. We heard several weeks ago how there's a major change in networking mindsets, we're talking about how security is changing. We're going to talk about some of the permanent changes, but it's really, really important to try to understand these different trends by different industries, which we're going to talk about in a minute. But if you take a look at this slide, I mean, obviously most people expect this U-shape decline, I mean, U-shape recovery rather. So it's two or three quarters followed by some growth next year. But as we'll see, some of these industries are going to really go deeper with an L-shape recovery. And then it's really interesting that a pretty large and substantial portion see this as a tailwind, presumably those with strong SaaS models, annual recurring revenue models, your thoughts? >> If we kind of start on this kind of aggregate chart, you're looking at about 44% of CIOs anticipating a U-shaped recovery, right? That's the largest bucket. Then you can see another 15% anticipate an L-shaped recovery, 14% the V-shaped, and then 16%, to your point, that are kind of seeing this tailwind. But if we kind of focus on that largest bucket, that U-shape, one of the things to remember, and again, when we asked this of CIOs within this kind of COVID Flash Poll, we also asked, can you give us some commentary? And so, one of the things, or one of the themes, that is kind of coming along with this U-shape recovery is that CIOs are cautiously optimistic about this U-shape recovery. They believe that they can get back onto a growth cycle into 2021, as long as there's a vaccine available, we don't go into a second wave of lockdowns, economic activity picks up, and a lot of the government actions become effective. So there are some, let's call them qualifiers, with this bucket of CIOs that are anticipating a U-shape recovery. What they're saying is that, "look, we are expecting these things to happen, "we're not expecting a lockdown, "we are expecting a vaccine.
"And if that takes place, "then we do expect an uptake in growth, "or going back to kind of pre COVID levels in 2021." But I think it's fair to assume that if one or more of these are ups and things do get worse as all these States are opening up, maybe the recovery cycle gets pushed along. So kind of at the aggregate, this is where we are right now. >> Yeah. So as I was saying, you really have to understand the different, not only different sectors not only the different vendors, but you can really get to look into the industries, and then even within industries. So if we pull up the next chart, we have the industry sort of break down, and sort of the responses by the industry's V-shape, U-shape or L-shape. I had a conversation with a CIO of a major resort, just the other day. And even he was saying, well, it was actually, I'll tell you it was Wyndham Resorts, public company. I mean, and obviously that business got crushed. They had their earnings call the other day. They talked about how they cut their capex in half. But the stock, Sagar, since the March loss is more than doubled. >> Yeah. >> It was just amazing. And now, but even there, within that sector, they're appealing that on you are doing well, certain parts are going to come back sooner, certain parts are going to take longer, depending on, what type of resort, what type of hotel. So, it really is a complicated situation. So, take us through what you're seeing by industry. >> Yeah, sure. So let's start with kind of the IT-Telco, retail, consumer space. Dave to your point, there's going to be a tremendous amount of bifurcation within both of those verticals. Look, if we start on the IT-Telco side, you're seeing a very large bucket of individuals, right over 20%? That indicated they're seeing a tailwind or additional revenue because of COVID-19 and Dave, we spoke about this all the way back in March, right? All these work from home vendors. CIOs were doubling down on Cloud and SAS and we've seen how some of these vendors have reported in April, with very good reports, all the major Cloud vendors, right? Like Select Security vendors. And so, that's why you're seeing on the kind of Telco side, definitely more positivity, right? As you relates to recovery type, right? Some of them are not even going through recovery. They're seeing an acceleration, same thing on the retail consumer side. You're seeing another large bucket of people who are indicating, "look, we've benefited." And again, there's going to be a lot of bifurcation, there's been a lot of retail consumers. You just mentioned with the hotel lines, that are definitely hurting. But if you have a good online presence as a retailer, and you had essential goods or groceries, you benefited. And those are the organizations that we're seeing really indicate that they saw an acceleration due to COVID-19. So, I thought those two verticals between kind of the IT and retail side, there was a big bucket of people who indicated positivity. So I thought that was kind of the first kind of as we talked about kind of feeling this onion back. That was really interesting. >> Tech continues to power on, and I think a lot of people try, I think somebody was saying that the record time in which we've developed a vaccine previously was like mumps or something. I mean, it was just like years. But now today, 2020, we've got AI, we've got all this data, you've got these great companies all working on this. And so, wow, if we can compress that, that's going to change the equation. 
A couple of other things Sagar that jump out at me here in this chart that I want to ask you about. I mean, the education, the colleges, are really kind of freaking out right now, some are coming back. I know, like for instance, my daughter at University of Arizona, they're coming back in the fall indefinitely, others are saying, no. You can clearly see the airlines and transportation, has the biggest sort of L-shape, which is the most negative. I'm sure restaurants and hospitality are kind of similar. And then you see energy which got crushed. We had oil (laughs) negative people paying it, big barrels of oil. But now look at that, expectation of a pretty strong, U-shape recovery as people start driving again, and the economy picks up. So, maybe you could give us some thoughts on some of those sort of outliers. >> Yeah. So I kind of bucket the next two outliers as from an L-shaped and a U-shaped. So on the L-shaped side, like you said, education airlines, transportation, and probably to a little bit lesser extent, industrials materials, manufacturing services consulting. These verticals are indicating the highest percentages from an L-shaped recovery, right? So, three plus 1/4 of revenue declines in deceleration, followed by kind of minimal to moderate growth. And look, there's no surprise here. Those are the verticals that have been impacted the most, by less demand from consumers and businesses. And then as you mentioned on the energy utility side, and then I would probably bucket maybe healthcare, pharma, those have some of the largest, percentages of U-shaped recovery. And it's funny, like I read a lot of commentary from some of the energy and the healthcare CIOs, and they were saying they were very optimistic (laughs) about a U-shaped type of recovery. And so it kind of, maybe with those two issues that we could even kind of lump them into, probably to a lesser extent, but you could probably lump it into the prior one with the airlines and the education and services consulting, and IMM, where these are definitely the verticals that are going to see the longest, longest recoveries. And it's probably a little bit more uniform, versus what we've kind of talked about a few minutes ago with IT and retail consumer where it's definitely very bifurcated. There's definitely winners and losers there. >> Yeah. And again, it's a very complicated situation. A lot of people that I've talked to are saying, "look, we really don't have a clear picture, "that's why all these companies are not giving guidance." Many people, however, are optimistic only for a vaccine, but also their thinking is young people with disposable income, they're going to kind of say,"Damn the torpedoes, "I'm not really going to be exposed." >> And they could come back much stronger, there seems to be pent up demand for some of the things like elective surgery, or even some other sort of more important, healthcare needs. So, that obviously could be a snapback. So, obviously we're really closely looking at this, one thing though is certain, is that people are expecting a permanent change, and you've got data that really shows that on the next chart. >> That's right. So, one of the last questions that we ask kind of this quick COVID Flash Poll was, do you anticipate permanent changes to your kind of IT stack, IT spend, based on the last few months? 
As everyone has been working remotely, and rarely do you see results point this much in one direction, but 92% of CIOs and kind of high level IT end users indicated yes, there are all going to be permanent changes. And one of the things we talked about in March, and look, we were really the first ones, in our discussion, where we were talking about work from home spend, kind of negating or bouncing out all these declines, right? We were saying, look, yes, we are seeing a lot of budgets come down, but surprisingly, we're seeing 20,30% of organizations accelerate spend. And even the ones that are spending less, even them, some of their budgets are kind of being negated by this work from home spend, right? When you think about collaboration tools and additional VPN and networking bandwidth, and laptops and then security, all that stuff. CIOs now continue to spend on, because what CIOs now understand is productivity has remained at very high levels, right? In March CIOs were very concerned with the catastrophe and productivity that has not come true. So on the margin CIOs and organizations are probably much more positive on that front. And so now, because there is no vaccine, where we know CIOs and just in general, the population, we don't know when one is coming. And so remote work seems to be the new norm moving forward, especially that productivity levels are pretty good with people working from home. So, from that perspective, everything that looked like it was maybe going to be temporary, just for the next few months, as people work from home, that's how organizations are now moving forward. >> Well, and we saw Twitter, basically said, "we're going to make work from home permanent." That's probably because their CEO wants to live in Africa. Google, I think, is going to the end of the year. >> I think many companies are going to look at a hybrid, and give employees a choice, say, "look, if you want to work from home "and you can be productive, you get your stuff done, we're cool with that." I think the other point is, everybody talks about these digital transformations leading into COVID. I got to tell you, I think a lot of companies were sort of complacent. They talk the talk, but they weren't walking the walk, meaning they really weren't becoming digital businesses. They really weren't putting data at the core. And I think now it's really becoming an imperative. And there's no question that what we've been talking about and forecasting has been pulled forward, and you're either going to have to step up your digital game or you're going to be in big trouble. And the other thing I'm really interested in is will companies sub-optimize profitability in the near term, in order to put better business resiliency in place, and better flexibility, will they make those investments? And I think if they do, longer term, they're going to be in better shape. If they don't, they could maybe be okay in the near term, but I'm going to put up a caution sign, although the longer term. >> Now look, I think everything that's been done in the last few months, in terms of having those continuation plans, due to pandemics and all that stuff, look, you got to have that in your playbook, right? And so to your point, this is where CIOs are going and if you're not transforming yourself or you didn't before, lesson learned, because now you're probably having to move twice as fast to support all your employees. 
So I think this pandemic really sped up digital transformation initiatives, which is why you're seeing some companies, SaaS and cloud-related companies, with very good earnings reports that are guiding well. And then you're seeing other companies that are pulling their guidance because of uncertainty, but it's likely more because they're just not seeing the same levels of spend, because they haven't oriented themselves on that digital transformation side. So I think events like this typically showcase winners and losers more than when things are going well and everything's kind of going up. >> Well, I think that too. There's a big discussion around whether the S&P is overvalued right now. I won't make that call, but I will say this: there's a lot of data out there. There's data in earnings reports, there's data about this pandemic, which continues to change. Maybe not so much daily, but we're getting new information multiple times a week. So you've got to look to that data. You've got to make your call, pick your spots. Earlier you talked about a stock picker's market, and I think that's very much true here. There are going to be some really strong companies emerging out of this. Don't gamble, but do your research, and I think you'll find some gems out there; maybe Warren Buffett can't find them, okay. (laughs) But the folks on Main Street might. I'm optimistic. I wonder how you feel about the recovery. I may be tainted by tech. (laughs) I'm very much concerned about certain industries, but I think the tech industry, which is our business, is going to come out of this pretty strong. >> Yeah. Look, the one thing, and we should have stated this earlier, is that the majority of organizations are not expecting a V-shaped recovery. And yet I still think part of the consensus is expecting a V-shaped recovery. As we demonstrated in some of the earlier charts, the majority of organizations are expecting a U-shaped recovery. And even then, as we mentioned, there is some cautious optimism around that U-shape, and I have it, you probably have it, where yes, if everything goes well, it looks like in 2021 we can really get back on track. But there's so much unknown. And so yes, that does give, I think, everyone pause when it comes to an investment perspective, and even just bringing on technologies into your organization, right? Which ones are going to work, which ones aren't? So, I'm definitely on the boat of, this is more of a U-shaped than a V-shaped recovery, and I think the data backs that up. When it comes to Cloud and SaaS players, those areas, and I think you've seen this on the investment side, a lot of money has come out of all these other sectors that we mentioned that are having these L-shaped recoveries, and a lot of it has gone into the tech space. I imagine that will continue. And so it's tough sometimes to balance what's going on, on the investment and stock market side, with how organizations are recovering. I think people are really looking out two, three quarters and saying, to your point you made earlier, is there a lot of that pent-up demand, are things going to get right back to normal? Because I think a lot of people are anticipating that.
And if we don't see that, I think the next time we do some of these kind of COVID Flash Polls I'm interested to see whether or not, maybe towards the end of the summer, these recovery cycles are actually longer because maybe we didn't see some of that stuff. So there's still a lot of unknowns. But what we do know right now is it's not a V-shaped recovery. >> I agree, especially on the unknowns, there's monetary policy, there's fiscal policy, there's an election coming up. >> That's fine. >> There's escalating tensions with China. There's your thoughts on the efficacy of the vaccine? what about therapeutics? Do people who've had this get immunity? How many people actually have it? What about testing? So the point I'm making here is it's very, very important that you update your forecast regularly That's why it's so great to have this partnership with you guys, because you're constantly updating the numbers. It's not just a one shot deal. So Sagar, thanks so much for coming on. I'm looking forward to having you on in the coming weeks. Really appreciate it. >> Absolutely. Yeah, we'll really start kind of digging into how a lot of these emerging technologies are fairing because of COVID-19. So, I'm actually interested to start digging through the data myself. So yeah, we'll do some reporting in the coming weeks about that as well. >> Well, thanks everybody for watching this episode of theCUBE Insights powered by ETR. I'm Dave Vellante for Sagar Kadakia, check out etr.plus, that's where all the ETR data lives, I publish weekly on wikibond.com and siliconangle.com. And you can reach me @dvellante. We'll see you next time. (gentle music).

Published Date : May 21 2020


Gabriel Chapman, Pure Storage | Virtual Vertica BDC 2020


 

>> Yeah, it's theCUBE, covering the Virtual Vertica Big Data Conference 2020. Brought to you by Vertica. >> Hi, everybody, and welcome to this CUBE special presentation of the Vertica Virtual Big Data Conference. theCUBE is running in parallel with day one and day two of the Vertica Big Data event. By the way, theCUBE has been at every single Big Data event, and it's our pleasure to be here in the virtual slash digital event as well. Gabriel Chapman is here. He's the Director of FlashBlade Product Solutions Marketing at Pure Storage. Great to see you. Thanks for coming on. >> Great to see you too. How's it going? >> It's going very well. I mean, I wish we were meeting in Boston at the Encore Hotel, but, you know, hopefully we'll be able to meet at Accelerate at some point in the future, or at one of the regional sub-shows that you guys are doing, because we've been covering that show as well. But I really want to get into it. At the last Accelerate, in September 2019, Pure and Vertica announced a partnership. I remember Joy ran up to me and said, "Hey, you've got to check this out. The separation of compute and storage with Eon Mode, now available on FlashBlade." And, I believe, still the only company that can support that separation and independent scaling both on-prem and in the cloud. So I want to ask, what were the trends in analytical databases and cloud that led to this partnership? >> You know, realistically, I think what we're seeing is that there's been a larger shift when it comes to modern analytics platforms, moving away from the traditional, you know, Hadoop-type architecture that leveraged a lot of direct-attached storage, primarily because of the limitations of how that solution was architected. When we look at the larger trends toward how organizations want to do this type of work on premises, they're looking at solutions that allow them to scale the compute and storage pieces independently, and therefore, you know, the FlashBlade platform ended up being a great solution to support Vertica in their transition to Eon Mode, leveraging it essentially as an S3 object store. >> Okay, so let's circle back on that. In your announcement of FlashBlade, you make the claim that FlashBlade is the industry's most advanced file and object storage platform ever. That's a bold statement. So defend that, what... >> I'd like to go beyond that and just say, you know, we've really looked at this from the standpoint of how we've developed FlashBlade as a platform, and keep in mind it's been a product that's been around for over three years now and has been very successful for Pure Storage. The reality is that fast file and fast object as a combined storage platform is a direction that many organizations are looking to go, and we believe that we're a leader in that fast object, fast file storage space. Realistically, we're starting to see more organizations look at building solutions that leverage cloud storage characteristics, but doing so on-prem for a multitude of different reasons. We've built a platform that really addresses a lot of those needs around simplicity, around, you know, making things easier. Fast matters for us, simple is smart. We can provide, you know, cloud integrations across the spectrum. And, you know, there's a subscription model that fits into that as well.
We fall that that falls into our umbrella of what we consider the modern day takes variance. And it's something that we've built into the entire pure portfolio. >>Okay, so I want to get into the architecture a little bit of flash blade and then understand the fit for, uh, analytic databases generally, but specifically for vertical. So it is a blade, so you got compute and network included. It's a key value store based system. So you're talking about scale out. Unlike, unlike, uh, pure is sort of, you know, initial products which were scale up, Um, and so I want on It is a fabric based system. I want to understand what that all means to take us through the architecture. You know, some of the quote unquote firsts that you guys talk about. So let's start with sort of the blade >>aspect. Yeah, the blade aspect of what we call the flash blade. Because if you look at the actual platform, you have, ah, primarily a chassis with built in networking components, right? So there's ah, fabric interconnect with inside the platform that connects to each one of the individual blades. Individual blades have their own compute that drives basically a pure storage flash components inside. It's not like we're just taking SSD is and plugging them into a system and like you would with the traditional commodity off the shelf hardware design. This is very much an engineered solution that is built towards the characteristics that we believe were important with fast filing past object scalability, massive parallel ization. When it comes to performance and the ability to really kind of grow and scale from essentially seven blades right now to 150 that's that's the kind of scale that customers are looking for, especially as we start to address these larger analytics pools. They are multi petabytes data sets, you know that single addressable object space and, you know, file performance that is beyond what most of your traditional scale up storage platforms are able to deliver. >>Yes, I interviewed cause last September and accelerate, and Christie Pure has been attacked by some of the competitors. There's not having scale out. I asked him his thoughts on that, he said Well, first of all, our flash blade is scale out. He said, Look, anything that adds complexity, you know we avoid. But for the workloads that are associated with flash blade scale out is the right sort of approach. Maybe you could talk about why that is. Well, >>realistically, I think you know that that approach is better when we're starting to work with large, unstructured data sets. I mean, flash blade is unique. The architected to allow customers to achieve superior resource utilization for compute and storage, while at the same time, you know, reducing significantly the complexity that has arisen around this kind of bespoke or siloed nature of big data and analytics solutions. I mean, we're really kind of look at this from a standpoint of you have built and delivered are created applications in the public cloud space of dress, you know, object storage and an unstructured data. And for some organizations, the importance is bringing that on Prem. I mean, we do see about repatriation coming on a lot of organizations as these data egress, charges continue to expand and grow, um, and then organizations that want even higher performance and what we're able to get into the public cloud space. They are bringing that data back on Prem They are looking at from a stamp. We still want to be able to scale the way we scale in the cloud. 
We still want to operate the same way we operate in the cloud, but we want to do it within control of our own, our own borders. And so that's, you know, that's one of the bigger pieces to that. And we start to look at how do we address cloud characteristics and dynamics and consumption metrics or models? A zealous the benefits and efficiencies of scale that they're able to afford but allowing customers to do that with inside their own data center. >>So you're talking about the trends earlier. You have these cloud native databases that allowed of the scaling of compute and storage independently. Vertical comes in with eon of a lot of times we talk about these these partnerships as Barney deals of you know I love you, You love me. Here's a press release and then we go on or they're just straight, you know, go to market. Are there other aspects of this partnership that they're non Barney deal like, in other words, any specific engineering. Um, you know other go to market programs? Could you talk about that a little bit? Yeah, >>it's it's It's more than just that what we consider a channel meet in the middle or, you know, that Barney type of deal. It's realistically, you know, we've done some first with Veronica that I think, really Courtney, if they think you look at the architecture and how we did, we've brought to market together. Ah, we have solutions. Teams in the back end who are, you know, subject matter experts. In this space, if you talk to joy and the people from vertical, they're very high on our very excited about the partnership because it often it opens up a new set of opportunities for their customers to leverage on mode and get into some of the the nuance task specs of how they leverage the depot depot with inside each individual. Compute node in adjustments with inside their reach. Additional performance gains for customers on Prem and at the same time, for them, that's still tough. The ability to go into that cloud model if they wish to. And so I think a lot of it is around. How do we partner is to companies? How do we do a joint selling motions? How do we show up in and do white papers and all of the traditional marketing aspects that we bring to the market? And then, you know, joint selling opportunities exist where they are, and so that's realistically. I think, like any other organization that's going to market with a partner on MSP that they have, ah, strong partnership with. You'll continue to see us, you know, talking about are those mutually beneficial relationships and the solutions that we're bringing to the market. >>Okay, you know, of course, he used to be a Gartner analyst, and you go to the vendor side now, but it's but it's, but it's a Gartner analyst. You're obviously objective. You see it on, you know well, there's a lot of ways to skin the cat There, there their strengths, weaknesses, opportunities, threats, etcetera for every vendor. So you have you have vertical who's got a very mature stack and talking to a number of the customers out there who are using EON mode. You know there's certain workloads where these cloud native databases makes sense. It's not just the economics of scaling and storage independently. I want to talk more about that. There's flexibility aspect as well. But Vertical really has to play its its trump card, which is Look, we've got a big on premise state, and we're gonna bring that eon capability both on Prem and we're embracing the cloud now. 
There obviously have been there to play catch up in the cloud, but at the same time, they've got a much more mature stack than a lot of these other cloud native databases that might have just started a couple of years ago. So you know, so there's trade offs that customers have to make. How do you sort through that? Where do you see the interest in this? And and what's the sweet spot for this partnership? You know, we've >>been really excited to build the partnership with vertical A and provide, you know, we're really proud to provide pretty much the only on Prem storage platform that's validated with the yang mode to deliver a modern data experience for our customers together. You know, it's ah, it's that partnership that allows us to go into customers that on Prem space, where I think that there's still not to say that not everybody wants to go there, but I think there's aspects and solutions that worked very well there. But for the vast majority, I still think that there's, you know, the your data center is not going away. And you do want to have control over some of the many of the assets with inside of the operational confines. So therefore, we start to look at how do we can do the best of what cloud offers but on prim. And that's realistically, where we start to see the stronger push for those customers. You still want to manage their data locally. A swell as maybe even worked around some of the restrictions that they might have around cost and complexity hiring. You know, the different types of skills skill sets that are required to bring applications purely cloud native. It's still that larger part of that digital transformation that many organizations are going for going forward with. And realistically, I think they're taking a look at the pros and cons, and we've been doing cloud long enough where people recognize that you know it's not perfect for everything and that there's certain things that we still want to keep inside our own data center. So I mean, realistically, as we move forward, that's, Ah, that better option when it comes to a modern architecture that can do, you know, we can deliver an address, a diverse set of performance requirements and allow the organization to continue to grow the model to the data, you know, based on the data that they're actually trying to leverage. And that's really what Flash was built for. It was built for a platform that could address small files or large files or high throughput, high throughput, low latency scale of petabytes in a single name. Space in a single rack is we like to put it in there. I mean, we see customers that have put 150 flash blades into production as a single name space. It's significant for organizations that are making that drive towards modern data experience with modern analytics platforms. Pure and Veronica have delivered an experience that can address that to a wide range of customers that are implementing uh, you know, particularly on technology. >>I'm interested in exploring the use case. A little bit further. You just sort of gave some parameters and some examples and some of the flexibility that you have, um, and take us through kind of what the customer discussions are like. Obviously you've got a big customer base, you and vertical that that's on Prem. That's the the unique advantage of this. But there are others. It's not just the economics of the granular scaling of compute and storage independently. There are other aspects of take us through that sort of a primary use case or use cases. 
Yeah, you know, I could give you a couple of customer examples. We have a large SaaS analytics company which uses Vertica on FlashBlade to authenticate the quality of digital media in real time. For them it makes a big difference, as they're doing their streaming and whatnot, that they can fine-tune that granular control. So that's one aspect that we address. We have a multinational car company which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous vehicle decision-making trees. That's what these new modern analytics platforms were really built for. And there's another healthcare organization that uses Vertica on FlashBlade to enable healthcare providers to make decisions in real time that impact lives, especially when we look at the current state of affairs with COVID and the coronavirus. Those types of technologies are really going to help us bend that curve downward. So there are all these different areas where we can address the goals and the achievements that organizations are trying to move forward with, with real-time analytics and decision-making tools. And realistically, as we have these conversations with customers, they're looking to get beyond the ability of just, you know, a data scientist or a data architect just kind of driving information. >> We were talking about Hadoop earlier. We're going well beyond that now. And I guess what I'm saying is that in the first phase of cloud, it was all about infrastructure. It was about, you know, spin it up, compute and storage, a little bit of networking in there. It seems like the next new workload that's clearly emerging, it started with the cloud native databases, but then bringing in, you know, AI and machine learning tooling on top of that, and then being able to really drive these new types of insights. It's really about taking this bog of data that we've collected over the last 10 years, a lot of that driven by Hadoop, bringing machine intelligence into the equation, and scaling it with either public cloud or bringing that cloud experience on-prem, you know, across organizations and across your partner network. That really is a new emerging workload. Do you see that? And maybe talk a little bit about what you're seeing with customers. >> Yeah, I mean, it really is. We see several trends. You know, one of those is the ability to take this approach and move it out of the lab and into production. Especially when it comes to data science projects, machine learning projects that traditionally start out as kind of small proofs of concept, easy to spin up in the cloud. But when a customer wants to scale and move towards really deriving significant value from that, they do want to be able to control more of its characteristics. And we know machine learning needs to learn from massive amounts of data to provide accuracy; there's just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking. You know, we see this.
We see the visualization of data analytics, as it's traditionally deployed, as being on a continuum with the things that we've done in the past, with data warehousing and data lakes on one end and AI on the other. And the way we're starting to see it manifest is in organizations that are looking to get more utility and better elasticity out of the data that they are working with. So they're not looking to just build up silos of bespoke AI environments. They're looking to leverage, you know, a platform that can allow them to do AI for one thing and machine learning for another, and leverage multiple protocols to access that data, because the tools are so diverse. It is a growing diversity of use cases that you can put on a single platform, and I think that's what organizations are looking for as they try to scale these environments. >> I think it's going to be a big growth area in the coming years. Gabe, I wish we were in Boston together. You would have painted your little corner of Boston orange. I know that you guys have, but really appreciate you coming on theCUBE. Wall-to-wall coverage, two days of the Vertica Virtual Big Data Conference. Keep it right there. We'll be right back, right after this short break.

Published Date : Mar 31 2020


Vertica Big Data Conference Keynote


 

>> Joy: Welcome to the Virtual Big Data Conference. Vertica is so excited to host this event. I'm Joy King, and I'll be your host for today's Big Data Conference Keynote Session. It's my honor and my genuine pleasure to lead Vertica's product and go-to-market strategy. And I'm so lucky to have a passionate and committed team who turned our Vertica BDC event, into a virtual event in a very short amount of time. I want to thank the thousands of people, and yes, that's our true number who have registered to attend this virtual event. We were determined to balance your health, safety and your peace of mind with the excitement of the Vertica BDC. This is a very unique event. Because as I hope you all know, we focus on engineering and architecture, best practice sharing and customer stories that will educate and inspire everyone. I also want to thank our top sponsors for the virtual BDC, Arrow, and Pure Storage. Our partnerships are so important to us and to everyone in the audience. Because together, we get things done faster and better. Now for today's keynote, you'll hear from three very important and energizing speakers. First, Colin Mahony, our SVP and General Manager for Vertica, will talk about the market trends that Vertica is betting on to win for our customers. And he'll share the exciting news about our Vertica 10 announcement and how this will benefit our customers. Then you'll hear from Amy Fowler, VP of strategy and solutions for FlashBlade at Pure Storage. Our partnership with Pure Storage is truly unique in the industry, because together modern infrastructure from Pure powers modern analytics from Vertica. And then you'll hear from John Yovanovich, Director of IT at AT&T, who will tell you about the Pure Vertica Symphony that plays live every day at AT&T. Here we go, Colin, over to you. >> Colin: Well, thanks a lot joy. And, I want to echo Joy's thanks to our sponsors, and so many of you who have helped make this happen. This is not an easy time for anyone. We were certainly looking forward to getting together in person in Boston during the Vertica Big Data Conference and Winning with Data. But I think all of you and our team have done a great job, scrambling and putting together a terrific virtual event. So really appreciate your time. I also want to remind people that we will make both the slides and the full recording available after this. So for any of those who weren't able to join live, that is still going to be available. Well, things have been pretty exciting here. And in the analytic space in general, certainly for Vertica, there's a lot happening. There are a lot of problems to solve, a lot of opportunities to make things better, and a lot of data that can really make every business stronger, more efficient, and frankly, more differentiated. For Vertica, though, we know that focusing on the challenges that we can directly address with our platform, and our people, and where we can actually make the biggest difference is where we ought to be putting our energy and our resources. I think one of the things that has made Vertica so strong over the years is our ability to focus on those areas where we can make a great difference. So for us as we look at the market, and we look at where we play, there are really three recent and some not so recent, but certainly picking up a lot of the market trends that have become critical for every industry that wants to Win Big With Data. We've heard this loud and clear from our customers and from the analysts that cover the market. 
If I were to summarize these three areas, this really is the core focus for us right now. We know that there's massive data growth. And if we can unify the data silos so that people can really take advantage of that data, we can make a huge difference. We know that public clouds offer tremendous advantages, but we also know that balance and flexibility is critical. And we all need the benefit that machine learning for all the types up to the end data science. We all need the benefits that they can bring to every single use case, but only if it can really be operationalized at scale, accurate and in real time. And the power of Vertica is, of course, how we're able to bring so many of these things together. Let me talk a little bit more about some of these trends. So one of the first industry trends that we've all been following probably now for over the last decade, is Hadoop and specifically HDFS. So many companies have invested, time, money, more importantly, people in leveraging the opportunity that HDFS brought to the market. HDFS is really part of a much broader storage disruption that we'll talk a little bit more about, more broadly than HDFS. But HDFS itself was really designed for petabytes of data, leveraging low cost commodity hardware and the ability to capture a wide variety of data formats, from a wide variety of data sources and applications. And I think what people really wanted, was to store that data before having to define exactly what structures they should go into. So over the last decade or so, the focus for most organizations is figuring out how to capture, store and frankly manage that data. And as a platform to do that, I think, Hadoop was pretty good. It certainly changed the way that a lot of enterprises think about their data and where it's locked up. In parallel with Hadoop, particularly over the last five years, Cloud Object Storage has also given every organization another option for collecting, storing and managing even more data. That has led to a huge growth in data storage, obviously, up on public clouds like Amazon and their S3, Google Cloud Storage and Azure Blob Storage just to name a few. And then when you consider regional and local object storage offered by cloud vendors all over the world, the explosion of that data, in leveraging this type of object storage is very real. And I think, as I mentioned, it's just part of this broader storage disruption that's been going on. But with all this growth in the data, in all these new places to put this data, every organization we talk to is facing even more challenges now around the data silo. Sure the data silos certainly getting bigger. And hopefully they're getting cheaper per bit. But as I said, the focus has really been on collecting, storing and managing the data. But between the new data lakes and many different cloud object storage combined with all sorts of data types from the complexity of managing all this, getting that business value has been very limited. This actually takes me to big bet number one for Team Vertica, which is to unify the data. Our goal, and some of the announcements we have made today plus roadmap announcements I'll share with you throughout this presentation. Our goal is to ensure that all the time, money and effort that has gone into storing that data, all the data turns into business value. So how are we going to do that? 
With a unified analytics platform that analyzes the data wherever it is HDFS, Cloud Object Storage, External tables in an any format ORC, Parquet, JSON, and of course, our own Native Roth Vertica format. Analyze the data in the right place in the right format, using a single unified tool. This is something that Vertica has always been committed to, and you'll see in some of our announcements today, we're just doubling down on that commitment. Let's talk a little bit more about the public cloud. This is certainly the second trend. It's the second wave maybe of data disruption with object storage. And there's a lot of advantages when it comes to public cloud. There's no question that the public clouds give rapid access to compute storage with the added benefit of eliminating data center maintenance that so many companies, want to get out of themselves. But maybe the biggest advantage that I see is the architectural innovation. The public clouds have introduced so many methodologies around how to provision quickly, separating compute and storage and really dialing-in the exact needs on demand, as you change workloads. When public clouds began, it made a lot of sense for the cloud providers and their customers to charge and pay for compute and storage in the ratio that each use case demanded. And I think you're seeing that trend, proliferate all over the place, not just up in public cloud. That architecture itself is really becoming the next generation architecture for on-premise data centers, as well. But there are a lot of concerns. I think we're all aware of them. They're out there many times for different workloads, there are higher costs. Especially if some of the workloads that are being run through analytics, which tend to run all the time. Just like some of the silo challenges that companies are facing with HDFS, data lakes and cloud storage, the public clouds have similar types of siloed challenges as well. Initially, there was a belief that they were cheaper than data centers, and when you added in all the costs, it looked that way. And again, for certain elastic workloads, that is the case. I don't think that's true across the board overall. Even to the point where a lot of the cloud vendors aren't just charging lower costs anymore. We hear from a lot of customers that they don't really want to tether themselves to any one cloud because of some of those uncertainties. Of course, security and privacy are a concern. We hear a lot of concerns with regards to cloud and even some SaaS vendors around shared data catalogs, across all the customers and not enough separation. But security concerns are out there, you can read about them. I'm not going to jump into that bandwagon. But we hear about them. And then, of course, I think one of the things we hear the most from our customers, is that each cloud stack is starting to feel even a lot more locked in than the traditional data warehouse appliance. And as everybody knows, the industry has been running away from appliances as fast as it can. And so they're not eager to get locked into another, quote, unquote, virtual appliance, if you will, up in the cloud. They really want to make sure they have flexibility in which clouds, they're going to today, tomorrow and in the future. And frankly, we hear from a lot of our customers that they're very interested in eventually mixing and matching, compute from one cloud with, say storage from another cloud, which I think is something that we'll hear a lot more about. 
And so for us, that's why we've got our big bet number two. we love the cloud. We love the public cloud. We love the private clouds on-premise, and other hosting providers. But our passion and commitment is for Vertica to be able to run in any of the clouds that our customers choose, and make it portable across those clouds. We have supported on-premises and all public clouds for years. And today, we have announced even more support for Vertica in Eon Mode, the deployment option that leverages the separation of compute from storage, with even more deployment choices, which I'm going to also touch more on as we go. So super excited about our big bet number two. And finally as I mentioned, for all the hype that there is around machine learning, I actually think that most importantly, this third trend that team Vertica is determined to address is the need to bring business critical, analytics, machine learning, data science projects into production. For so many years, there just wasn't enough data available to justify the investment in machine learning. Also, processing power was expensive, and storage was prohibitively expensive. But to train and score and evaluate all the different models to unlock the full power of predictive analytics was tough. Today you have those massive data volumes. You have the relatively cheap processing power and storage to make that dream a reality. And if you think about this, I mean with all the data that's available to every company, the real need is to operationalize the speed and the scale of machine learning so that these organizations can actually take advantage of it where they need to. I mean, we've seen this for years with Vertica, going back to some of the most advanced gaming companies in the early days, they were incorporating this with live data directly into their gaming experiences. Well, every organization wants to do that now. And the accuracy for clickability and real time actions are all key to separating the leaders from the rest of the pack in every industry when it comes to machine learning. But if you look at a lot of these projects, the reality is that there's a ton of buzz, there's a ton of hype spanning every acronym that you can imagine. But most companies are struggling, do the separate teams, different tools, silos and the limitation that many platforms are facing, driving, down sampling to get a small subset of the data, to try to create a model that then doesn't apply, or compromising accuracy and making it virtually impossible to replicate models, and understand decisions. And if there's one thing that we've learned when it comes to data, prescriptive data at the atomic level, being able to show end of one as we refer to it, meaning individually tailored data. No matter what it is healthcare, entertainment experiences, like gaming or other, being able to get at the granular data and make these decisions, make that scoring applies to machine learning just as much as it applies to giving somebody a next-best-offer. But the opportunity has never been greater. The need to integrate this end-to-end workflow and support the right tools without compromising on that accuracy. Think about it as no downsampling, using all the data, it really is key to machine learning success. Which should be no surprise then why the third big bet from Vertica is one that we've actually been working on for years. And we're so proud to be where we are today, helping the data disruptors across the world operationalize machine learning. 
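To make that point about using all of the data concrete, here is a minimal sketch of what in-database training and scoring can look like from a Python client. It is not something shown in the keynote: the cluster endpoint, credentials, the customer_churn and new_customers tables, and their columns are all hypothetical, and the calls used are Vertica's documented in-database machine learning routines.

```python
# Sketch: train and score a model inside Vertica, so the full dataset
# never leaves the database (no downsampling). Connection details,
# table names, and columns below are hypothetical.
import vertica_python

conn_info = {
    "host": "vertica.example.com",   # hypothetical cluster endpoint
    "port": 5433,
    "user": "dbadmin",
    "password": "********",
    "database": "analytics",
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()

    # Train a logistic regression model on the full customer_churn table.
    cur.execute("""
        SELECT LOGISTIC_REG('churn_model', 'customer_churn', 'churned',
                            'tenure_months, monthly_spend, support_tickets')
    """)

    # Score new rows with the trained model, still inside the database.
    cur.execute("""
        SELECT customer_id,
               PREDICT_LOGISTIC_REG(tenure_months, monthly_spend, support_tickets
                                    USING PARAMETERS model_name='churn_model') AS predicted_churn
        FROM new_customers
        LIMIT 10
    """)
    for customer_id, predicted_churn in cur.fetchall():
        print(customer_id, predicted_churn)
```

The pattern matters more than the specifics: the training set is a table rather than an extract, so the model learns from the full dataset instead of a downsampled copy.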
This big bet has the potential to truly unlock, really the potential of machine learning. And today, we're announcing some very important new capabilities specifically focused on unifying the work being done by the data science community, with their preferred tools and platforms, and the volume of data and performance at scale, available in Vertica. Our strategy has been very consistent over the last several years. As I said in the beginning, we haven't deviated from our strategy. Of course, there's always things that we add. Most of the time, it's customer driven, it's based on what our customers are asking us to do. But I think we've also done a great job, not trying to be all things to all people. Especially as these hype cycles flare up around us, we absolutely love participating in these different areas without getting completely distracted. I mean, there's a variety of query tools and data warehouses and analytics platforms in the market. We all know that. There are tools and platforms that are offered by the public cloud vendors, by other vendors that support one or two specific clouds. There are appliance vendors, who I was referring to earlier who can deliver package data warehouse offerings for private data centers. And there's a ton of popular machine learning tools, languages and other kits. But Vertica is the only advanced analytic platform that can do all this, that can bring it together. We can analyze the data wherever it is, in HDFS, S3 Object Storage, or Vertica itself. Natively we support multiple clouds on-premise deployments, And maybe most importantly, we offer that choice of deployment modes to allow our customers to choose the architecture that works for them right now. It still also gives them the option to change move, evolve over time. And Vertica is the only analytics database with end-to-end machine learning that can truly operationalize ML at scale. And I know it's a mouthful. But it is not easy to do all these things. It is one of the things that highly differentiates Vertica from the rest of the pack. It is also why our customers, all of you continue to bet on us and see the value that we are delivering and we will continue to deliver. Here's a couple of examples of some of our customers who are powered by Vertica. It's the scale of data. It's the millisecond response times. Performance and scale have always been a huge part of what we have been about, not the only thing. I think the functionality all the capabilities that we add to the platform, the ease of use, the flexibility, obviously with the deployment. But if you look at some of the numbers they are under these customers on this slide. And I've shared a lot of different stories about these customers. Which, by the way, it still amaze me every time I talk to one and I get the updates, you can see the power and the difference that Vertica is making. Equally important, if you look at a lot of these customers, they are the epitome of being able to deploy Vertica in a lot of different environments. Many of the customers on this slide are not using Vertica just on-premise or just in the cloud. They're using it in a hybrid way. They're using it in multiple different clouds. And again, we've been with them on that journey throughout, which is what has made this product and frankly, our roadmap and our vision exactly what it is. It's been quite a journey. And that journey continues now with the Vertica 10 release. The Vertica 10 release is obviously a massive release for us. 
But if you look back, you can see that building on that native columnar architecture that started a long time ago, obviously, with the C-Store paper. We built it to leverage that commodity hardware, because it was an architecture that was never tightly integrated with any specific underlying infrastructure. I still remember hearing the initial pitch from Mike Stonebreaker, about the vision of Vertica as a software only solution and the importance of separating the company from hardware innovation. And at the time, Mike basically said to me, "there's so much R&D in innovation that's going to happen in hardware, we shouldn't bake hardware into our solution. We should do it in software, and we'll be able to take advantage of that hardware." And that is exactly what has happened. But one of the most recent innovations that we embraced with hardware is certainly that separation of compute and storage. As I said previously, the public cloud providers offered this next generation architecture, really to ensure that they can provide the customers exactly what they needed, more compute or more storage and charge for each, respectively. The separation of compute and storage, compute from storage is a major milestone in data center architectures. If you think about it, it's really not only a public cloud innovation, though. It fundamentally redefines the next generation data architecture for on-premise and for pretty much every way people are thinking about computing today. And that goes for software too. Object storage is an example of the cost effective means for storing data. And even more importantly, separating compute from storage for analytic workloads has a lot of advantages. Including the opportunity to manage much more dynamic, flexible workloads. And more importantly, truly isolate those workloads from others. And by the way, once you start having something that can truly isolate workloads, then you can have the conversations around autonomic computing, around setting up some nodes, some compute resources on the data that won't affect any of the other data to do some things on their own, maybe some self analytics, by the system, etc. A lot of things that many of you know we've already been exploring in terms of our own system data in the product. But it was May 2018, believe it or not, it seems like a long time ago where we first announced Eon Mode and I want to make something very clear, actually about Eon mode. It's a mode, it's a deployment option for Vertica customers. And I think this is another huge benefit that we don't talk about enough. But unlike a lot of vendors in the market who will dig you and charge you for every single add-on like hit-buy, you name it. You get this with the Vertica product. If you continue to pay support and maintenance, this comes with the upgrade. This comes as part of the new release. So any customer who owns or buys Vertica has the ability to set up either an Enterprise Mode or Eon Mode, which is a question I know that comes up sometimes. Our first announcement of Eon was obviously AWS customers, including the trade desk, AT&T. Most of whom will be speaking here later at the Virtual Big Data Conference. They saw a huge opportunity. Eon Mode, not only allowed Vertica to scale elastically with that specific compute and storage that was needed, but it really dramatically simplified database operations including things like workload balancing, node recovery, compute provisioning, etc. 
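A rough way to picture that separation: in Eon Mode the durable data sits in a communal object store (S3 on AWS) while the Vertica nodes execute queries and cache hot data locally, so the two layers are independently addressable and independently sized. The sketch below, with a hypothetical bucket and cluster endpoint, simply touches each layer in turn.

```python
# Sketch: the communal storage layer and the compute layer are separate,
# independently addressable resources. Bucket, prefix, and host names are
# hypothetical; AWS credentials come from the environment.
import boto3
import vertica_python

# Storage layer: the communal object store that holds the durable data.
s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="acme-vertica-communal", Prefix="prod_db/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print("communal object:", obj["Key"], obj["Size"])

# Compute layer: the Vertica nodes that cache hot data and execute queries.
with vertica_python.connect(host="vertica.example.com", port=5433,
                            user="dbadmin", password="********",
                            database="prod_db") as conn:
    cur = conn.cursor()
    cur.execute("SELECT version()")
    print("compute layer:", cur.fetchone()[0])
```

Because the bucket and the nodes are separate resources, either side can be grown or shrunk without touching the other, which is what enables the elastic scaling and workload isolation being described.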
So one of the most popular functions is that ability to isolate the workloads and really allocate those resources without negatively affecting others. And even though traditional data warehouses, including Vertica Enterprise Mode have been able to do lots of different workload isolation, it's never been as strong as Eon Mode. Well, it certainly didn't take long for our customers to see that value across the board with Eon Mode. Not just up in the cloud, in partnership with one of our most valued partners and a platinum sponsor here. Joy mentioned at the beginning. We announced Vertica Eon Mode for Pure Storage FlashBlade in September 2019. And again, just to be clear, this is not a new product, it's one Vertica with yet more deployment options. With Pure Storage, Vertica in Eon mode is not limited in any way by variable cloud, network latency. The performance is actually amazing when you take the benefits of separate and compute from storage and you run it with a Pure environment on-premise. Vertica in Eon Mode has a super smart cache layer that we call the depot. It's a big part of our secret sauce around Eon mode. And combined with the power and performance of Pure's FlashBlade, Vertica became the industry's first advanced analytics platform that actually separates compute and storage for on-premises data centers. Something that a lot of our customers are already benefiting from, and we're super excited about it. But as I said, this is a journey. We don't stop, we're not going to stop. Our customers need the flexibility of multiple public clouds. So today with Vertica 10, we're super proud and excited to announce support for Vertica in Eon Mode on Google Cloud. This gives our customers the ability to use their Vertica licenses on Amazon AWS, on-premise with Pure Storage and on Google Cloud. Now, we were talking about HDFS and a lot of our customers who have invested quite a bit in HDFS as a place, especially to store data have been pushing us to support Eon Mode with HDFS. So as part of Vertica 10, we are also announcing support for Vertica in Eon Mode using HDFS as the communal storage. Vertica's own Roth format data can be stored in HDFS, and actually the full functionality of Vertica is complete analytics, geospatial pattern matching, time series, machine learning, everything that we have in there can be applied to this data. And on the same HDFS nodes, Vertica can actually also analyze data in ORC or Parquet format, using External tables. We can also execute joins between the Roth data the External table holds, which powers a much more comprehensive view. So again, it's that flexibility to be able to support our customers, wherever they need us to support them on whatever platform, they have. Vertica 10 gives us a lot more ways that we can deploy Eon Mode in various environments for our customers. It allows them to take advantage of Vertica in Eon Mode and the power that it brings with that separation, with that workload isolation, to whichever platform they are most comfortable with. Now, there's a lot that has come in Vertica 10. I'm definitely not going to be able to cover everything. But we also introduced complex types as an example. And complex data types fit very well into Eon as well in this separation. They significantly reduce the data pipeline, the cost of moving data between those, a much better support for unstructured data, which a lot of our customers have mixed with structured data, of course, and they leverage a lot of columnar execution that Vertica provides. 
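As a concrete sketch of the HDFS and external-table support just described: define an external table over Parquet files that stay in HDFS, then join it against a table stored natively in Vertica. The HDFS path, table definitions, and column names below are hypothetical; the external-table syntax follows Vertica's documented COPY FROM ... PARQUET form.

```python
# Sketch: query Parquet data in place on HDFS via an external table and
# join it with a native Vertica table. Paths and schemas are hypothetical.
import vertica_python

with vertica_python.connect(host="vertica.example.com", port=5433,
                            user="dbadmin", password="********",
                            database="analytics") as conn:
    cur = conn.cursor()

    # External table: the Parquet files stay in HDFS; Vertica reads them in place.
    cur.execute("""
        CREATE EXTERNAL TABLE clicks_ext (
            user_id   INT,
            url       VARCHAR(2048),
            click_ts  TIMESTAMP
        ) AS COPY FROM 'hdfs:///data/clickstream/*.parquet' PARQUET
    """)

    # Join the external Parquet data with a native Vertica table.
    cur.execute("""
        SELECT c.segment, COUNT(*) AS clicks
        FROM clicks_ext e
        JOIN customers c ON c.user_id = e.user_id
        GROUP BY c.segment
        ORDER BY clicks DESC
    """)
    for segment, clicks in cur.fetchall():
        print(segment, clicks)
```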
So you get complex data types in Vertica now, a lot more data, stronger performance. It goes great with the broader Eon Mode announcement that we made. Let's talk a little bit more about machine learning. We've actually been doing work in and around machine learning, with various regressions and a whole bunch of other algorithms, for several years. We saw the huge advantage that MPP offered, not just as a SQL engine or a database, but for ML as well. It didn't take us long to realize that there's a lot more to operationalizing machine learning than just those algorithms. It's data preparation, it's model training. It's the scoring, the shaping, the evaluation. That is so much of what machine learning and, frankly, data science is about. You know, everybody always wants to jump to the sexy algorithm, but we handle those tasks very, very well. It makes Vertica a terrific platform to do that. A lot of work in data science and machine learning is done in other tools. I had mentioned that there are just so many tools out there. We want people to be able to take advantage of all that. We never believed we were going to be the best algorithm company or come up with the best models for people to use. So with Vertica 10, we support PMML. We can now import and export PMML models. It's a huge step for us around operationalizing machine learning projects for our customers. It allows the models to get built outside of Vertica, yet be imported in and then applied to that full scale of data with all the performance that you would expect from Vertica. We are also integrating more tightly with Python. As many of you know, we've been doing a lot of open source projects with the community, driven by many of our customers, like Uber. And so now, with Python, we've integrated with TensorFlow, allowing data scientists to build models in their preferred language, to take advantage of TensorFlow, but again, to store and deploy those models at scale with Vertica. I think both these announcements are proof of our big bet number three, and really our commitment to supporting innovation throughout the community by operationalizing ML with the accuracy, performance and scale of Vertica for our customers. Again, there are a lot of steps when it comes to the workflow of machine learning. These are some of them that you can see on the slide, and it's definitely not linear either. We see this as a circle. And companies that do it well just continue to learn, they continue to rescore, they continue to redeploy, and they want to operationalize all of that within a single platform that can take advantage of all those capabilities. And that is the platform, with a very robust ecosystem, that Vertica has always been committed to as an organization and will continue to be. This graphic, many of you have seen it evolve over the years. Frankly, if we put everything and everyone on here, it wouldn't fit on a slide. But it will absolutely continue to evolve and grow as we support our customers where they need the support most. So, again, being able to deploy everywhere, being able to take advantage of Vertica not just as a business analyst or a business user, but as a data scientist or as an operational or BI person. We want Vertica to be leveraged and used by the broader organization. So I think it's fair to say, and I encourage everybody to learn more about Vertica 10, because I'm just highlighting some of the bigger aspects of it. But we talked about those three market trends. 
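To make the PMML announcement a bit more tangible, here is a hedged sketch of what that import-and-score workflow can look like in Vertica 10 SQL. The model names, file paths, and input columns are hypothetical, and the exact parameters may differ by version; this is an illustration of the pattern, not a definitive recipe.

```sql
-- Import a PMML model that was trained outside Vertica (e.g. in scikit-learn or Spark)
SELECT IMPORT_MODELS('/models/churn_model.pmml' USING PARAMETERS category = 'PMML');

-- Score at full scale inside the database, close to the data
SELECT customer_id,
       PREDICT_PMML(tenure, monthly_charges, support_calls
                    USING PARAMETERS model_name = 'churn_model') AS churn_prediction
FROM customers;

-- Export a Vertica-trained model as PMML for use in other tools
-- (a similar PREDICT_TENSORFLOW function covers imported TensorFlow models)
SELECT EXPORT_MODELS('/models/exported', 'my_kmeans_model' USING PARAMETERS category = 'PMML');
```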
The need to unify the silos, the need for hybrid, multi-cloud deployment options, the need to operationalize business critical machine learning projects. Vertica 10 has absolutely delivered on those. But again, we are not going to stop. It is our job not to, and this is how Team Vertica thrives. I always joke that the next release is the best release. And, of course, even after Vertica 10 that is also true, although Vertica 10 is pretty awesome. But, you know, from the first line of code, we've always been focused on performance and scale, right. And like any really strong data platform, the optimizer and the execution engine are the two core pieces of that. Beyond Vertica 10, one of the big things that we're already working on is a next generation execution engine. We're actually already seeing incredible early performance from this. And this is just one example of how important it is for an organization like Vertica to constantly go back and re-innovate. Every single release, we do the sit-ups and crunches on our performance and scale. How do we improve? And there are so many parts of the core server, there are so many parts of our broader ecosystem. We are constantly looking at how we can go back to all the code lines that we have, and make them better in the current environment. And it's not an easy thing to do when you're doing that while you're also expanding into new environments, to take advantage of the different deployments, which is a great segue to this slide. Because if you think about today, we're obviously already available with Eon Mode on Amazon AWS, on Pure, and actually on MinIO as well. As I talked about, in Vertica 10 we're adding Google and HDFS. And coming next, obviously, Microsoft Azure and Alibaba Cloud. So being able to expand into more of these environments is really important for the Vertica team and how we go forward. And it's not just running in these clouds. For us, we want it to be a SaaS-like experience in all these clouds. We want you to be able to deploy Vertica in 15 minutes or less on these clouds. You can also consume Vertica in a lot of different ways on these clouds, as an example, on Amazon, Vertica by the Hour. So for us, it's not just about running, it's about taking advantage of the ecosystems that all these cloud providers offer, and really optimizing the Vertica experience as part of them. Optimization around automation, around self-service capabilities, extending our management console. We now have products like the Vertica Advisor Tool that our Customer Success Team has created, to actually use our own smarts in Vertica, to take data from customers who give it to us and help them automatically tune their environment. You can imagine that we're taking that to the next level, in a lot of different endeavors that we're doing around how Vertica as a product can actually be smarter, because we all know that simplicity is key. There just aren't enough people in the world who are good at managing data and taking it to the next level. And of course, there are other things that we all hear about, whether it's Kubernetes and containerization. You can imagine that that probably works very well with Eon Mode and separating compute and storage. But innovation happens everywhere. We innovate around our community documentation. Many of you have taken advantage of the Vertica Academy. The numbers there are through the roof in terms of the number of people coming in and certifying on it. 
So there are a lot of things within the core products. There's a lot of activity and action beyond the core products that we're taking advantage of. And let's not forget why we're here, right? It's easy to talk about a platform, a data platform, it's easy to jump into all the functionality, the analytics, the flexibility, how we can offer it. But at the end of the day, somebody, a person, she's got to take advantage of this data, she's got to be able to take this data and use this information to make a critical business decision. And that doesn't happen unless we explore lots of different and, frankly, new ways to get that predictive analytics UI and interface, beyond just the standard BI tools, in front of her at the right time. And so there's a lot of activity, I'll tease you with that, going on in this organization right now about how we can do that and deliver that for our customers. We're in a great position to be able to see exactly how this data is consumed and used, and to start with this core platform that we have and go out. Look, I know the plan wasn't to do this as a virtual BDC. But I really appreciate you tuning in. Really appreciate your support. I think if there's any silver lining to us maybe not being able to do this in person, it's the fact that the reach has actually gone significantly higher than what we would have been able to do in person in Boston. We're certainly looking forward to doing a Big Data Conference in the future. But if I could leave you with anything, know this: since that first release of Vertica and our very first customers, we have been very consistent. We respect all the innovation around us, whether it's open source or not. We understand the market trends. We embrace those new ideas and technologies. And for us, true north, the most important thing, is: what does our customer need to do? What problem are they trying to solve? And how do we use the advantages that we have without disrupting our customers? Knowing that you depend on us to deliver that unified analytics strategy, we will deliver that performance and scale, not only today, but tomorrow and for years to come. We've added a lot of great features to Vertica. I think we've said no to a lot of things, frankly, that we just knew we wouldn't be the best company to deliver. When we say we're going to do things, we do them. Vertica 10 is a perfect example of so many of those things that we have heard loud and clear from you, our customers, and we have delivered. I am incredibly proud of this team across the board. I think the culture of Vertica, a customer-first culture, jumping in to help our customers win no matter what, is also something that sets us massively apart. I hear horror stories about support experiences with other organizations. And people always seem to be amazed at Team Vertica's willingness to jump in, or their aptitude for certain technical capabilities, or understanding the business. And I think sometimes we take that for granted. But that is the team that we have as Team Vertica. We are incredibly excited about Vertica 10. I think you're going to love the Virtual Big Data Conference this year. I encourage you to tune in. Maybe one other benefit is, I know some people were worried about not being able to see different sessions because they were going to overlap with each other. Well now, even if you can't do it live, you'll be able to do those sessions on demand. Please enjoy the Vertica Big Data Conference here in 2020. 
Please, you and your families and your co-workers, be safe during these times. I know we will get through it. And analytics is probably going to help with a lot of that, and we already know it is helping in many different ways. So believe in the data, believe in data's ability to change the world for the better. And thank you for your time. And with that, I am delighted to now introduce Micro Focus CEO Stephen Murdoch to the Vertica Big Data Virtual Conference. Thank you, Stephen. >> Stephen: Hi, everyone, my name is Stephen Murdoch. I have the pleasure and privilege of being the Chief Executive Officer here at Micro Focus. Please let me add my welcome to the Big Data Conference, and also my thanks for your support, as we've had to pivot to this being a virtual rather than a physical conference. It's amazing how quickly we all reset to a new normal. I certainly didn't expect to be addressing you from my study. Vertica is an incredibly important part of the Micro Focus family. It is key to our goal of trying to enable and help customers become much more data driven across all of their IT operations. Vertica 10 is a huge step forward, we believe. It allows for multi-cloud innovation and genuinely hybrid deployments, lets customers begin to leverage machine learning properly in the enterprise, and also offers the opportunity to unify currently siloed lakes of information. We operate in a very noisy, very competitive market, and there are people in that market who can do some of those things. The reason we are so excited about Vertica is we genuinely believe that we are the best at doing all of those things. And that's why we've announced publicly, and are executing internally, incremental investment into Vertica. That investment is targeted at accelerating the roadmaps that already exist, and getting that innovation into your hands faster. The idea is that speed is key. It's not a question of if companies have to become data driven organizations, it's a question of when. So that speed now is really important. And that's why we believe that the Big Data Conference gives a great opportunity for you to accelerate your own plans. You will have the opportunity to talk to some of our best architects, some of the best development brains that we have. But more importantly, you'll also get to hear from some of our phenomenal Vertica customers. You'll hear from Uber, from the Trade Desk, from Philips, and from AT&T, as well as many, many others. And just hearing how those customers are using the power of Vertica to accelerate their own plans, I think, is the highlight. And I encourage you to use this opportunity to its fullest. Let me close by again saying thank you. We genuinely hope that you get as much from this virtual conference as you could have from a physical conference. And we look forward to your engagement, and we look forward to hearing your feedback. With that, thank you very much. >> Joy: Thank you so much, Stephen, for joining us for the Vertica Big Data Conference. Your support and enthusiasm for Vertica is so clear, and it makes a big difference. Now, I'm delighted to introduce Amy Fowler, the VP of Strategy and Solutions for FlashBlade at Pure Storage, which is one of our BDC Platinum Sponsors and one of our most valued partners. It was a proud moment for me when we announced Vertica in Eon Mode for Pure Storage FlashBlade, and we became the first analytics data warehouse that separates compute from storage for on-premise data centers. Thank you so much, Amy, for joining us. Let's get started. 
>> Amy: Well, thank you, Joy, so much for having us. And thank you all for joining us today, virtually, as we may all be. So, as we just heard from Colin Mahony, there are some really interesting trends happening right now in the big data analytics market. From the end of the Hadoop hype cycle, to the new cloud reality, and even the opportunity to help the many data science and machine learning projects move from labs to production. So let's talk about these trends in the context of infrastructure, and in particular, look at why a modern storage platform is relevant as organizations take on the challenges and opportunities associated with these trends. The first is that the Hadoop hype cycle left a lot of data in HDFS data lakes, or reservoirs, or swamps, depending upon the level of the data hygiene, but without the ability to get the value that was promised from Hadoop as a platform rather than a distributed file store. And when we combine that data with the massive volume of data in cloud object storage, we find ourselves with a lot of data and a lot of silos, but without a way to unify that data and find value in it. Now, when you look at the infrastructure data lakes are traditionally built on, it is often direct attached storage, or DAS. The approach that Hadoop took when it entered the market was primarily bound by the limits of networking and storage technologies: one gig ethernet and slower spinning disk. But today, those barriers do not exist. And all-flash storage has fundamentally transformed how data is accessed, managed and leveraged. The need for local data storage for significant volumes of data has been largely mitigated by the performance increases afforded by all-flash. At the same time, organizations can achieve superior economies of scale with that segregation of compute and storage. Compute and storage don't always scale in lockstep. Would you want to add an engine to the train every time you add another boxcar? Probably not. But from a Pure Storage perspective, FlashBlade is uniquely architected to allow customers to achieve better resource utilization for compute and storage, while at the same time reducing the complexity that has arisen from the siloed nature of the original big data solutions. The second and equally important recent trend we see is something I'll call cloud reality. The public clouds made a lot of promises, and some of those promises were delivered. But cloud economics, especially usage-based and elastic scaling without the control that many companies need to manage the financial impact, is causing a lot of issues. In addition, the risk of vendor lock-in, from data egress charges to integrated software stacks that can't be moved or deployed on-premise, is causing a lot of organizations to back off the all-in cloud strategy and move toward hybrid deployments. Which is kind of funny in a way, because it wasn't that long ago that there was a lot of talk about no more data centers. For example, one large retailer, I won't name them, but I'll admit they are my favorites, told us several years ago that they were completely done with on-prem storage infrastructure, because they were going 100% to the cloud. But they just deployed FlashBlade for their data pipelines, because they need predictable performance at scale, and the all-cloud TCO just didn't add up. Now, that being said, while there are certainly challenges with the public cloud, it has also brought some things to the table that we see most organizations wanting. 
First of all, in a lot of cases applications have been built to leverage object storage platforms like S3. So they need that object protocol, but they may also need it to be fast. And fast object may have been an oxymoron only a few years ago, but this is an area of the market where Pure and FlashBlade have really taken a leadership position. Second, regardless of where the data is physically stored, organizations want the best elements of a cloud experience. And for us, that means two main things. Number one is simplicity and ease of use. If you need a bunch of storage experts to run the system, that should be considered a bug. The other big one is the consumption model. The ability to pay for what you need when you need it, and seamlessly grow your environment over time, totally nondisruptively. This is actually pretty huge, and something that a lot of vendors try to solve for with finance programs. But no finance program can address the pain of a forklift upgrade when you need to move to next gen hardware. To scale nondisruptively over long periods of time, five to 10 years plus, crucial architectural decisions need to be made at the outset. Plus, you need the ability to pay as you use it. And we offer something for FlashBlade called Pure as a Service, which delivers exactly that. The third cloud characteristic that many organizations want is the option for hybrid, even if that is just a DR site in the cloud. In our case, that means supporting replication to S3 at AWS. And the final trend, which to me represents the biggest opportunity for all of us, is the need to help the many data science and machine learning projects move from labs to production. This means bringing all the machine learning functions and model training to the data, rather than moving samples or segments of data to separate platforms. As we all know, machine learning needs a ton of data for accuracy, and there is just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking. You can kind of visualize data analytics, as it is traditionally deployed, as being on a continuum, with the thing we've been doing the longest, data warehousing, on one end, and AI on the other end. But the way this manifests in most environments is a series of silos that get built up, so data is duplicated across all kinds of bespoke analytics and AI environments and infrastructure. This creates an expensive and complex environment. Historically, there was no other way to do it, because some level of performance is always table stakes, and each of these parts of the data pipeline has a different workload profile. A single platform that could deliver on the multi-dimensional performance this diverse set of applications requires didn't exist three years ago. And that's why the application vendors pointed you towards bespoke things like the DAS environments that we talked about earlier. And the fact that better options exist today is why we're seeing them move towards supporting this disaggregation of compute and storage. And when it comes to a platform that is a better option, one with a modern architecture that can address the diverse performance requirements of this continuum, and allow organizations to bring a model to the data instead of creating separate silos, that's exactly what FlashBlade is built for. Small files, large files, high throughput, low latency, and scale to petabytes in a single namespace. 
And this, importantly, in a single rack space, is what we're focused on delivering for our customers. At Pure, we talk about it in the context of the modern data experience, because at the end of the day, that's what it's really all about: the experience for your teams, in your organization. And together, Pure Storage and Vertica have delivered that experience to a wide range of customers. From a SaaS analytics company, which uses Vertica on FlashBlade to authenticate the quality of digital media in real time, to a multinational car company, which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous cars, or a healthcare organization, which uses Vertica on FlashBlade to enable healthcare providers to make real time decisions that impact lives. And I'm sure you're all looking forward to hearing from John Yovanovich from AT&T, to hear how he's been doing this with Vertica and FlashBlade as well. He's coming up soon. We have been really excited to build this partnership with Vertica. And we're proud to provide the only on-premise storage platform validated with Vertica Eon Mode, and to deliver this modern data experience to our customers together. Thank you all so much for joining us today. >> Joy: Amy, thank you so much for your time and your insights. Modern infrastructure is key to modern analytics, especially as organizations leverage next generation data center architectures and object storage for their on-premise data centers. Now, I'm delighted to introduce our last speaker in our Vertica Big Data Conference Keynote, John Yovanovich, Director of IT for AT&T. Vertica is so proud to serve AT&T, and especially proud of the harmonious impact we are having in partnership with Pure Storage. John, welcome to the Virtual Vertica BDC. >> John: Thank you, Joy. It's a pleasure to be here. And I'm excited to go through this presentation today, and in a unique fashion, because as I was thinking through how I wanted to present the partnership that we have formed together between Pure Storage, Vertica and AT&T, I wanted to emphasize how well we all work together, and how these three components have really driven home my desire for a harmonious, to use your word, relationship. So, I'm going to move forward here. The theme of today's presentation is the Pure Vertica Symphony, live at AT&T. And if anybody is a Westworld fan, you can appreciate the sheet music on the right hand side. What I'm going to highlight here, in a musical fashion, is how we at AT&T leverage these technologies to save money, to deliver a more efficient platform, and to actually just make our customers happier overall. So as we look back, as early as just maybe a few years ago here at AT&T, I realized that we had many musicians to help the company. Or maybe you might want to call them data scientists, or data analysts. For the theme, we'll stay with musicians. None of them were singing or playing from the same hymn book or sheet music. And so what we had was many organizations chasing a similar dream, but not exactly the same dream. And the best way to describe that, and I think with a lot of people this might resonate in your organizations: how many organizations are chasing a customer 360 view in your company? Well, I can tell you that I have at least four in my company. And I'm sure there are many that I don't know of. That is our problem, because what we see is a repetitive sourcing of data. We see a repetitive copying of data. 
And there's just so much money being spent. This is where I asked Pure Storage and Vertica to help me solve that problem with their technologies. What I also noticed was that there was no coordination between these departments. In fact, if you look here, nobody really wants to play with finance. Sales, marketing and care, sure, they all copied each other's data. But they actually didn't communicate with each other as they were copying the data. So the data became replicated and out of sync. This is a challenge throughout, not just my company, but all companies across the world. And that is, the more we replicate the data, the more problems we have at chasing or conquering the goal of a single version of truth. In fact, I kid that at AT&T we have actually adopted the multiple-versions-of-truth theory, which is not where we want to be, but this is where we are. But we are conquering that with the synergies between Pure Storage and Vertica. This is what it leaves us with. And this is where we were challenged: each one of our siloed business units had their own storage, their own dedicated storage, and some of them had more money than others, so they bought more storage. Some of them anticipated storing more data than they really did. Others are running out of space but can't put any more in because their budgets haven't been replenished. So if you look at it from this side view here, we have a limited amount of compute, or fixed compute, dedicated to each one of these silos. And that's because of the wanting to own your own. And the other part is that you are limited or wasting space, depending on where you are in the organization. So the synergies aren't just about the data, but actually the compute and the storage. And I wanted to tackle that challenge as well. So I was tackling the data, I was tackling the storage, and I was tackling the compute, all at the same time. So my ask across the company was: can we just please play together, okay? And to do that, I knew that I wasn't going to tackle this by getting everybody in the same room and getting them to agree that we needed one account table, because they would argue about whose account table is the best account table. But I knew that if I brought the account tables together, they would soon see that they had so much redundancy that I could now start retiring data sources. I also knew that if I brought all the compute together, they would all be happy. But I didn't want them to trample across each other. And in fact, that was one of the things that all business units really enjoy. They enjoy the silo of having their own compute, and more or less being able to control their own destiny. Well, Vertica's subclustering allows just that. And this is exactly what I was hoping for, and I'm glad they brought it through. And finally, how did I solve the problem of the single account table? Well, when you don't have dedicated storage, and you can separate compute and storage as Vertica in Eon Mode does, we store the data on FlashBlades, which you see on the left and right hand side of our container, which I can describe in a moment. Okay, so what we have here is a container full of compute, with all the Vertica nodes sitting in the middle, and two loaders, we'll call them loader subclusters, sitting on the sides, which are dedicated to just putting data onto the FlashBlades, which are sitting on both ends of the container. 
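For readers who want to see what that loader-versus-query subcluster separation looks like in practice, here is a hedged sketch of the kind of SQL the DBAs, or data conductors, might use to inspect and monitor it. The subcluster and pool names are hypothetical, and the exact system table columns can vary by Vertica version; treat this as an illustration under those assumptions.

```sql
-- Which nodes belong to which subcluster (loaders vs. the business units' query subclusters)
SELECT subcluster_name, node_name, is_primary
FROM v_catalog.subclusters
ORDER BY subcluster_name, node_name;

-- Watch resource pool pressure per node; auto-detect scripts can poll views like this
-- to decide when to ramp a subcluster up or trim it back down
SELECT node_name, pool_name, running_query_count, memory_inuse_kb
FROM v_monitor.resource_pool_status
WHERE pool_name IN ('general', 'loader_pool')
ORDER BY node_name;
```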
Now today, I have two dedicated storage racks, or common might not be the right word, but two storage racks, one on the left, one on the right. And I treat them as separate storage racks. They could be one, but I created them separately for disaster recovery purposes, in case a rack were to go down. But that being said, I'm probably going to add a couple more of them here in the future. So I can just have a, say, five to 10 petabyte storage setup, and I'll have my DR in another, 'cause the DR shouldn't be in the same container. Okay, but I'll DR outside of this container. So I got them all together, I leveraged subclustering, I leveraged separation of compute and storage. I was able to convince many of my clients that they didn't need their own account table, that they were better off having one. I eliminated data sources, I reduced latency, I reduced our data quality issues, AKA ticketing, okay. I was able to expand as the work required. I was able to leverage elasticity within this cluster. As you can see, there are racks and racks of compute. We set up what we'll call the fixed capacity that each of the business units needed. And then I'm able to ramp up and release the compute that's necessary for each one of my clients based on their workloads throughout the day. And so while the compute to the right, as you see, the instruments have already more or less dedicated themselves, all the others are free for anybody to use. So in essence, what I have is a concert hall with a lot of seats available. So if I want to run a 10-chair symphony or an 80-chair symphony, I'm able to do that. And all the while, I can also do the same with my loader nodes. I can expand my loader nodes to actually have their own symphony, all to themselves, and not compete with any other workloads of the other clusters. What does that change for our organization? Well, it really changes the way our database administrators actually do their jobs. This has been a big transformation for them. They have actually become data conductors. Maybe you might even call them composers, which is interesting, because what I've asked them to do is morph into less technology and more workload analysis. And in doing so, we're able to write auto-detect scripts that watch the queues, watch the workloads, so that we can help ramp up and trim down the cluster and subclusters as necessary. It has been an exciting transformation for our DBAs, who I now need to classify as something maybe like DCAs. I don't know, I'll have to work with HR on that. But I think it's an exciting future for their careers. And if we bring it all together, our clusters start looking like this, where everything is moving in harmony, we have lots of seats open for extra musicians, and we are able to emulate a cloud experience on-prem. And so, I want you to sit back and enjoy the Pure Vertica Symphony, live at AT&T. (soft music) >> Joy: Thank you so much, John, for an informative and very creative look at the benefits that AT&T is getting from its Pure Vertica symphony. I do really like the idea of engaging HR to change the title to Data Conductor. That's fantastic. I've always believed that music brings people together. And now it's clear that analytics at AT&T is part of that musical advantage. So, now it's time for a short break, and we'll be back for our breakout sessions, beginning at 12 pm Eastern Daylight Time. 
We have some really exciting sessions planned later today, and then again, as you can see, on Wednesday. Now, because all of you are already logged in and listening to this keynote, you already know the steps to continue to participate in the sessions that are listed here and on the previous slide. In addition, everyone received an email yesterday and today, and you'll get another one tomorrow, outlining the simple steps to register, log in and choose your sessions. If you have any questions, check out the emails or go to www.vertica.com/bdc2020 for the logistics information. There are a lot of choices, and that's always a good thing. Don't worry if you want to attend more than one, or can't listen to these live sessions due to your timezone. All the sessions, including the Q&A sections, will be available on demand, and everyone will have access to the recordings, as well as even more pre-recorded sessions that we'll post to the BDC website. Now, I do want to leave you with two other important sites. First, our Vertica Academy. Vertica Academy is available to everyone, and there's a variety of very technical, self-paced, on-demand training, virtual instructor-led workshops, and Vertica Essentials Certification. And it's all free, because we believe that Vertica expertise helps everyone accelerate their Vertica projects and the advantage that those projects deliver. Now, if you have questions or want to engage with our Vertica engineering team, we're waiting for you on the Vertica forum. We'll answer any questions or discuss any ideas that you might have. Thank you again for joining the Vertica Big Data Conference Keynote Session. Enjoy the rest of the BDC, because there's a lot more to come.

Published Date : Mar 30 2020

Gabriel Chapman, Pure Storage | Vertica Big Data Conference 2020


 

Hi everybody, and welcome to this Cube special presentation of the Vertica virtual Big Data Conference. theCUBE is running in parallel with day one and day two of the Vertica big data event. By the way, theCUBE has been at every single Big Data event, and it's our pleasure to be here in the virtual / digital event as well. Gabriel Chapman is here, he is the Director of FlashBlade Product Solutions Marketing at Pure Storage. Gabe, great to see you, thanks for coming on. Great to see you too, how's it going? It's going very well. I mean, I wish we were meeting in Boston at the Encore Hotel, but you know, hopefully we'll be able to meet at Accelerate at some point, or one of the sub shows that you guys are doing, the regional shows, because we've been covering that show as well. But I really want to get into it. At the last Accelerate, September 2019, Pure and Vertica announced a partnership. I remember Joy came running up to me and said, hey, you've got to check this out, the separation of compute and storage via Eon Mode, now available on FlashBlade. And I believe Pure is still the only company that can support that separation and independent scaling both on-prem and in the cloud. So Gabe, I want to ask you, what were the trends in analytical databases and cloud that led to this partnership? You know, realistically, I think what we're seeing is that there's been kind of a larger shift when it comes to modern analytics platforms, towards moving away from the traditional, you know, Hadoop-type architecture, where we were leveraging a lot of direct attached storage, primarily because of the limitations of how that solution was architected. When we start to look at the larger trends towards, you know, how organizations want to do this type of work on premises, they're looking at solutions that allow them to scale the compute and storage pieces independently, and therefore, you know, the FlashBlade platform ended up being a great solution to support Vertica in their transition to Eon Mode, leveraging it essentially as an S3 object store. Okay, so let's circle back on that. You guys, in your announcement of FlashBlade, make the claim that FlashBlade is the industry's most advanced file and object storage platform ever. That's a bold statement, so defend it. Yeah, I'd like to go beyond that and just say, you know, we've really kind of looked at this from the standpoint of, as we've developed FlashBlade as a platform, and keep in mind it's been a product that's been around for over three years now and, you know, it's been very successful for Pure Storage, the reality is that fast file and fast object as a combined storage platform is a direction that many organizations are looking to go, and we believe that we're a leader in that fast object and fast file storage space. Realistically, as we start to see more organizations look at building solutions that leverage cloud storage characteristics, but doing so on-prem for a multitude of different reasons, we've built a platform that really addresses a lot of those needs, around simplicity, around, you know, making things, sure, fast matters, but for us simple is smart, we can provide, you know, cloud integrations across the spectrum, and you know, there's a subscription model that fits into that as well. That falls into our umbrella of what we consider the modern data experience, and it's something that we've built into the entire Pure portfolio.

Okay, so I want to get into the architecture a little bit of FlashBlade, and then better understand the fit for analytic databases generally, but specifically Vertica. So it is a blade, so you've got compute and a network included. It's a key-value-store-based system, so you're talking about scale-out, unlike Pure's sort of, you know, initial products, which were scale-up. And it's a fabric-based system, so I want to understand what that all means. So take us through the architecture, you know, some of the quote-unquote firsts that you guys talk about. So let's start with sort of the blade aspect. Yeah, the blade aspect, meaning we call it a FlashBlade because if you look at the actual platform, you have primarily a chassis with built-in networking components, right. So there's a fabric interconnect inside the platform that connects to each one of the individual blades. The individual blades have their own compute that drives basically Pure Storage flash components inside. It's not like we're just taking SSDs and plugging them into a system, like you would with a traditional commodity off-the-shelf hardware design. This is very much an engineered solution that is built towards the characteristics that we believe are important with fast file and fast object: scalability, you know, massive parallelization when it comes to performance, and the ability to really kind of grow and scale from essentially seven blades right now to a hundred and fifty. That's the kind of scale that customers are looking for, especially as we start to address these larger analytic pools. They have multi-petabyte datasets, you know, that single addressable object space, and you know, file performance that is beyond what most of your traditional scale-up storage platforms are able to deliver. Yes, I interviewed Coz last September at Accelerate, and Coz has been, you know, attacked by some of the competitors as not having scale-out. I asked him his thoughts on that. He said, well, first of all, our FlashBlade is scale-out, and he said, look, anything that adds complexity, you know, we avoid. But for the workloads that are associated with FlashBlade, scale-out is the right sort of approach. Maybe you could talk about why that is. Well, you know, realistically, I think, you know, that approach is better when we're starting to work with large unstructured data sets. I mean, FlashBlade is uniquely architected to allow customers to achieve, you know, superior resource utilization for compute and storage, while at the same time, you know, significantly reducing the complexity that has arisen around the kind of bespoke or siloed nature of big data and analytic solutions. I mean, we really kind of look at this from the standpoint of, you have built and delivered or created applications in the public cloud space that address, you know, object storage and unstructured data, and for some organizations the importance is bringing that on-prem. I mean, we do see repatriation coming on for a lot of organizations, as these data egress charges continue to expand and grow, and then organizations that want even higher performance than what we're able to get in the public cloud space, they are bringing that data back on-prem. They are looking at it from the standpoint of, we still want to be able to scale the way we scale in the cloud, we still want to operate the same way we operate in the cloud, but we want to do it within the control of our own, you know, our own borders. And so that's, you know, one of the bigger pieces to that, as we start to look at how do we address cloud characteristics and dynamics and consumption metrics or models, as well as the benefits and efficiencies of scale that they're able to afford, but allowing customers to do that inside their own data center.

Yes, you were talking about the trends earlier. You had these cloud native databases that allowed the scaling of compute and storage independently, then Vertica comes in with Eon. A lot of times we talk about these partnerships as Barney deals, you know, I love you, you love me, here's a press release and then we go on, or they're just straight, you know, go-to-market. Are there other aspects of this partnership that are non-Barney-deal-like? In other words, any specific, you know, engineering, you know, other go-to-market programs? Can you talk about that a little bit? Yeah, it's more than just, you know, what we consider a channel meet-in-the-middle or, you know, that Barney type of deal. Realistically, you know, we've done some firsts with Vertica that I think are really important, if you look at the architecture and how we've brought this to market together. We have solutions teams in the back end who are, you know, subject matter experts in this space. If you talk to Joy and the people from Vertica, they're very high on, or very excited about, the partnership, because it opens up a new set of opportunities for their customers to leverage Eon Mode, and you know, get into some of the nuanced aspects of how they leverage the depot, the depot inside each individual compute node, and make adjustments inside there to reach additional performance gains for customers on-prem, and at the same time, for them, there's still the ability to go into that cloud model if they wish to. And so I think a lot of it is around how do we partner as two companies, how do we do joint selling motions, you know, how do we show up and, you know, do white papers and all of the traditional marketing aspects that we bring to the market, and then, you know, joint selling opportunities as they exist, where they are. And so, realistically, I think like any other organization that's going to market with a partner or an ISV that they have a strong partnership with, you'll continue to see us, you know, talking about those mutually beneficial relationships and the solutions that we're bringing to the market. Okay. You know, of course, you used to be a Gartner analyst, and you've gone over to the vendor side now, but as a Gartner analyst you're obviously objective, you see it all. You know, there's a lot of ways to skin a cat, there are strengths, weaknesses, opportunities, threats, etc. for every vendor. So you have Vertica, who's got a very mature stack, and talking to a number of the customers out there who are using Eon Mode, you know, there are certain workloads where these cloud native databases make sense. It's not just the economics of scaling compute and storage independently, I want to talk more about that, there are flexibility aspects as well. But Vertica really, you know, has to play its trump card, which is, look, we've got a big on-premise estate and we're going to bring that, you know, Eon capability both on-prem, and we're embracing the cloud. Now, obviously, they had to play catch-up in the cloud, but at the same time they've got a much more mature stack than a lot of these other, you know, cloud native databases that might have just started a couple of years ago. So, you know, there are trade-offs that customers have to make. How do you sort through that? Where do you see the interest in this, and what's the sweet spot for this partnership? You know, we've been really excited to build the partnership with Vertica, and we're really proud to provide pretty much the only on-prem storage platform that's validated with Vertica Eon Mode, to deliver a modern data experience for our customers together. You know, it's that partnership that allows us to go into customers in that on-prem space, where I think, you know, not to say that not everybody wants to go to the cloud, I think there are aspects and solutions that work very well there, but for the vast majority, I still think that, you know, your data center is not going away, and you do want to have control over many of the different facets inside the operational confines. So therefore we start to look at how we can do the best of what cloud offers, but on-prem, and that's realistically where we start to see the stronger push from those customers who still want to manage their data locally, as well as maybe even work around some of the restrictions that they might have around cost and complexity, hiring, you know, the different types of skill sets that are required to bring, you know, applications purely cloud native. It's still that larger part of the digital transformation that many organizations are going forward with, and realistically I think they're taking a look at the pros and cons, and we've been doing cloud long enough for people to recognize that, you know, it's not perfect for everything, and that there are certain things that we still want to keep inside our own data center. So, realistically, as we move forward, that's that better option when it comes to a modern architecture. We can deliver and address a diverse set of performance requirements, and allow the organization to continue to bring the model to the data, you know, based on the data that they're actually trying to leverage. And that's really what FlashBlade was built for, a platform that can address small files or large files, high throughput, low latency, scale to petabytes in a single namespace, in a single rack, as we like to put it. I mean, we see customers that have put, you know, 150 FlashBlades into production as a single namespace. It's significant for organizations that are making that drive towards the modern data experience with modern analytics platforms. Pure and Vertica have delivered an experience that can address that for a wide range of customers that are implementing, you know, the Vertica technology.

I'm interested in exploring the use case a little bit further. You just sort of gave some parameters and some examples and some of the flexibility that you have, but take us through kind of what the customer discussions are like. Obviously you've got a big customer base, you and Vertica, that's on-prem, that's the unique advantage of this, but there are others. It's not just the economics of the granular scaling of compute and storage independently, there are other aspects. So take us through that, sort of a primary use case or use cases. Yeah, you know, I mean, I can give you a couple of customer examples. We have a large SaaS analytics company which uses Vertica on FlashBlade to authenticate the quality of digital media in real time, and you know, for them it makes a big difference, as they're doing their streaming and whatnot, that they can fine tune and granularly control that. So that's one aspect that we get to address. We have a multinational car company which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous vehicle decision-making trees. That's what these new modern analytics platforms were really built for. There's another healthcare organization that uses Vertica on FlashBlade to enable healthcare providers to make decisions in real time that impact lives. Especially when we start to look at, you know, the current state of affairs with COVID and the coronavirus, you know, those types of technologies are really going to help us kind of help lower and, you know, bend that curve downward. So, you know, there are all these different areas where we can address the goals and the achievements that we're trying to move forward with, with real-time analytic decision making tools like Vertica. And you know, realistically, as we have these conversations with customers, they're looking to get beyond the ability of just, you know, a data scientist or a data architect looking to just kind of derive information. We were talking about Hadoop earlier, we're kind of going well beyond that now. And I guess what I'm saying is that in the first phase of cloud it was all about infrastructure, it was about, you know, spinning up, you know, compute and storage, a little bit of networking in there. It seems like the next, new workload that's clearly emerging, and it started with the cloud databases, is bringing in, you know, AI and machine learning tooling on top of that, and then being able to really drive these new types of insights. And it's really about taking this bog of data that we've collected over the last 10 years, a lot of that, you know, driven by Hadoop, bringing machine intelligence into the equation, scaling it with either public cloud or bringing that cloud experience on-prem, and scaling, you know, across your organizations and across your partner network. That really is a new emerging workload. Do you see that, and maybe talk a little bit about, you know, what you're seeing with customers? Yeah, I mean, it really is. We see several trends. You know, one of those is the ability to take this approach and move it out of the lab and into production, you know, especially when it comes to, you know, data science projects, machine learning projects, that traditionally start out as kind of small proofs of concept, easy to spin up in the cloud. But when a customer wants to scale and move towards really, you know, deriving significant value from that, they do want to be able to control more characteristics, right. And we know machine learning, you know, needs to learn from massive amounts of data to provide accuracy. There's just too much data to retrieve from the cloud for every training job. At the same time, predictive analytics without accuracy is not going to deliver the business advantage that everyone is seeking. You know, we see the visualization of data analytics as traditionally deployed as being on a continuum, with, you know, the things that we've been doing the longest, you know, data warehousing and data lakes, on one end, and AI on the other end. But the way this is starting to manifest in organizations is that they're looking towards, you know, getting more utility and better, you know, elasticity out of the data that they are working with. So they're not looking to just build up, you know, silos of bespoke AI environments, they're looking to leverage, you know, a platform that can allow them to, you know, do AI for one thing, machine learning for another, leverage multiple protocols to access that data, because the tools are so much different. You know, it is a growing diversity of use cases that you can put on a single platform, which I think organizations are looking for as they try to scale these environments. I think it's going to be a big growth area in the coming years. Gabe, I wish we were in Boston together, you would have painted your little corner of Boston orange, I know. But I really appreciate you coming on theCUBE. Wall-to-wall coverage, two days, at the Vertica virtual Big Data Conference. Keep it right there, we'll be right back right after this short break. [Music]

Published Date : Mar 30 2020

Gabriel Chapman


 

hi everybody and welcome to this cube special presentation of the verdict of virtual Big Data conference the cube is running in parallel with day 1 and day 2 of the verdict big data event by the way the cube has been at every single big data event and it's our pleasure to be here in the virtual / digital event as well Gabriel Chapman is here is the director of flash blade product solutions marketing at pure storage gave great to see you thanks for coming on great to see you - how's it going it's going very well I mean I wish we were meeting in Boston at the Encore Hotel but you know and and hopefully we'll be able to meet it accelerate at some point you cheer or one of the the sub shows that you guys are doing the regional shows but because we've been covering that show as well but I really want to get into it and the last accelerate September 2019 pure and Vertica announced a partnership I remember a joint being ran up to me and said hey you got to check this out the separation of Butte and storage by a Eon mode now available on flash played so and and I believe still the only company that can support that separation and independent scaling both on permit in the cloud so Gabe I want to ask you what were the trends in analytical database and cloud that led to this partnership you know realistically I think what we're seeing is that there's been in kind of a larger shift when it comes to modern analytics platforms towards moving away from the the traditional you know Hadoop type architecture where we were doing on and leveraging a lot of direct attached storage primarily because of the limitations of how that solution was architected when we start to look at the larger trends towards you know how organizations want to do this type of work on premises they're looking at solutions that allow them to scale the compute storage pieces independently and therefore you know the flash play platform ended up being a great solution to support Vertica in their transition to Eon mode leveraging is essentially as an s3 object store okay so let's let's circle back on that you guys in your in your announcement of a flash blade you make the claim that flash blade is the industry's most advanced file and object storage platform ever that's a bold statement so defend that it's supposed to yeah III like to go beyond that and just say you know so we've really kind of looked at this from a standpoint of you know as as we've developed flash blade as a platform and keep in mind it's been a product that's been around for over three years now and has you know it's been very successful for pure storage the reality is is that fast file and fast object as a combined storage platform is a direction that many organizations are looking to go and we believe that we're a leader in that fast object of best file storage place in realistically would we start to see more organizations start to look at building solutions that leverage cloud storage characteristics but doing so on prem or multitude different reasons we've built a platform that really addresses a lot of those needs around simplicity around you know making things assure that you know vast matters for us simple is smart we can provide you know cloud integrations across the spectrum and you know there's a subscription model that fits into that as well we fall that that falls into our umbrella of what we consider the modern data experience and it's something that we've built into the entire pure portfolio okay so I want to get into the architecture a little bit of 
>> Okay, so I want to get into the architecture of FlashBlade a little bit and then better understand the fit for analytic databases generally, but specifically Vertica. So it is a blade, so you've got compute and a network included. It's a key-value-store-based system, so you're talking about scale-out, unlike Pure's initial products, which were scale-up. And it's a fabric-based system, so I want to understand what all that means. Take us through the architecture, some of the quote-unquote firsts that you guys talk about. Let's start with the blade aspect. >> Yeah, the blade aspect, meaning we call it a FlashBlade because if you look at the actual platform, you have primarily a chassis with built-in networking components, right? There's a fabric interconnect inside the platform that connects to each one of the individual blades, and the individual blades have their own compute that drives Pure Storage flash components inside. It's not like we're just taking SSDs and plugging them into a system the way you would with a traditional commodity off-the-shelf hardware design. This is very much an engineered solution, built toward the characteristics that we believe are important for fast file and fast object: scalability, massive parallelization when it comes to performance, and the ability to grow and scale from essentially seven blades right now to a hundred and fifty. That's the kind of scale customers are looking for, especially as we start to address these larger analytics pools with multi-petabyte datasets, a single addressable object space, and file performance beyond what most traditional scale-up storage platforms are able to deliver. >> I interviewed Coz last September at Accelerate, and Pure's been, you know, attacked by some of the competitors as not having scale-out. I asked him his thoughts on that. He said, well, first of all, our FlashBlade is scale-out, and he said, look, anything that adds complexity we avoid, but for the workloads associated with FlashBlade, scale-out is the right approach. Maybe you could talk about why that is. >> Well, realistically, I think that approach is better when we're starting to work with large unstructured data sets. FlashBlade is uniquely architected to allow customers to achieve superior resource utilization for compute and storage, while at the same time significantly reducing the complexity that has arisen around the kind of bespoke or siloed nature of big data and analytics solutions. We really look at it from the standpoint that people have built and delivered applications in the public cloud space that address object storage and unstructured data, and for some organizations the important thing is bringing that on prem. I mean, we do see repatriation coming on for a lot of organizations as these data egress charges continue to expand and grow, and there are organizations that want even higher performance than what they're able to get in the public cloud space, so they are bringing that data back on prem. They're looking at it from the standpoint of, we still want to be able to scale the way we scale in the cloud, we still want to operate the same way we operate in the cloud, but we want to do it within the control of our own borders. And that's one of the bigger pieces: as we start to look at how we address cloud characteristics and dynamics, consumption metrics and models, as well as the benefits and efficiencies of scale that the cloud is able to afford, we're allowing customers to do that inside their own data center.
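One way to picture the "massive parallelization" point in that answer: large analytic reads can be fanned out across many objects at once instead of funneled through a single controller. Here is a small sketch of that access pattern with concurrent S3 GETs; the endpoint, bucket, and key prefix are hypothetical, and this only illustrates the client side of a scale-out read, not FlashBlade internals.

```python
# Sketch of a parallel read pattern against a hypothetical on-prem S3 endpoint:
# many independent GETs issued concurrently so a scale-out backend can service
# them in parallel. Endpoint, bucket, and prefix are placeholders.
from concurrent.futures import ThreadPoolExecutor
import boto3

s3 = boto3.client("s3", endpoint_url="https://flashblade.example.internal")
BUCKET = "analytics-data"

resp = s3.list_objects_v2(Bucket=BUCKET, Prefix="events/")
keys = [obj["Key"] for obj in resp.get("Contents", [])]

def fetch(key: str) -> int:
    # Each GET is independent; issuing them concurrently lets the storage
    # fabric spread the work across its blades.
    body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return len(body)

with ThreadPoolExecutor(max_workers=32) as pool:
    total_bytes = sum(pool.map(fetch, keys))

print(f"read {len(keys)} objects, {total_bytes} bytes in parallel")
```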
>> So you were talking about the trends earlier: you had these cloud-native databases that allowed the scaling of compute and storage independently, and Vertica comes in with Eon. A lot of times we talk about these partnerships as Barney deals, you know, "I love you, you love me," here's a press release and then we go on, or they're just straight go-to-market. Are there other aspects of this partnership that are non-Barney-deal-like, in other words any specific engineering or other go-to-market programs? Can you talk about that a little bit? >> Yeah, it's more than just what we'd consider a channel meet-in-the-middle or that Barney type of deal. Realistically, we've done some firsts with Vertica that I think are really important, if you look at the architecture and how we've brought this to market together. We have solutions teams on the back end who are subject matter experts in this space, and if you talk to Joy and the people from Vertica, they're very high on, very excited about, the partnership, because it opens up a new set of opportunities for their customers to leverage Eon mode and get into some of the nuanced aspects of how they use the depot within each individual compute node, and make adjustments in there to reach additional performance gains on prem, while at the same time keeping the ability to go into that cloud model if they wish to. So I think a lot of it is around how we partner as two companies, how we do joint selling motions, how we show up and do white papers and all the traditional marketing aspects that we bring to the market, and then joint selling opportunities where they exist. Realistically, like any other organization going to market with a partner or an ISV that we have a strong partnership with, you'll continue to see us talking about these mutually beneficial relationships and the solutions we're bringing to the market.
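For context on the depot Gabe mentions: in Eon mode each compute node keeps a local depot (cache) in front of communal storage, and its size can be inspected and tuned per node. The sketch below shows roughly what that could look like from a Python client using the vertica_python driver; the connection details are placeholders, and the system table and resize function named in the comments are assumptions based on Vertica's documented Eon-mode interfaces, so verify them against your release before using anything like this.

```python
# Hedged sketch: inspecting (and, commented out, resizing) per-node depot
# settings in an Eon-mode database. Connection details are placeholders;
# the table/function names are assumptions to verify against Vertica docs.
import vertica_python

conn_info = {
    "host": "vertica-node1.example.internal",  # hypothetical node
    "port": 5433,
    "user": "dbadmin",
    "password": "********",
    "database": "analytics",
}

conn = vertica_python.connect(**conn_info)
cur = conn.cursor()

# Assumes the storage_locations system table exposes a 'DEPOT' usage type,
# as in recent Eon-mode releases.
cur.execute(
    "SELECT node_name, location_path, max_size "
    "FROM storage_locations WHERE location_usage = 'DEPOT'"
)
for node_name, location_path, max_size in cur.fetchall():
    print(node_name, location_path, max_size)

# Example adjustment (left commented out): grow one node's depot.
# The exact function and arguments are an assumption to confirm in the docs.
# cur.execute("SELECT alter_location_size('depot', 'v_analytics_node0001', '60%')")

conn.close()
```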
>> Okay. You know, of course, you used to be a Gartner analyst and you've gone over to the vendor side now, but as a Gartner analyst you're obviously objective, you see it all. And there's a lot of ways to skin a cat; there are strengths, weaknesses, opportunities, threats, et cetera, for every vendor. So you have Vertica, who's got a very mature stack, and talking to a number of the customers out there who are using Eon mode, there are certain workloads where these cloud-native databases make sense. It's not just the economics of scaling compute and storage independently, and I want to talk more about that; there are flexibility aspects as well. But Vertica really has to play its trump card, which is: look, we've got a big on-premise estate, and we're going to bring that Eon capability both on prem and by embracing the cloud. Now, obviously they had to play catch-up in the cloud, but at the same time they've got a much more mature stack than a lot of these other cloud-native databases that might have just started a couple of years ago. So there are trade-offs that customers have to make. How do you sort through that, where do you see the interest in this, and what's the sweet spot for this partnership? >> You know, we've been really excited to build the partnership with Vertica, and we're really proud to provide pretty much the only on-prem storage platform that's validated with Vertica Eon mode, to deliver a modern data experience for our customers together. It's that partnership that allows us to go into that on-prem space with customers. Not to say that nobody wants to go to the cloud; I think there are aspects and solutions that work very well there. But for the vast majority, I still think your data center is not going away, and you do want to have control over many of the different facets inside the operational confines. So we start to look at how we do the best of what cloud offers, but on prem, and that's realistically where we see the stronger push from those customers who still want to manage their data locally, as well as maybe work around some of the restrictions they might have around cost and complexity, and around hiring the different types of skill sets required to build applications purely cloud-native. It's still that larger part of the digital transformation that many organizations are going forward with, and realistically I think they're weighing the pros and cons. We've been doing cloud long enough that people recognize it's not perfect for everything, and there are certain things we still want to keep inside our own data center. So as we move forward, that's the better option when it comes to a modern architecture: we can deliver and address a diverse set of performance requirements and allow the organization to continue to grow and mold it to the data they're actually trying to leverage. And that's really what FlashBlade was built for. It was built as a platform that can address small files or large files, high throughput, low latency, and scale to petabytes in a single namespace, in a single rack, as we like to put it. We see customers that have put 150 blades into production as a single namespace. It's significant for organizations making that drive toward a modern data experience with modern analytics platforms, and Pure and Vertica have delivered an experience that can address that for a wide range of customers implementing the Vertica technology. >> I'm interested in exploring the use case a little bit further. You just gave some parameters and some examples and some of the flexibility that you have. Take us through what the customer discussions are like. Obviously you and Vertica have a big customer base that's on prem, and that's the unique advantage of this, but there are others. It's not just the economics of the granular scaling of compute and storage independently; there are other aspects. So take us through a primary use case, or use cases. >> Yeah, I can give you a couple of customer examples. We have a large SaaS analytics company which uses Vertica on FlashBlade to authenticate the quality of digital media in real time.
And for them it makes a big difference that, as they're doing their streaming and whatnot, they can fine-tune and granularly control that. So that's one aspect that we address. We have a multinational car company which uses Vertica on FlashBlade to make thousands of decisions per second for autonomous vehicle decision-making trees; that's what these new modern analytics platforms were really built for. There's another healthcare organization that uses Vertica on FlashBlade to enable healthcare providers to make decisions in real time that impact lives, and especially when we look at the current state of affairs with COVID and the coronavirus, those types of technologies are really going to help us get a handle on it and bend that curve downward. So there are all these different areas where we can address the goals that organizations are trying to move forward with, using real-time analytic decision-making tools like Vertica. And realistically, as we have these conversations with customers, they're looking to get beyond the model where a data scientist or a data architect just derives information, you know, "I'm going to set this model up and we'll come back in a day." The performance characteristics that Eon mode and Vertica allow for can get us toward an almost near-real-time analytics decision-making process, and those are the kinds of conversations we're having with customers who really need to turn this around very quickly instead of waiting. >> Well, I think you're hitting on something that's actually pretty relevant, and that is that near-real-time analytic database. We were talking about Hadoop earlier; we're going well beyond that now. And I guess what I'm saying is that the first phase of cloud was all about infrastructure, about spinning up compute and storage and a little bit of networking. It seems like the new workload that's clearly emerging, and it started with the cloud databases, is bringing in AI and machine learning tooling on top of that and then being able to really drive these new types of insights. It's about taking this bog of data that we've collected over the last 10 years, a lot of it driven by Hadoop, bringing machine intelligence into the equation, and scaling it, either with the public cloud or by bringing that cloud experience on prem, across your organization and across your partner network. That really is a new emerging workload. Do you see that, and maybe talk a little bit about what you're seeing with customers? >> Yeah, it really is. We see several trends. One of them is the ability to take this approach and move it out of the lab and into production, especially when it comes to data science and machine learning projects that traditionally start out as small proofs of concept, easy to spin up in the cloud. But when a customer wants to scale and really derive significant value from it, they do want to control more of the characteristics, right? And we know machine learning needs to learn from massive amounts of data to provide accuracy; there's just too much data to retrieve from the cloud for every training job, and at the same time, predictive analytics without accuracy is not going to deliver the business advantage everyone is seeking. We see data analytics traditionally deployed as a continuum, with the things we've been doing in the past, data warehousing and data lakes, on one end and AI on the other. But the way it's starting to manifest is that organizations are looking to get more utility and better elasticity out of the data they're working with. So they're not looking to just build up silos of bespoke AI environments; they're looking to leverage a platform that allows them to do AI for one thing, machine learning for another, and to use multiple protocols to access that data, because the tools are so different. It's a growing diversity of use cases that you can put on a single platform, and that's what I think organizations are looking for as they try to scale these environments. I think it's going to be a big growth area in the coming years.
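Two of the points in that answer, training against the data where it lives rather than copying it out for every job, and reaching the same data over multiple protocols, can be illustrated with a short sketch. Here the same Parquet dataset is opened either as files over an NFS mount or as objects over S3 against an on-prem endpoint, then streamed in batches to a training loop. The paths, endpoint, bucket, credentials, and column names are all hypothetical.

```python
# Sketch: one dataset, two access protocols, streamed in batches to a trainer.
# All names (mount path, endpoint, bucket, columns) are placeholders.
import pyarrow.dataset as ds
import pyarrow.fs as pafs

# Option 1: file access over an NFS mount exported by the same platform.
file_dataset = ds.dataset("/mnt/flashblade/training/events", format="parquet")

# Option 2: object access over S3 against the on-prem endpoint.
s3 = pafs.S3FileSystem(
    access_key="FB_ACCESS_KEY",                       # placeholder credentials
    secret_key="FB_SECRET_KEY",
    endpoint_override="flashblade.example.internal",  # hypothetical endpoint
)
object_dataset = ds.dataset("analytics-data/training/events",
                            filesystem=s3, format="parquet")

# Stream record batches into the training loop instead of materializing a
# multi-terabyte dataset all at once (or copying it out to the cloud).
for batch in file_dataset.to_batches(columns=["features", "label"],
                                     batch_size=100_000):
    pass  # hand `batch` to the model's fit / partial_fit step here
```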
>> Gabe, well, I wish we were in Boston together; you would have painted your little corner of Boston orange, I know that for sure. But I really appreciate you coming on theCUBE, and thank you very much. Have a great day. >> You too. >> Okay, thank you everybody for watching. This is theCUBE's wall-to-wall coverage, two days of the Vertica Virtual Big Data Conference. Keep it right there, we'll be right back after this short break. (upbeat music)

Published Date : Mar 30 2020

