George Axberg, VAST Data | VeeamON 2022
>> Welcome back to theCUBE's coverage of VeeamON 2022. My co-host Dave Nicholson is here. We spend a lot of time at the Venetian convention center, formerly the Sands, so it's nice to have a more intimate venue here at the Aria. I really like it here. George Axberg is joining us. He's the vice president of data protection at VAST Data, a company that some of you may not know about. George, welcome.

>> A pleasure. Thank you so much for having me.

>> So VAST is smoking hot, raised a ton of dough. You've got great founders, hard-charging, interesting tech. We've covered it a little bit on the Wikibon research side, but give us the overview of the company.

>> If I could, please. So we're here at the Veeam show, and the theme is modern data protection, and I don't think there's any company that epitomizes modern data protection more than VAST Data. The fact that we can do an all-flash system at exabyte scale, with the economics of cloud object-based, cheap-and-deep archive-type solutions, in an extremely resilient platform, is really game-changing for the marketplace. And quite frankly, the data protection target space is a marketplace that I think is ripe for change, and in need of change, based on the things that are going on in the market today.

>> Yeah, so a lot of what you said is going to be surprising to people. Wait a minute, you're talking about data protection and all-flash? I thought you'd use cheap and deep disk for that, or even tape, or spin it up in the cloud in a deep archive or Glacier. Explain your approach and architecture.

>> At a high level, sure. Great question. We get it every day, and we got it in the booth yesterday probably 40 or 50 times: how could it be all-flash at an economic point that fits data protection?

>> What is this Ferrari minivan of which you speak?

>> Yeah, the minivan that goes 180 miles an hour, right? It's really all about the architecture. The componentry is somewhat similar to what you'll see in other devices; however, it's how we're leveraging it in the architecture and design, from our founders years ago, building a solution that just was not available in the marketplace. So sure, we're using all-flash QLC drives, but the technology, the advanced next-generation algorithms for erasure coding and RAID striping, allows us to be extremely efficient. We also have some technology around what we call similarity, some advanced data reduction, so you need less capacity with a VAST system, and that obviously helps us tremendously with the economics. But the other thing is, I can sell a customer exactly what they need. Think about the legacy data protection market: purpose-built backup appliances, for example, from the likes of Dell and HPE, are somewhat rigid systems. There's always a controller and a capacity tied to a model number. As soon as you need more performance, you buy another; as soon as you need more capacity, you buy another. It's really not modular in any way.

>> It's a great model if you want to just keep billing the customer.

>> Yeah, and I think at this point, Dave, the purpose-built backup appliance market is hungry for a change.
There's not anyone out there that has just one. It doesn't exist, and I'm not just talking about having two because of replication. It's because of organic growth, because ransomware requires a second unit, a second copy, and just because of scalability.

>> Well, you guys saw that fatigue with that model of, oh, you need more, buy more, right?

>> Without a doubt, and we said we're going to attack that. So we can configure a solution exactly to the need, because let's face it, every single data center, every single vertical market is a work of art. Everyone's retention policies are different. Everyone's compliance needs are different. There might be things that are self-mandated or government-mandated, and they're all going to be somewhat different. The fact of the matter is, the way our architecture works, a disaggregated, shared-everything architecture, is different. When you go back to those model numbers, the more rigid purpose-built backup appliances, or arrays designed specifically for data protection, they don't offer that flexibility. Our entry point is sized to exactly what the need is, and scaling is easy. You need more performance? We just add another compute box, what we call our C-box. You need more capacity? We just add another data box, a D-box, where the data resides. And especially here at Veeam, I think customers are really clamoring for that next-generation solution. They love the idea that there's a low point of entry, but they also love the idea that it's easy to scale on demand, on an as-needed basis.

>> I want to go down another layer on that, architecturally, because I think it's important for people to understand exactly what you're saying. When you're talking about scaling, there's this concept of the devil's triangle, the tyranny of this combination of memory, CPU, and storage. If you're too rigid, like in an appliance, you end up paying for things you don't need. All I need is a little more capacity; all I need is a little more horsepower. Oh, you want horsepower? No, you've got to buy a bunch of capacity. You need capacity? No, you need to buy expensive CPUs that suck a bunch of power. So go through that in a little more detail, in terms of how you cobble these systems together. The way my brain works, it's always about Legos, so feel free to use Legos.

>> Sure. So with our disaggregated solution, we've separated hardware from software. That's a good thing, from an economic standpoint but also from a design and architecture standpoint. But another underpinning of the solution is that we've also separated the capacity from the performance. As you just mentioned, for virtually every other solution on the planet, those are tied together. We've disaggregated that as well within our architecture. So we have basically three, tier is not the right word, three components that build out a VAST cluster, and again, we don't sell a solution defined by a model number.
Those are our C-boxes, connected via NVMe over fabrics to our D-boxes. C is all the performance; D is all the capacity. Because they're modular, our baseline product can start out as a one-by-one, one C-box and one D-box, connected via variously sized NVMe fabrics, and that can scale to hundreds. We do have customers with dozens of C-boxes meeting high-performance requirements. Keep in mind, when VAST Data came to market, our founders brought it to market for high-performance computing, machine learning, and AI; data protection was an afterthought. But those foundational things, that modularity with performance at scale, make it a perfect fit for data protection. So we see it in clients today. Just yesterday, two clients standing next to each other, in the same market, in the same vertical: I have a 30-day retention; I have a 90-day retention. I have to keep one year's worth of full backups; I have to keep seven years' worth of full backups. We can accommodate both and size it to exactly what the need is.
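To make the C-box/D-box scaling model concrete, here is a minimal sizing sketch in Python. The per-box throughput and capacity figures are illustrative assumptions for the sake of the example, not VAST's published specifications; the point is only that the two dimensions are sized independently.

```python
# Hypothetical sizing sketch for a disaggregated cluster where compute
# (C-boxes) and capacity (D-boxes) scale independently. Per-box figures
# below are illustrative assumptions, not actual product specs.
import math

C_BOX_THROUGHPUT_GBPS = 10    # assumed ingest/restore throughput per C-box
D_BOX_CAPACITY_TB = 600       # assumed usable capacity per D-box

def size_cluster(required_gbps: float, required_tb: float) -> tuple[int, int]:
    """Return (c_boxes, d_boxes), each dimension sized separately."""
    c_boxes = max(1, math.ceil(required_gbps / C_BOX_THROUGHPUT_GBPS))
    d_boxes = max(1, math.ceil(required_tb / D_BOX_CAPACITY_TB))
    return c_boxes, d_boxes

# Two customers with the same performance need but different retention:
print(size_cluster(required_gbps=20, required_tb=1_200))   # 30-day retention -> (2, 2)
print(size_cluster(required_gbps=20, required_tb=8_400))   # 7 years of fulls -> (2, 14)
```

Same performance tier, very different capacity tiers; in a rigid controller-plus-capacity appliance, the second customer would be forced to buy performance they don't need.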
We wanna move the data off our primary systems, our, our primary applications and we needed to fit within a backup window. Restore was an afterthought. Restore was, I might occasionally need to restore something. Something got lost, something got re corrupted. I have to restore something today with the, you know, let's face it, the digital pandemic of, of, of cyber threats and, and ransomware it's about sometimes restoring everything. So if you look at a legacy system, they ingest, I'm sorry. They, they, they write very fast. They, they, they can bring the data in very quickly, but their restore time is typically about 20 to 25%. >>So their reading at only 20, 25% of their right speed, you know, is their rate speed. We flip the script on that. We actually read eight times faster than we write. So I could size again to the performance that you need. If you need 40 terabytes, an hour 50 terabytes an hour, we can do that. But those systems that write at 40 terabytes an hour are restoring at only eight. We're writing at a similarly size system, which actually comes out about 51 terabytes an hour 54 terabytes. We're restoring at 432 terabytes an hour. So we've broken the mold of data protection targets. We're no longer the bottleneck. We're no longer part of your recovery plan going to be the issue right now, you gotta start thinking about network connectivity. Do I have, you know, you know, with the, with our Veeam partners, do we have the right data movers, whether virtual or physical, where am I gonna put the data? >>We've really helped customer aided customers to rethinking their whole Dr. Plan, cuz let's face it. When, when ransomware occurs, you might not be able to get in the building, your phones don't work. Who do you call right? By the time you get that all figured out and you get to the point where you're start, you want to start recovering data. If I could recover 50 times faster than a purpose built backup appliance. Right? Think about it. Is it one day or is it 50 days? Am I gonna be back online? Is it one hour? Is it 50 hours? How many millions of dollars, tens of thousands of dollars were like, will that cost us? And that's why our architecture though our thought process and how the system was designed lends itself. So well for the requirements of today, data protection, not backup it's about data protection. >>Can you give us a sense as to how much of your business momentum is from data protection? >>Yeah, sure. So I joined VAs as we were talking chatting before I come on about six months ago. And it's funny, we had a lot of vast customers on their own because they wanted to leverage the platform and they saw the power of VAs. They started doing that. And then as our founders, you know, decided to lean in heavily into this marketplace with investments, not just in people, but also in technology and research and development, and also partnering with the likes of, of Veeam. We, we don't have a data mover, right. We, we require a data mover to bring us the data we've leaned in tremendously. Last quarter was really our, probably our first quarter where we had a lot of marketing and momentum around data protection. We sold five X last quarter than we did all of last year. So right now the momentum's great pipeline looks phenomenal and you know, we're gonna continue to lean in here. >>Describe the relationship with Veeam, like kind of, sort of started recently. It sounds like as customer demand. Yeah. 
But what's it like? What are you guys doing in terms of engineering integration and go-to-market?

>> So we've gone through all the traditional verifications and certifications, and I'm proud to say we blew the roof off the requirements of a Veeam environment. Remember, Veeam was very innovative 10, 12 years ago; they were putting flash in servers because they wanted a high-performing environment for features such as instant recovery. We've now enabled that at scale. Going back to what I said about restore: we had customers come to us yesterday that have tens of thousands of VMs. Imagine that I can spin them up instantaneously and run Veeam's instant recovery solution while, in the background, restoring those items. That is powerful, and you need a very fast, high-performance system to enable it. Instant recovery is not new, it's been in the market for a very long time, but you can ask nine out of 10 customers walking the floor: they're not able to leverage it today in the systems they have, or it's over-architected, very expensive, and somewhat cost-prohibitive. So our relationship with Veeam is really skyrocketing as part of that success. Last quarter we did seven-figure deals here in the United States; we've done deals in Australia; and, as we were chatting, I happened to be in Dubai and we did a deal with the government there. There's no one specific vertical market, they're all different. It's really driven by Veeam's great cyber-resilience message, as you've seen over the last couple of days, and customers just want that power that VAST brings. Now, there are other systems in the marketplace today that leverage all-flash, but they don't have the economic solution that we have.

>> Your design anticipated the era we're in right now. It anticipated the ability to scale in a variety of ways.

>> Well, listen, anticipation or coincidental architecture, it's a fantastic fit either way. It's a fantastic fit for today, and that's the conversation we're having with all the customers here. It's really all about resiliency. In one of the sessions, I think it was mentioned that 82 or 84% of all clients interviewed don't believe they can do a restore after a cyber attack, or believe it will cost them millions of dollars. So there's a tremendous amount of risk there, and time ultimately equals dollars. So we see a big uptick there, and we're continuing our validation work and testing with Veeam. They've been very receptive globally, and Veeam's channel has been very receptive globally as well, because their customers are hungry for innovation. And I strongly believe VAST brings that.

>> George, we've got to go, but thank you, and congratulations on the momentum. Say hi to Jeff for us.

>> Will do. And can I leave you with one last thought?

>> Please do, give us your final thought.

>> In closing, I think it's important, when customers are evaluating VAST, to give them three data points. First, 100% of customers that trial VAST, test VAST, or POC VAST buy VAST. Second, Gartner Peer Insights recently did a survey,
a blind survey of dozens of VAST customers, and, in something that had never happened before, 100% of the respondents said, yes, I would recommend VAST, and I will buy VAST again.

>> It was more than two respondents?

>> It was dozens. They won't publish it if it's not dozens.

>> Checks out.

>> And last but not least, our customers are speaking with their wallets. The fact of the matter is, for every customer that spends a dollar with VAST, within a year they spend three more. There's no better endorsement than a client base that keeps coming back looking for more use cases, not just data protection but, again, high-performance computing, machine learning, and AI, for a company like VAST Data.

>> Awesome. And there's a lot of investment in engineering, more investment in engineering than marketing. How do I know? Because your capacity nodes aren't the C-nodes, they're the D-nodes, somehow. The engineers obviously won that naming battle.

>> They'll always win that one, and we let them; we need them. Thank you.

>> Awesome product and sales; it's the golden rule. All right, thank you, George. Keep it right there. This is VeeamON 2022, you're watching theCUBE, and we'll be right back.
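As a back-of-the-envelope illustration of the ingest-versus-restore asymmetry George describes above, the sketch below turns the rates quoted in the interview into recovery times. The rates come from the conversation; the 500 TB estate size is an arbitrary example chosen for the arithmetic.

```python
# Recovery-time math for the ingest-vs-restore asymmetry discussed above.
# Rates are the figures quoted in the interview; the 500 TB dataset size
# is a hypothetical example.
DATASET_TB = 500

legacy_restore_tb_per_hour = 40 * 0.20   # writes 40 TB/h, restores at ~20% of that
vast_restore_tb_per_hour = 432           # restore rate quoted in the interview

legacy_hours = DATASET_TB / legacy_restore_tb_per_hour   # 62.5 hours
vast_hours = DATASET_TB / vast_restore_tb_per_hour       # ~1.2 hours

print(f"Legacy appliance: {legacy_hours:.1f} h to restore {DATASET_TB} TB")
print(f"All-flash target: {vast_hours:.1f} h to restore {DATASET_TB} TB")
print(f"Speedup: {legacy_hours / vast_hours:.0f}x")      # ~54x, in line with "50 times faster"
```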
Renen Hallak & David Floyer | CUBE Conversation 2021
(upbeat music) >> In 2010, Wikibon predicted that the all-flash data center was coming. The forecast at the time was that flash memory consumer volumes would drive prices of enterprise flash down faster than those of high-spin-speed hard disks, and that by mid-decade, buyers would opt for flash over 15K HDDs for virtually all active data. That call was pretty much dead on, and the percentage of flash in the data center continues to grow faster than that of spinning disk. Now, the analyst that made this forecast was David Floyer, and he's with me today, along with Renen Hallak, who is the founder and CEO of VAST Data. They're going to discuss these trends and what they mean for the future of data and the data center. Gentlemen, welcome to the program. Thanks for coming on.

>> Great to be here.

>> Thank you for having me.

>> You're very welcome. Now David, let's start with you. You've been looking at this for over a decade, and frankly, your predictions have caused some friction in the marketplace. Where do you see things today?

>> Well, what I was forecasting was based on the fact that the key driver in any technology is volume. Volume reduces cost over time, and the volume comes from consumers. So flash has been driven over the years initially by the iPod in 2006, the Nano, where Steve Jobs did a great job with Samsung in introducing large volumes of flash, and then by the iPhone in 2008. Since then, all of mobile has been flash, and mobile has been taking a greater and greater share. To begin with, the PC dropped off; now over 90% of PCs are delivered with flash. So flash has taken over the consumer market very aggressively, and that has driven down the cost of flash much, much faster than the declining market of HDD.

>> Okay, now Renen, I wonder if we could come to you. I want you to talk about the innovations that you're doing, but before we get there, talk about why you started VAST.

>> Sure. It was five years ago, and it was basically about the kill of the hard drive. What David is saying resonates very, very well. In fact, if you look at our original presentation for VAST Data, it showed flash and tape; there was no hard drive in the middle. And we said, 10 years from now, and this was five years ago, so even the dates match up pretty well, we're not going to have hard drives anymore. Any piece of information that needs to be accessible at all will be on flash, and anything that is dormant and never gets read will be on tape.

>> Okay, so we're entering this new phase now, which is being driven by QLC. David, maybe you could give us a quick take: what is QLC? Just give us the bumper sticker.

>> There's 3D NAND, which is the thing that's growing very, very fast, and it's growing on several dimensions. One dimension is the number of layers. Another dimension is the size of each of those pieces. And the third dimension is the number of bits; QLC is four bits per cell. Those three dimensions have all been improving, and the result is that more and more data can be stored on a wafer, on the chips that come from that wafer. So QLC is the latest generation of 3D NAND flash that's coming off the lines at the moment.

>> Okay, so my understanding is that there are new architectures entering the data center space that can take advantage of QLC. Enter VAST.
That's a nice setup for you, Renen. Maybe before we get into the architecture, can you talk a little more about the company? Not everybody's familiar with VAST. You've shared why you started it, but what can you tell us about the business performance, and any metrics you can share would be great.

>> Sure. So the company, as I said, is five years old, about 170, 180 people today. We started selling product just around two years ago and have just hit $150 million in run rate. That's with eight salespeople. So as you can imagine, there's a lot of demand for flash all the way down the stack, in the way that David predicted.

>> Wow, okay. So you've got product-market fit, right? And now you're going to scale, go after escape velocity, and build your moat. A lot of that is product and sales; those are the two golden pillars. But David, when you think back to your early forecast last decade, it was really about block storage. That was really what was under attack. Fusion-io got it started with Facebook, who were trying to solve their SQL database performance problems. Then we saw Pure Storage hit escape velocity and drive a truck through EMC's Symmetrix HDD-based install base, which precipitated the acquisition of XtremIO by EMC, something Renen knows a little bit about, having led development of the product. But flash was late to the NAS party, guys. Renen, let me start with you. Why is that, and what is the relevance of QLC in that regard?

>> The way storage has always been, it looks like a pyramid. You have your block devices up at the top, then NAS underneath, and today you have object down at the bottom of that pyramid. The pyramid basically represents capacity, and the Y-axis is price-performance. So if you could only serve a small subset of the capacity, you would go for block, and that was the subset that needed high performance. But as you go to QLC, and PLC will soon follow, the price of all-flash systems goes down to a point where it can compete at the lower ends of that pyramid, and the capacity grows to a point where there's enough flash to support those workloads. So now, with QLC and a lot of innovation that goes with it, it makes sense to build an all-flash NAS and object store.

>> Yeah, okay. And David, you and I have talked about the volumes, and Renen just alluded to that, the higher volumes of NAS. Not to mention the fact that NAS is hard, files are difficult, but that's another piece of the equation here, isn't it?

>> Absolutely. NAS is difficult. It's very large scale; we're talking about petabytes of data. You're talking about very important data, and you're talking about data which is, at the moment, very difficult to manage. It takes a lot of people to manage, it takes a lot of resources, and it takes up a lot of space as well. So of all the issues with NAS, complexity is probably the single biggest problem.

>> So maybe we could geek out a little bit here, you guys go at it. Renen, talk about the VAST architecture. I presume it was built from the ground up for flash, since you were trying to kill HDD. What else do we need to know?

>> It was built for flash. It was also built for 3D XPoint, which is a new technology that came out from Intel and Micron about three years ago.
3D XPoint is basically another level of persistent media, above flash and below RAM. But what we really set out to do, as I said, was to kill the hard drive, and for that, what you need is to get to price parity. Of course, flash and hard drives are not at price parity today; as David said, they probably will be a few years from now. So we wanted to jumpstart that, to accelerate it. We spent a lot of time building a new type of architecture, with a lot of new metadata structures and algorithms on top, to bring that effective price down to a point where it's competitive today. In fact, two years ago the way we did it was by going out to these vendors, Intel with 3D XPoint and QLC flash, Mellanox with NVMe over fabrics and very fast Ethernet networks, taking those building blocks, and asking how we could use them to build a completely different type of architecture, one that doesn't just take flash one level down the stack, but actually allows us to break that pyramid, to collapse it down, and to build a single system that is as fast as your fastest all-flash block device, or faster, but as affordable as your hard-drive-based archives. Once that happens, you don't need to think about storage anymore. You have a single system that's big enough and cheap enough to throw everything at, and fast enough that everything is accessible at sub-millisecond latencies. The way the architecture is built is pretty much the opposite of the way scale-out storage has been done. It's not based on shared-nothing, the way XtremIO was, the way Isilon is, the way Hadoop and the Google File System are. We're basing it on a concept called Disaggregated Shared Everything. What that means is that we have the media on one set of devices and the logic running in containers, just software, and you can scale each of those independently. So you can scale capacity independently from performance, and you have a shared metadata space that all of the containers can see. The containers don't actually have to talk to each other in the synchronous path, which means it's much more scalable: you can go up to hundreds of thousands of nodes rather than just a few dozen. It's much more resilient: you can have all of the containers fail and still not lose any data. And it's much easier to use, to David's point about complexity.

>> Thank you for that. Now, you mentioned up front that you not only built for flash but built for 3D XPoint, so you're using XPoint today. That's interesting. There's always been this sort of debate about XPoint: it's less expensive than RAM, or maybe I got that wrong, but it's persistent,

>> It is.

>> Okay, but it's more expensive than flash, and it was sort of thought of as a fence-sitter because it didn't have the volume. But you're using it today, successfully. That's interesting.

>> We're using it to offset the deficiencies of the low-cost flash. The nice thing about QLC, and PLC, is that you get the same levels of read performance as you would from high-end flash; the only differences between high-cost and low-cost flash today are in write cycles and write performance. XPoint helps us offset both of those: we use it as a large write buffer, and we use it as a large metadata store. That allows us to arrange the information in a very large persistent write buffer before we need to place it on the low-cost flash.
But it also allows us to develop new types of metadata structures and algorithms that let us make better use of the low-cost flash, and to reduce the effective price even lower than the raw capacity.

>> Very cool. David, what are your thoughts on the architecture?

>> I think it's a brilliant architecture. I'd like to just go one step down, on the network side of things. The use of NVMe over fabrics allows all of the servers to get at any data across the whole network directly, so you've got great performance right away, across the stack. And the other thing is that by using RDMA for NAS, you're able, if you need to, to get to the data in microseconds. Overall, that's a thousand times faster than any HDD system could manage. So this architecture really allows an any-to-any, simple, single level of storage, which is so much easier to think about, architect, use, or manage. It's just so much simpler.

>> I don't know if there's an answer to this question, but if you had to pick one thing, Renen, that you were really dogmatic about and bet on from an architectural standpoint, what would it be?

>> I think what we bet on in the early days is that the pyramid doesn't work anymore, that tiering doesn't work anymore. In fact, we stole Johnson and Johnson's tagline, No More Tears; only it's not spelled the same way. The reason for that is not because of storage; it's because of the applications. As we move more and more to applications that are machine-based, and machines are now not just generating the data but also reading it, analyzing it, and providing insights for humans to consume, the workloads have changed dramatically. And the one thing we saw is that you can't choose which pieces of information need to be accessible anymore. These new algorithms, especially around AI, machine learning, and deep learning, need fast access to the entirety of the dataset, and they want to read it over and over and over again in order to generate those insights. So that was the driving force behind us building this new type of architecture, and we see it every single day when we talk to customers: the old architectures simply break down in the face of these new applications.

>> Very cool. Speaking of customers, I wonder if you could talk about use cases, customers in this NAS arena. Maybe you could add some color there.

>> Sure. Our customers are large in data. We start at half a petabyte and grow into the exabyte range. The system likes to be big: as it grows, it grows super-linearly. If you go from 100 nodes to 1,000 nodes, you get more than 10X in performance, in capacity efficiency, in resilience, et cetera. And so that's where we thrive. Those workloads today are mainly analytics workloads, although not entirely. If you look at it geographically, we have a lot of life sciences in Boston: research institutes, medical imaging, genomics, universities, pharmaceutical companies. Here in New York we have a lot of financials, hedge funds analyzing everything from satellite imagery to trade data to Twitter feeds. Out in California, a lot of AI and autonomous vehicles, as well as media and entertainment; both the generation of films, like animation, and content distribution are being done on top of VAST.
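A few exchanges back, Renen described using 3D XPoint as a large persistent write buffer that shields the low-cost QLC. The sketch below is a minimal illustration of that write-shaping idea: absorb many small application writes in a fast buffer, then flush them to the cheap flash as large, aligned stripes. The buffer and stripe sizes are illustrative assumptions, not actual product parameters.

```python
# Minimal sketch of the write-shaping idea described above: land small
# random writes in a fast persistent buffer (3D XPoint in VAST's case),
# then flush to low-cost QLC flash as large sequential stripes, so the
# QLC sees few, big writes instead of many small ones. Sizes are
# illustrative assumptions.
class WriteBuffer:
    STRIPE_BYTES = 1 << 20  # flush granularity: 1 MiB stripes (assumed)

    def __init__(self, flush_fn):
        self.pending = bytearray()
        self.flush_fn = flush_fn  # callback that writes one stripe to QLC

    def write(self, data: bytes) -> None:
        self.pending += data
        while len(self.pending) >= self.STRIPE_BYTES:
            stripe = bytes(self.pending[:self.STRIPE_BYTES])
            del self.pending[:self.STRIPE_BYTES]
            self.flush_fn(stripe)  # one large sequential write to flash

qlc_writes = []
buf = WriteBuffer(flush_fn=qlc_writes.append)
for _ in range(3000):
    buf.write(b"x" * 1024)        # 3,000 small 1 KiB application writes
print(len(qlc_writes), "stripe writes reached the QLC layer")  # prints 2
```

Fewer, larger writes mean fewer program/erase cycles on the QLC, which is exactly the deficiency (write endurance and write performance) the buffer is there to offset.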
>> Great, thank you. And David, when you look at the forecasts you've made over the years, I imagine they match nicely with your assumptions. Okay, I get that, but not everybody agrees, David. Certainly the HDD guys don't agree, though they're obviously fighting to hang on to their awesome 50-year run, and there are others doing hybrids and the like who challenge your assumptions. You don't have a dog in this fight; we just want the truth and try to do our best to report it. So let me start with this. One of the things I've seen is the claim that you're comparing deduped and compressed flash with raw HDD. Is that true or false?

>> In terms of the fundamentals of the forecast, it's false. What I'm taking is the Newegg price. I did it this morning: I looked up a two-terabyte NAS disk drive, and I think it was $54. If you look at the cost of NAND for two terabytes, it's about $200. So it's a four-to-one ratio.

>> So,

>> And that's down from what people saw last year, which was five or six; that ratio has been coming down every year.

>> So there's still a cost delta; HDD is still cheaper. Renen, one of the other things Floyer has said is that because of the advantages of flash, not only performance but also data sharing, et cetera, which drives other factors like TCO, it doesn't have to be at parity for customers to consume it. I certainly saw that on my laptop: I could have gotten more storage, cheaper per bit, but I took the flash, no problem; that was an intelligence test. By the way, Floyer is also forecasting that by around 2026 there will be an actual raw-to-raw crossover, and then it's game over. But what are you seeing from customers? What are they telling you, or what evidence do you have, that it doesn't have to be at parity, that customers actually get more value from flash even if it's more expensive?

>> In the enterprise space, customers aren't buying raw flash, they're buying storage systems. And even if the raw numbers, flash versus hard drive, are still not there, there are a lot of things that can be done at the system level to equalize the two. In fact, a lot of our IP is based on that. Flash today is, as David said, more expensive than hard drives, but at the system level it doesn't remain more expensive. The reason is that storage systems waste space: they waste it on metadata, they waste it on redundancy. We built our new metadata structures such that everything lives in XPoint and is much smaller, because XPoint is accessible at byte-level granularity. We built our erasure codes in a way where you can sustain 10, 20, 30 drive failures but pay only 1 or 2% in overhead. We built our data reduction mechanisms such that they can reduce data even if the application has already compressed and deduplicated it. So there's a lot of innovation that can happen at the software level as part of this new disaggregated, shared-everything architecture, and that allows us to bridge the cost gap today without having customers do fancy TCO calculations. And of course, as flash prices continue declining over the next few years, all of those advantages remain, and the gap between hard drives and flash will just widen.
And there really is no advantage to hard drives once the price question is solved.

>> So thank you. David, the other thing I've seen around these forecasts is the comment that you can't effectively data-reduce on hard disk, and I understand why: the overhead. On flash, of course, you can use all kinds of data reduction techniques without affecting performance, or at least not noticeably; the cloud guys do it upstream, others do it upstream. What's your comment on that?

>> Yes. If you take sequential data and do a lot of work up front, you can write it out in very big blocks, and that's a perfectly good way of doing it sequentially. The challenge for the HDD people is that if they go for that sort of sequential application, the cheapest way of doing it is to use tape, which comes back to the discussion that the two things that are going to remain are tape and flash. So that part of the HDD market, in my assertion, will go towards tape and tape libraries, and those are serving very well at the moment.

>> Yeah, the economics of tape are really attractive. I've said this many times: the marketing of tape is lacking. I'd like to see better thinking around how it could play, because I think customers have a certain perception of tape, but there's actually a lot of value there. I want to carry on,

>> Small point there. There's an opportunity, in the same way that VAST has created an architecture for flash, for the tape people, with flash, to make an architecture that takes that workload and really lowers the price enormously.

>> You've called it Flape.

>> Flape, yes.

>> There are some interesting metadata opportunities there, but we won't go into that. David, I want to ask you about NAND shortages. We saw this in 2016 and 2017, and a lot of people are saying there's a NAND shortage again. So that could be a flaw in your forecast: you're assuming prices of flash continue to come down faster than those of HDD, but shortages of NAND could be problematic. What do you say to that?

>> Well, I've looked at that in some detail, and one of the big, important things is what's happening in the flash market. The Chinese company YMTC has introduced a lot more volume into the market; they're making 100,000 wafers a month this year, which is around 6 to 8% of the NAND market. As a result, Samsung, Micron, Intel, and Hynix are all increasing their volumes of NAND; they're all investing. So I don't see NAND itself being a problem. There is certainly a shortage of the processor chips that drive the intelligence in the NAND itself, but that's a problem for everybody. It's a problem for cars; it's a problem for disk drives.

>> You could argue that's going to create an oversupply, potentially, but let's not go there. At the end of the day, it comes back to the customer. I love talking about the architecture, but it's really all about customer value. So Renen, I want you to close there. What should customers be paying attention to, and what should observers of VAST Data watch as indicators of progress, milestones, and things in the market? But start with the customers. What's your advice to them?

>> Sure. For any customer that I talk to, I always ask the same thing:
imagine where you'll be five years from now, because you're making an investment now that is at least five years long. In our case, we guarantee the lifespan of the devices for a decade, so you know it's going to be there for you, and you can imagine what is going to happen over those next five years. What we're seeing in most customers is that they have a lot of dormant data, and with the advances in analytics and AI, they want to make use of it. They want to turn it from a cost center into a profit center, to gain insight from that data and improve their business based on the information they have, the same way the hyperscalers are doing. In order to do that, you need one thing: fast access to all of that information. Once you have that, you have the foundation to step into this next-generation world where you can actually make money off your information. And the best way to get very, very fast access to all of your information is to put it on fast media like flash and 3D XPoint. If I can give one example: hedge funds. Hedge funds do a lot of back-testing on VAST. What makes sense for them is to test back across as much information as they possibly can, but because of storage limitations, they can't. The other thing that's important to them is a real-time experience: being able to run those simulations in a few minutes, not as a batch process overnight; because of storage limitations, they can't do that either. The third thing is that if you have many different applications and many different users on the same system, they usually step on each other's toes. The VAST architecture solves those three problems. It gives you a lot of information, very fast access and fast processing, and an amazing quality of service, where different users of the system don't even notice that somebody else is accessing the same piece of information. So hedge funds are one example, but any vertical that makes use of a lot of information will benefit from this architecture, and if it doesn't cost any more, there's really no reason to delay the transition to all-flash.

>> Excellent, very clear thinking. Thanks for laying that out. And what about how we should judge you? What are the things we should watch?

>> I think the most important way to judge us is to look at customer adoption. What we're seeing, and what we're showing investors, is a very high net dollar retention number. What that means, basically, is: a customer buys a piece of kit today; how much more will they buy over the next year, over the next two years? We're seeing them buy more than three times more within a year of the initial purchase, and we see more than 90% of them buying more within that first year. That indicates to me that we're solving a real problem, and that they're making strategic decisions to stop buying any other type of storage system and just put everything on VAST. Over the next few years, we're going to expand beyond just storage services and provide a full stack for these AI applications. We'll expand into other areas of infrastructure and develop the best possible vertically integrated system to allow those new applications to thrive.

>> Nice. Investors love that lifetime value story; if you can get it above 3X the customer acquisition cost, that's the way to an IPO. Guys, hey, thanks so much for coming on theCUBE. We had a great conversation, and we really appreciate your time.

>> Thank you.
>> Thank you.

>> All right, thanks for watching, everybody. This is Dave Vellante for theCUBE. We'll see you next time. (gentle music)
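Renen's claim above, that wide erasure-code stripes can survive many drive failures at only 1 to 2% capacity overhead, follows from simple arithmetic: overhead is parity divided by data, so the wider the stripe, the cheaper the redundancy. The stripe shapes below are illustrative, not VAST's actual code parameters.

```python
# Rough arithmetic behind the wide-stripe erasure-coding claim above.
# Overhead = parity strips / data strips; widening the stripe amortizes
# the same parity count over more data. Stripe shapes are illustrative.
def ec_overhead(data_strips: int, parity_strips: int) -> float:
    return parity_strips / data_strips

for data, parity in [(10, 2), (20, 2), (100, 4), (200, 4)]:
    print(f"{data}+{parity}: survives {parity} concurrent failures, "
          f"{ec_overhead(data, parity):.1%} capacity overhead")
# A 200+4 stripe tolerates 4 concurrent failures at only 2% overhead,
# versus 20% for a narrow 10+2 RAID-6-style stripe. The trade-off is
# that very wide stripes need clever (e.g., locally decodable) codes
# to keep rebuild reads from touching every drive.
```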
Breaking Analysis: Legacy Storage Spending Wanes as Cloud Momentum Builds
(digital music) >> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante.

>> The storage business as we know it has changed forever. On-prem storage was once a virtually unlimited and untapped bastion of innovation, VC funding, and lucrative exits. Today it's a shadow of its former self, and the glory days of storage will not return. Hello everyone, and welcome to this week's Wikibon CUBE Insights Powered by ETR. In this Breaking Analysis, we'll lay out our premise for what's happening in the storage industry and share some fresh insights from our ETR partners, along with data that supports our thinking. We've had three decades of tectonic shifts in the storage business. A simplified history of the industry shows five major waves of innovation spanning five decades. The dominant industry model evolved from what was at first a mainframe-centric, vertically integrated business, led of course by IBM, into a disintegrated business that saw something like 70 or 80 Winchester disk drive companies rise and then fall. They served a booming PC industry, and that era was led by the likes of Seagate. Seagate in turn supplied the emergence of an intelligent, controller-based external disk array business that drove huge margins for functionality that, while lucrative, was far cheaper than captive storage from system vendors; this era was led by EMC and NetApp. That business was then disrupted by a flash and software-defined model, led by Pure Storage and VMware. Now the future of storage is being defined by cloud and intelligent data management, led by AWS and a three-letter company that we'll just call TBD, otherwise known as Jump Ball Incorporated. Now, let's get into it. The impact of AWS cannot be overstated. While legacy storage players are sick and tired of talking about the cloud, the reality cannot be ignored: the cloud has been the most disruptive force in storage over the past 10 years, and we've reported on the spending impact extensively. But cloud is not the only factor pressuring the on-prem storage business. Flash has killed what we call "performance by spindles," the practice of adding more disk drives to keep performance from tanking; so much flash has been injected into the data center that that is no longer required. But when you drill down into the cloud, AWS has been by far the most significant factor, in our view. Lots of people talked about object storage before AWS, but there sure wasn't much spending going on; S3 changed that. And AWS is getting much more aggressive about expanding its storage portfolio and offerings. S3 came out in 2006 as the very first AWS service, Elastic Block Store (EBS) came out a couple of years later, and nobody paid much attention. Last fall at Storage Day we saw AWS announce a number of services, many file-related, and this year we saw four new storage announcements from Amazon at re:Invent. We think AWS' storage revenue will surpass $8 billion this year and could be as high as $10 billion. There's not much data out there, but this would mean that AWS' storage business is larger than that of NetApp, which means AWS is larger in storage than every traditional storage player with the exception of Dell. Here's a little glimpse of what's coming at the legacy storage business: a clip of the vice president of AWS storage, Mai-Lan Tomsen Bukovec. Watch this.
Okay now, you may say, Dave, what the heck does that have to do with anything? Yeah, I don't know, but as an older white guy that's been in this business for a while, I just think it's badass that this woman boxes and runs a business that we think is approaching $10 billion. Now let's take a quick look at the storage announcements AWS made at re:Invent. The company made four announcements this year; let me try to be brief. The first is EBS io2 Block Express volumes; got to love the names. AWS claims this is the first storage area network, or SAN, for the cloud, offering up to 256,000 IOPS, 4,000 megabytes per second of throughput, and 64 terabytes of capacity. Sounds pretty impressive, right? Well, let's dig in a little bit. First of all, this is not the first SAN in the cloud, at least in my view. There may be others, but Pure Storage announced Cloud Block Store in 2019 at its annual Accelerate customer conference, and it's pretty comparable, maybe not so much in the speeds and feeds, but in the concept of better block storage in the cloud with higher availability. Now, you may also be saying, what's the big deal? The performance? Come on, we can smoke that, we're an on-prem vendor, we can bury that compared to what we do. And AWS' announcement is really not that impressive, okay. As a point of comparison, there's a startup out there called VAST Data whose enclosure, with bundled storage and compute, can do 400,000 IOPS and 40,000 megabytes per second, and that can be scaled. So yeah, I get it. AWS also announced that io2 is priced at 20% less than previous-generation volumes, which you might say is also no big deal, and I would agree: 20% is not as aggressive as the average price decline per gigabyte of most storage technologies. AWS loves to make a big deal about its price declines, but it's essentially following industry trends. The point, though, is that this feature will be great for a lot of workloads, and it's fully integrated with AWS services, meaning, for example, that it will be very convenient for AWS customers to invoke this capability for Aurora and other AWS databases through the RDS service. Just another easy button for developers to push. This is especially important as we see AWS rapidly expanding its machine learning and AI capabilities with SageMaker, embedding ML into things like Redshift, and driving analytics, so integration is very key for its customers. Now, is Amazon retail going to run its business on io2 volumes? I doubt it; I believe they're running on Oracle, and they need much better performance. But this is a mainstream service for the EBS masses to tap. The other notable announcement was EBS gp3 volumes. This is essentially a service that lets you programmatically set SLAs for IOPS and throughput independently, without needing to add additional storage. Again, you may be saying things like, well, I remember when SolidFire let me do this several years ago, and gave me more than 3,000 IOPS and 125 megabytes per second of performance. But look, this is great for mainstream customers that want more consistent and predictable performance, that want to set some kind of threshold or floor, and it's integrated into the AWS stack. Two other announcements were made: one that automatically tiers data to colder storage tiers, and a replication service. On the former, data migrates to tier two after 90 days without access, and to tier three after 180 days.
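For readers who want to see what "programmatically set IOPS and throughput independently" and the 90/180-day tiering look like in practice, here is a hedged sketch using boto3, the AWS SDK for Python. It assumes the tiering service referenced above maps onto S3 Intelligent-Tiering's archive tiers; the region, availability zone, and bucket name are hypothetical placeholders.

```python
# Sketch of the two capabilities discussed above, via boto3.
# Region, AZ, and bucket names are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# gp3: IOPS and throughput are provisioned independently of volume size,
# on top of the 3,000 IOPS / 125 MB/s baseline.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=500,            # GiB
    VolumeType="gp3",
    Iops=10000,          # set independently of Size
    Throughput=500,      # MB/s, also independent of Size
)
print(volume["VolumeId"])

# S3 Intelligent-Tiering archive configuration: objects not accessed for
# 90 days move to the archive tier, and after 180 days to deep archive.
s3 = boto3.client("s3")
s3.put_bucket_intelligent_tiering_configuration(
    Bucket="example-backup-bucket",
    Id="archive-cold-data",
    IntelligentTieringConfiguration={
        "Id": "archive-cold-data",
        "Status": "Enabled",
        "Tierings": [
            {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
            {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
        ],
    },
)
```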
AWS, remember, hired a bunch of folks out of EMC years ago and put them up in the Boston Seaport area, so they've acquired lots of expertise in a lot of different areas. I'm not sure if tiering came out of that group, but look, this stuff is not rocket science, though it saves customers money. So these are tried and true techniques that AWS is applying, but the important thing is it's in the cloud. Now, for sure we'd like to see more policy options than, say, a fixed 90 day or 180 day policy, and more importantly we'd like to see intelligent tiering where the machine is smart enough to elevate and promote certain datasets when they're needed, for instance at the end of a quarter for comparison purposes, or at the end of the year. But as NFL Hall of Fame coach Hank Stram would have said, AWS is matriculating the ball down the field.

Okay, let's look at some of the data that supports what we're saying here in our premise today. This chart shows spending across the ETR taxonomy. It depicts the net score, or spending velocity, for different sectors. We've highlighted storage. Now don't put too much weight on the January data, because the survey was just launched, but you can see storage continues to be a back burner item relative to some other spending priorities. As I've reported, CIOs are really focused on cloud, containers, container orchestration, automation, productivity and other key areas like security.

Now let's take a look at some of the financial data from the storage crowd. This chart shows data for eight leading names in storage, and we put storage in quotes because, as we said earlier, the market is shifting, and for sure companies like Cohesity and Rubrik are not positioning as storage players; in fact, that's the last thing they want to do. Rather, they're category creators around data management, or intelligent data management, but given their adjacency to storage, they're partnering with all the primary storage companies and they're in the ETR taxonomy. Okay, so as you can see, we're showing the year over year quarterly revenue growth for the leading storage companies. NetApp is a big winner; they're growing at a whopping 2%. They beat expectations, but expectations were way down, so you can see in the rightmost column, upper right, we've added the ETR net score from October. A net score of 10% says that if you ask customers whether they're spending more or less with a company, there are 10% more customers essentially spending more than spending less; we'll get into that a little further later. For comparison, a company like Snowflake has a net score approaching 70%. Pure Storage used to be that high several years ago, or high sixties anyway. So 10% is in the red zone, and yet NetApp is the big winner this quarter. Now Nutanix, again, isn't really a storage company, but they're an adjacency and they sell storage, and like many of these companies they're transitioning to a subscription pricing model, so that puts pressure on the income statement. That's why they went out and did a deal with Bain; Bain put in $750 million to help bridge that transition, so that's kind of an interesting move. Every company in this chart is moving to an annual recurring revenue model, and that as-a-service approach is going to be the norm by the end of the decade. HPE's doing it with GreenLake, Dell has announced Apex; virtually every company is headed in this direction.
Now speaking of HPE, it's the Nimble business that has momentum, but other parts of the storage portfolio are quite a bit softer. Dell continues to see pressure on its storage business, although VxRail is a bright spot. Everybody's got a bright spot; everybody's got new stuff that's growing much faster than the old stuff. The problem is the old stuff is much, much bigger than the new stuff. IBM's mainframe storage cycle, well, that seems to have run its course; they had been growing for the last several quarters, but that looks like it's over, so very, very cyclical businesses here. Now as you can see, the data protection and data management companies are showing spending momentum, but they're not public, so we don't have revenue data. But you've got to wonder, with all the money these guys have raised and the red hot IPO and tech markets, why haven't they gone public? The answer has to be that they're either not ready, or maybe their numbers weren't where they wanted them to be, maybe they're not predictable enough, maybe they don't have their operational act together or maybe they need to get that in order; some combination of those factors is likely. They'll give other answers if you ask them, but if they had their stuff together they'd be going out right now.

Now here's another look at the spending data in terms of net score, which is again spending velocity. ETR here is measuring the percent of respondents that are adopting new, spending more, spending flat, spending less, or retiring the platform. So net score is adoptions, which is the lime green, plus the spending more, which is the forest green; add those two and then subtract spending less, which is the pink, and leaving the platform, which is the bright red. What's left over is net score. So, let's look at the picture here. Cohesity leads all players in the ETR storage taxonomy; again, they don't position that way, but that's the way the customers are answering. They've got a 55% net score, which is really solid, and you can see the data in the upper right-hand corner. It's followed by Nutanix, which again is not really a pure-play storage company. But speaking of Pure, its net score has come down from its high of 73% in January 2016. It's not going to climb back up there, but it's going to be interesting to see if Pure's net score can rebound in a post-COVID world. We're also watching what Pure does in terms of unifying file and object, how it's faring in cloud, and what it does with the Portworx acquisition, which is really designed to bring forth a new programming model. Now, Dell is doing fine with VxRail, but vSAN is well off its net score highs, which were in the 60%-plus range a couple of years ago. vSAN has definitely been a factor for VMware, but again, that's come off its highs. HPE with Nimble still has some room to improve, and I think it actually will; I think the figures we're showing here are somewhat depressed by the COVID factor, and I expect Nimble is going to bounce back in future surveys. Dell and NetApp are the big leaders in terms of presence, or market share, in the data set, other than VMware, 'cause VMware has a lot of instances; it's software-defined, that's why they're so prominent. And with their large share you'd expect them to have net scores that are tepid, and you can see a similar pattern with IBM.
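Since net score recurs throughout this segment, here is the arithmetic just described, expressed in a few lines of Python. The bucket percentages in the example are illustrative only, not actual ETR survey figures.

```python
def net_score(adopting, spending_more, flat, spending_less, retiring):
    """Net score = (adoptions + spending more) - (spending less + retiring).
    Inputs are percentages of survey respondents; they should sum to 100."""
    total = adopting + spending_more + flat + spending_less + retiring
    assert abs(total - 100) < 1e-9, "buckets must cover all respondents"
    return (adopting + spending_more) - (spending_less + retiring)

# Illustrative only: 20% adopting, 45% spending more, 25% flat,
# 7% spending less and 3% retiring yields a 55% net score, the
# kind of figure cited for Cohesity above.
print(net_score(20, 45, 25, 7, 3))  # -> 55
```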
So Dell and NetApp have tepid net scores, as does IBM, because of their large market share. VMware is kind of a newer entry into the play and so is doing pretty well there from a net score standpoint. Now Commvault, like Cohesity and Rubrik, is really around intelligent data management, trying to go beyond backup into business recovery, data protection, DevOps, bringing analytics, bringing that to the cloud. We didn't put Veeam in here, and we probably should have. They had pre-COVID net scores well into the thirties, and they have a steadily increasing share of the market, so we expect good things from Veeam going forward. They were acquired earlier this year by Insight Partners, the private equity firm, so big changes there as well; that was their kind of near-term exit, maybe more to come. But look, it's all relative. This is a large and mature market that is moving to the cloud and moving to other adjacencies. And the core is still primary storage; that's the prime prerequisite, and everything else flows from there: data protection, replication, everything else.

This chart gives you another view of the competitive landscape. It's that classic XY chart; it plots net score on the vertical axis and market share on the horizontal axis. Market share, remember, is a measure of presence in the dataset. Now think about this from the CIO's perspective. They have their on-prem estate, they've got all this infrastructure, and they're putting a brick wall around their core systems. And what do they want out of storage for that class of workload? They want it to perform consistently, they want it to be efficient, and they want it to be cost-effective. So what are they going to do? They're going to consolidate: they're going to consolidate the number of vendors, they're going to consolidate the storage, they're going to minimize complexity. Yeah, they're going to worry about the blast radius, but there are ways to architect around that. The last thing they want to worry about is managing a zillion storage vendors. This business is consolidating, and it has been for some time; we've seen the number of independent storage players shrink as the space has consolidated over the years, and it's going to continue. On-prem storage arrays are no longer giving CIOs the innovation and strategic advantage they did back when things like storage virtualization, space-efficient snapshots, data de-duplication and other storage services were worth maybe taking a flyer on a feature product like, for example, a 3PAR or even a Data Domain. Now flash gave the CIOs more headroom and better performance, and so as I said earlier, they're not just buying spindles to increase performance. So as more and more work gets pushed to the cloud, you're seeing a bunkering in on these large scale, mission-critical workloads. As you saw earlier, the legacy storage market is consolidating and has been for a while; as I just said, it's essentially becoming a managed-decline business where R&D is going to increasingly get squeezed and go to other areas, both from the vendor community and on the buy side, where they're investing in things like cloud, containers, and building new layers in their business, and of course the DX, the digital transformation. I mentioned VAST Data before; it is a company that's growing, and another company that's growing is Infinidat. These guys are traditional on-prem storage models; they'll bristle if I say traditional, they're next-gen if you will, but they don't own a cloud, so they're selling to the data center.
Now Infinidat is focused on petabyte scale, and as they say, they're growing revenues; they're having success consolidating storage, that thing I just talked about. Ironically, these are two companies with Israeli founders that are growing, and as you saw earlier, this is a share shift; the market is not growing overall. Part of that's COVID, but if you exclude cloud, the market is under pressure. Now these two companies that I'm mentioning are kind of the exception to the rule here. They're tiny in the grand scheme of things; they're really not going to shift the market, and their end game is to get acquired. So they can steal share, but they're not going to reverse these trends. And everyone on this chart, every on-prem player, has to have a cloud strategy where they connect into the cloud, where they take advantage of native cloud services, and where they help extend their respective install bases into the cloud, including having a capability that is physically proximate to the cloud, with a colo like an Equinix or some other approach. Now, for example at re:Invent, we saw AWS' hybrid strategy, we saw that evolving. AWS is trying to bring AWS to the edge, and they treat the data center as just another edge node, so Outposts, and smaller versions of Outposts, and things like Local Zones are all part of bringing AWS to the edge. And we saw a few companies, Pure, Infinidat, Veeam come to mind, that are connecting to Outposts. We saw that Qumulo was in there; Clumio, Commvault and WekaIO are also in there, and I'm sure I'm missing some, so DM me, email me, yell at me, I'm sorry I forgot you, but you get the point. These companies that are selling on-prem are connecting to the cloud; they're forced to connect to the cloud, much in the same way as they were forced to join the VMware ecosystem, and to try to add value, try to keep moving fast.

So, that's what's going on here. What's the prognosis for storage in the coming year? Well, where have all the good times gone? Look, we would never bet against data, but the days of selling storage controllers that mask the deficiencies of spinning disk, or add embedded hardware functions, or easily pick off a legacy install base with flash, well, those days are gone. Repatriation? It ain't happening, maybe in tiny little pockets. CIOs are rationalizing their on-premises portfolios so they can invest in the cloud, AI, machine learning, machine intelligence, automation, and they're re-skilling their teams. Low latency, high bandwidth workloads with minimal jitter, that's the sweet spot for on-prem; it's becoming the mainframe of storage. CIOs are also developing a cloud-first strategy. Yes, the world is hybrid, but what does that mean to CIOs? It means you're going to have some work in the cloud and some work on-prem; there's your hybrid, we've got both. Everything that can go to the cloud will go to the cloud, in our opinion, and everything that can't or shouldn't, won't. Yes, people will make mistakes and they'll "repatriate," but generally that's the trend. And the CIOs are building an abstraction layer to connect workloads from an observability and manageability standpoint so they can maintain control and manage lock-in risk; they have options. Everything that doesn't go to the cloud will likely have some type of hybridicity to it; the reverse won't likely be the case.
For vendors, cloud strategies involve supporting your install base's migration to the cloud. That's where they're going, that's where they want to go, they want your help, and there's business to be made there. So enabling low latency hybrids and accommodating subscription models, well, that's a whole other topic, but that's the trend we see. And you rethink the business that you're in, for instance data management, and develop an edge strategy that recognizes that edge workloads are going to require new architectures that are more efficient than what we've seen built around general purpose systems, and wow, that's a topic for another day. You're seeing this whole as-a-service model really reshape the entire culture and the way in which the on-prem vendors are operating. No longer is it selling a box that has dramatically marked-up controllers and disk drives; it's really thinking about services that could be invoked in the cloud.

Now remember, these episodes are all available as podcasts wherever you listen; just search Breaking Analysis podcast, and please subscribe, I'd appreciate that. Check out etr.plus for all the survey action. We also publish a full report every week on wikibon.com and siliconangle.com. A lot of ways to get in touch: you can email me at david.vellante@siliconangle.com, you can DM me @dvellante on Twitter, or comment on our LinkedIn posts, I always appreciate that. This is Dave Vellante for theCUBE Insights Powered by ETR. Thanks for watching everyone, stay safe and we'll see you next time. (upbeat music)
SUMMARY :
In this Breaking Analysis, Dave Vellante argues that the on-prem storage business has changed forever: cloud, led by AWS, and flash have disrupted the legacy array model; AWS' storage revenue is approaching $8 to 10 billion; ETR data shows tepid net scores for legacy vendors while data management players like Cohesity and Rubrik show momentum; and every on-prem vendor now needs a cloud and edge strategy as the legacy business settles into managed decline.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Seagate | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
2006 | DATE | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
AWS' | ORGANIZATION | 0.99+ |
Hank Stram | PERSON | 0.99+ |
January, 2016 | DATE | 0.99+ |
October | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
TBD | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Jump Ball Incorporated | ORGANIZATION | 0.99+ |
Infinidat | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
20% | QUANTITY | 0.99+ |
January | DATE | 0.99+ |
64 terabytes | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
60% | QUANTITY | 0.99+ |
55% | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
Boston Seaport | LOCATION | 0.99+ |
90 day | QUANTITY | 0.99+ |
73% | QUANTITY | 0.99+ |
125 megabytes | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
180 day | QUANTITY | 0.99+ |
8 billion | QUANTITY | 0.99+ |
Snowflake | ORGANIZATION | 0.99+ |
$750 million | QUANTITY | 0.99+ |
2019 | DATE | 0.99+ |
10% | QUANTITY | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
10 billion | QUANTITY | 0.99+ |
NetApp | ORGANIZATION | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
400,000 IOPS | QUANTITY | 0.99+ |
$10 billion | QUANTITY | 0.99+ |
Apex | ORGANIZATION | 0.99+ |
2% | QUANTITY | 0.99+ |
Cohesity | ORGANIZATION | 0.99+ |
Nimble | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Rubrik | ORGANIZATION | 0.99+ |
first | QUANTITY | 0.99+ |
90 days | QUANTITY | 0.99+ |
siliconangle.com | OTHER | 0.98+ |
wikibon.com | OTHER | 0.98+ |
Robert Swanson, dcVAST | Veritas Vision Solution Day 2018
>> Narrator: From Chicago, it's theCUBE, covering Veritas Vision's Solution Day 2018. Brought to you by Veritas. >> Welcome back to the Windy City, everybody. We're here covering the Veritas Solution Days in Chicago. I'm Dave Vellante, and you're watching theCUBE, the leader in live tech coverage. Robert Swanson is here, CUBE alum from dcVAST; he runs sales at the organization. Great to see you again, thanks for coming back on. >> You as well, thanks for having me. >> You're very welcome. So last year we were at the Aria in Las Vegas, we talked a lot about Cloud at the big tent event; now Veritas is doing these Solution Days, going out to where the customers are. It's probably good for you 'cause you're Chicago based, right? >> Absolutely, yeah, it's good to have the event here in my hometown. >> So how was this for you today? What'd you learn, what's the conversation been like? >> Yeah, no, it was a good morning. I like having the regional approach, a little bit more of an intimate event; we had a variety of customers here and colleagues of Veritas as well. It was definitely a great event this morning. >> Lots of hot stuff going on in data protection. There's Cloud, there's multi-Cloud; security and data protection are kind of coming together. The distributed data center on the Edge; new ways, new modes of protecting data. What are you seeing as some of the big drivers out there as you talk to customers? >> That's a great question, and you really can't avoid the subject of Cloud. At first, I think, we looked at, excuse me, Cloud as an enabler for data protection, so thinking about on-premise data and how the Cloud can help protect that. Especially for mid-market companies, it really allowed them to do some really cool retention and disaster recovery things that they might not have been able to do before, or afford to be able to do before. Now we're looking more at, all right, there are workloads in the Cloud, there's Cloud native data, what do you do with that? The Cloud providers are guaranteeing you or providing you some SLAs or guidelines around availability, but that's not backup, so now what do we do with the Cloud native data? Really though, as workloads start getting put out not only into the big hyperscaler Clouds, but into Office 365 and different file share services, and into SAS applications, it truly is IT anywhere now, which really creates a challenge for data protection. I mean, I feel like the complexity and challenge of data management and data protection has just grown exponentially in the last few years, because now there is important, sensitive data everywhere that companies have to figure out how to maintain and protect and secure, and really make work for them. >> I wonder if you could talk about just your business; the whole partner channel is just fascinating, something we've been tracking now for a while. Cloud was sort of a shot across the bow to a lot of business models. It used to be, hey, I'm going to take a bunch of margin, and resell a product, and buy a boat. But that's changed; you can't just be a quote unquote box seller, that's a metaphor just for reselling somebody else's technology. You have to be a solution provider. So Cloud was in one regard a threat, but it's become an opportunity. How have you guys responded? Talk about the shift toward a solution mindset. >> Yeah, no, you're absolutely right, it really is.
The channel's at a bit of an inflection point with the Cloud, and contrary to some popular belief, it's not our mission as a channel company to resell hardware or some piece of software. It's getting more and more important for our partners to be companies that can offer us technology to help kind of fit into our model, and not necessarily vice versa. So now the Cloud providers have changed, you know, where the abstraction layer occurs, and there's so much automation out there that some things we used to provide services or managed services around, low-level sysadmin type tasks, keep-the-lights-on kind of things, are done in an automated manner right now. We really have to redefine what we do for our customers, and Cloud is important, so it's really helping customers identify: where is the appropriate place to run a workload? What's better on-prem, what's better in the Cloud? Make sure you have that data portability. We have to be able to provide them guidance and services and really help in that regard as they're navigating it with us. So helping them identify where to put things, how to protect things, how to manage the data, and really how to optimize the spend as well, is something we've kind of pivoted towards. >> It's becoming more complicated. Okay, it used to be, I've got an application server, I'm going to bolt on some backup because I've got to back up the data, okay, done. Virtualization changed things quite a bit, but now you've got Clouds, you've got multiple Clouds, you've got SAS, you've got distributed data. You've got to worry about, okay, as you were saying, where do I put that data? You're thinking about recovery: how fast can I recover, so where does that recovery data live? And then, who's managing this whole thing? So I would think there's a huge opportunity for you guys to come in, consult with customers, and architect solutions that actually address each customer's unique situation, because every customer's different. Maybe you could discuss that a little bit and how you're helping folks. >> The lines are really starting to get blurred, too, on what you do with data: what's securing it versus protecting it, versus backing it up, versus replicating it, versus it being discoverable. I think that's one of the areas where we're seeing Veritas really kind of evolve. They have the experience in data management, and now with some of the technologies that they're launching, kind of a platform with some of their different technologies containerized and put onto a single platform, I think we're really seeing this whole concept of data management converging. >> So where do you see this whole thing going? Last question: if you look out two, three, four, five years, you're going to have lots of Clouds, you're going to have Edge, you've got all this data, digital transformation. Specifically in the context of data protection, how do you see that evolving, and what does it look like in the next four or five years? >> I think I used the term already, data portability, and workload portability, and I like that, and I think that's where it's going, 'cause as the public Cloud market, and even the on-prem private Cloud market, continue to evolve, it's really going to be about portability. Where is the most appropriate place to run a workload, to have certain data? Is it in the public Cloud, is it on-prem? And maybe that changes, right?
Maybe the cost modeling changes, maybe the performance requirements change, so that needs to be portable; but with portability, we have to be able to follow that data and those workloads, and have some kind of consistent way to protect them. I really think that's the evolution; that's kind of the arms race with a lot of the vendors in this space right now, and what everybody's trying to do, 'cause that's where it's all headin'. >> All right Bob, great, thanks very much for coming back on theCUBE, really good to see you again. 85%, I think, of Veritas's business goes through the channel; critical partners like you make it all happen. So I really appreciate your perspectives, thank you. >> Thanks again for having me, thanks for coming to Chicago. Hope to see you here again. >> You're welcome. Keep it right there everybody, we'll be back with our next guest right after this short break. You're watching theCUBE at Veritas Vision Solution Days from Chicago. Be right back. (digital music)
SUMMARY :
Dave Vellante talks with Robert Swanson of dcVAST at Veritas Vision Solution Day 2018 in Chicago about how cloud, SaaS and distributed data have made data protection exponentially more complex, how the channel has pivoted from reselling to advising customers on workload placement, protection and spend optimization, and why data and workload portability will define the next several years of the space.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Robert Swanson | PERSON | 0.99+ |
Veritas | ORGANIZATION | 0.99+ |
Chicago | LOCATION | 0.99+ |
two | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
five years | QUANTITY | 0.99+ |
85% | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
four | QUANTITY | 0.99+ |
Bob | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Office 365 | TITLE | 0.99+ |
single platform | QUANTITY | 0.97+ |
today | DATE | 0.97+ |
Cloud | TITLE | 0.97+ |
Veritas Solution Days | EVENT | 0.95+ |
Windy City | LOCATION | 0.92+ |
Veritas Vision Solution Days | EVENT | 0.92+ |
this morning | DATE | 0.9+ |
Veritas Vision's Solution Day 2018 | EVENT | 0.87+ |
Veritas Vision Solution Day 2018 | EVENT | 0.87+ |
Solution Days | EVENT | 0.86+ |
last few years | DATE | 0.83+ |
CUBE | ORGANIZATION | 0.81+ |
DC | LOCATION | 0.78+ |
Clouds | TITLE | 0.76+ |
theCUBE | ORGANIZATION | 0.72+ |
SAS | ORGANIZATION | 0.68+ |
Cloud | ORGANIZATION | 0.68+ |
Last | QUANTITY | 0.67+ |
first | QUANTITY | 0.65+ |
Edge | TITLE | 0.55+ |
Aria | ORGANIZATION | 0.53+ |
SAS | TITLE | 0.51+ |
next | DATE | 0.49+ |
Vast | ORGANIZATION | 0.46+ |
Liran Zvibel, WekaIO | CUBEConversation, April 2018
(music) >> Hi, I'm Stu Miniman, and this is a CUBE Conversation in SiliconANGLE's Palo Alto office. Happy to welcome back to the program Liran Zvibel, who is the co-founder and CEO of WekaIO. Thanks so much for joining me. >> Thank you for having me over. >> All right, so on our research side, you know, we've really been saying that data is at the center of everything. It's in the cloud, it's in the network, and of course in the storage industry data has always been there, but I think especially for customers it's been more front and center. You know, why is data becoming more important? It's not just data growth and some of the other things that we've talked about for decades; how is it changing, and what are you hearing from customers today? >> So I think the main difference is that organizations are starting to understand that the more data they have, the better service they're going to provide to their customers, and they will be an overall better company than their competitors. So about 10 years ago we started hearing about big data and other approaches that, in a simpler form, just sieved through a lot of data and tried to get some sort of high-level meaning out of it. In the last few years, people have actually been applying deep learning and machine learning techniques to their vast amounts of data, and they're getting a much higher level of intelligence out of their huge capacities of data; and actually, with deep learning, the more data you have, the better outputs you get. >> Before we go into the ML and the deep learning piece, just to focus on data itself: some say digital transformation is just a buzzword, but when I talk to users, absolutely they're going through transformations; you know, we're saying everybody's becoming a software company. But how does data specifically help them with that? What is your viewpoint there, and what are you hearing from your customers? >> So if you look at it from the consumer perspective, people now keep a record of their lives at much higher resolution, and I'm not talking about the images' resolution, I'm talking about the vast amount of data that they store. If I look at how many pictures I have of myself as a kid versus how many pictures I have of my kids: you could fit all of my pictures into albums, while I could probably fit only about a week's worth of my kids' pictures into albums. So people keep a lot more data as consumers, and organizations keep a lot more data on their customers, in order to provide better service and a better overall product. >> You know, as an industry we saw a real mixed bag when it came to big data. I was saying, great, I have lots more volume of data, but that doesn't necessarily mean I got more value out of it. So what are some of the trends that you're seeing? Why is deep learning, machine learning, AI going to be different? Or is this just the next iteration of, well, we tried and maybe we didn't hit as well with big data, let's see if this does better? >> So I think that big data had its glory days, and now we're coming to the end of that crescendo, because people realized that what they got was a sort of aggregate of things they couldn't make too much sense of. And people now really understand that for you to make better use of your data, you need to work similarly to how the brain works: you look at a lot of data, and then you have to make some sense out of that data. And once you've made some sense out of that data,
we can now get computers to go through way more data, make a similar amount of sense out of it, and actually get much, much better results. So instead of just finding anecdotes, the sort of thing you were able to do with big data, you are actually now able to generate intelligent systems. >> You know, one of the other things we saw is, it used to be, okay, I have this huge back catalogue, or I'm going to survey all the data I've collected. Today it's much more, you know, real time is a word that's been thrown around for many years, whether you say live data, or if you're at sensors, where I need to have something where I can train models and react immediately. That kind of immediacy is much more important; I'm assuming that's something you're seeing from customers too? >> Indeed. What we see is that customers end up collecting vast amounts of data, and then they train their models on that data, and then they push these intelligent models to the edges, and you're going to have edges running inference. That could be a street camera, it could be a camera in the store, or it could be your car. Usually you run this inference at the endpoints using everything the models were trained on, while you still keep the data, push it back, and still run inference at the data center, sort of doing QA. And now the edges also know to mark where they couldn't make sense of what they saw, so the data center systems know what we should look at first and how we make our models smarter for the next iteration, because these are closed-loop systems: you train them, you push to the edges, the edges tell you how well they think they understood, you train again, and things improve. We're now at the infancy of a lot of these loops, but I think the following two to five years will take us through a very, very fascinating revolution where systems all around us become way, way more intelligent. >> Yeah, and there are interesting architectural discussions going on. In this edge environment, if I'm an autonomous vehicle or an airplane, of course I need to react there, I can't go back to the cloud. But what happens in the cloud versus what happens at the edge, and where does Weka fit into that whole discussion? >> So where we currently run, we run at the data centers. At Weka we created the fastest file system, one that's perfect for AI and machine learning training, and we make sure that your GPU-filled servers, which are very expensive, never sit idle. The second component of our system is tiering to very effective object storages that can run into exabytes. So we have a system that makes sure you can have as many GPU servers as you like churning all the time, getting the results, getting the new models, while having the ability to read any form of data collected over several years, really through hundreds of petabytes of data sets; and now we have customers talking about exabytes of data sets representing a single application, not throughout the organization, just for that training application. >> Yeah, so AI and ML training, is that the killer use case for your customers today? >> So that's one killer application, just because of the vast amount of data and the high-performance nature of the clients. We actually show clients that run WekaIO finishing training sessions ten times faster than they would with traditional NFS-based solutions.
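The closed loop Zvibel describes a little earlier (train centrally, push models to the edges, have the edges flag what they could not make sense of, and retrain) can be sketched schematically. Everything below, the class names and the toy confidence measure, is a hypothetical illustration, not Weka's or anyone's actual API:

```python
import random

class Model:
    """Toy stand-in for a trained model: 'confidence' is just whether
    an input resembles something seen during training."""
    def __init__(self):
        self.seen = set()

    def train(self, dataset):
        self.seen |= set(dataset)

    def confidence(self, x):
        return 1.0 if x in self.seen else random.random()

class Edge:
    """Stand-in for an edge device (a street camera, a car) running
    inference on the most recently deployed model."""
    def __init__(self, local_stream):
        self.model = None
        self.local_stream = local_stream   # data observed in the field

    def deploy(self, model):
        self.model = model

    def flag_hard_samples(self, threshold=0.5):
        # mark the inputs this edge could not make sense of
        return [x for x in self.local_stream
                if self.model.confidence(x) < threshold]

def closed_loop(dataset, edges, rounds=3):
    model = Model()
    for _ in range(rounds):
        model.train(dataset)           # data-center training on GPU servers
        for edge in edges:
            edge.deploy(model)         # push intelligent models to the edges
        for edge in edges:
            # hard examples flow back, so the next iteration is smarter
            dataset |= set(edge.flag_hard_samples())
    return model

model = closed_loop({"cat", "dog"}, [Edge(["cat", "truck", "bike"])])
print(sorted(model.seen))  # field data gets folded into training over rounds
```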
But just based on the different way we handle data, another very strong application for us is around life sciences and genomics, where we show that we're the only storage that lets these processes remain CPU bound. Any other storage at some point becomes IO bound, so you couldn't parallelize the processing anymore. With us it actually doesn't matter how many servers you run as clients: you double the number of clients, and you either get twice the results in the same amount of time, or you get the same result in half the time. And with genomics nowadays there are applications that are life-saving, so hospitals run these things and they need results as fast as they can; so faster storage means better healthcare. >> Yeah, without getting too deep into it, because the storage industry has lots of wonkiness and there are so many pieces there: I hear life sciences, I think object storage; I hear NVMe, I think block storage. You're file storage. When it comes down to it, why is that the right architecture for today, and what advantages does it give you? >> So we are actually the only company that went through the hassles and the hurdles of utilizing NVMe and NVMe over Fabrics for a parallel file system; all other solutions went the easier route and created block. And the reason we created a file system is that this is what computers understand, this is what the operating system understands. When you go to university and learn computer science, they teach you how to write programs, and those need a file system. Now, if you want to run your program over two servers or ten servers, what you need is a shared file system. Up until we came along, the gold standard was using NFS for sharing files across servers, but NFS was actually created in the 80s, when Ethernet ran at 10 megabit. Currently most of our customers already run 100 gigabit, which is four orders of magnitude faster, so they're seeing that they cannot run a network protocol designed for four orders of magnitude less speed with the current demanding workloads. This explains why we had to go and pick a totally different way of pushing data to the clients. With regard to object storages: object storages are great because they allow customers to aggregate hard drives into inexpensive, large capacity solutions. The problem with object storages is that the programming model is different from the standard file system that computers can understand, in two ways. One, when you write something, you don't know when it's actually going to get stored; this is called eventual consistency, and it's very difficult for mortal programmers to write a system that is sound, that is always correct, when writing to eventually consistent storage. The second thing is that objects cannot change: you cannot modify them, you can only create them, get them, or delete them; they can have versions, but this is also much different from how the average programmer is used to writing their programs. So we are actually tying together the highest performance NVMe over Fabrics at the front tier, and these object storages, which are extremely efficient but very difficult to work with, at the back-end tier, into a single solution that has the highest performance and the best economics.
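The bandwidth claim above is easy to verify: going from the 10 megabit Ethernet that NFS was designed around to the 100 gigabit networks described here is a factor of 10,000, which is indeed four orders of magnitude. A quick sanity check:

```python
import math

nfs_era_bits_per_sec = 10 * 10**6    # 10 megabit Ethernet, 1980s
current_bits_per_sec = 100 * 10**9   # 100 gigabit networks cited above

ratio = current_bits_per_sec / nfs_era_bits_per_sec
print(ratio)             # 10000.0
print(math.log10(ratio)) # 4.0 -> four orders of magnitude
```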
>> Right, Liran, I want to give you the last word. Give us a little bit of the long view; you talked about where we've gone and how parallel architecture helps now that we're at 100 gig. Look out five years into the future: what's going to happen? You know, blockchain takes over the world, cloud dominates everything; but from an infrastructure and application standpoint in the storage world, what does Weka think things will look like? >> So one very strong trend that we are seeing is around encryption. It doesn't matter what industry; I think storing things in clear text, for many organizations, just stops making sense, and people will demand more and more of their data to be encrypted, with tighter control around everything. That's one very strong trend we're seeing. Another very strong trend we're seeing is that enterprises would like to leverage the public cloud, but in an efficient way. If you were to run the economics, moving all your applications to the public cloud may end up being more expensive than running everything on-prem, and I think a lot of organizations have realized that. The trick is going to be that each organization will have to find a balance: which services run on-prem, and these are going to be the services that run around the clock, and which services have more of a bursty nature. Then organizations will learn how to leverage the public cloud for its elasticity, because if you're just running on the cloud and not leveraging the elasticity, you're doing it wrong. We're actually helping a lot of our customers do this, with our hybrid cloud ability to have local workloads and cloud workloads, and getting these whole workflows to actually run is a fascinating process. >> Liran, thank you so much for joining us; great to hear the update not only on Weka but really on where the industry is going. Dynamic times here in the industry, with data at the center of it all; theCUBE is looking to cover it at all the locations, including here in our lovely Palo Alto studio. I'm Stu Miniman, thanks so much for watching theCUBE. >> Thank you very much. (music)
**Summary and Sentiment Analysis are not shown because of an improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
Liran Zvibel | PERSON | 0.99+ |
100 gigabytes | QUANTITY | 0.99+ |
April 2018 | DATE | 0.99+ |
10 megabit | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Weka IO | ORGANIZATION | 0.99+ |
Weka | ORGANIZATION | 0.99+ |
twice | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
second thing | QUANTITY | 0.99+ |
five years | QUANTITY | 0.98+ |
second component | QUANTITY | 0.98+ |
each organisation | QUANTITY | 0.98+ |
first year | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
Stu minimun | PERSON | 0.97+ |
two ways | QUANTITY | 0.97+ |
Prem | ORGANIZATION | 0.96+ |
ten times | QUANTITY | 0.95+ |
about 10 years ago | DATE | 0.94+ |
one | QUANTITY | 0.94+ |
Stu minimun | PERSON | 0.94+ |
last few years | DATE | 0.93+ |
hundreds of petabytes of data sets | QUANTITY | 0.93+ |
first | QUANTITY | 0.92+ |
several years | QUANTITY | 0.92+ |
80s | DATE | 0.91+ |
single application | QUANTITY | 0.9+ |
decades | QUANTITY | 0.9+ |
a lot of data | QUANTITY | 0.89+ |
Silicon angles | LOCATION | 0.89+ |
half the time | QUANTITY | 0.87+ |
ten servers | QUANTITY | 0.87+ |
two very effective object | QUANTITY | 0.87+ |
single solution | QUANTITY | 0.86+ |
four orders | QUANTITY | 0.85+ |
four orders | QUANTITY | 0.85+ |
a week | QUANTITY | 0.84+ |
Palo Alto Studio | ORGANIZATION | 0.8+ |
lot more data | QUANTITY | 0.78+ |
WekaIO | ORGANIZATION | 0.78+ |
100 Gig | QUANTITY | 0.74+ |
Lear on | TITLE | 0.72+ |
double | QUANTITY | 0.72+ |
many pieces | QUANTITY | 0.65+ |
Keita | ORGANIZATION | 0.63+ |
lot of data | QUANTITY | 0.6+ |
lot | QUANTITY | 0.58+ |
lots | QUANTITY | 0.58+ |
application | QUANTITY | 0.56+ |
vast amounts of data | QUANTITY | 0.54+ |
exabytes | QUANTITY | 0.53+ |
trend | QUANTITY | 0.52+ |
CEO | PERSON | 0.5+ |
Big Data | ORGANIZATION | 0.45+ |