Is Supercloud an Architecture or a Platform | Supercloud2
(electronic music) >> Hi everybody, welcome back to Supercloud 2. I'm Dave Vellante with my co-host John Furrier. We're here at our tricked out Palo Alto studio. We're going live wall to wall all day. We're inserting a number of pre-recorded interviews, folks like Walmart. We just heard from Nir Zuk of Palo Alto Networks, and I'm really pleased to welcome in David Flynn. David Flynn, you may know as one of the people behind Fusion-io, which completely changed the way people think about storing and accessing data. David Flynn is now the founder and CEO of a company called Hammerspace. David, good to see you, thanks for coming on. >> David: Good to see you too. >> And Dr. Nelu Mihai is the CEO and founder of Cloud of Clouds. He's actually built a Supercloud. We're going to get into that. Nelu, thanks for coming on. >> Thank you, Happy New Year. >> Yeah, Happy New Year. So I'm going to start right off with a little debate that's going on in the community, if you guys would bring out this slide. So Bob Muglia earlier today gave a definition of Supercloud. He felt like we had to tighten ours up a little bit. He said a Supercloud is a platform, underscoring platform, that provides programmatically consistent services hosted on heterogeneous cloud providers. Now, Nelu, we have this shared doc, and you've been in there. You responded, you said, well, hold on. Supercloud really needs to be an architecture, or else we're going to have this stovepipe of stovepipes, really. And then you went on with more detail: what's the information model? What's the execution model? How are users going to interact with Supercloud? So I'll start with you: why architecture? The inference is that with a platform, the platform provider's responsible for the architecture. Why does that not work in your view? >> No, it's a very interesting question. So whenever I think about platform, what's the connotation? You think about a monolithic system.
Yeah, I mean, I don't know whether it's true or not, but there is this connotation of monolithic. On the other hand, if you look at what's the problem right now with HyperClouds, from the customer perspective, they're very complex. There is a heterogeneous world where actually every single one of these HyperClouds has its own architecture. You need rocket scientists to build cloud applications. There is always this contradiction between cost and performance. They fight each other. And I'm quoting here a former friend of mine from Bell Labs who worked at AWS and used to say, "Cloud is cheap as long as you don't use it too much." (group chuckles) So clearly we need something that, from the principle point of view, plays the role of an operating system that sits on top of these heterogeneous HyperClouds. And there's nothing wrong with having these proprietary HyperClouds, think about processors, think about operating systems and so on and so forth. But in order to build a system that is simple enough, I think we need to go deeper and understand. >> So the counterargument to that, David, is you'll never get there. You need a proprietary system to get to market sooner, to solve today's problem. Now I don't know where you stand on this platform versus architecture. I haven't asked you, but. >> I think there are aspects of both for sure. I mean it needs to be an architecture in the sense that it's broad based and open and so forth. But you know, platform, you could say as long as people can instantiate it themselves, on their own infrastructure, as long as it's something that can be deployed as, you know, software defined, you don't want the concept of platform being the monolith, you know, combined hardware and software. So it really depends on what you're focused on when you're saying platform. You know, I'd say as long as it's a software defined thing, to where it can literally run anywhere.
I mean, because I really think what we're talking about here is the original concept of cloud computing: the ability to run anything anywhere, without having to care about the physical infrastructure. And what we have today is not that. The cloud today is a big mainframe in the sky that just happens to be large enough that once you select a region, generally you have enough resources. But, you know, nowadays you don't even necessarily have enough resources in one region, and then you're kind of stuck. So we haven't really gotten to that utility model of computing. And you're also asked to rewrite your application, you know, to abandon the conveniences of high performance file access. You've got to rewrite it to use object storage stuff. We have to get away from that. >> Okay, I want to just drill on that, 'cause I like that point about there not being enough availability. But on the developer cloud, the original AWS premise was targeting developers, 'cause at that time, you had to provision a Sun box, get a Cisco DSU/CSU; now you get on the cloud. But I think you're skipping the scale question, 'cause I think right now, scale is huge, enterprise grade versus cloud for developers. >> That's right. >> Because I mean look at Amazon, Azure, they got compute, they got storage, they got queuing, and some stuff. If you're doing a startup, you throw your app up there, localhost to cloud, no big deal. It's the scale thing that gets me- >> And you can tell by the fact that, in regions that are under high demand, right, like in London or LA, at least with the clients we work with in the media and entertainment space, it costs twice as much for the exact same cloud instances that do the exact same amount of work, as somewhere out in rural Canada. So why do you have such a cost differential? It has to do with supply and demand, and the fact that the clouds don't really give you the ability to run anything anywhere.
Even within the same cloud vendor, you're stuck in a specific region. >> And that was never the original promise, right? I mean it was, we turned it into that. But the original promise was get rid of the heavy lifting of IT. >> Not have to run your own, yeah, exactly. >> And then it became, wow, okay I can run anywhere. And then you know, it's like web 2.0. You know people say why Supercloud, you and I talked about this, why do you need a name for Supercloud? It's like web 2.0. >> It's what cloud was supposed to be. >> It's what cloud was supposed to be, (group laughing and talking) exactly, right. >> Cloud was supposed to be run anything anywhere, or at least that's what we took it as. But you're right, originally it was just, oh, you don't have to run your own infrastructure, you can choose somebody else's infrastructure. >> And you did that. >> But you're still bound to that. >> Dave: And people said I want more, right? >> But how do we go from here? >> That's actually a very good point, because indeed, when the first HyperClouds were designed, they were really focused on customers. I think Supercloud is an opportunity to design in the right way, also having in mind computer science rigor. And we should take advantage of that, because in fact, if the cloud had been designed properly from the beginning, we probably wouldn't have needed Supercloud. >> David: You wouldn't have been asked to rewrite your application. >> That's correct. (group laughs) >> To use REST interfaces to your storage. >> Revisiting history is always a good one. But look, cloud is great. I mean your point is cloud is a good thing. Don't hold it back. >> It is a very good thing. >> Let it continue. >> Let it go as it is. >> Yeah, let that thing continue to grow. Don't impose restrictions on the cloud. Just refactor what you need to for scale or enterprise grade or availability. >> And would you agree with that? Is that true, or is that the problem you're solving?
>> Well yeah, I mean, what the cloud is doing is absolutely necessary. What the public cloud vendors are doing is absolutely necessary. But what's been missing is how to provide a consistent interface, especially to persistent data, and have it be available across different regions and across different clouds. Because data is a highly localized thing in current architectures. It only exists as rendered by the storage system that you put it in, whether that's a legacy thing like a NetApp or an Isilon or even a cloud data service. It's localized to the specific region of the cloud in which you put it. We have to delocalize data and provide a consistent interface to it across all sites. That's high performance, local access, but to global data. >> And so Walmart earlier today described their, what we call Supercloud, they call it the Walmart cloud native platform. And they use this triplet model. They have AWS and Azure, no, oh sorry, no AWS. They have Azure and GCP and then on-prem, where all the VMs live. When you, you know, probe, it turns out that it's only stateless in the cloud. (John laughs) So, the state stuff- >> Well let's just admit it, there is no such thing as stateless, because even the application binaries and libraries are state. >> Well I'm happy that I'm hearing that. >> Yeah, okay. >> Because actually I have a lot of debate (indistinct). If you think about it, no software running on a (indistinct) machine is stateless. >> David: Exactly. >> This is something that was- >> David: And that's data that needs to be distributed and provided consistently >> (indistinct) >> across all the clouds. >> And actually, it's a nonsense, but- >> Dave: So it's an illusion, okay. (group talks over each other) >> (indistinct) you guys talk about stateless. >> Well, see, people make the confusion between state and persistent state, okay. Persistent state is a different thing. State is a different thing.
So, but anyway, I want to go back to your point, because there's a lot of debate here. People are talking about data, some people are talking about logic, some people are talking about networking. In my opinion it's this triplet, which is data, logic, and connectivity, that has equal importance. And actually, depending on the application, the center of gravity can move towards data, or towards what I call execution units or workloads. And connectivity is actually the most important part of it. >> David: (indistinct). >> Some people are saying move the logic towards the data; some other people, and you are saying this, actually, no, you have to build a distributed data mesh. What I'm saying is, you have to consider all three of these variables, all these vectors, in order to decide, based on the application, what's the most important. Because sometimes- >> John: So the application chooses. >> That's correct. >> Well, it's what operating systems were in the past, principally the thing that runs and manages the jobs, the job scheduler, and the thing that provides your persistent data (indistinct). >> Okay. So we finally got operating system into the equation, thank you. (group laughs) >> Nelu: I actually have a PhD in operating systems. >> 'Cause what we're talking about is an operating system. So forget platform or architecture, it's an operating environment. Let's use it as a general term. >> All right. I think that's about it for me. >> All right, let's take (indistinct). Nelu, I want to ask you quick, 'cause I believe it's an operating system. I think it's going to be a reset, refactored. You wrote to me, "The model of Supercloud has to be open, theoretical, has to satisfy the rigors of computer science, and customer requirements." So unique to today, if the OS is going to be refactored, it may or may not be Red Hat or somebody else.
This new OS, obviously the requirements are for customers too, but what's the computer science that is needed? Where are we, what's missing? Where's the science in this shift? It's not your standard OS, it's not like an- (group talks over each other) >> I would beg to differ. >> (indistinct) truly an operating environment. But if you think about it and make analogies, what you need when you design a distributed system: well, you need an information model, yeah. You need to figure out how the data is located and distributed. You need a model for the execution units, and you need a way to describe the interactions between all these objects. And it is my opinion that we need to go deeper and formalize these operations in order to make a step forward, and when we design Supercloud, design something that is better than the current HyperClouds. And actually, when we design something better, we make the system more efficient, and it's going to be better from the cost point of view and from the performance point of view. But we need to add some math into all this customer-focused thinking. I really admire AWS and their executive team focusing on the customer, but now it's time to go back and see: if we apply some computer science, if we try to formalize and build a theoretical model of cloud, can we build a system that is better than the existing ones? >> So David, how do you- >> This is what I'm saying. >> That's a good question. >> How do you see the operating system of a, or operating environment of a decentralized cloud? >> Well, I think it's layered. I mean, we have operating systems that can run systems quite efficiently. Linux has sort of won in the data center, but we're talking about a layer on top of that. And I think we're seeing the emergence of that. For example, on the job scheduling side of things, Kubernetes makes a really good example.
You know, you break the workload into the most granular units of compute, the containerized microservice, and then you use a declarative model to state what is needed and give the system the degrees of freedom so that it can choose how to instantiate it. Because the thing about these distributed systems is that the complexity explodes, right? Running a piece of hardware, running a single server is not a problem, even with all the many cores and everything like that. It's when you start adding in the networking, and making it so that you have many of them. And then when it's going across whole different data centers, you know, so, at that level the way you solve this is not manually (group laughs) and not procedurally. You have to change the language so it's intent based, it's a declarative model, and what you're stating is what is intended, and you're leaving it to more advanced techniques, like machine learning, to decide how to instantiate that service across the cluster, which is what Kubernetes does, or how to instantiate the data across the diverse storage infrastructure. And that's what we do. >> So that's a very good point, because actually what has been neglected with HyperClouds is really optimization and automation. But in order to be able to do both of these things, you need, I'm going back and I'm stubborn, you need to have a mathematical model, a theoretical model, because what does automation mean? It means that we have to put machines to do the work instead of us, and machines work with what? Formulas, with algorithms; they don't work with services. So I think Supercloud is an opportunity to underscore the importance of optimization and automation- >> Totally agree. >> In HyperCloud, and actually by doing that, we can also have an interesting connotation. We are also contributing to saving our planet, because if you think right now,
we're consuming a lot of energy on these HyperClouds and also all these AI applications, and I think we can do better and build the same kind of applications using less energy. >> So yeah, great point, love that call out. You know, Dave and I always joke about the old days, 'cause we're old. We talk about, you know, (Nelu laughs) old history, OS/2 versus DOS, okay, OSes. OS/2 was clearly better, the first threaded OS; DOS never went away. So how does legacy play into this conversation? Because I buy the theoretical, I love the conversation. Okay, I think it's an OS, totally see it that way myself. What's the blocker? Is there a legacy that drags it back? Is the anchor dragging from legacy? Is there a DOS OS/2 moment? Is there an opportunity to flip the script? This is- >> I think that's a perfect example of why we need to support the existing interfaces. Real operating systems like Linux understand how to present data; it's called a file system, block devices, things that plumb in there. And by, you know, going to a REST interface and S3 and telling people they have to rewrite their applications, you can't even consume your application binaries that way; the OS doesn't know how to pull in that sort of thing. So to get to cloud, to get to the ability to host massive numbers of tenants within a centralized infrastructure, you know, we abandoned these lower level interfaces to the OS, and we have to go back to that. It's the reason why DOS ultimately won: it had the momentum of the install base. We're seeing the same thing here. Whatever it is, it has to be a real file system and not a cut-down file system. >> Nelu, what's your reaction, 'cause you're on the theoretical bandwagon. Let's get your reaction.
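Flynn's earlier point about intent-based, declarative placement can be made concrete with a small sketch. Everything below is illustrative: the `Intent` class and `place` function are invented for this example and are not Kubernetes' or Hammerspace's actual API; a real scheduler works from a manifest and far richer constraints. The idea it shows is only the one he states: you declare what is needed, and the system keeps the freedom to decide where to instantiate it.

```python
# Declarative placement sketch: state the intent, let the system choose.
# `Intent`, `place`, and the node map are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass
class Intent:
    replicas: int        # what is needed...
    min_memory_gb: int   # ...not where or how to run it


def place(intent, nodes):
    """Toy scheduler: the system, not the operator, picks nodes
    that satisfy the declared intent (greedy, largest free memory first)."""
    chosen = []
    for name, free_gb in sorted(nodes.items(), key=lambda kv: -kv[1]):
        if len(chosen) == intent.replicas:
            break
        if free_gb >= intent.min_memory_gb:
            chosen.append(name)
    return chosen


nodes = {"us-east-1a": 64, "us-east-1b": 8, "eu-west-1a": 32}
print(place(Intent(replicas=2, min_memory_gb=16), nodes))  # ['us-east-1a', 'eu-west-1a']
```

The operator never names a node; swapping the greedy loop for a smarter solver (or, as Flynn suggests, machine learning) changes the instantiation without changing the stated intent.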
>> No, I think you made a good analogy between OS/2 and DOS, but I'll go even farther, saying: if you think about it, the evolution of operating systems didn't stop the evolution of the underlying microprocessors, hardware, and so on and so forth. On the contrary, it was a catalyst for that, because everybody could develop their own hardware without worrying that the applications on top of the operating system were going to have to be modified. The same thing is going to happen with Supercloud. You're going to have the AWSs, you're going to have Azure and GCP continue to evolve in their own proprietary way. But if we create on top of it the right interface- >> The open, this is why open is important. >> That's correct. Actually, sometime ago, everybody, remember, venture capitalists were saying, "AWS killed the world, nobody's going to come." Now you see what Oracle is doing, and then you're going to see other players. >> It's funny, Amazon's trying to be more like Microsoft. Microsoft's trying to be more like Amazon and Google- Oracle's just trying to say they have cloud. >> That's correct, (group laughs) so, my point is, you're going to see a multiplication of these HyperClouds and cloud technology. So, the system has to be open in order to accommodate what exists and what is going to come. Okay, so it's open. >> So the legacy- so legacy is an opportunity, not a blocker in your mind. And you see- >> That's correct. I think we should allow them to continue to be their own, actually. But maybe you're going to find a way to connect with them. >> Amazon's the processor, and they're like the 8080, right? >> That's correct. >> You're saying you'd love people trying to put it to work. >> That's a good analogy. >> But, performance levels, you say good luck, right? >> Well yeah, we have to be able to take traditional applications, high performance applications, those that consume file systems and persistent data.
Those things have to be able to run anywhere. You need to be able to put them onto, you know, more elastic infrastructure. So, we have to actually get cloud to where it lives up to its billing. >> And that's what you're solving for, with Hammerspace. >> That's what we're solving for, making it possible- >> Give me the bumper sticker. >> Solving for: how do you have massive quantities of unstructured file data? At the end of the day, all data ultimately is unstructured data. Have that persistent data available across any data center, within any cloud, within any region, on-prem, at the edge. And have not just the same APIs, but the exact same data sets, and not sucked over a straw remotely, but at extremely high performance, local access. So how do you have local access to globally shared, distributed data? That's what we're doing. We are orchestrating data globally across all different forms of storage infrastructure, so you have consistent access at the highest performance levels, at the lowest level innate, built into the OS, how to consume it as (indistinct). >> So are you going into all the clouds and natively building in there, or are you off cloud? >> So this is software that can run on cloud instances and provide high performance file within the cloud. It can take file data that's on-prem. Again, it's software; it can run on virtual or physical servers. And it abstracts the data from the existing storage infrastructure, and makes the data visible and consumable and orchestratable across any of it. >> And what's the elevator pitch for Cloud of Clouds? Give that too. >> Well, Cloud of Clouds creates a theoretical model of cloud, and it describes every single object in the cloud, whether it's data, execution units, or connectivity, with one single class of very simple object. And I can give you (indistinct). >> And the problem that solves is what?
>> The problem that solves is, it creates this mathematical model that is necessary in order to do other interesting things, such as optimization, using sata engines, using automation, applying ML for instance, or deep learning, to automate all these clouds. If you think about the industrial field, we know how to manage and automate huge plants. Why wouldn't we do the same thing in the cloud? It's the same thing you- >> That's what you mean by theoretical model. >> Nelu: That's correct. >> Lay out the architecture, almost the bones of a skeleton or something, and then- >> That's correct, and then on top of it you can actually build a platform, you can create your services. >> When you say math, you mean you put numbers to it, you kind of index it. >> You quantify this thing and you apply mathematics. It's really about, I can disclose this thing, it's really about describing the cloud as a knowledge graph where every single object in the graph, every node and edge, is a vector. And then once you have this model, you can apply field theory and linear algebra to do operations with these vectors. And this creates a very interesting opportunity to let the math do this thing for us. >> Okay, so what happens with hyperscalers like AWS in your model? >> So in my model actually- >> Are they happy with this, or they- >> I'm very happy with that. >> Will they be happy with you? >> We create an interface to every single HyperCloud. We actually don't need to interface with the thousands of APIs, but you know, if we have the 80 20 rule, and we map these APIs into this graph, then every single operation that is done in this graph is done from the beginning in an optimized manner, and also automation ready. >> That's going to be great. David, I want to go back to you before we close real quick. You've had a lot of experience, multiple ventures on the front end. You've talked to a lot of customers who've been innovating.
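Mihai's "cloud as a knowledge graph of vectors" idea can be sketched loosely as follows. The feature encoding, object names, and scoring rule here are invented for illustration and are not Cloud of Clouds' actual model; the only point the sketch makes is his: once every cloud object is a vector, questions like "where should this workload run?" reduce to linear algebra rather than API-by-API reasoning.

```python
# Toy "cloud as vectors" sketch. The [compute, storage, connectivity]
# encoding and the object names below are hypothetical, for illustration.
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


# Each cloud object encoded as a vector of [compute, storage, connectivity]
graph = {
    "gpu-node":     [0.9, 0.2, 0.5],
    "object-store": [0.1, 0.9, 0.4],
    "edge-pop":     [0.3, 0.1, 0.9],
}

workload = [0.8, 0.3, 0.4]  # a compute-heavy execution unit
best = max(graph, key=lambda name: cosine(workload, graph[name]))
print(best)  # gpu-node
```

Placement becomes an argmax over similarity scores, a purely numeric operation that an optimizer or ML model can work on directly, which is the "let the math do this thing for us" point.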
Where are the classic (indistinct)? 'Cause you used to sell and invent product around the old school enterprises with storage; you know that trajectory, storage is still critical to store the data. Where's the classic enterprise grade mindset right now? Those customers that were buying, that are buying storage, they're in the cloud, they're lifting and shifting. They haven't yet put the throttle on DevOps. When they look at this Supercloud thing, are they like a deer in the headlights, or are they getting it? What does the classic enterprise look like? >> You're seeing people at different stages of adoption. Some folks are trying to get to the cloud, some folks are trying to repatriate from the cloud, because they've realized it's better to own than to rent when you use a lot of it. And so people are at very different stages of the journey. But the one thing that's constant is that there's always change. And the change here has to do with being able to change the location where you're doing your computing. So being able to support traditional workloads in the cloud, being able to run things at the edge, and being able to rationalize where the data ought to exist, and with a declarative model, intent-based, business objective-based, be able to swipe a mouse and have the data get redistributed and positioned across different vendors, across different clouds. We're seeing that as really top of mind right now, because everybody's at some point on this journey, trying to go somewhere, and it involves taking their data with them. (John laughs) >> Guys, great conversation. Thanks so much for coming on, for John, Dave. Stay tuned, we've got a great analyst power panel coming right up. More from Palo Alto, Supercloud 2. Be right back. (bouncy music)
Prakash Darji, Pure Storage
(upbeat music) >> Hello, and welcome to the special Cube conversation that we're launching in conjunction with Pure Accelerate. Prakash Darji is here; he's the general manager of Digital Experience. They actually have a business unit dedicated to this at Pure Storage. Prakash, welcome back, good to see you. >> Yeah Dave, happy to be here. >> So a few weeks back, you and I were talking about the shift to an as a service economy, which is a good lead up to Accelerate, held today; we're releasing this video in LA. This is the fifth in person Accelerate. It's got a new tagline, techfest, so you're making it fun, but still hanging onto the tech, which we love. So this morning you guys made some announcements expanding the portfolio. I'm really interested in your reaffirmed commitment to Evergreen, that's something that got this whole trend started, and in the introduction of Evergreen Flex. What is that all about? What's your vision for Evergreen Flex? >> Well, so look, this is one of the biggest moments that I think we have as a company now, because we introduced Evergreen, and that was, and probably still is, one of the largest disruptions to happen to the industry in a decade. Now, Evergreen Flex takes the power of modernizing performance and capacity of storage beyond the box, full stop. So we first started on a project many years ago to say, okay, how can we bring that modernization concept to our entire portfolio? That means if someone's got 10 boxes, how do you modernize performance and capacity across 10 boxes, or across maybe FlashBlade and FlashArray. So with Evergreen Flex, we first are starting to hyper disaggregate performance and capacity, and the capacity can be moved to where you need it. So previously, you could have thought of a box saying, okay, it has this performance or capacity range or boundary, but let's think about it beyond the box. Let's think about it as a portfolio.
My application needs performance or capacity for storage, what if I could bring the resources to it? So with Evergreen Flex within the QLC family with our FlashBlade and our FlashArray QLC projects, you could actually move QLC capacity to where you need it. And with FlashArray X and XL or TLC family, you could move capacity to where you need it within that family. Now, if you're enabling that, you have to change the business model, because the capacity needs to get billed where you use it. If you use it in a high performance tier, you get billed at a high performance rate. If you use it in a lower performance tier, you get billed at a lower performance rate. So we changed the business model to enable this technology flexibility, where customers can buy the hardware and they get a pay per use consumption model for the software and services, but this enables the technology flexibility to use your capacity wherever you need. And we're just continuing that journey of hyper disaggregation. >> Okay, so you solve the problem of having to allocate specific capacity or performance to a particular workload. You can now spread that across whatever products in the portfolio, like you said, you're disaggregating performance and capacity. So that's very cool. Maybe you could double click on that. You obviously talk to customers about doing this. They were in pain a little bit, right? 'Cause they had this sort of stovepipe thing. So talk a little bit about the customer feedback that led you here. >> Well, look, let's just say today you're an application developer, or you haven't written your app yet, but you know you're going to. Well, you need to at least say, I need something, right? So someone's going to ask you, what kind of storage do you need? How many IOPS, what kind of performance and capacity, before you've written your code. And you're going to buy something, and you're going to spend that money.
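The consumption model Darji describes, where the same capacity is billed at the rate of the tier it currently serves, can be sketched like this. The rates, tier names, and `monthly_bill` function are hypothetical, invented for illustration, and are not Pure's actual pricing or API; the sketch only shows the mechanic he states: moving capacity between tiers changes the bill, not the hardware you own.

```python
# Hypothetical tier-dependent pay-per-use billing sketch.
# Rates and tier names are invented for illustration, not a real price list.
RATE_PER_TB_MONTH = {"high-perf": 30.0, "capacity": 12.0}  # $/TB-month, assumed


def monthly_bill(placements):
    """placements: list of (tier, terabytes) pairs describing where
    fleet capacity currently sits; bill each TB at its tier's rate."""
    return sum(RATE_PER_TB_MONTH[tier] * tb for tier, tb in placements)


# Moving 50 TB out of the high-performance tier lowers the bill,
# even though the total owned capacity (300 TB) is unchanged:
before = monthly_bill([("high-perf", 100), ("capacity", 200)])  # 5400.0
after = monthly_bill([("high-perf", 50), ("capacity", 250)])    # 4500.0
print(before, after)
```

The hardware purchase stays a sunk cost either way; what the usage-based software layer changes is that the rate tracks where the capacity is working, which is what makes moving capacity, rather than repurchasing it, economically meaningful.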
Now at that point, you're going to go write your application, run it on that box and then say, okay, was I right or was I wrong? And you know what? You were guessing before you wrote the software. After you wrote the software, you can test it and decide what you need, how it's going to scale, et cetera. But if you were wrong, you already bought something. In a hyper disaggregated world, that capacity is not a sunk cost, you can use it wherever you want. You can take capacity from somewhere else and bring it over there. So in the world of application development and in the world of storage, today people think about, I've got a workload, it's SAP, it's Oracle, I've built this custom app, I need to move it to a tier of storage, a performance class. Like, you think about the application and you think about moving the application. And it takes time to move the application, takes planning, it's a scheduled event. What if you said, you know what? You don't have to do any of that. You just move the capacity to where you need it, right? >> Yep. >> So the application's there, and you actually have the ability to instantaneously move the capacity to where you need it for the application. And eventually, where we're going is we're looking to do the same thing across the performance tiering. So right now, the biggest benefit is the agility and flexibility a customer has across their fleet. So Evergreen was great for the customer with one array, but Evergreen Flex now brings that power to the entire fleet. And that's not tied to just FlashArray or FlashBlade. We've engineered a data plane in our DirectFlash fabric software to be able to take on the personality of the system it needs to go into. So when a data pack goes into a FlashBlade, that data pack is optimized for use in that scale-out architecture with the metadata for FlashBlade. When it goes into a FlashArray C, it's optimized for that metadata structure.
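The fleet-level capacity mobility and pay-per-use billing Prakash describes can be sketched as a toy model. To be clear, the class names, tier names, and rates below are hypothetical illustrations, not Pure's actual API or pricing:

```python
# Toy model of hyper-disaggregated capacity: data packs are fleet-level
# resources that can be reassigned between arrays without data migration,
# and are billed at the rate of the tier they currently serve.
# All names and rates here are invented for illustration.

TIER_RATE = {"high": 3.0, "low": 1.0}  # hypothetical $/TB-month by tier

class Array:
    def __init__(self, name, tier):
        self.name, self.tier = name, tier
        self.packs = []          # capacity packs currently attached, in TB

    def capacity(self):
        return sum(self.packs)

class Fleet:
    def __init__(self, arrays):
        self.arrays = {a.name: a for a in arrays}

    def move_capacity(self, tb, src, dst):
        """Reassign a capacity pack: no migration, just reattachment."""
        s, d = self.arrays[src], self.arrays[dst]
        if tb not in s.packs:
            raise ValueError("no such pack on source array")
        s.packs.remove(tb)
        d.packs.append(tb)

    def monthly_bill(self):
        """Pay-per-use: each pack is billed at its current tier's rate."""
        return sum(tb * TIER_RATE[a.tier]
                   for a in self.arrays.values() for tb in a.packs)

fleet = Fleet([Array("flasharray-x", "high"), Array("flashblade-qlc", "low")])
fleet.arrays["flasharray-x"].packs.append(100)      # 100 TB pack, high tier
bill_before = fleet.monthly_bill()                  # 100 * 3.0 = 300.0
fleet.move_capacity(100, "flasharray-x", "flashblade-qlc")
bill_after = fleet.monthly_bill()                   # 100 * 1.0 = 100.0
```

The point of the sketch is the billing consequence: the same 100 TB pack costs less the moment it is reattached to the lower tier, without any application move.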
So our Purity software is what has made this possible. And we created a business model that allowed us to take advantage of this technology flexibility. >> Got it. Okay, so you've got this mutually interchangeable performance and capacity across the portfolio, beautiful. And I want to come back to Purity, but help me understand how this is different from just normal Evergreen, the existing Evergreen options. You mentioned the one array, but help us understand that more fully. >> Well, look, so in addition to this, we had Evergreen Gold historically. We introduced Evergreen Flex, and we had Pure as a Service. So you had kind of two spectrums previously. You had Evergreen Gold on one hand, which modernized the performance and capacity of a box. You had Pure as a Service that said, don't worry about the box, tell me how many IOPS you need, and we'll run and operate and manage that service for you. I think we've spoken about that previously on theCUBE. >> Yep. >> Now, we have this model where it's not just about the box, we have this model where we say, you know what, it's your fleet. You're going to run and operate and manage your fleet, and you can move the capacity to where you need it. So as we started thinking about this, we decided to unify our entire portfolio of subscription software and services under the Evergreen brand. Evergreen Gold we're renaming to Evergreen Forever. We've actually had seven customers just cross a decade of Evergreen updates within a box. So Evergreen Forever is about modernizing a box. Evergreen Flex is about modernizing your fleet. And Evergreen One, which is our rebrand of Pure as a Service, is about modernizing your labor. Instead of you worrying about it, let us do it for you. Because if you're an application developer and you're trying to figure out, where should I put my capacity? Where should I do it?
You can just sign up for the IOPS you need, and let us actually deliver and move the components to where you need them for performance, capacity, management, SLAs, et cetera. So as we think about this, for us this is a spectrum and a continuum of where you're at in the modernization journey to software, subscription, and services. >> Okay, got it. So why did you feel like now was the right time for the rebranding and the renaming convention, what's behind it? What was the thinking? Take us inside the internal conversations and the chalkboard discussion. >> Well, look, the chalkboard discussion's simple. Everything was built on the Evergreen stateless architecture within a box, right? We disaggregated the performance and capacity within the box already, 10 years ago, with Evergreen. And that's what enabled us to build Pure as a Service. That's why I say, when companies say they built a service, I'm like, it's not a service if you have to do a data migration. You need a stateless architecture that's disaggregated. You can almost think of this as the anti-hyperconverged, right? That's going the other way. It's hyper disaggregated. >> Right. >> And that foundation is true for our whole portfolio. That was fundamental, the Evergreen architecture. And then if Gold is modernizing a box, and Flex is modernizing your fleet and your portfolio, and Pure as a Service is modernizing the labor, it is more of a continuation in the spectrum of how do you ensure you get better with age, right? And it's like one of those things when you think about a car. Miles driven on a car means your car's getting older, and it doesn't necessarily get better with age, right?
What's interesting when you think about the human body, yeah, you get older, and some people deteriorate with age, and some people, it turns out, for a period of time, you pick up some muscle mass, you get a little bit older, you get a little bit wiser, and you get a little bit better with age for a while, because you're putting in the work to modernize, right? But where in infrastructure and hardware and technology are you at the point where it always just gets better with age, right? We introduced that concept 10 years ago. And we've now had proven industry success over a decade, right? As I mentioned, our first seven customers who've had a decade of Evergreen updates started with an FA-300 way back when, and since then performance and capacity have been getting better over time with Evergreen Forever. So this is the next 10 years of it getting better and better for the company, and not just tying it to the box, because now we've grown up, we've got customers with large fleets. I think one of our customers just hit 900 systems, right? >> Wow. >> So when you have 900 systems, right? And you're running a fleet, you need to think about, okay, how am I using these resources? And in this day and age, power becomes a big thing, because if you're using resources inefficiently and the cost of power and energy is up, you're going to be in a world of hurt. So by using Flex, where you can move the capacity to where it's needed, you're creating the most efficient operating environment, which is actually the lowest power consumption environment as well. >> Right. >> So we're really excited about this journey of modernizing, but that rebranding just became kind of a no-brainer to us, because it's all part of the spectrum on your journey, whether you're a single array customer, you're a fleet customer, or you don't even want to run, operate and manage. You can actually just say, you know what, give me the guarantee and the SLA.
So that's the spectrum that informed the rebranding. >> Got it. Yeah, so to your point about the human body, all you've got to do is look at Tom Brady's NFL combine videos and you'll see what a transformation. Fine wine is another one. I like the term hyper disaggregated, because that to me is consistent with what's happening with the cloud and edge. We're building this hyper distributed, or disaggregated, system. So I want to just understand a little bit, you mentioned Purity, so this software obviously is the enabler here, but what's under the covers? Is it like a virtualizer, or mega load balancer, metadata manager, what's the tech behind this? >> Yeah, so we'll do a little bit of a double click, right? So we have this concept of drives, where in Purity, we build our own software for DirectFlash that takes the NAND, and we do the NAND management as we're building our drives in Purity software. Now, that advantage gives us the ability to say, how should this drive behave? So in a FlashArray C system, it can behave as part of a FlashArray C, and it's usable capacity that you can write, because the metadata and some of the system information is in NVRAM as part of the controller, right? So you have some metadata capability there. In a FlashBlade architecture, for example, you have a distributed blade architecture. So you need parts of that capacity to operate almost like single-level cell, where you can actually have metadata operations independent of your storage operations that operate like QLC. So we actually manage the NAND in a very, very different way based on the persona of the system it's going into, right? So this capacity to make it usable, right? It's like saying to a competitor, go ahead, name it, Dell that has PowerMax and Isilon, HPE that has StoreOnce and 3PAR and Nimble, and, you name it, can you really, from a technology standpoint, say your capacity can be used anywhere across all these independent systems?
Everyone's thinking about the world like a system, like here's this system, here's that system, here's that system. And your capacity is locked into a system. To be able to unlock that capacity from the system, you need to behave differently with the media type in the operating environment you're going into, and that's what Purity does, right? So we are doing that as part of our DirectFlash software, around how we manage these drives, to enable this. >> Well, it's the same thing in the cloud, Prakash, right? I mean, you've got different APIs and primitives for object, for block, for file. Now, it's all programmable infrastructure, so that makes it easier, but to your point, it's still somewhat stovepiped. So it's funny, it's good to see your commitment to Evergreen, I think you're right. You laid down the gauntlet a decade plus ago. First everybody ignored you, and then they kind of laughed at you, then they criticized you, and then they said, oh, then you guys reached escape velocity. So you had a winning hand. So I'm interested in that sort of progression over the past decade, where you're going, why this is so important to your customers, and where you're trying to get them ultimately. >> Well, look, the thing that's most disappointing is, if I bought 100 terabytes, I still have to re-buy it every three or five years. That seems like a kind of ridiculous proposition, but welcome to storage. You know what I mean? That's what most people do. With Evergreen, we want to end data migrations. We want to make sure that every software update, hardware update, is non-disruptive. We want to make it easy to deploy and run at scale for your fleet. And eventually we want everyone to move to Evergreen One, formerly Pure as a Service, where we can run and operate and manage, 'cause this is all about trust. We're trying to create trust with the customer to say, trust us to run and operate and scale for you, and worry about your business, because we make tech easy.
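The drive "persona" behavior Prakash describes, the same data pack adopting a different metadata layout depending on the system it joins, can be illustrated with a short sketch. The class, field names, and layout strings here are hypothetical stand-ins, not Purity's actual internals:

```python
# Toy illustration of per-system drive personas: one physical data pack,
# formatted differently on insertion depending on the target system.
# All names and layout descriptions are invented for illustration.

PERSONAS = {
    "flasharray-c": {"metadata": "nvram-in-controller", "data_mode": "qlc"},
    "flashblade":   {"metadata": "distributed-slc-region", "data_mode": "qlc"},
}

class DataPack:
    def __init__(self, raw_tb):
        self.raw_tb = raw_tb
        self.persona = None      # unset until the pack joins a system

    def join(self, system):
        """Adopt the metadata layout of the system the pack is inserted into."""
        if system not in PERSONAS:
            raise ValueError(f"unknown system: {system}")
        self.persona = dict(PERSONAS[system], system=system)
        return self

pack = DataPack(24)
pack.join("flashblade")            # behaves as a scale-out blade pack
blade_layout = pack.persona["metadata"]
pack.join("flasharray-c")          # same pack, re-personalized for an array
array_layout = pack.persona["metadata"]
```

The design point being modeled: the pack itself is fungible fleet capacity, and the persona is applied at join time rather than baked into the hardware.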
And think about this hyper disaggregation if you go further. If you're going further with hyper disaggregation, you can think of performance and capacity as your Lego building blocks. Now, I have a son, he wants to build a Lego Death Star. If he didn't have that manual, he'd be toast. So when you move to at-scale, and you have this hyper disaggregated world, and you have this unlimited freedom, you have unlimited choice. It's the problem of the cloud today, too much choice, right? There's like hundreds of instances of this, what do I even choose? >> Right. >> Well, so the only way to solve that problem and create simplicity when you have so much choice is to put data to work. And that's where Pure1 comes in, because we've been collecting data, and we can scan your landscape and tell you, you should move these types of resources here, and move those types of resources there, right? In the past, it was always about, you should move this application there, or you should move this application there. We're actually going to turn the entire industry on its head. It's not that applications and data have gravity. So let's think about moving resources to where they're needed, versus saying resources are a fixed asset, let's move the applications there. So that's a concept that's new to the industry. We're creating that concept, we're introducing that concept, because now we have the technology to make that a reality, a new, efficient way of running storage for the world. Like, this is that big for the company. >> Well, I mean, a lot of the failures in data analytics and data strategies are a function of trying to jam everything into a single monolithic system and hyper centralize it. Data by its very nature is distributed. So hyper disaggregated fits that model, and the pendulum's clearly swinging to that. Prakash, great to have you. purestorage.com I presume is where I can learn more? >> Oh, absolutely.
We're super excited, and the pent-up demand I think in this space is huge, so we're looking forward to bringing this innovation to the world. >> All right, hey, thanks again. Great to see you, I appreciate you coming on and explaining this new model, and good luck with it. >> All right, thank you. >> All right, and thanks for watching. This is Dave Vellante, and we appreciate you watching this Cube conversation, we'll see you next time. (upbeat music)
Eric Herzog, Infinidat | CUBEconversations
(upbeat music) >> Despite its $70 to $80 billion total available market, computer storage is like a small town, everybody knows everybody else. We say in the storage world, there are a hundred people, and 99 seats. Infinidat is a company that was founded in 2011 by storage legend Moshe Yanai. The company is known for building products with rock solid availability, simplicity, and a passion for white glove service and client satisfaction. The company went through a leadership change recently, and early this year appointed industry vet Phil Bullinger as CEO. It's making more moves, bringing on longtime storage sales exec Richard Bradbury to run EMEA and APJ go-to-market. And just recently it appointed marketing maven Eric Herzog to be CMO. Herzog has worked at numerous companies, ranging from startups that were acquired, two stints at IBM, and was SVP of product marketing and management at storage powerhouse EMC, among others. Herzog has been named CMO of the year as an OnCon Icon, and a top 100 influencer in big data, AI, and also hybrid cloud, along with yours truly, if I may say so. Joining me today is the newly minted CMO of Infinidat, Mr. Eric Herzog. Good to see you, Eric, thanks for coming on. >> Dave, thank you very much. You know, we love being on theCUBE, and I am of course sporting my Infinidat logo wear already, even though I've only been on the job for two weeks. >> Dude, no Hawaiian shirt, okay. That's a pretty buttoned up company. >> Well, next time, I'll have a Hawaiian shirt, don't worry. >> Okay, so give us the backstory, how did this all come about? You know Phil, my 99 seat joke, but how did it come about? Tell us that story. >> So, I have known Phil since the late 90s, when he was a VP of engineering at LSI, and he had... I was working at a company called Mylex, which was acquired by IBM.
And we were doing a product for HP, and he was providing the subsystem, and we were providing the Fibre-to-Fibre and Fibre-to-SCSI array controllers back in the day. So I met him then, we kept in touch for years. And then when I was a senior VP at EMC, he started originally as VP of engineering for the EMC Isilon team. And then he became the general manager. So, while I didn't work for him, I worked with him at LSI, and then again at EMC. So I just happened to congratulate him about some award he won, and he said "Hey Herzog, we should talk, I have a CMO opening". So it literally happened over a LinkedIn discussion, where I reached out to him, congratulated him, and he said "Hey, I need a CMO, let's talk". So, the whole thing took about three weeks in all honesty. And that included interviewing with other members of his exec staff. >> That's awesome, that's right, he was running the Isilon division for a while at EMC. >> Right. >> You guys were there, and of course, you talk about Mylex, LSI, there was a period of time where you guys were making subsystems for everybody. So, you sort of saw the whole landscape. So, you've got some serious storage history and chops. So, I want to ask you what attracted you to Infinidat. I mean, obviously they're a leader in the magic quadrant. We know about InfiniBox, and the petabyte scale, and the low latency, what are the... When you look at the market, you obviously see it, you talk to everybody. What were the trends that were driving your decision to join Infinidat? >> Well, a couple of things. First of all, as you know, and you guys have talked about it on theCUBE, most CIOs don't know anything about storage, other than they know they've got to spend money on it. So the Infinidat message of optimizing applications, workloads, and use cases with 100% guaranteed availability, unmatched reliability, the set-and-forget ease of use, which obviously AIOps is driving, and overall IT operations management, was very attractive.
And then on top of that, the reality is, when you do that consolidation, which Infinidat can do because of the performance that it has, you can dramatically free up rack, stack, power, floor, and operational manpower by literally getting rid of tons and tons of arrays. There's one customer that they have, you actually... I found out when I got here, they took out a hundred arrays from EMC and Hitachi. And that company now has 20 InfiniBoxes and InfiniBox SSAs running the exact same workloads that used to be well over a hundred subsystems from the other players. So, that's got a performance angle, a CapEx and OpEx angle, and then even a clean energy angle, because you're reducing watts and slots. So, lots of different advantages there. And then I think from just a pure marketing perspective, as someone has said, they're the best kept secret in the storage industry. And so you need to, if you will, amp up the message, get it out. They've expanded the portfolio with the InfiniBox SSA, and the InfiniGuard product, which is really optimized not only as a purpose-built backup appliance, and it works with all the backup vendors, but also has an incredible play on data and cyber resilience, with their capability of local logical air gapping, remote logical air gapping, and creating a clean room, if you will, a vault, so that you can review for malware and ransomware before you do a full recovery. So it's got the right solutions, it's just that most people didn't know who they were. So, between the relationship with Phil, and the real opportunity that this company could skyrocket. In fact, we have 35 job openings right now, right now. >> Wow, okay, so yeah, I think it was Duplessie who called them the best kept secret, he's not the only one. And so that brings us to you, and your mission, because it's true, it is the best kept secret. You're a leader in the Gartner magic quadrant, but I mean, if you're not a leader in a Gartner magic quadrant, you're kind of nobody in storage.
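The consolidation example Herzog cites, roughly a hundred legacy arrays replaced by 20 systems, can be turned into a quick back-of-envelope power calculation. The per-system wattage figures below are invented for illustration, not vendor specifications:

```python
# Back-of-envelope for the consolidation claim: ~100 legacy arrays
# replaced by 20 systems. Wattage figures are hypothetical placeholders.

legacy_arrays, new_systems = 100, 20
watts_per_legacy_array = 2_000   # assumed average draw per legacy array
watts_per_new_system = 3_000     # assumed draw per replacement system

legacy_watts = legacy_arrays * watts_per_legacy_array   # 200,000 W
new_watts = new_systems * watts_per_new_system          # 60,000 W
power_savings_pct = 100 * (legacy_watts - new_watts) / legacy_watts

# Even if each replacement system draws more than a legacy array,
# the 5:1 box reduction dominates: a 70% reduction under these assumptions.
```

The same structure applies to rack units and floor space: with a 5:1 reduction in box count, the per-box overhead of the replacement system has to be more than five times larger before the consolidation stops paying off.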
And so, but you've got chops in block storage. You talked about the consolidation story, and I've talked to many folks in Infinidat about that. Ken Steinhardt, rest his soul, Dr. Rico, good business friend, about, you know... So, that play, and how you handle the whole blast radius. And that's always a great discussion, and Infinidat has proven that it can operate at very, very high performance, low latency, petabyte scale. So how do you get the word out? What's your mission? >> Well, so we're going to do a couple of things. We're going to be very, very tied to the channel. As you know, EMC, Dell EMC, as articles in CRN and other channel publications have noted, is pulling back from the channel, letting go of channel managers, and there's been a lot of conflict. So, we're going to embrace the channel. We already do well over 90% of our business with the channel globally. So, we're doing that. In fact, I am meeting, personally, next week with five different CEOs of channel partners, of which only one is doing business with Infinidat now. So, we want to expand our channel, and leverage the channel, take advantage of these changes in the channel. We are going to be increasing our presence in the public relations area, the work we do with all the industry analysts, not just in North America, but in Europe as well, and Asia. We're going to amp up, of course, our social media effort, both of us, of course, having been named some of the best social media guys in the world the last couple of years. So, we're going to open that up. And then, obviously, increase our demand generation activities as well. So, we're going to make sure that we leverage what we do, and deliver that message to the world. Deliver it to the partner base, so the partners can take advantage, and make good margin and revenue, while delivering products that really meet the needs of the customers and saving them dramatically on CapEx and OpEx. So, the partner wins, and the end user wins.
And that's the best scenario you can have when you're leveraging the channel to help you grow your business. >> So you're not only just the marketing guy, I mean, you know product, you ran product management at very senior levels. So, you could... You're like a walking spec sheet, John Furrier says you could just rattle it off. I'm already impressed at how much you know about Infinidat, but when you joined EMC, it was almost like there were too many products, right? When you joined IBM, even though it had a big portfolio, it's like it didn't have enough relevant products. And you had to sort of deal with that. How do you feel about the product portfolio at Infinidat? >> Well, for us, it's right in the perfect niche. Enterprise class, AI-based, software defined storage technologies that happen to run on a hybrid array, an all flash array, and a variant that's really tuned towards modern data protection, including data and cyber resilience. So, with those three elements of the portfolio, which, by the way, all have a common architecture. So while there are three different solutions, it's all a common architecture. So if you know how to use the InfiniBox, you can easily use an InfiniGuard. You've got an InfiniGuard, you can easily use an InfiniBox SSA. So the capability of doing that helps reduce operational manpower and hence, of course, OpEx. So the story is strong technically, and the story has a strong business tie-in. That's part of the thing you have to do in marketing these days. Yeah, we've both been around. So you could just talk about IOPS, and latency, and bandwidth. And if the CIO didn't know what that meant, so what? But the world has changed on the expenditure of infrastructure. If you don't have seamless integration with hybrid cloud, virtual environments, and containers, which Infinidat can do all of that, then you're not relevant from a CIO perspective.
And obviously with many workloads moving to the cloud, you've got to have this infrastructure that supports core, edge, and cloud, the virtualization layer, and of course, the container layer across a hybrid environment. And we can do that with all three of these solutions, yet with a common underlying software defined storage architecture. So it makes the technical story very powerful. Then you turn that into business benefit: CapEx, OpEx, the operational manpower, unmatched availability, which is obviously a big deal these days, unmatched performance, everybody wants their SAP workload or their Oracle or Mongo or Cassandra to be instantaneous from the app perspective. Excuse me. And we can do that. And that's the kind of thing that... My job is to translate that technical value into the business value that can be appreciated by the CIO, by the CSO, by the VP of software development, who then says to the VP of infrastructure, that Infinidat stuff, we actually need that for our SAP workload, or wow, for our overall corporate cybersecurity strategy, the CSO says, the key element of the storage part of that overall corporate cybersecurity strategy are those Infinidat guys, with their great cyber and data resilience. And that's the kind of thing that my job, and my team's job, is to work on to get the market to understand and appreciate the business value that the underlying technology delivers. >> So the other thing, the interesting thing about Infinidat, and this was always a source of spirited discussions over the years with business friends from Infinidat, was the company figured out a way... It was formed in 2011, and at the time the strategy, perfectly reasonable, was to say, okay, let's build a better box. And the way they approached that from a cost standpoint was you were able to get the most out of spinning disk. Everybody else was moving to flash, of course, Floyer's work on the all-flash data center, etc, etc.
But Infinidat, with its memory cache and its architecture and its algorithms, was able to figure out how to magically get equivalent or better performance than an all flash array out of a system that had a lot of spinning disks, which is, I think, unique. I mean, I know it's unique, very rare anyway. And so that was kind of interesting, but at the time it made sense to go after a big market with a better mouse trap. Now, if I were starting a company today, I might take a different approach, I might try to build a storage cloud or something like that. Or if I had a huge install base that I was trying to protect, maybe go into that. But so what's the strategy? You've still got huge share gain potential for on-prem, is that the vector? You mentioned hybrid cloud, what's the cloud strategy? Maybe you could summarize your thoughts on that? >> Sure, so the cloud strategy is, first of all, seamless integration to hybrid cloud environments. For example, we support Outposts as an example. Second thing, you'd be surprised at the number of cloud providers that actually use us as their backend, either for their primary storage, or for their secondary storage. So, we've got some of the largest hyperscalers in the world. For example, one of the Telcos has 150 InfiniBoxes, InfiniBox SSAs and InfiniGuards, 150, running one of the largest Telcos on the planet. And a huge percentage of that is their corporate cloud effort, where they're going in and saying, don't use Amazon or Azure, why don't you use us, the giant Telco? So we've got that angle. We've got a ton of mid-sized cloud providers all over the world whose backup runs on our systems, or the primary storage that they offer is built on top of InfiniBoxes or InfiniBox SSAs. So, the cloud strategy is, one, to arm the hyperscalers, both big, medium, and small, with what they need to provide the right end user services with the right outside SLAs. And the second thing is to have that hybrid cloud integration capability.
For example, when I talked about InfiniGuard, we can do air gapping locally to give almost instantaneous recovery, but at the same time, if there's an earthquake in California, or a tornado in Kansas City, or a tsunami in Singapore, you've got to have that remote air gapping capability, which InfiniGuard can do. And of course, that remote logical air gap is essentially a cloud strategy. So, we can do all of that. That's why it has a cloud strategy play. And again, we have a number of public references in the cloud, US Signal and others, where they talk about why they use the InfiniBox, and our technologies to offer their storage cloud services based on our platform. >> Okay, so I've got to ask you, you've mentioned earthquakes, a lot of earthquakes in California, dangerous place to live, US headquarters is in Waltham, are we going to pry you out of the Golden State? >> Let's see, I was born at Stanford hospital, where my parents met when they were going there. I've never lived anywhere but here. And of course, remember when I was working for EMC, I flew out every week, and I sort of lived at that Milford Courtyard Marriott. So I'll be out a lot, but I will not be moving, I'm a Silicon Valley guy, just like that old book, the Silicon Valley Guy from the old days, that's me. >> Yeah, the hotels in Waltham are a little better, but... So, what's your priority? Last question. What's the priority for the first 100 days? Where's your focus? >> Number one priority is team assessment and integration of the team across the other teams. One of the things I noticed about Infinidat, which is a little unusual, is there sometimes are silos, and having done seven other small companies and startups, in a startup or a small company you usually don't see that silo-ness. So we have to break down those walls. And by the way, we've been incredibly successful even with the silos, imagine if everybody realized that business is a team sport.
And so we're going to do that, and do heavy levels of integration. We've already started an incredible outreach program to the press and to partners. We won a couple of awards recently, we're up for two more awards in Europe, the SDC Awards, and one of the channel publications is going to give us an award next week. So yeah, we're amping up that sort of thing that we can leverage and extend, both in the short term and, of course, across a longer-term strategy. So those are the things we're going to do first, and we're going to be rolling into, of course, 2022. So we've got a lot of work we're doing. As I mentioned, I'm meeting five partner CEOs, and only one of them is doing business with us now. So we want to get those partners to kick off January with us presenting at their sales kickoffs, going, "We are going with Infinidat as one of our strong storage providers." So we're doing all that upfront work in the first 100 days, so we can kick off Q1 with a real bang. >> Love the channel story, and you're a good guy to do that. And you mentioned the silos; correct me if I'm wrong, but Infinidat does a lot of business overseas. A lot of business in Europe, obviously the affinity to the engineering, a lot of the engineering work that's going on in Israel, but that's by its very nature stovepiped. Most startups start in the US, big-market NFL cities, and then sort of go overseas. It's almost like Infinidat simultaneously grew its overseas business and its US business. >> Well, and we've got customers everywhere. We've got them in South Africa, all over Europe, the Middle East. We have six very large customers in India, and a number of large customers in Japan. So we have a sales team all over the world. As you mentioned, our white-glove service includes not only our field systems engineers, but we have a professional services group. We've actually written custom software for several customers.
In fact, I was on the forecast meeting earlier today, and one of the comments was about someone who's going to give us a PO. The sales guy was saying part of the reason we're getting the PO is we did some professional services work last quarter, and the CIO called and said, "I can't believe it." And what CIO calls up a storage company these days? But the CIO called him and said, "I can't believe the work you did. We're going to buy some more stuff this quarter." So that white-glove service, our technical account managers to go along with the field sales SEs, plus this professional services group, is pretty unusual in a small company. To have that level of, as you mentioned yourself, white-glove service when the company is so small, that's been a real hidden gem for this company, and will continue to be so. >> Well, Eric, congratulations on the appointment, the new role. Excited to see what you do and how you craft the story and the strategy. We've been following Infinidat since sort of day zero, and I really wish you the best. >> Great, well, thank you very much. Always appreciate theCUBE. And trust me, Dave, next time I will have my famous Hawaiian shirt. >> Ah, I can't wait. All right, thanks to Eric, and thank you for watching, everybody. This is Dave Vellante for theCUBE, and we'll see you next time. (bright upbeat music)
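Eric's earlier point about local versus remote air gapping is worth pinning down. InfiniGuard's actual mechanism is proprietary; the toy Python below (all class and method names are hypothetical, not Infinidat APIs) only sketches the core idea of a logical air gap: the replication link to the remote site opens during a short scheduled window, so an attack on the primary outside that window cannot reach the remote copies.

```python
import datetime

class AirGapReplicator:
    """Sketch of a logical air gap: replication to the remote site is
    only possible during a short daily window; the rest of the time the
    remote copy is unreachable from the primary."""

    def __init__(self, window_start_hour=2, window_minutes=30):
        self.window_start_hour = window_start_hour
        self.window_minutes = window_minutes

    def link_open(self, now: datetime.datetime) -> bool:
        # The link is "up" only inside [start, start + window_minutes).
        start = now.replace(hour=self.window_start_hour, minute=0,
                            second=0, microsecond=0)
        end = start + datetime.timedelta(minutes=self.window_minutes)
        return start <= now < end

    def replicate(self, now, snapshot, remote_copies) -> bool:
        # Only immutable snapshots are shipped, and only while the
        # window is open; otherwise the remote site stays isolated.
        if self.link_open(now):
            remote_copies.append(snapshot)
            return True
        return False
```

A real implementation would also make the remote copies immutable and verify them on arrival; the window scheduling above is just the part that makes the gap "logical" rather than physical.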
Kaushik Ghosh, Dell Technologies | CUBE Conversation, September 2021
>> Hey, welcome to this CUBE Conversation with Dell Technologies. I'm Lisa Martin. I've got Kaushik Ghosh here with me. He's back on theCUBE, director of product management for unified NAS solutions at Dell Technologies. Kaushik, great to see you again. >> Yes, it's great to be here again. >> We're going to be talking about the major announcement that Dell Technologies just made with their scale-out file storage system, Dell EMC PowerScale. We're going to unpack the recent announcement, new features, capabilities, all that good stuff. Kaushik, let's go ahead and start. Just give us that high-level view of Dell EMC PowerScale. >> Yes, absolutely. So PowerScale is a high-performance scale-out file storage solution. It's the successor to the Isilon family, which, as you guys know, is one of the leading file solutions in the market today. PowerScale OneFS, which is the file system that runs on PowerScale and also on the Isilon family, offers exceptional simplicity, flexibility, and performance, which is what Isilon and PowerScale are known for. I mean, if you look at Gartner's Magic Quadrant, OneFS has been listed as a leader in the distributed file systems and object storage category. So that, basically, is PowerScale. We launched our first PowerScale all-flash products last year, and then this year, with this launch, we are sort of completing that portfolio with new hybrid and archive platforms. >> Excellent. And we're going to get into that as well. Let's go ahead and start unpacking this announcement. Walk me through some of the key things that are new and announced in this recent announcement.
>> Yeah, so we just launched the hybrid and archive platforms as part of the PowerScale family. There are two archive platforms and two hybrid platforms that we launched, and they offer better CPU, performance, cache, and all that stuff, but we don't want to go into the speeds and feeds. What I really want to highlight is the software capabilities that PowerScale brings. For starters, it now includes inline data compression and deduplication, all built into it. We now support new ransomware protection capabilities with this product. There's a new data protection capability that we now support with our PowerProtect Data Manager. And then all the goodness of Isilon OneFS and PowerScale OneFS sort of continues. >> I imagine, since the launch last year, Kaushik, a lot of customer conversations helped to drive this launch and the complete transition and the innovation of what we now see as PowerScale. >> Yeah, I mean, there have been some great conversations. People have been really waiting for this product offering, because now they can basically combine those all-flash platforms that we launched last year with these hybrid platforms, and offer a solution that not only gives you that performance, but also the cost savings and the value that only PowerScale and Isilon can give you. >> Give me a good overview of some of those key capabilities that existing Isilon customers and prospective new PowerScale customers are going to be able to take advantage of. >> Yes. So some of the new capabilities: inline efficiency, as I mentioned earlier, is now built into the product.
So now, introducing it with these hybrid and archive nodes, what that means is that when you set up a mixed cluster with all-flash and hybrid, and you tier the data down from the flash to the hybrid, the data does not have to be rehydrated. It stays compressed, it stays deduplicated, and so on and so forth. So that's one big advantage that you get. Second, these PowerScale hybrid-type platforms were built ground-up with our own custom hardware. Unlike the flash platforms, where we leverage PowerEdge servers, for these ones we use our custom hardware, and the reason for that is because for those archive and storage workloads, the whole story is density: we can store up to 500 terabytes of effective usable capacity in these archive nodes in a single 1U rack unit. And then, of course, from a software perspective, we talked about ransomware protection, so we have a new ransomware protection capability. And then there's this new capability that we just launched in regard to backups, more efficient, faster backups with our PowerProtect family of products. >> Excellent. I want to dig into the ransomware and data protection in a minute, but I want to get a sense of the overall theme of the launch. You talked about this being the completion of that tech refresh, some of the new capabilities and enhancements that customers are going to be able to take advantage of. Give me that higher-level, kind of thematic look at this news. >> The big theme of this is basically finishing that PowerScale family that we started last year, right? So we started with launching the all-flash; now, with this hybrid and archive, we have the full family done. All products now support inline efficiency, so we can move the data around, the data doesn't get rehydrated, and you can make it part of a single cluster.
And you get all the performance benefits, the scalability benefits of OneFS, and new data management capabilities. So all of that goodness that we started with PowerScale all-flash is continued now with this platform. >> Got it. And I know you guys did your own internal study, and I'd like you to share some of the results with the audience. You guys compared PowerScale to competitors in traditional NAS, flash-only NAS, mixed NAS, SAN, and software-only NAS. Give us a snapshot into what some of those results were for PowerScale. >> Yeah, I mean, the big takeaway here is that when it comes to PowerScale, we don't have a competitor when it comes to scalability, right? The fact that you can now work on petabytes of capacity under a single namespace, a single file system, and also get that performance, there is none today. And there may be some that can do one or the other, but then they don't have the enterprise capabilities, like replication, the rich enterprise capabilities that OneFS has. So the performance and scale capabilities and the simplicity of OneFS, that's basically the unique thing about PowerScale. >> Performance, scale, and simplicity, three things that I'm sure enterprises and small and medium businesses in any industry appreciate. You talked about what's new in terms of the hybrid nodes and the archive nodes. Can you help us understand what workloads those nodes are best targeted for? >> Absolutely, so hybrid and archive. What we have realized is that not every data set can be compressed or deduped, right? So it's not that we wouldn't love customers to use our all-flash products: they get the deduplication, they get the compression, and that lowers the cost.
And clearly then you get the performance and the cost. But there are workloads, like media and entertainment or video surveillance, where you will not be able to compress or dedupe that data, so rather than paying for very expensive flash, you could put those data sets in our lower-cost archive platforms, as an example. And if you have situations where, look, I need some performance, but there is a lot of old data, you can actually mix and match as well. So you're going to have those flash platforms giving that performance, and then you have our archive platforms, which are basically giving you the lowest-cost storage for that data that is not so frequently accessed. >> And there's the flexibility there. So, on the tech refresh, you said this has been completed now, PowerScale from Isilon. How can existing Isilon customers take advantage? What are their next steps to be able to take advantage of the newer capabilities and technologies? >> Yeah, absolutely. I mean, one thing PowerScale has that's very different from others is this mantra called "no node left behind." So if you are an existing Isilon customer, you can basically add these PowerScale nodes to your existing Isilon cluster without breaking anything. PowerScale OneFS will automatically redistribute and rebalance your workloads across the new nodes, and you sort of keep on expanding your cluster. And when you feel like it, you can take out the older nodes at the time of your choosing, right? So that's a huge benefit. In fact, in some customer environments, their data has been there for almost 10 to 12 years now, because they've never had to do a forklift upgrade. So that sort of continues with this family.
If you want to learn more about it, I would encourage going to Dell Technologies slash PowerScale, or contact your Dell Technologies rep. >> Let's wrap things up here by digging into ransomware. We've seen ransomware become a household word: the Colonial Pipeline, the meatpacking organization that was attacked earlier this summer. We know that a lot of data show that a ransomware attack happens every 11 seconds, and of course, we only hear about the really big attacks. I've had the opportunity to talk to a lot of cybersecurity leaders lately, and they're showing that ransomware is up at least 10x in the last year, with this massive pivot to work from home, now work from anywhere. Talk to me about some of the focus that Dell has put into PowerScale now with respect to ransomware protection and recovery. >> Yeah. So for ransomware protection, there are two things that we are doing. One is this concept of detection. So when an attack is happening, we want to be able to detect that an attack is happening and take some corrective measures, right? And so we have this product called Superna Eyeglass, which is exclusively built for PowerScale, and using this product, we use AI to figure out whether an attack is happening and detect it. And based on that, based on policies, if it's happening with only one user, we can lock down that particular user profile, or take other corrective actions. So that's one aspect of it, which is about detection and taking some quick steps. Then there's a second aspect of it, which is all about recovery, right? So we do have a replication capability. If the customer chooses, we can have replication set up from your PowerScale production cluster to another cluster.
And in that replication, we can introduce an air gap, so that anything bad happening here does not get replicated to that remote environment. So those are the two ways: one, detecting, and second, protecting. And not only just protecting it, but ensuring that air-gap capability for the data as well, so that the ransomware is not replicated there either. >> Absolutely critical, given some of the things that you and I mentioned a few minutes ago in terms of the explosion of ransomware, which hopefully, in our remote-work hybrid environment, as more technologies like this come out from Dell Technologies and its partners, we'll start to see those ransomware numbers go down. Lastly, I want you to just restate: you mentioned a URL where folks can go to learn more information, and you've got several different links to point folks to. Can you go ahead and remind us what those are again? >> Yes, absolutely. I mean, the easiest URL to go to is Dell Technologies slash PowerScale. That's a one-stop URL, and once you go there, there'll be videos, articles, and blogs, and you can look through them and take whatever you want from them. >> Excellent. Kaushik, thank you for joining me today, talking to me about what's new with PowerScale. Congratulations on the completion of the refresh, a lot of new capabilities and technologies that your customers, existing Isilon and prospective PowerScale customers, are going to be able to take advantage of. We look forward to hearing customer success stories in the next few months. Thanks for your time. >> Thank you. >> For Kaushik Ghosh, I'm Lisa Martin. You're watching a CUBE Conversation.
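The detect-then-act flow described above can be reduced to a deliberately crude sketch. The real product reportedly uses AI over many behavioral signals; this toy version uses nothing but a bare per-user rate threshold, and every name in it is a hypothetical illustration, not the actual product logic.

```python
from collections import defaultdict

class RansomwareGuard:
    """Toy per-user detector: a burst of file operations far above the
    allowed rate trips a policy that locks that one user's profile,
    leaving everyone else unaffected."""

    def __init__(self, max_ops_per_minute=100):
        self.max_ops = max_ops_per_minute
        self.ops = defaultdict(int)   # operations seen per user
        self.locked = set()           # users locked by policy

    def record_op(self, user: str) -> str:
        if user in self.locked:
            return "denied"           # locked profiles can't touch data
        self.ops[user] += 1
        if self.ops[user] > self.max_ops:
            self.locked.add(user)     # corrective action: lock the profile
            return "locked"
        return "ok"
```

A production detector would baseline each user's normal behavior and weigh signals such as the entropy of written data (encrypted files look random); the per-user lockdown is the part that maps directly to the policy action described in the interview.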
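One technical detail from the PowerScale conversation above is worth pinning down before moving on: when inline-reduced data moves from the flash tier down to an archive tier, the reduced form travels as-is, with no decompress/recompress ("rehydration") cycle. A toy sketch, with zlib standing in for the file system's inline compression (the real OneFS data path is, of course, very different):

```python
import zlib

def ingest(data: bytes) -> bytes:
    # Inline reduction at ingest: data lands on the flash tier
    # already compressed.
    return zlib.compress(data)

def tier_down(flash_blob: bytes) -> bytes:
    # Flash -> archive ships the compressed bytes unchanged:
    # no rehydration on the way down.
    return flash_blob

def read_back(archive_blob: bytes) -> bytes:
    # Decompression happens only when a client actually reads.
    return zlib.decompress(archive_blob)
```

The payoff is that a tiering job moves fewer bytes and burns no CPU on re-reduction, which is exactly the "mixed cluster" advantage claimed in the interview.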
Breaking Analysis: Thinking Outside the Box...AWS signals a new era for storage
From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante.

By our estimates, AWS will generate around nine billion dollars in storage revenue this year and is now the second-largest supplier of enterprise storage behind Dell. We believe AWS storage revenue will hit $11 billion in 2022 and continue to outpace on-prem storage growth by more than a thousand basis points for the next three to four years. At its third annual Storage Day event, AWS signaled a continued drive to think differently about data storage and transform the way customers migrate, manage, and add value to their data over the next decade.

Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis we'll give you a brief overview of what we learned at AWS's Storage Day, share our assessment of the big announcement of the day, a deal with NetApp to run ONTAP natively in the cloud as a managed service, and share some new data on how we see the market evolving, with AWS executive perspectives on its strategy, how it thinks about hybrid, and where it fits into the emerging data mesh conversation.

Let's start with a snapshot of the announcements made at Storage Day. As with most AWS events, this one had a number of announcements, introduced at a pace that was predictably fast and oftentimes hard to follow. Here's a quick list of most of them, with some comments on each. The big, big news is the announcement with NetApp. NetApp and AWS have engineered a solution which ports the rich NetApp stack onto AWS, delivered as a fully managed service. This is a big deal because previously customers had to make a trade-off: either settle for a cloud-based file service with less functionality than you could get with NetApp on-prem, or lose the agility and elasticity of the cloud and the whole pay-by-the-drink model. Now customers can get access to a fully
functional NetApp stack with services like data reduction, snaps, clones, full multi-protocol support, and replication, all the services ONTAP delivers, in the cloud, as a managed service, through the AWS console. Our estimate is that 80% of the data on-prem is stored in file format, and that's not the revenue, but that's the data. We all know about S3 object storage, but the biggest market from a capacity standpoint is file storage. This announcement reminds us quite a bit of the VMware Cloud on AWS deal, but applied to storage. NetApp's Anthony Lye told me, "Dave, this is bigger," and we're going to come back to that in a moment.

AWS announced S3 Multi-Region Access Points, a service that optimizes storage performance. It takes into account latency, network congestion, and the location of data copies to deliver data via the best route and ensure the best performance. This is something we've talked about for quite some time: using metadata to optimize access. AWS also announced improvements to S3 tiering, where it will no longer charge for small objects of less than 128KB, so, for example, customers won't be charged for most metadata and other smaller objects. Remember, AWS years ago hired a bunch of EMC engineers, and those guys built a lot of tiering functionality into their boxes; we'll come back to that later in this episode. AWS also announced backup and monitoring tools to ensure backups are in compliance with regulations and corporate edicts. This, frankly, is table stakes, and was overdue in my view. AWS also made a number of other announcements that have been well covered in the press, around block storage and simplified data migration tools, so we'll leave those to your perusal through other outlets.

I want to come back to the big picture on the market dynamics. As we've reported in previous Breaking Analysis segments, AWS storage revenue is on a path to $10 billion; we reported this last year. This chart puts the market in context. It shows our estimates
for worldwide enterprise storage revenue in calendar year 2021. This data is meant to include all storage revenue, including primary, secondary, and archival storage and related maintenance services. Dell is the leader in the $60 billion market, with AWS now hot on its tail with 15% of the market, in terms of the way we've cut it. Now, in the pre-cloud days, customers would tell us their storage strategy was the following: we buy EMC for block and NetApp for file, keeping it simple. While remnants of this past habit continue, the market is definitely changing. As you can see here, the companies highlighted in red represent the growing hyperscaler presence, and you can see in the pie on the right that they now account for around 25 percent of the market, and they're growing much, much faster than the on-prem vendors, well over that thousand basis points when you combine them all.

A couple of other things to note in the data: we're excluding Kyndryl, IBM's spinout, from IBM's figures, but including our estimates of storage software, for example Spectrum Protect, that is sold as part of the IBM cloud but not reported in IBM's income statement. By the way, pre-Kyndryl spin, IBM's storage business, we believe, would approach the size of NetApp's business. Now, in the yellow, we've highlighted the portion of hyperconverged that comprises storage. This includes VMware, Nutanix, Cisco, and others. VMware and Nutanix are the largest HCI players, but in total, the storage piece of that market is less than two billion.

Okay, so the way to look at this market is changing. Traditional on-prem is vying for budgets with cloud storage services, which are rapidly gaining presence in the market, and we're seeing the on-prem piece evolve into as-a-service models, with HPE's GreenLake, Dell's Apex, and other on-prem cloud-like models.

Now let's come back to the NetApp AWS deal. NetApp, as we know, is the gold standard for file services. They've been the market leader for a long, long time, and other than Pure, which is considerably
smaller, NetApp is the one company that consistently was able to beat EMC in the market. EMC developed its NAS business and its own NAS stack, and it bought Isilon, with Isilon's excellent global file system, to compete with NetApp. But generally, NetApp remains the best file storage company today. Now, emerging disruptors like Qumulo, VAST, and Weka would take issue with this statement, and rightly so, as they have really promising technology, but NetApp remains the king of the file hill; you can't debate that.

NetApp, however, has had some serious headwinds as the largest independent storage player, as seen in this ETR chart. The data shows a nine-year view of NetApp's presence in the ETR survey. Presence is referred to by ETR as market share; it's not traditional market share, it measures the pervasiveness of responses in the ETR survey of over a thousand customers each quarter, so essentially the percentage of mentions that NetApp is getting. And you can see that while NetApp remains a leader, it has had a difficult time expanding its TAM, and it's become, frankly, less relevant in the grand scheme and in the eyes of IT buyers. The company hit headwinds when it began migrating its base to ONTAP 8, and was late riding a number of new waves, including flash, but generally it has recovered from those headwinds and is really now focused on the cloud opportunity, as evidenced by this deal with AWS.

Now, as I said earlier, NetApp EVP Anthony Lye told me that this deal is bigger than VMware Cloud on AWS. Like me, you may be wondering, how can that be? VMware is the leader in the data center; it has half a million customers, and its deal with AWS has been a tremendous success, as seen in this ETR chart. The data here shows spending momentum, or Net Score, from when VMware Cloud on AWS was picked up in the ETR surveys, with a meaningful N, which today is approaching 100 responses in the survey. The yellow line is there for context; it's VMware's overall business, so repeat, IT buyers who
responded VMware overall versus specifically VMware Cloud on AWS. So you see, VMware overall has a huge presence in the survey, more than 600 N. The red line is VMware Cloud on AWS, and that red dotted line you see, that's my magic 40% mark. Anything above that line we consider elevated Net Score, or spending velocity, and while we saw some deceleration earlier this year in that top line for VMware Cloud, VMware Cloud on AWS has been consistently showing well in the survey, well above that 40 percent line.

So could this NetApp deal be bigger than VMware Cloud on AWS? Well, probably not, in our view, but we like the strategy of NetApp going cloud native on AWS, and AWS's commitment to deliver this as a managed service. Now, where it could get interesting is across clouds. In other words, if NetApp can take a page out of Snowflake and build an abstraction layer that hides the underlying complexity of not only the AWS cloud but also GCP and Azure, where you log into the NetApp cloud, the NetApp data cloud if you will (just go ahead and steal it from Snowflake), and then NetApp optimizes your on-prem, your AWS, your Azure, and/or your GCP file storage, we see that as a winning strategy that could dramatically expand NetApp's TAM. Politically, it may not sit well with AWS, but so what? NetApp has to go multi-cloud to expand that TAM.

When the VMware deal was announced, many people felt it was a one-way street where all the benefit would eventually accrue to AWS. In reality, this has certainly been a near-term winner for AWS and VMware, and of course, importantly, VMware and AWS joint customers. Longer term, it's clearly going to be a win for AWS, because it gets access to VMware's customer base, but we also think it will serve VMware well, because it gives the company a clear and concise cloud strategy, especially if it can go across clouds and eventually get to the edge. So with this NetApp AWS deal, will it be as big? Probably not, in our view, but it is big. NetApp, in our view, just leapfrogged the
competition because of the deep engineering commitment aws has made this isn't a marketplace deal it's a native managed service and we think that's pretty huge okay we're going to close with a few thoughts on aws storage strategy and some other thoughts on hybrid talk about capturing mission critical workloads and where aws fits in the overall data mesh conversation which is one of our favorite topics first let's talk about aws's storage strategy overall as with other services aws approach is to give builders access to tools at a very granular level that means it does mean a lot of apis and access to primitives that are essentially building blocks while this may require greater developer skills it also allows aws to get to market quickly and add functionality faster than the competition enterprises however where they will pay up for solutions so this leaves some nice white space for partners and also competitors and especially the on-prem folks but let's hear from an aws executive i spoke to milan thompson bucheveck an aws vp on the cube and asked her to describe aws's storage strategy here's what she said play the clip we are dynamically and constantly evolving our aws storage services based on what the application and the customer want that is fundamentally what we do every day we talked a little bit about those deployments that are happening right now dave that is something that idea of constant dynamic evolution just can't be replicated by on-premises where you buy a box and it sits in your data center for three or more years and what's unique about us among the cloud services is again that perspective of the 15 years where we are building applications in ways that are unique because we have more customers and we have more customers doing more things so you know i i've said this before uh it's all about speed of innovation dave time and change wait for no one and if you're a business and you're trying to transform your business and base it on a set of 
technologies that change rapidly you have to use aws services i mean if you look at some of the launches that we talk about today and you think about s3's multi-region access points that's a fundamental change for customers that want to store copies of their data in any number of different regions and get a 60 performance improvement by leveraging the technology that we've built up over over time the the ability for us to route to intelligently router requests across our network that and fsx for netapp ontap nobody else has these capabilities today and it's because we are at the forefront of talking to different customers and that dynamic evolution of storage that's the core of our strategy so as you hear and can see by milan's statements how these guys think outside the box mentality at the end of the day customers want rock solid storage that's dirt cheap and lightning fast they always have and they always will but what i'm hearing from aws is they think about delivering these capabilities in the broader context of an application or a business think deeper business integration not the traditional suppliers don't think about that as well but the services mentality the cloud services mentality is different than dropping off a box at a loading dock turning it over to a professional services organization and then moving on to the next deal now i also had a chance to speak with wayne dusso he's another aws vp in the storage group wayne do so is a long time tech athlete for years he was responsible for building storage arrays at emc aws as i said hired a bunch of emcs years ago and those guys did a lot of tiered storage so i asked wayne what's the difference in mentality when you're building boxes versus cloud services here's what he said you have physical constraints you have to worry about the physical resources on that device for the life of that device which is years think about what changes in three or five years think about the last two years alone and what's 
changed can you imagine having being constrained by only uh having boxes available to you during this last two years versus having the cloud and being able to expand or contract based on your business needs that would be really tough right and it has been tough and that's why we've seen customers from every industry accelerate uh their use of the cloud during these last two years so i get that so what's your mindset when you're building storage services and data services so so each of the surfaces that we have in object block file movement services data services each of them provides very specific customer value in each are deeply integrated with the rest of aws so that when you need object services you start using them the integrations come along with you when if you're using traditional block we talked about ebs io2 block express when using file just the example alone today with ontap you know you get to use what you need when you need it and the way that you're used to using it without any concern so so the big difference is no constraints in the box but lots of opportunities to blend in with other services now all that said there are cases where the box is gonna win because of locality and and physics and latency issues you know particularly where latency is king that's where a box is gonna be advantageous and we'll come back to that in a bit okay but what about hybrid how does aws think about hybrid and on-prem here's my take and then let's hear from milan again the cloud is expanding it's moving out to the edge and aws looks at the data center as just another edge node and it's bringing its infrastructure as code mentality to that edge and of course to data centers so if aws is truly customer centric which we believe it is it will naturally have to accommodate on-prem use cases and it is doing just that here's how milan thompson-bucheveck explained how aws is thinking about hybrid roll the clip for us dave it always comes back to what the customer is asking 
for and we were talking to customers and they were talking about their edge and what they wanted to do with it we said how are we going to help and so if i just take s3 for outposts as an example or ebs and outposts you know we have customers like morningstar and morningstar wants outposts because they are using it as a step in their journey to being on the cloud if you take a customer like first adudabi bank they're using outposts because they need data residency for their compliance requirements and then we have other customers that are using outposts to help like dish networks as an example to place the storage as close as account to the applications for low latency all of those are customer driven requirements for their architecture for us dave we think in the fullness of time every customer and all applications are going to be on the cloud because it makes sense and those businesses need that speed of innovation but when we build things like our announcement today of fxs for netapp ontap we build them because customers asked us to help them with their journey to the cloud just like we built s3 and evs for outposts for the same reason so look this is a case where the box or the appliance wins latency matters as we said and aws gets that this is where matt baker of dell is right it's not a zero-sum game this is especially accurate as it pertains to the cloud versus on-prem discussion but a budget dollar is a budget dollar and the dollar can't go to two places so the battle will come down to who has the best solution the best relationships and who can deliver the most rock solid storage at the lowest cost and highest performance let's take a look at mission critical workloads for a second we're seeing aws go after these it's doing a database it's doing it with block storage we're talking about oracle sap microsoft sql server db2 that kind of stuff high volume oltp transactions mission critical work now there's no doubt that aws is picking up a lot of low hanging 
fruit with business critical workloads but the really hard to move work isn't going without a fight frankly it's not going that fast aws and mace has made some improvements to block storage to remove some of the challenges related but generally we see this is a very long road ahead for aws and other cloud suppliers oracle is the king of mission critical work along with ibm mainframes and those infrastructures generally it's not easy to move to the cloud it's too risky it's too expensive and the business case oftentimes isn't there because very frequently you have to freeze applications to do so what generally what people are doing is they're building an abstraction layer over that putting that abstraction layer maybe in the cloud building new apps that can connect to the back end and the into the cloud but that back end is largely cemented and fossilized look it's all in the definition no doubt there's plenty of mission critical work that is going to move but just really depends on how you define it even aws struggles to move its most critical transaction systems off of oracle but we'll continue to keep an open mind there it's just that today we define the most mission-critical workloads as we define them we don't see a lot of movement to the hyperscale clouds and we're going to close with some thoughts on data mesh so one of our favorite topics we've written extensively about this and interviewed and are collaborating with jamaa dagani who has coined the term and we've announced a media collaboration with the data mesh community and believe it's a strong direction for the industry so we wanted to understand how aws thinks about data mesh and where it fits in the conversation here's what milan had to say about that play the clip we have customers today that are taking the data mesh architectures and implementing them with aws services and dave i want to go back to the start of amazon when amazon first began we grew because the amazon technologies were built in 
microservices fundamentally a data match is about separation or abstraction of what individual components do and so if i look at data mesh really you're talking about two things you're talking about separating the data storage and the characteristics of data from the data services that interact and operate on that storage and with data mesh it's all about making sure that the businesses the decentralized business model can work with that data now our aws customers are putting their storage in a centralized place because it's easier to track it's easier to view compliance and it's easier to predict growth and control costs but we started with building blocks and we deliberately built our storage services separate from our data services so we have data services like lake formation and glue we have a number of these data services that our customers are using to build that customized data mesh on top of that centralized storage so really it's about at the end of the day speed it's about innovation it's about making sure that you can decentralize and separate your data services from your storage so businesses can go faster so it's very true that aws has customers that are implementing data mess data mesh data mess data mesh can be a data mess if you don't do it right jpmorgan chase is a firm that is doing that we've we've covered that they've got a great video out there check out the breaking analysis archive you'll see that hellofresh has also initiated a data mesh architecture in the cloud and several others are starting to pop up i think the point is the issues and challenges around data mesh are more organizational and process related and less focused on the technology platform look data by its very nature is decentralized so when mylan talks about customers building on centralized storage that's a logical view of the storage but not necessarily physically centralized it may be in a in a hybrid device it may be a copy that lives outside of that same physical 
location this is an important point as jpmorgan chase pointed out the data mesh must accommodate data products and services that are in the cloud and also on-prem it's got to be inclusive the data mesh looks at the data store as a node on the data mesh it shouldn't be confined by the technology whether it's a data warehouse a data hub a data mart or an s3 bucket so i would say this while people think of the cloud as a centralized walled garden and in many respects it is that very same cloud is expanding into a massively distributed architecture and that fits with the data mesh architectural model as i say the big challenges of data mesh are less technical and more cultural and we're super excited to see how data mesh plays out over time and we're really excited to be part of part of the the community and a media partner of the data mesh community okay that's it for now remember i publish each week on wikibon.com and siliconangle.com and these episodes they're all available as podcasts all you do is search for breaking analysis podcasts you can always connect on twitter i'm at d vellante or email me at david.velante at siliconangle.com i appreciate the comments you guys make on linkedin and don't forget to check out etr.plus for all the survey action this is dave vellante for the cube insights powered by etr be well and we'll see you next time [Music] you
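As an aside for readers, the "Net Score" metric and the "magic 40 mark" referenced above can be sketched in a few lines. This is a simplified assumption about how the metric works (the share of respondents spending more minus the share spending less); ETR's actual survey methodology and response categories are more detailed than this illustration.

```python
def net_score(responses):
    """Simplified Net Score: percent of respondents increasing spend
    ("adoption" or "increase") minus percent decreasing spend
    ("decrease" or "replacement"). "flat" responses count toward the
    total N but contribute nothing to the score."""
    up = sum(1 for r in responses if r in ("adoption", "increase"))
    down = sum(1 for r in responses if r in ("decrease", "replacement"))
    return 100.0 * (up - down) / len(responses)


# 100 hypothetical survey responses for one vendor
sample = (["adoption"] * 20 + ["increase"] * 35 + ["flat"] * 25
          + ["decrease"] * 12 + ["replacement"] * 8)

score = net_score(sample)   # 55% up minus 20% down = 35.0
elevated = score > 40       # below the "magic 40 mark" in this example
```

With these made-up numbers the vendor lands at 35, under the 40 percent line that the analysis treats as elevated spending velocity.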
Andrew MacKay and Parasar Kodati | CUBE Conversation, August 2021
(upbeat music) >> Welcome to this CUBE conversation. I'm Lisa Martin. Today, we're going to be talking about the cyber protection and recovery solutions for unstructured data. I have two guests joining me. Andrew Mackay is here, the president of Superna, and Parasar Kodati, Senior Consultant, ISG Product Marketing at Dell Technologies. Guys, great to have you on the program talking about cybersecurity, cyber resiliency. Something that we've heard a lot in the news in the last 18 months or so. Parasar, let's go ahead and start with you. Talk to us about what you're seeing from a cybersecurity perspective, some of the challenges the last 18 months or so, and then tell us what Dell is doing specifically to really infuse its storage solutions to enable customers to have that cyber resiliency that they need. >> Sure, Lisa. So today, there's no question that cyberattacks have become a serious threat for business operations, for organizations of all sizes across all industries. And if you look at the consequences, there is a huge financial impact, of course, with something like 70% of cyberattacks being financially motivated. Look at the ransom part, which is a big financial impact in itself, but the lost revenue from disrupted operations, legal expenses, and sometimes regulatory fines, and so on, also add up to the financial impact. And if you look at the actual data loss that is involved, data being such a critical asset for organizations, think about losing customer data, losing access to customer data, or critical applications that depend on customer data. Similarly, data related to your business operations, data that is the source of your competitive advantage, data that could be very confidential information as well. And when it comes to government organizations and institutions, there is also the issue of national security and the need to protect critical infrastructure that depends on these IT systems as well.
So absolutely, it is becoming an imperative for IT organizations to improve the cyber resiliency, to boost the cyber resiliency of their organization. At Dell Technologies, for the storage products that we offer, we have integrated solutions to protect the data in terms of detecting patterns of data access, to detect the cyberattacks in advance, to kind of put IT a step ahead of these attackers, and also have the tools and technologies to recover from a cyberattack rapidly so that the business can continue to run. >> That recovery is absolutely critical. It's one thing to have all this data, customer data, PII, competitive advantage data, but you have to be able to recover it, because as you said, we've seen this now become a matter of national security, infrastructure being threatened. The ransomware rise we have seen in the last 18 months has been unprecedented. I want to talk now, Andrew, about Superna. Talk to us about what you guys do, how you're a partner with Dell Technologies, and how you're helping customers recover and really be cyber resilient. >> Yeah, we've been working with Dell for years. In fact, our products are built targeting the Isilon PowerScale platform. So it's a very closely, tightly integrated solution that focuses on solving one problem, solving it really well. >> Talk to me a little bit about what you guys are doing specifically with the Dell Technologies storage solutions to help customers in any industry be able to recover. As we know now, ransomware is not a matter of if it happens to us, it's a when. Give us a little bit more of a dissection of those solutions. >> So when we looked at this problem, it's associated with files, right? But today, there's files and objects and other types of unstructured data. So we've built a solution that addresses both file and object. But one of the areas that we think is important for customers to consider is the framework that they choose. They shouldn't just jump in and start looking for products.
They should step back and take a look at what frameworks exist. For instance, the NIST framework guides them in how they build, and ticks off all the key boxes in how to build a cyber resilient solution. >> So for companies that are using traditional legacy tools, backup and restore, how is what Superna enables different? >> So the buzzword these days is zero trust, so I'm going to use the buzzword. So we use a zero trust model, but really that comes down to being proactive. And I consider backup/restore a bit of a legacy approach. That's just restoring the data after you've been attacked. So we think you should get in front of the problem, don't trust any of the access to the storage, and try to take care of the problem at the source, which means detection patterns, locking users out of the file system, reacting in real time to the real-time IO that's being processed by the storage device. >> Got it. Parasar, let's talk now about unstructured data specifically, and why does it need protection against these attacks? >> Traditionally, structured data, or the enterprise databases, have been the more critical data to protect, but more and more, unstructured data is also becoming a source of competitive differentiation for customers. Think about artificial intelligence, machine learning, internet of things, a lot of edge computing. And a lot of this data is actually being stored on highly scalable NAS platforms like Dell EMC PowerScale. And this is where, given the volume of the data involved, we actually have a unique solution for unstructured data, to protect it from cyberattacks and also to have the recovery mechanisms in place. So most of the audience might have already heard about the PowerProtect Cyber Recovery solution, but for unstructured data, we have something unique in the industry in terms of rapid recovery of large amounts of data within a few hours, for a business to be up and running in the event of a cyberattack.
So when it comes to the data protection technologies on the PowerScale platform, starting from the operating system, OneFS already has a great foundation in terms of access control, separate access zones that can be protected. And these things work across multiple protocols, which is a really key thing about how this technology works in terms of access control. But thanks to the great technology that Andrew and his team are building, the Ransomware Defender, real-time access auditing, these products form the core of the cyber resiliency framework when it comes to unstructured data on PowerScale platforms. >> Got it. Andrew, let's talk about the NIST framework. As we've talked about in the last few minutes, cybersecurity has really become quite a business. Unfortunately, in the last 18 months, we've seen huge x-fold increases in ransomware attacks on any type of company. Talk to me about where those conversations are. Are you having conversations at the board level, at the C-level, in terms of the right cyber resiliency framework that organizations need to put in place?
We've married those two requirements into a single product. So we actually look at the whole framework and can comply with all aspects of that, including the offline component. And that's one of the sort of secret sauce, part of our solution is that we can both protect at the source and maintain and monitor the offline copy of the data as well at the same time. >> So, the offline copy, interesting. Talk to me about how frequently is that updated so that if a business has to go back and restore and recover, they can. What's that timeframe of how frequently that's updated? >> So generally, we recommend about 24 hours. Because in reality, it's going to take time to uncover that there's something seriously wrong with your production data. In the case of our solution, the hope and intent is that really the problem is addressed right at the source, meaning we've detected ransomware on the source data and we can protect it and stop it before it actually ends up in your cyber vault. That's really the key to our solution. But if you have that day, recovery with the Isilon PowerScale snapshotting features, you can revert petabytes of data and bring it online in a worst-case scenario. And we tell customers, you need to work backwards from what is the worst case. And if you do that, you're going to realize that what you need is petabyte scale data recovery with your offline data. And that's a very hard problem to solve that we think we've solved really well with the PowerScale. >> And just sticking with you for a second. In the last year and a half, since things have been so turbulent, have you seen any industries in particular that have come to you saying, we really need to get ahead of this challenging situation as we've seen attacks across infrastructure? I mean, you name it, we've seen it. >> Yeah, the number one vertical for sure is healthcare. Healthcare has been the target. In fact, it was last October. 
I think the FBI made an announcement to all healthcare organizations to improve their cybersecurity. That's probably our largest vertical, but there really isn't a vertical that doesn't feel the need to do something more than they are today. Finance of course, manufacturing, retail. Basically, there's no target that isn't the target these days. But I would say for sure, it's going to be healthcare, because they have a willingness and a need to have their data online all the time. >> Right, and it's absolutely such critical information. Parasar, back to you. I'm curious to understand, maybe, any joint customers that you guys are working together with, and what are some of the recovery time and recovery point objectives that you're able to help them achieve? >> Sure, Lisa. So with Ransomware Defender, for example, there are almost a thousand customers, we are very close, I think the exact number is around 970 or something, that have adopted this set of tools to boost their cyber resiliency, in terms of being able to detect these attack patterns or any indications of a compromise through the way data is being accessed, or the kinds of users that are accessing the data, and so on. But also, when it comes to isolation of the data, there has also been a lot of interest from customers in being able to have this cyber vault, which is air-gapped from your primary infrastructure, and which, of course, is regulated with a lot of intelligence, in terms of looking for any flags, to either keep the connection open and continue to replicate data, or to terminate the connection and keep the cyber vault secure. So, absolutely. >> Andrew, how do you guys help? First of all, is it possible for companies to be able to stay ahead of the attackers? The attackers are also taking advantage of the emerging technologies that businesses are, but if the answer is yes, how do you help companies stay ahead of those attackers?
>> I think a prime example of that is if you look at ransomware today, and there's publicized versions or variants or names of it, they all attack files. But the bad actors are looking for the weak link. They're always looking for the weak link to go after the corporate data. And so the new frontier is object storage, because these types of systems store compliance data. It's frequently used to store backup data, and that is a prime target for attackers. And the security tools and the maturity of the technology to protect object data are nowhere near what's in place for file data. So we've announced and released the ability to protect object data in real time, the same way we've already done it for years for file data, because we understand that that's just the next target. And so we're offering that type of solution in a unified single product. >> And the last question, Parasar, for you. Where can folks go to learn more about this joint solution, and how can they get started with it? >> Sure, Lisa. delltechnologies.com/powerscale, that's the unstructured data platform, or the scale-out NAS platform, from Dell Technologies. And we have great content there to educate customers about the nature of these cyberattacks, what kind of data is at risk, and what are the kinds of steps that they can take, to the point that Andrew mentioned, to build a cyber resiliency strategy, as well as how to use these tools effectively to protect against attacks and also be very agile when it comes to recovery. >> Right, that agility with respect to recovery is critical, because as we know, the trends are that we're only going to see cybersecurity risks and attacks increase, and businesses in every industry are vulnerable and really need to put in place the right types of strategies and solutions to be able to recover when something happens. Guys, thank you so much for joining me. This is such an interesting topic. Great to hear about the partnership with Superna and Dell Technologies.
And I'm sure your joint customers are very appreciative of the work that you're doing together. >> Thank you, Lisa. >> Great, thank you. >> For my guests, I'm Lisa Martin, and you're watching a CUBE conversation. (upbeat music)
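To make the "protect at the source plus offline cyber vault" pattern discussed in this interview concrete, here is a minimal toy model. It is an illustration only: the class and function names are hypothetical, and this is not Superna's or Dell's actual implementation. It sketches the roughly 24-hour cycle Andrew describes, where the replication link to the air-gapped vault is opened only when no ransomware indicators have been flagged at the source.

```python
# Toy model of an air-gapped "cyber vault" replication cycle.
# All names here are hypothetical illustrations, not a real product API.

class CyberVault:
    """Holds offline snapshot copies behind a link that is normally closed."""

    def __init__(self):
        self.copies = []
        self.link_open = False

    def open_link(self):
        self.link_open = True

    def close_link(self):
        self.link_open = False

    def replicate(self, snapshot):
        # The vault only accepts data while the link is explicitly open.
        if not self.link_open:
            raise RuntimeError("vault link is closed (air-gapped)")
        self.copies.append(snapshot)


def daily_cycle(vault, snapshot, anomaly_detected):
    """One cycle: skip replication entirely if detection at the source
    flagged suspicious IO; otherwise open the link just long enough to
    copy the snapshot, then close it again."""
    if anomaly_detected:
        return False  # terminate the connection, keep the vault secure
    vault.open_link()
    try:
        vault.replicate(snapshot)
    finally:
        vault.close_link()
    return True
```

On a clean day the snapshot is copied and the link re-closed; on a flagged day the link is never opened, so a compromised source cannot reach the offline copy, which is the point of the air gap.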
Maurizio Davini, University of Pisa and Kaushik Ghosh, Dell Technologies | CUBE Conversation 2021
>>Hi, Lisa Martin here with the cube. You're watching our coverage of Dell technologies world. The digital virtual experience. I've got two guests with me here today. We're going to be talking about the university of Piza and how it is leaning into all flash data lakes powered by Dell technologies. One of our alumni is back MERITO, Debbie, and the CTO of the university of PISA. Maricio welcome back to the cube. Thank you. Very excited to talk to you today. CAUTI Gosha is here as well. The director of product management at Dell technologies. Kaushik. Welcome to the cube. Thank you. So here we are at this virtual event again, Maricio you were last on the cube at VMworld a few months ago, the virtual experience as well, but talk to her audience a little bit before we dig into the technology and some of these demanding workloads that the university is utilizing. Talk to me a little bit about your role as CTO and about the university. >>So my role as CTO at university of PISA is, uh, uh, regarding the, uh, data center operations and, uh, scientific computing support for these, the main, uh, occupation that, uh, that, uh, yeah. Then they support the world, saw the technological choices that university of PISA is, uh, is doing, uh, during the latest, uh, two or three years. >>Talk to me about some, so this is a, in terms of students we're talking about 50,000 or so students 3000 faculty and the campus is distributed around the town of PISA, is that correct? Maricio >>Uh, the university of PISA is sort of a, uh, town campus in the sense that we have 20 departments that are, uh, located inside the immediate eval town, uh, but due to the choices, but university of peace, I S uh, the, uh, last, uh, uh, nineties, uh, we are, uh, owner of, uh, of a private fiber network connecting all our, uh, departments and allow the templates. And so we can use the town as a sort of white board to design, uh, uh, new services, a new kind of support for teaching. 
>>So you've really modernized the data infrastructure for a university that was founded in the Middle Ages. Talk to me now about some of the workloads that are generating massive amounts of data, and then we'll get into what you're doing with Dell Technologies. >>So the University of Pisa has quite a long history in traditional HPC. We support the traditional workloads from CAE, engineering, chemistry, and oil and gas simulations. But during the pandemic year, especially last year, we had new kinds of workloads: some related to the fast movement of HPC from traditional HPC to AI and machine learning, and others related to the request to support a lot of remote activities coming from distance learning, and to remotize laboratories, workstations and whatever else was done in presence in the past. And so the impact on the infrastructure, and especially on the storage part, was significant. >>So you talked about utilizing high performance computing environments for a while for scientific computing; I saw a case study that you've done with Dell. But then during the pandemic, the use case of remote learning brought additional challenges to your environment. How were you able to transfer your curriculum online and enable the scientists, the physicists, the oil and gas folks doing research to still access that data at the speed they needed? >>You know, for what regards distance learning, of course we were based on cloud services, not provided internally by us.
We relied on Microsoft services, Google services and so on. But what regards internal support for scientific computing was completely remotized, both the support and the user experience. I can bring some examples. For laboratory activities, access to the laboratories was remotized as much as possible. We designed a special network to connect all the laboratories and to give the researchers the possibility of accessing the data on this special network, a sort of collector of data inside our university network. You can imagine that virtualization, for example, was a key factor for us, because virtualization was an easy and flexible way to deliver new services, especially if you have to set up systems for remote access. So, as I told you before about the network as a whiteboard: the compute infrastructure, with VMware virtualization, was treated the same way as we designed new services, either interactive services or, especially, scientific computing. For example, we have had a good experience with virtualization of HPC workloads. >>Talk to me about the storage impact, because as we know these very demanding, unstructured workloads like AI and machine learning can be difficult for most storage systems to handle. Maurizio, talk to us about why you leaned into all-flash with Dell Technologies, and tell us a little bit about the technologies you've implemented.
>>If I have to think about our storage infrastructure before the pandemic, I have to think about Isilon, because our HPC workloads were mainly based on Isilon as a storage infrastructure, together with some parallel file systems we were deploying in-house. But especially with the explosion of AI, the profile of the storage requests changed a lot, and in our case the Isilon solution didn't fit so well for AI. This is why we started with a data migration. It was not really a migration, but a sort of integration of the PowerScale all-flash machine inside our environment, because the PowerScale all-flash, and especially, looking to the future, the NVMe support, is a key factor for the storage we need. We already have experience with some of the NVMe possibilities on the PowerMax that we have here, which we use in part for VDI support. But all-flash, and in the end NVMe, is what we need. >>Gotcha. Kaushik, talk to me about what Dell Technologies has seen in terms of the demand for this. As Maurizio said, they were using Isilon before adding in PowerScale. What are some of the changing demands Dell Technologies has seen, and how do technologies like PowerScale and the F900 help these organizations rapidly change their environments so that they can utilize and extract the value from data? >>Yeah, no, absolutely. Artificial intelligence is an area that continues to amaze me, and personally I think the potential here is immense.
As Maurizio said, the data sets with artificial intelligence have grown significantly, and not only has the data become larger, the AI models that are used have become more complex. For example, one of the studies suggests that for modeling of natural language processing, one of the fields in AI, the number of parameters used could exceed about a trillion in a few years, almost the size of a human brain. So that means not only is there a lot more data to be processed and stored, but it probably has to be done in the same amount of time as before, perhaps even a smaller amount of time: a larger data set in the same, or perhaps even smaller, amount of time. So absolutely, I agree. For these types of workloads, you need storage that gives you that high-performance access, but is also able to store that data economically. >>And how does Dell Technologies deliver that, the ability to scale, the economics? What's unique and differentiated about PowerScale? >>So PowerScale is our all-flash system. It's based on the same technology and offers some of the same capabilities that the Isilon products used to offer, the OneFS file system capabilities, some of the capabilities that Maurizio has used and loved in the past. Some of those same capabilities are brought forward now on this PowerScale platform. There are some changes: for example, the new PowerScale platform supports Nvidia GPUDirect. For artificial intelligence workloads, you do need these GPU-capable machines, and PowerScale supports those high-performance GPUDirect machines through the different technologies that we offer.
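The trillion-parameter figure Kaushik mentions implies a lot of bytes before any training data even enters the picture. A hedged back-of-envelope check (the bytes-per-parameter values are common numeric precisions, not numbers from this conversation):

```python
# Rough storage math for a trillion-parameter NLP model.
# Bytes-per-parameter values are illustrative assumptions:
# 2 bytes for FP16 weights, 4 bytes for FP32.
def model_size_tb(num_params, bytes_per_param):
    """Return model weight size in terabytes (10**12 bytes)."""
    return num_params * bytes_per_param / 10**12

trillion = 10**12
print(model_size_tb(trillion, 2))  # FP16: 2.0 TB of weights alone
print(model_size_tb(trillion, 4))  # FP32: 4.0 TB
```

Even before checkpoints, optimizer state, or the training corpus itself, the weights alone land in the terabyte range, which is the kind of pressure on storage bandwidth the conversation is describing.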
And the PowerScale F900, which we are going to launch very soon, is our highest-performance all-flash and the most economical all-flash to date. So not only is it our fastest, it also offers the most economical way of storing the data: ideal for these types of high-performance workloads like AI, ML, deep learning and so on. >>Excellent. So talk to me about some of the results the university is achieving so far. I did read about a three X improvement in IO performance, and you were able to get nearly a hundred percent of the curriculum online pretty quickly. But talk to me about some of the other impacts Dell Technologies is helping the university to achieve. >>We are an all-around, long-time Dell customer, and if you give a look inside our data centers, we typically joke that they are a sort of Dell Technologies supermarket, in the sense that the greater part of our server and storage environment comes from Dell Technologies: several generations of PowerEdge servers, PowerMax, Isilon, PowerScale, PowerStore. So we are using a lot of Dell technologies here, and of course in the past our traditional workloads were well supported by those technologies.
And Dell Technologies is driving us towards what we call the next generation workloads, because they are accompanying us in the transition to the next generation of computing. If I have to give a look at what we are doing here, it is mostly healthcare workloads, deep learning, data analysis, image analysis and image feature extraction, and everything has to be supported by the next generation servers, typically equipped with GPUs. This is why GPU capability is so important for us. But it must also be supported on the networking side, because the speed of the storage must be tied to next generation networking, low-latency and high-performance, because at the end of the day you have to bring the data to the storage and the GPUs. So low-latency, high-performance interconnects are also a side effect of these new workloads, and of course the technology is there. >>I love how you described your data centers as a Dell Technologies supermarket; maybe that's a different way of talking about a center of excellence. That's something I want to ask you about: I know that the University of Pisa is a Center of Excellence for Dell. In the last couple of minutes we have here, talk to me about what that entails and how Dell helps customers become a center of excellence. >>Yeah, so the university, as Maurizio talked about, has a lot of the Dell products today. And in fact, he mentioned the PowerEdge servers; the PowerScale F900 is actually based on a PowerEdge server. So you can see a lot of these technologies are interlinked with each other: they talk to each other, they work together.
And that sort of helps customers manage the entire ecosystem and data life cycle together, versus as piece parts, because we have solutions that address all aspects of the needs of a customer like Maurizio. So I'm glad Maurizio is leveraging Dell, and I'm happy we are able to help him solve all his use cases. >>Excellent. Maurizio, last question. Are you going to be using AI and machine learning, powered by Dell, to determine if the Tower of Pisa is going to continue to lean, or if it's going to stay where it is? >>The leaning tower is an engineering miracle. Some years ago an incredible engineering work was able to fix the leaning for a while, and hopefully the Tower of Pisa will stay there, because it is one of the beauties that you can come to visit. >>And that's one part of Italy I haven't been to, so post-pandemic I've got to add that to my travel plans. Maurizio and Kaushik, it's been a pleasure talking to you about how Dell is partnering with the University of Pisa to really help you power AI and machine learning workloads, to facilitate many use cases. We look forward to hearing what's next. Thanks for joining me this morning, and thank you to my guests. I'm Lisa Martin. You're watching theCUBE's coverage of Dell Technologies World, the digital event experience.
Eric Herzog & Sam Werner, IBM | CUBEconversation
(upbeat music) >> Hello everyone, and welcome to this "Cube Conversation." My name is Dave Vellante, and you know, containers used to be stateless and ephemeral, but they're maturing very rapidly. As cloud native workloads become more functional and go mainstream, persisting and protecting the data that lives inside of containers is becoming more important to organizations. Enterprise capabilities such as high availability, reliability, scalability and other features are now more fundamental and important, and containers are the linchpin of hybrid cloud, cross-cloud and edge strategies. Now, fusing these capabilities together across these regions in an abstraction layer that hides the underlying complexity of the infrastructure is where the entire enterprise technology industry is headed. But how do you do that without making endless copies of data and managing versions, not to mention the complexities and costs of doing so? With me to talk about how IBM thinks about and is solving these challenges are Eric Herzog, who's the Chief Marketing Officer and VP of Global Storage Channels for the IBM Storage Division, and Sam Werner, the vice president of offering management and the business line executive for IBM Storage. Guys, great to see you again. I wish we were face to face, but thanks for coming on "theCUBE." >> Great to be here. >> Thanks Dave, as always. >> All right guys, you heard my little spiel there about the problem statement. Eric, maybe you could start us off. I mean, is it on point? >> Yeah, absolutely. What we see is containers are going mainstream. I frame it very similarly to what happened with virtualization, right? It got brought in by the dev team, the test team, the applications team, and then eventually, of course, it became mainstream. Containers are going through exactly that right now, brought in by the dev ops people, the software teams.
And now it's becoming persistent again: real production clients that want to deploy a million of them. Just the way they historically deployed a million virtual machines, now they want a million containers, or 2 million. So now it's going mainstream, and the feature functions you need once you take it out of the test, play-around stage into the real production phase really change the ball game: the features you need, the quality of what you get, and the types of things you need the underlying storage, and the data services that go with that storage, to do in a fully container world. >> So Sam, how'd we get here? I mean, containers have been around forever. You look inside Linux, right? But then they did, as Eric said, go mainstream. It started out kind of experimental; as I said, they're ephemeral, you didn't really need to persist them. But it's changed very quickly. Maybe you could talk to that evolution and how we got here. >> Well, look, this is all about agility, right? It's about enterprises trying to accelerate their innovation. They started off by using virtual machines to try to accelerate access to IT for developers, and developers are constantly out running ahead. They've got to go faster and they have to deliver new applications. Business lines need to figure out new ways to engage with their customers, especially now, with the past year having further accelerated this need to engage with customers in new ways. So it's about being agile. Containers promise or provide a lot of the capabilities you need to be agile. What enterprises are discovering is that a lot of these initiatives are starting within the business lines, and they're building these applications, making these architectural decisions, and building dev ops environments on containers. And what they're finding is they're not bringing the infrastructure teams along with them.
And they're running into challenges that are inhibiting their ability to achieve the agility they want, because their storage needs aren't keeping up. So this is a big challenge that enterprises face. They want to use containers to build a more agile environment to do things like dev ops, but they need to bring the infrastructure teams along. And that's what we're focused on now: how do you make that agile infrastructure to support these new container worlds? >> Got it. So Eric, you guys made an announcement to directly address these issues; it's kind of a fire hose of innovation. Maybe you could take us through it, and then we can unpack that a little bit. >> Sure. So what we did is, on April 27th, we announced IBM Spectrum Fusion. This is a fully container-native, software-defined storage technology that integrates a number of proven, battle-hardened technologies that IBM has been deploying in the enterprise for many years. That includes a global scalable file system that can span edge, core and cloud seamlessly, with a single copy of the data. So no more data silos, and no more 12 copies of the data, which of course drive up CapEx and OpEx. Spectrum Fusion reduces that and makes it easier to manage, cutting the cost from a CapEx perspective and from an OpEx perspective. By being fully container native, it's ready to go for the container-centric world and can span all types of areas. So what we've done is create a storage foundation, which is what you need at the bottom: things like the single global namespace, single accessibility, and local caching. So with your edge, core and cloud, regardless of where the data is, you think the data's right with you, even if it physically is not. So that allows people to work on it. We have file locking and other technologies to ensure that the data is always good.
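Eric's single-copy global namespace with local caching can be sketched in miniature. This is a toy read-through cache for illustration only, not Spectrum Fusion's implementation; every class and method name here is invented:

```python
class GlobalNamespace:
    """Toy model of a single namespace spanning remote sites,
    with a read-through local cache (illustrative only)."""

    def __init__(self, remote_store):
        self.remote = remote_store   # maps path -> bytes, "at another site"
        self.cache = {}              # local copies of recently read files

    def read(self, path):
        # Serve locally if cached; otherwise fetch once and keep it.
        if path not in self.cache:
            self.cache[path] = self.remote[path]
        return self.cache[path]

core_site = {"/data/model.bin": b"weights"}
edge = GlobalNamespace(core_site)
print(edge.read("/data/model.bin"))      # first read pulls from the core site
print("/data/model.bin" in edge.cache)   # True: the next read is served locally
```

The point of the sketch is the access pattern, not the mechanics: the application always sees one namespace and one logical copy, and locality is a caching detail underneath it.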
And then of course we've imbued it with the HA and disaster recovery, the backup and restore technology, which we've had for years and have now made fully container native. So Spectrum Fusion basically takes several elements of IBM's existing portfolio, makes them container native, and brings them together into a single piece of software. And we'll provide that both as a software-defined storage technology, early in 2022, and, as our first pass, as a hyperconverged appliance, which will be available next quarter, in Q3 of 2021. That of course means it'll come with compute, it'll come with storage, it'll come with a rack even, it'll come with networking. And because we can preload everything for the end users or for our business partners, it will also include Kubernetes, Red Hat OpenShift and Red Hat's virtualization technology, all in one simple package, all ease of use, and a single management GUI to manage everything, both the software side and the physical infrastructure that's part of the hyperconverged system-level technology. >> So maybe you can help us understand the architecture, and maybe the prevailing ways in which people approach container storage. What does the stack look like? And how have you guys approached it? >> Yeah, that's a great question. Really, there are three layers that we look at when we talk about container-native storage. It starts with the storage foundation, which is the layer that actually lays the data out onto media, does it in an efficient way, and makes that data available where it's needed. So that's the core of it. And the quality of your storage services above that depends on the quality of the foundation that you start with. Then you go up to the storage services layer. This is where you bring in capabilities like HA and DR. People take this for granted, I think, as they move to containers. We're talking about moving mission-critical applications now into a container and hybrid cloud world.
How do you actually achieve the same levels of high availability you did in the past? If you look at what large enterprises do, they run three-site or four-site replication of their data with HyperSwap, and they can ensure high availability. How do you bring that into a Kubernetes environment? Are you ready to do that? We talk about how only 20% of applications have really moved into a hybrid cloud world; the thing that's inhibiting the other 80% is these types of challenges, okay? So the storage services include HA, DR, data protection, data governance and data discovery. You talked about making multiple copies of data: it creates complexity, and it also creates risk and security exposures. If you have multiple copies of data, say you needed data to be available in the cloud so you made a copy there, how do you keep track of that? How do you destroy the copy when you're done with it? How do you keep up with governance and GDPR, right? So if I have to delete data about a person, how do I delete it everywhere? So there are a lot of these different challenges. These are the storage services. So we talk about a storage services layer: layer one is the data foundation, layer two is the storage services, and then there needs to be a connection into the application runtime. There has to be application awareness to do things like high availability and application-consistent backup and recovery. So then you have to create the connection, and in our case we're focused on OpenShift, right? When we talk about Kubernetes, how do you create the knowledge between layer two, the storage services, and layer three, the application services? >> And so this is your three-layer cake. And then, as far as the policies that I want to inject, you've got an API out and entries in, and I can use whatever policy engine I want. How does that work? >> So we're creating consistent sets of APIs to bring those storage services up into the application runtime.
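Sam's three layers can be reduced to a toy stack. None of this is IBM code; the classes and method names are invented purely to show why layer three (application awareness) has to call down into layer two to get an application-consistent backup:

```python
# Toy sketch of the three-layer model; all names are invented.
class DataFoundation:
    """Layer 1: lays data onto media and serves it back."""
    def __init__(self):
        self.blocks = {}
    def write(self, key, data):
        self.blocks[key] = data
    def read(self, key):
        return self.blocks[key]

class StorageServices:
    """Layer 2: services such as backup/restore built on layer 1."""
    def __init__(self, foundation):
        self.foundation = foundation
        self.backups = {}
    def backup(self, key):
        self.backups[key] = self.foundation.read(key)

class App:
    """Layer 3: the application knows when its state is consistent."""
    def __init__(self, services):
        self.services = services
    def consistent_backup(self, key):
        self.quiesce()                 # flush in-flight work first
        self.services.backup(key)      # only then snapshot via layer 2
        self.resume()
    def quiesce(self):
        pass  # stub: a real app would pause/flush transactions here
    def resume(self):
        pass

foundation = DataFoundation()
services = StorageServices(foundation)
app = App(services)
foundation.write("db", "rows-v1")
app.consistent_backup("db")
print(services.backups["db"])  # rows-v1
```

The design point is the direction of the calls: storage alone cannot know when a database is in a consistent state, so the connection between layers two and three is what makes the backup application-consistent rather than merely crash-consistent.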
We in IBM have things like IBM Cloud Satellite, which brings the IBM public cloud experience to your data center, or into other public cloud environments, giving you one hybrid cloud management experience. We'll integrate there, giving you that consistent set of storage services within IBM Cloud Satellite. We're also working with Red Hat on their Advanced Cluster Manager, also known as RHACM, to create multi-cluster management of your Kubernetes environment with that consistent experience. Again, one common set of APIs. >> So the appliance comes first, is that right? Okay, so is that just time to market, or is there a sort of enduring demand for appliances? Some customers want that; maybe you could explain that strategy. >> Yeah, so first let me take it back a second and look at our existing portfolio. Our award-winning products are both software-defined and system-based. So for example, Spectrum Virtualize comes on our FlashSystem, and Spectrum Scale comes on our Elastic Storage System. We've had this model where we provide the exact same software, both on an array or as a standalone piece of software. This is unique in the storage industry. When you look at our competitors, when they've got something that's embedded in their array, their array manager if you will, that's not what they'll try to sell you as software-defined storage. And of course, many of them don't offer software-defined storage in any way, shape or form. So we've done both. So with Spectrum Fusion, we'll have a hyperconverged configuration, which will be available in Q3, and a software-defined configuration, which will be available at the very beginning of 2022. We wanted to get market feedback from our clients and from our business partners, and by doing a container-native HCI technology, we're way ahead. We're going to where the puck is. We're throwing the ball ahead of the wide receiver.
If you're a soccer fan, we're making sure the midfielder got the ball to the forward ahead of time so you could kick the goal right in. That's what we're doing. Other technologies lead with virtualization, which is great, but virtualization is kind of old hat, right? VMware and other virtualization layers have been around for 20 years now. Containers are where the world is going. And by the way, we'll support everything. We still have customers in certain worlds that are using bare metal; guess what, we work fine with that. We work fine with virtual, as we have a tight integration with both Hyper-V and VMware. So some customers will still do that. And containers is the new wave. So with Spectrum Fusion, we are riding the wave, not fighting the wave, and that way we can meet all the needs, right? Bare metal, virtual environments and container environments, in a way that is all based on the end users' applications, workloads and use cases. What goes where, and IBM Storage can provide all of it. So we'll give them two methods of consumption by early next year. And we started with hyperconverged first because, A, we felt we had a lead, truly a lead. Other people are leading with virtualization; we're leading with OpenShift and containers. We're the first full container-native, ground-up, OpenShift-based hyperconverged offering of anyone in the industry, versus somebody who's done VMware or some other virtualization layer and then sort of glommed on containers as an afterthought. We're going to where the market is moving, not to where the market has been. >> So just to follow up on that: you've got the sort of Switzerland DNA, and it's not just OpenShift and Red Hat and the open source ethos. I mean, it goes all the way back to SAN Volume Controller back in the day, where you could virtualize anybody's storage. How is that carrying through to this announcement? >> So Spectrum Fusion is doing the same thing.
Spectrum Fusion, which has many key elements brought in from our history with Spectrum Scale, supports non-IBM storage, for example EMC Isilon NFS. Fusion will support Spectrum Scale, Fusion will support our Elastic Storage System, and Fusion will support NetApp filers as well. Fusion will support IBM Cloud Object Storage, both as software-defined storage or as an array technology, and Amazon S3 object stores, and any other object storage vendor that's compliant with S3. All of those can be part of the global namespace, the scalable file system. We can bring in, for example, object data without making a duplicate copy. The normal way to do that is you make a duplicate copy: you had a copy in the object store, and you make a copy to bring it into the file system. Well, guess what, we don't have to do that. So again, cutting CapEx and OpEx, and ease of management. Just as we do with our FlashSystem products and our Spectrum Virtualize and the SAN Volume Controller, where we support over 550 storage arrays that are not ours, that are our competitors', with Spectrum Fusion we've done the same thing: Fusion supports Spectrum Scale, the IBM ESS, IBM Cloud Object Storage, Amazon S3 object stores as well as other S3-compliant object stores, EMC Isilon NFS, and NFS from NetApp. And by the way, we can do the discovery model as well, not just integration in the system. So we've made sure that we really do protect existing investments, and we try to eliminate traversal: with the discovery capability, you've got AI or analytics software connecting with the API into the discovery technology. You don't have to traverse and try to find things, because the discovery will create real-time metadata cataloging and indexing, not just of our storage but of the other storage I mentioned, which is the competition's. Talk about making it easier to use, particularly for people who are heterogeneous in their storage environment, which is pretty much the bulk of the global Fortune 1500, for sure.
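The discovery idea Eric describes, indexing metadata across heterogeneous systems so analytics tools query a catalog instead of crawling each array, might look like this in toy form. The backend names and record fields are invented for illustration, not Fusion's actual schema:

```python
# Toy metadata catalog over heterogeneous storage backends.
# Discovery indexes files in place: nothing is copied or moved.
catalog = []

def discover(backend_name, listing):
    """Index a backend's file listing into the shared catalog."""
    for path, size in listing.items():
        catalog.append({"backend": backend_name, "path": path, "size": size})

discover("ibm-ess", {"/proj/a.csv": 120})
discover("netapp-nfs", {"/proj/b.csv": 300})
discover("s3-bucket", {"logs/c.json": 50})

# A query hits only the catalog, never the storage systems themselves.
big = [e for e in catalog if e["size"] > 100]
print([e["path"] for e in big])  # ['/proj/a.csv', '/proj/b.csv']
```

The practical payoff is the one Eric names: an analytics tool asking "which datasets over 100 units exist, anywhere?" gets an answer from the index in one call, regardless of which vendor's system actually holds the bytes.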
And so we're allowing them to use multiple vendors but derive real value with Spectrum Fusion, and get all the capabilities of Spectrum Fusion and all the advantages of the enterprise data services, not just for our own products but for the other products as well that aren't ours. >> So Sam, we understand the downside of copies, but if you're not doing multiple copies, how do you deal with latency? What's the secret sauce here? Is it the file system? Is there other magic in here? >> Yeah, that's a great question, and I'll build a little bit off of what Eric said. Look, one of the really great and unique things about Spectrum Scale is its ability to consume any storage, and we can actually allow you to bring in data sets from where they are. The data could have originated in object storage, and we'll cache it into the file system. It can be on any block storage; it can literally be on any storage you can imagine, as long as you can integrate a file system with it. And as you know, most applications run on top of a file system, so it naturally fits into your application stack. Spectrum Scale, uniquely, is a globally parallel file system. There are not very many of them in the world, and there are none that can achieve what Spectrum Scale can do. We have customers running in the exabytes of data, and the performance improves with scale. So you can actually deploy Spectrum Scale on-prem, build out an environment of it consuming whatever storage you have, then go into AWS or IBM Cloud or Azure, deploy an instance of it, and it will now extend your file system into that cloud. Or you can deploy it at the edge, and it'll extend your file system to that edge. This gives you the exact same set of files and visibility, and we'll cache in only what's needed. Normally you would have to make a copy of data into the other environment and then deal with that copy later; let's say you were doing a cloud bursting use case.
Let's look at that as an example, to make this real. You're running an application on-prem. You want to spin up more compute in the cloud for your AI. Normally you'd have to make a copy of the data. You'd run your AI. Then you have to figure out what to do with that data. Do you copy some of it back? Do you sync them? Do you delete it? What do you do? With Spectrum Scale, we'll just automatically cache in whatever you need. It'll run there, and when you're done you can spin it down. Your copy is still on-prem. You know, no data is lost. We can actually deal with all of those scenarios for you. And then if you look at what's happening at the edge, a lot of, say, video surveillance data pouring in. Looking at the manufacturing floor, looking for defects. You can run AI right at the edge, make it available in the cloud, make that data available in your data center. Again, one file system going across all. And that's something unique in our data foundation built on Spectrum Scale. >> So there's some metadata magic in there as well, and that intelligence based on location. And okay, so you're smart enough to know where the data lives. What's the sweet spot for this, Eric? Are there any particular use cases or industries that we should be focused on, or is it across the board? >> Sure, so first let's talk about the industries. We see certain industries going to containers quicker than other industries. So first is financial services; we see it happening there. Manufacturing, Sam already talked about AI-based manufacturing platforms. We actually have a couple of clients right now doing autonomous driving software with us on containers, even before Spectrum Fusion, with Spectrum Scale. We see the public sector, of course, and healthcare, and in healthcare don't just think care delivery. At IBM that includes the research guys. So the genomic companies, the biotech companies, the drug companies are all included in that. And then of course, retail, both on-prem and off-prem.
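The cloud-bursting pattern Sam walks through, cache on read with nothing to reconcile on spin-down, looks roughly like this. A minimal sketch with invented names, not Spectrum Scale's actual interface:

```python
class BurstCache:
    """Toy cache-on-read tier: the cloud instance pulls only the blocks it
    touches, and the on-prem copy stays the single authoritative version."""

    def __init__(self, origin):
        self.origin = origin  # authoritative on-prem dataset
        self.cache = {}       # blocks pulled into the cloud tier

    def read(self, key):
        if key not in self.cache:          # cache miss: fetch on demand
            self.cache[key] = self.origin[key]
        return self.cache[key]

    def spin_down(self):
        """Tear down the cloud tier; nothing to sync, copy back, or delete."""
        self.cache.clear()

origin = {"frame-001": b"...", "frame-002": b"..."}
burst = BurstCache(origin)
burst.read("frame-001")
print(len(burst.cache))  # 1 (only the touched block was cached)
burst.spin_down()
print(len(burst.cache))  # 0 (the on-prem copy is untouched)
```

The point of the sketch is the lifecycle: none of the "do you sync, do you delete" questions arise, because the burst tier was never a copy in the first place.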
So those are sort of the industries. Then we see, from an application workload perspective, basically AI, analytics, and big data applications or workloads are the key things that Spectrum Fusion helps you with, because of its file system. It's high performance. And those applications are tending to spread across core, edge, and cloud. So those applications are spreading out. They're becoming broader than just running in the data center. And by the way, if they want to run it just in the data center, that's fine. Or, perfect example, we have a giant global auto manufacturer. They've got factories all over. And if you think there aren't compute resources in every factory, there are, because those factories (I just saw an article, actually) cost about a billion dollars to build, a billion. So they've got their own IT, and it's connected to their core data center as well. So that's a perfect example of the enterprise edge where Spectrum Fusion would be an ideal solution, whether they did it as software-defined only or, of course, when you've got a billion-dollar factory just to build it, let alone produce the autos or whatever you're producing. Silicon, for example, those fabs all cost a billion. That's where the enterprise edge fits in very well with Spectrum Fusion. >> So in those industries, what's driving the adoption of containers? Is it just that they want to modernize? Is it because they're doing some of those workloads that you mentioned, or is it the edge? Like you mentioned manufacturing, I could see that potentially being an edge driver. >> Well, it's a little bit of all of those, Dave. For example, virtualization came out, and virtualization offered advantages over bare metal, okay? Now containerization has come out, and containerization is offering advantages over virtualization. The good thing at IBM is we know we can support all three.
And we know again, in the global Fortune 2000, 1500, they're probably going to run all three, based on the application workload or use case. And our storage is really good at bare metal, very good at virtualization environments, and now with Spectrum Fusion, container-native, outstanding for container-based environments. So we see these big companies will probably have all three, and IBM Storage is one of the few vendors, if not the only vendor, that can adroitly support all three of those various workload types. So that's why we see this as a huge advantage. And again, the market is going to containers. We are. I'm a native Californian: you don't fight the wave, you ride the wave, and the wave is containers, and we're riding that wave. >> If you don't ride the wave you become driftwood, as Pat Gelsinger would say. >> And that is true, another native Californian. I'm a whole boss. >> So, okay, so, I wonder, Sam, I sort of hinted upfront in my little narrative there, but the way we see this, you've got on-prem, hybrid, you've got public clouds, across clouds, moving to the edge. OpenShift, as I said, is the linchpin to enabling some of those. And what we see is this layer that abstracts the complexity, hides the underlying complexity of the infrastructure, which becomes kind of an implementation detail. Eric talked about skating to where the puck is going, or whatever sports analogy you want to use. Is that where the puck is headed? >> Yeah, I mean, look, the bottom line is you have to remove the complexity for the developers. Again, the name of the game here is all about agility. You asked why these industries are implementing containers? It's about accelerating their innovation and their services for their customers. It's about leveraging AI to gain better insights about their customers, delivering what they want, and improving their experience. So if it's all about agility, developers don't want to wait around for infrastructure. You need to automate it as much as possible.
So it's about building infrastructure that's automated, which requires consistent APIs. And it requires abstracting out the complexity of things like HA and DR. You don't want every application owner to have to figure out how to implement that. You want to make those storage services available and easy for a developer to implement and integrate into what they're doing. You want to ensure security across everything you do, as you bring more and more of your data, of your information about your customers, into these container worlds. You've got to have security rock solid. You can't leave any exposures there, and you can't afford downtime. There are increasing threats from things like ransomware. You don't see it in the news every day, but it happens every single day. So how do you make sure you can recover when an event happens to you? So yes, you need to build an abstracted layer of storage services, and you need to make it simply available to the developers in these DevOps environments. And that's what we're doing with Spectrum Fusion. We're taking an, I think, extremely unique and one-of-a-kind storage foundation with Spectrum Scale that gives you a single namespace globally, and we're building onto it an incredible set of storage services, making it extremely simple to deploy enterprise-class container applications. >> So what's the bottom-line business impact? I mean, how does this change... I mean, Sam, I think you articulated very well that it's all about serving the developers, versus, you know, a storage admin provisioning a LUN. So how does this change my organization, my business? What's the impact there? >> I'll mention one other point that we talk about at IBM a lot, which is the AI ladder. And it's about how you take all of this information you have and be able to use it to build new insights, to give your company an advantage.
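The intent-based provisioning Sam describes, where a developer states size and protection level and the storage layer fills in the HA/DR details, might be sketched like this. The policy names and mappings below are hypothetical, invented for illustration:

```python
def provision(volumes, request):
    """Map a developer's intent (name, size, policy) onto concrete HA/DR
    settings, so application owners never specify replicas or snapshots."""
    policies = {
        "standard": {"replicas": 2, "snapshots": False},
        "mission-critical": {"replicas": 3, "snapshots": True},
    }
    vol = {"name": request["name"], "gb": request["gb"], **policies[request["policy"]]}
    volumes.append(vol)
    return vol

vols = []
v = provision(vols, {"name": "orders-db", "gb": 500, "policy": "mission-critical"})
print(v["replicas"])  # 3 (an HA detail the developer never had to specify)
```

The abstraction is the point: the request names an outcome, and the layer underneath owns the implementation details.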
An incumbent in an industry shouldn't be able to be disrupted if they're able to leverage all the data they have about the industry and their customers. But in order to do that, you have to be able to get to a single source of data and be able to build it into the fabric of your business operations, so that all decisions you're making in your company, all services you deliver to your customers, are built on that data foundation and information. And the only way to do that, and infuse it into your culture, is to make this stuff real time. And the only way to do that is to build out a containerized application environment that has access to real-time data. The ultimate outcome, sorry, I know you asked for business results, is that you will, in real time, understand your clients, understand your industry, and deliver the best possible services. And the absolute business outcome is you will continue to gain market share in your industry and grow revenue. I mean, that's the outcome every business wants. >> Yeah, it's all about speed. Everybody was forced into digital transformation last year. It was sort of rushed into and compressed, and now they get some time to do it right. And so modernizing apps, containers, DevOps, developer-led sort of initiatives are really key to modernization. All right, Eric, we're out of time, but give us the bottom summary. We didn't get to talk, actually, about the 3200. Maybe you could give us a little insight on that before we close. >> Sure, so in addition to what we're doing with Fusion, we also introduced a new Elastic Storage System, the 3200, and it's all flash. It gets 80 gigabytes a second sustained at the node level, and we can cluster them infinitely. So, for example, if I've got 10 of them, I'm delivering 800 gigabytes a second sustained. And of course AI, big data, and analytic workloads are extremely, extremely sensitive to bandwidth and/or data transfer rate.
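The throughput figures Eric quotes scale linearly with node count; a quick check of his arithmetic (these are vendor-quoted numbers from the conversation, not independent benchmarks):

```python
def aggregate_bandwidth_gb_s(nodes, per_node_gb_s):
    """Aggregate sustained throughput for a cluster that scales linearly."""
    return nodes * per_node_gb_s

# Eric's example: 10 nodes at 80 GB/s each
print(aggregate_bandwidth_gb_s(10, 80))  # 800
```

Linear scaling is the claim being made about the clustered file system; whether a given workload actually achieves it depends on the network and access pattern.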
That's what they need to deliver their applications properly. It comes with Spectrum Scale built in, so that comes with it. So you get the advantage of Spectrum Scale. We talked a lot about Spectrum Scale because it is, if you will, one of the three fathers of Spectrum Fusion. So it's ideal, with its highly parallel file system. It's used all over in high performance computing and supercomputing, in drug research, in healthcare, in finance. Probably about 80% of the world's largest banks use Spectrum Scale already for AI, big data, and analytics. So the new 3200 is an all-flash version, twice as fast as the older version, with all the benefits of Spectrum Scale, including the ability to seamlessly integrate into existing Spectrum Scale or ESS deployments. And when Fusion comes out, you'll be able to have Fusion, and you could also add a 3200 to it if you want to do that, because of the capability of our global namespace and our single file system across edge, core, and cloud. So that's the 3200 in a nutshell, Dave. >> All right, give us the bottom line, Eric, and we've got to go. What's the bumper sticker? >> Yeah, the bumper sticker is: you've got to ride the wave of containers, and IBM Storage is the company that can take you there, so that you win the big surfing contest and get the big prize. >> Eric and Sam, thanks so much, guys. It's great to see you, and I miss you guys. Hopefully we'll get together soon. So get your jabs and we'll have a beer. >> All right. >> All right, thanks, Dave. >> Nice talking to you. >> All right, thank you for watching, everybody. This is Dave Vellante for "theCUBE." We'll see you next time. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Eric | PERSON | 0.99+ |
Eric Herzog | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Sam | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Sam Werner | PERSON | 0.99+ |
April 27th | DATE | 0.99+ |
Dave | PERSON | 0.99+ |
10 | QUANTITY | 0.99+ |
80 gigs | QUANTITY | 0.99+ |
12 copies | QUANTITY | 0.99+ |
3,200 | QUANTITY | 0.99+ |
California | LOCATION | 0.99+ |
80% | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
2 million | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
both | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
CapEx | TITLE | 0.99+ |
800 gigabytes | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
IBM Storage | ORGANIZATION | 0.99+ |
single copy | QUANTITY | 0.99+ |
OpEx | TITLE | 0.98+ |
three layers | QUANTITY | 0.98+ |
Spectrum Fusion | COMMERCIAL_ITEM | 0.98+ |
20% | QUANTITY | 0.98+ |
EMC | ORGANIZATION | 0.98+ |
first pass | QUANTITY | 0.98+ |
S3 | TITLE | 0.98+ |
Global Storage Channels | ORGANIZATION | 0.98+ |
a billion | QUANTITY | 0.97+ |
two | QUANTITY | 0.97+ |
20 | QUANTITY | 0.97+ |
Spectrum Scale | TITLE | 0.97+ |
three fathers | QUANTITY | 0.97+ |
early next year | DATE | 0.97+ |
three | QUANTITY | 0.97+ |
GDPR | TITLE | 0.96+ |
Red Hat | ORGANIZATION | 0.96+ |
OpenShift | TITLE | 0.96+ |
Maurizio Davini & Kaushik Ghosh | CUBE Conversation, May 2021
(upbeat music) >> Hi, Lisa Martin here with theCUBE. You're watching our coverage of Dell Technologies World, the Digital Virtual Experience. I've got two guests with me here today. We're going to be talking about the University of Pisa and how it is leaning into an all-flash deal that is powered by Dell Technologies. One of our alumni is back, Maurizio Davini, the CTO of the University of Pisa. Maurizio, welcome back to theCUBE. >> Thank you. >> You're always welcome. Very excited to talk to you today. Kaushik Ghosh is here as well, the Director of Product Management at Dell Technologies. Kaushik, welcome to theCUBE. >> Thank you. >> So here we are at this virtual event again. Maurizio, you were last on theCUBE at VMworld a few months ago, the virtual experience as well. But talk to our audience a little bit, before we dig into the technology and some of these demanding workloads that the University is utilizing, talk to me a little bit about your role as CTO and about the University. >> So my role as CTO at the University of Pisa covers the data center operations and scientific computing support. That is the main occupation that I have. I also support the technological choices that the University of Pisa has been making during the last two or three years. >> Talk to me about something, so this is, in terms of students, we're talking about 50,000 or so students, 3,000 faculty, and the campus is distributed around the town of Pisa. Is that correct, Maurizio? >> The University of Pisa is sort of a town campus, in the sense that we have 20 departments that are located inside the medieval town, but due to the choices that the University of Pisa made in the late '90s, we are the owner of a private fiber network connecting all our departments and all our (indistinct). And so we can use the town as a sort of whiteboard to design new services, new kinds of support for teaching, and so on.
So you've really modernized the data infrastructure for a University that was founded in the Middle Ages. Talk to me now about some of the workloads, Maurizio, that are generating massive amounts of data, and then we'll get into what you're doing with Dell Technologies. >> Oh, so the University of Pisa has a quite old history in HPC, traditional HPC. So we are supporting the traditional workloads from CAE or engineering or chemistry or oil and gas simulations. Of course, during the pandemic year, last year especially, we had new kinds of workloads come in, some related to the fast movement of the HPC workload from, let's say, traditional HPC to AI and machine learning, and also the request to support a lot of remote activities, from distance learning to remotizing laboratories or workstations or whatever was mostly held in presence in the past. And so the impact, either on the infrastructure or, and especially, on the storage part, was significant. >> So you talked about utilizing the high performance computing environments for a while and for scientific computing and things, I saw a case study that you guys have done with Dell, but then during the pandemic, the challenge and the use case of remote learning brought additional challenges to your environment. From that perspective, how were you able to transfer your curriculum to online and enable the scientists, the physicists, the oil and gas folks doing research to still access that data at the speed that they needed to? >> You know, for what regards distance learning, of course, we were based on cloud services that were not provided internally by us. So we based on Microsoft services, on Google services, and so on. But what regards internal support, scientific computing was completely remotized, either on support or experience, because, how can I bring some examples? For example, laboratory activities were remotized. The access to the laboratories was (indistinct) remote as much as possible.
We designed a special network to connect all the laboratories and to give the researchers the possibility of accessing the data on this special network. So a sort of a collector of data inside our university network. You can imagine that... Virtualization, for example, was a key factor for us, because virtualization was, for us, a flexible way to deliver new services in an easy way, especially if you have to administer systems from remote. So as I told you before about the network as a whiteboard, also the compute infrastructure was (indistinct) virtualization treated as a sort of (indistinct). We were designing new services, either for interactive services or especially for scientific computing. For example, we have experience with virtualization of HPC workloads, storage, and so on. >> Talk to me about the storage impact, because as we know, we talk about these very demanding unstructured workloads, AI, machine learning, and those are difficult for most storage systems to handle. Maurizio, talk to us about why you leaned into all-flash with Dell Technologies, and talk to us a little bit about the technologies that you've implemented. >> So if I have to think about our storage infrastructure before the pandemic, I have to think about Isilon, because our HPC workloads were mainly based on Isilon as a storage infrastructure, together with some parallel file system, as you can imagine, we were deploying in our halls. During the pandemic, but especially with the explosion of AI, the blueprint of the storage requests changed a lot, because what we had until then, in our case a hybrid Isilon solution, didn't fit so well for HPC, for AI (indistinct), and this is why we started the migration. It was not really a migration, but a sort of integration of the PowerScale all-flash machine inside our environment, because the PowerScale all-flash, and especially, I hope in the future, the NVMe support, is a key factor for the storage support.
We have already experienced some of the NVMe possibilities on the PowerMax that we have here, that we use (indistinct) and in part for VDI support, but flash is the minimum, and NVMe is what we need to support the AI workloads in the right way. >> Lisa: Kaushik, talk to me about what Dell Technologies has seen, the uptick in demand for this. As Maurizio said, they were using Isilon before, adding in PowerScale. What are some of the changing demands that Dell Technologies has seen, and how do technologies like PowerScale and the F900 facilitate these organizations being able to rapidly change their environment so that they can utilize and extract the value from data? >> Yeah, no, absolutely. Artificial intelligence is an area that continues to amaze me, and personally, I think the potential here is immense. As Maurizio said, right? The data sets with artificial intelligence have grown significantly, and not only has the data become larger, the models, the AI models that are used, have become more complex. For example, one of the studies suggests that for the modeling of natural language processing, one of the fields in AI, the number of parameters used could exceed a trillion in a few years, right? So almost the size of a human brain. So that means there's a lot of data to be processed, stored, and ingested, but it probably has to be done in the same amount of time as before, or perhaps even a smaller amount of time, right? So larger data, same time, or perhaps even a smaller amount of time. So, absolutely, I agree. For these types of workloads, you need storage that gives you that high-performance access, but also being able to store that data economically. >> Lisa: And Kaushik, how does Dell Technologies deliver that, the ability to scale the economics? What's unique and differentiated about PowerScale? >> So PowerScale is our all-flash system. It uses some of the same capabilities that the Isilon products used to offer.
The OneFS file system capabilities, some of the same capabilities that (indistinct) has used and loved in the past. So some of those same capabilities are brought forward now on this PowerScale platform. There are some changes; for example, our new PowerScale platform supports NVIDIA GPUDirect. So for artificial intelligence workloads, you do need these GPU-capable machines, and PowerScale supports those high-performance GPUDirect machines through the different technologies that we offer. And the PowerScale F900, which we are going to launch very soon, is our best, highest performance all-flash, and the most economical all-flash to date. So it is not only our fastest, but it also offers the most economical way of storing the data. So it's ideal for these types of high-performance workloads, like AI/ML, deep learning, and so on. >> Excellent. Maurizio, talk to me about some of the results that the University is achieving so far. I did read a 3x improvement in I/O performance. You were able to get nearly a hundred percent of the curriculum online pretty quickly, but talk to me about some of the other impacts that Dell Technologies is helping the University to achieve. >> Oh, we are an old Dell customer, and if you have a look at what we have inside our data centers, we typically joke that we define it as a sort of Dell Technologies supermarket, in the sense that the greater part of our server and storage environment comes from Dell Technologies. Several generations of PowerEdge servers, PowerMax, Isilon, PowerScale, PowerStore. So we are using a lot of Dell technologies here, and of course, in the past, our traditional workloads were well supported by Dell technologies.
And Dell Technologies is driving us towards what we call the next generation workloads, because they are accompanying us in the transition to next generation computing, to help deliver and (indistinct) what our researchers are looking for. Because if I have to give a look at what we are doing mostly here: healthcare workloads, deep learning, data analysis, image analysis, feature extraction. Everything has to be supported, especially by the next generation servers, typically equipped with GPUs. This is why GPUDirect is so important for us, but also supported on the networking side, because the speed of the storage must be tied to the next generation networking. Low latency, high performance, because at the end of the day, you have to bring the data to the storage, and typically you do it over one of the low latency, high performance interconnections. That is also a side effect of this new (indistinct). And of course, Dell Technologies is with us in this transition. >> I loved how you described your data centers as a Dell Technologies supermarket. Maybe a different way of talking about a center of excellence. Kaushik, I want to ask you about... I know that the University of Pisa is a CoE for Dell. Talk to me about, in the last couple of minutes we have here, what that entails and how Dell helps customers become a center of excellence. >> Yeah. So Dell, like Maurizio has talked about, has a lot of the Dell products today. And in fact, he mentioned the PowerEdge servers; the PowerScale F900 is actually based on a PowerEdge server. So you can see, a lot of these technologies are sort of interlinked with each other. They talk to each other, they work together, and that sort of helps customers manage their entire ecosystem life cycle, their data life cycle, together rather than as piece parts, because we have solutions that solve all aspects of our customers', like Maurizio's, needs, right?
So, yeah, I'm glad Maurizio is leveraging Dell, and I'm happy we are able to help Maurizio solve all his use cases, and win. >> Lisa: Excellent. Maurizio, last question: are you going to be using AI and machine learning powered by Dell to determine if the Tower of Pisa is going to continue to lean or if it's going to stay where it is? >> The leaning tower is an engineering miracle. Some years ago, an incredible engineering work was able to fix the leaning for a while, and let's hope that the Tower of Pisa stays there, because it's one of our beauties that you can come to visit. >> And that's one part of Italy I haven't been to. So post-pandemic, I've got to add that to my travel plans. Maurizio and Kaushik, it's been a pleasure talking to you about how Dell is partnering with the University of Pisa to really help you power AI and machine learning workloads to facilitate many use cases. We are looking forward to hearing what's next. Thanks for joining me this morning. >> Kaushik: Thank you. >> Maurizio: Thank you. >> For my guests, I'm Lisa Martin. You're watching theCUBE's coverage of Dell Technologies World, the digital event experience. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Maurizio | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Kaushik | PERSON | 0.99+ |
Kaushik Ghosh | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
University of Pisa | ORGANIZATION | 0.99+ |
Maurizio Davini | PERSON | 0.99+ |
Lisa | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Pisa | LOCATION | 0.99+ |
20 departments | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
3000 faculty | QUANTITY | 0.99+ |
May 2021 | DATE | 0.99+ |
two guests | QUANTITY | 0.99+ |
Italy | LOCATION | 0.99+ |
One | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Power Scale F 900 | COMMERCIAL_ITEM | 0.99+ |
Isilon | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
one | QUANTITY | 0.98+ |
three years | QUANTITY | 0.98+ |
F900 | COMMERCIAL_ITEM | 0.96+ |
pandemic | EVENT | 0.96+ |
University of Pisa | ORGANIZATION | 0.95+ |
about 50,000 | QUANTITY | 0.94+ |
Power Max | COMMERCIAL_ITEM | 0.93+ |
theCUBE | ORGANIZATION | 0.92+ |
last '90s | DATE | 0.91+ |
Power Edge | COMMERCIAL_ITEM | 0.89+ |
Power Scale | TITLE | 0.87+ |
Renen Hallak & David Floyer | CUBE Conversation 2021
(upbeat music) >> In 2010, Wikibon predicted that the all-flash data center was coming. The forecast at the time was that flash memory consumer volumes would drive prices of enterprise flash down faster than those of high spin speed hard disks, and by mid-decade, buyers would opt for flash over 15K HDDs for virtually all active data. That call was pretty much dead on, and the percentage of flash in the data center continues to accelerate faster than that of spinning disk. Now, the analyst that made this forecast was David Floyer, and he's with me today, along with Renen Hallak, who is the founder and CEO of Vast Data. And they're going to discuss these trends and what it means for the future of data and the data center. Gentlemen, welcome to the program. Thanks for coming on. >> Great to be here. >> Thank you for having me. >> You're very welcome. Now David, let's start with you. You've been looking at this for over a decade and, you know, frankly, your predictions have caused some friction in the marketplace, but where do you see things today? >> Well, what I was forecasting was based on the fact that the key driver in any technology is volume; volume reduces the cost over time, and the volume comes from the consumers. So flash has been driven over the years initially by the iPod in 2006, the Nano, where Steve Jobs did a great job with Samsung in introducing large volumes of flash, and then the iPhone in 2008. And since then, all of mobile has been flash, and mobile has been taking a greater and greater percentage share. To begin with, the PC dropped. But now over 90% of PCs are using flash when they're delivered. So flash has taken over the consumer market very aggressively, and that has driven down the cost of flash much, much faster than the declining market of HDD. >> Okay, and now, so Renen, I wonder if we could come to you. I want you to talk about the innovations that you're doing, but before we get there, talk about why you started Vast.
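David's volume argument implies a price-parity date: if flash $/GB declines faster than HDD $/GB, the two curves must eventually cross. A toy model of that crossover (the starting prices and decline rates below are assumptions for illustration, not Wikibon's actual data):

```python
def years_to_price_parity(flash_price, hdd_price, flash_decline, hdd_decline):
    """Years until flash $/GB falls to or below HDD $/GB, given constant
    annual decline rates for each technology."""
    years = 0
    while flash_price > hdd_price:
        flash_price *= (1 - flash_decline)
        hdd_price *= (1 - hdd_decline)
        years += 1
    return years

# e.g. flash at 10x the $/GB of disk, declining 30%/yr vs 15%/yr
print(years_to_price_parity(0.50, 0.05, 0.30, 0.15))  # 12 (under these assumed rates)
```

The qualitative conclusion is insensitive to the exact inputs: any sustained gap in decline rates produces a finite crossover date, which is the substance of the original forecast.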
Sure, so it was five years ago, and it was basically about the kill of the hard drive. I think what David is saying resonates very, very well. In fact, if you look at our original presentation for Vast Data, it showed flash and tape. There was no hard drive in the middle. And we said, 10 years from now (and this was five years ago, so even the dates match up pretty well), we're not going to have hard drives anymore. Any piece of information that needs to be accessible at all will be on flash, and anything that is dormant and never gets read will be on tape. >> So, okay. So we're entering this kind of new phase now, which is being driven by QLC. David, maybe you could give us a quick: what is QLC? Just give us the bumper sticker there. >> There's 3D NAND, which is the thing that's growing very, very fast, and it's growing on several dimensions. One dimension is the number of layers. Another dimension is the size of each of those cells. And the third dimension is the number of bits, which for QLC is four bits per cell. So those three dimensions have all been improving, and the result of that is that more and more data can be stored on the whole wafer, on the chips that come from that wafer. And so QLC is the latest generation of 3D NAND flash that's coming off the lines at the moment. >> Okay, so my understanding is that there are new architectures entering the data center space that could take advantage of QLC. Enter Vast. Someone said, Renen, that this is a nice setup for you, and maybe before we get into the architecture, can you talk a little bit more about the company? I mean, maybe not everybody's familiar with Vast. You shared why you started it, but what can you tell us about the business performance? Any metrics you can share would be great.
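The three scaling dimensions David names (layer count, cell geometry, and bits per cell, where QLC stores four bits per cell) compound multiplicatively in a first-order model. A toy sketch against an arbitrary MLC baseline; real density scaling is considerably messier than this:

```python
def relative_die_density(layers, bits_per_cell, baseline_layers=32, baseline_bits=2):
    """Relative capacity per unit wafer area versus an older MLC baseline.

    Toy model: capacity scales linearly with layer count and bits per cell.
    Baseline values (32-layer, 2-bit MLC) are illustrative assumptions.
    """
    return (layers / baseline_layers) * (bits_per_cell / baseline_bits)

# e.g. a hypothetical 176-layer QLC part (4 bits/cell) vs the 32-layer MLC baseline
print(relative_die_density(176, 4))  # 11.0
```

The point of the sketch is David's argument in miniature: each dimension multiplies the others, which is why bits per wafer, and hence cost per bit, improves so quickly.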
That's with eight salespeople. And so, as you can imagine, there's a lot of demand for flash all the way down the stack, in the way that David predicted. >> Wow, okay. So you got pretty comfortable. I think you've got product market fit, right? And now you're going to scale. I would imagine you're going to go after escape velocity and you're going to build your moat. Now part of that, I mean a lot of that, is product, right? Product and sales, those are the two golden pillars. But, and David, when you think back to your early forecast last decade, it was really about block storage. That was really what was under attack. You know, kind of Fusion-io got it started with Facebook; they were trying to solve their SQL database performance problems. And then we saw Pure Storage. They hit escape velocity. They drove a truck through EMC's Symmetrix HDD-based install base, which precipitated the acquisition of XtremIO by EMC, something Renan knows a little bit about, having led development of the product. But flash was late to the NAS party, guys. Renan, let me start with you. Why is that? And what is the relevance of QLC in that regard? >> The way storage has always been, it looks like a pyramid, and you have your block devices up at the top and then your NAS underneath. And today you have object down at the bottom of that pyramid. And the pyramid basically represents capacity, and the Y axis is price performance. And so if you could only serve a small subset of the capacity, you would go for block, and that is the subset that needed high performance. But as you go to QLC, and PLC will soon follow, the price of all-flash systems goes down to a point where it can compete on the lower ends of that pyramid. And the capacity grows to a point where there's enough flash to support those workloads. And so now with QLC, and a lot of innovation that goes with it, it makes sense to build an all-flash NAS and object store. >> Yeah, okay.
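The storage pyramid Renan describes can be sketched as a simple tier-selection rule: pick the cheapest tier that still meets your latency requirement. The tiers, prices and latencies below are made-up placeholders for illustration, not vendor figures.

```python
# Illustrative sketch of the storage pyramid: tiers ordered by price
# ($/TB) and performance (latency). All numbers are invented for
# illustration only.
TIERS = [
    # (name, dollars_per_tb, typical_latency_us)
    ("block (all-flash)", 400, 100),
    ("NAS (hybrid)", 150, 5_000),
    ("object (HDD)", 40, 20_000),
]

def cheapest_tier(max_latency_us):
    """Pick the cheapest tier that still meets a latency requirement."""
    ok = [t for t in TIERS if t[2] <= max_latency_us]
    return min(ok, key=lambda t: t[1])[0] if ok else None

print(cheapest_tier(200))     # only all-flash block is fast enough
print(cheapest_tier(50_000))  # all tiers qualify, HDD object wins on price
```

The argument in the interview is that as QLC and PLC pull the flash dollars-per-terabyte down toward the HDD line, the cheapest tier that satisfies any latency requirement becomes flash, and the pyramid collapses into a single tier.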
And David, you and I have talked about the volumes, and Renan sort of just alluded to that, the higher volumes of NAS. Not to mention the fact that NAS is hard, you know, files are difficult. But that's another piece of the equation here, isn't it? >> Absolutely, NAS is difficult. It's at very large scale. We're talking about petabytes of data. You're talking about very important data. And you're talking about data which is at the moment very difficult to manage. It takes a lot of people to manage it, it takes a lot of resources, and it takes up a lot of space as well. So all of those issues with NAS, and complexity is probably the biggest single problem. >> So maybe we could geek out a little bit here. You guys go at it, but Renan, talk about the Vast architecture. I presume it was built from the ground up for flash, since you were trying to kill HDD. What else do we need to know? >> It was built for flash. It was also built for Crosspoint, which is a new technology that came out from Intel and Micron about three years ago. Crosspoint is basically another level of persistent media above flash and below RAM. But what we really set out to do is, as I said, to kill the hard drive. And for that, what you need is to get to price parity. And of course, flash and hard drives are not at price parity today. As David said, they probably will be a few years from now. And so we wanted to jumpstart that, to accelerate that. And so we spent a lot of time in building a new type of architecture, with a lot of new metadata structures and algorithms on top, to bring that effective price down to a point where it's competitive today. And in fact, two years ago the way we did it was by going out to talk to these vendors: Intel with 3D Crosspoint and QLC flash, Mellanox with NVMe over Fabrics and very fast Ethernet networks.
And we took those building blocks and we thought, how can we use this to build a completely different type of architecture that doesn't just take flash one level down the stack, but actually allows us to break that pyramid, to collapse it down, and to build a single system that is as fast as your fastest all-flash block device or faster, but as affordable as your hard drive based archives. And once that happens, you don't need to think about storage anymore. You have a single system that's big enough and cheap enough to throw everything at it. And it's fast enough such that everything is accessible at sub-millisecond latencies. The way the architecture is built is pretty much the opposite of the way scale-out storage has been done. It's not based on shared-nothing, the way XtremIO was, the way Isilon is, the way Hadoop and the Google File System are. We're basing it on a concept called Disaggregated Shared Everything. And what that means is that we have the media on one set of devices and the logic running in containers, just software, and you can scale each of those independently. So you can scale capacity independently from performance, and you have this shared metadata space that all of the containers can see. So the containers don't actually have to talk to each other in the synchronous path. That means that it's much more scalable. You can go up to hundreds of thousands of nodes rather than just a few dozen. It's much more resilient. You can have all of them fail and you still didn't lose any data. And it's much easier to use, to David's point about complexity. >> Thank you for that. And then you mentioned up front that you not only built for flash, but built for Crosspoint. So you're using Crosspoint today. It's interesting. There's always been this sort of debate about Crosspoint. It's less expensive than RAM, or maybe I got that wrong, but it's persistent. >> It is. >> Okay, but it's more expensive than flash.
And it was sort of thought it was a fence sitter because it didn't have the volume, but you're using it today successfully. That's interesting. >> We're using it to offset the deficiencies of the low cost flash. And the nice thing about QLC and PLC is that you get the same levels of read performance as you would from high-end flash. The only difference between high cost and low cost flash today is in write cycles and in write performance. And so Crosspoint helps us offset both of those. We use it as a large write buffer and we use it as a large metadata store. And that allows us not just to arrange the information in a very large persistent write buffer before we need to place it on the low cost flash, but it also allows us to develop new types of metadata structures and algorithms that allow us to make better use of the low cost flash and reduce the effective price down even lower than the raw capacity. >> Very cool. David, what are your thoughts on the architecture? Give us kind of the independent perspective. >> I think it's a brilliant architecture. I'd like to just go one step down on the network side of things. The whole use of NVMe over Fabrics allows the users, all of the servers, to get any data across this whole network directly. So you've got great performance right away across the stack. And then the other thing is that by using RDMA for NAS, you're able, if you need to, to get down to microseconds to the data. So overall that's a thousand times faster than any HDD system could manage. So this architecture really allows an any-to-any, simple, single level of storage, which is so much easier to think about; to architect, use or manage, it's just so much simpler. >> If you had, I mean, I don't know if there's an answer to this question, but if you had to pick one thing, Renan, that you really were dogmatic about and you bet on from an architectural standpoint, what would that be?
>> I think what we bet on in the early days is the fact that the pyramid doesn't work anymore and that tiering doesn't work anymore. In fact, we stole Johnson & Johnson's tagline, No More Tears. Only, it's not spelled the same way; ours is Tiers. The reason for that is not because of storage. It's because of the applications. As we move more and more to applications that are machine-based, and machines are now not just generating the data, they're also reading the data and analyzing it and providing insights for humans to consume, the workloads change dramatically. And the one thing that we saw is that you can't choose which pieces of information need to be accessible anymore. These new algorithms, especially around AI and machine learning and deep learning, need fast access to the entirety of the dataset, and they want to read it over and over and over again in order to generate those insights. And so that was the driving force behind us building this new type of architecture. And we're seeing every single day, when we talk to customers, how the old architectures simply break down in the face of these new applications. >> Very cool. Speaking of customers, I wonder if you could talk about use cases, customers, you know, in this NAS arena. Maybe you could add some color there. >> Sure, our customers are large in data. We start at half a petabyte and we grow into the exabyte range. The system likes to be big; as it grows, it grows super-linearly. If you have 100 nodes or 1000 nodes, you get more than 10X in performance, in capacity efficiency and resilience, et cetera. And so that's where we thrive. And those workloads today are mainly analytics workloads, although not entirely. If you look at it geographically, we have a lot of life science in Boston: research institutes, medical imaging, genomics, universities, pharmaceutical companies here in New York.
We have a lot of financials, hedge funds, analyzing everything from satellite imagery to trade data to Twitter feeds. Out in California, a lot of AI, autonomous driving vehicles, as well as media and entertainment; both generation of films, like animation, as well as content distribution, are being done on top of Vast. >> Great, thank you. And David, when you look at the forecasts that you've made over the years, I imagine that they match nicely with your assumptions. And so, okay, I get that, but not everybody agrees, David. I mean, certainly the HDD guys don't agree, but they're obviously fighting to hang on to their awesome run for 50 years. But as well, there are others doing hybrids and the like, and they kind of challenge your assumptions, and you don't have a dog in this fight. We just want the truth and try to do our best to report it. But let me start with this. One of the things I've seen is that you're comparing deduped and compressed flash with raw HDD. Is that true or false? >> In terms of the fundamentals of the forecast, et cetera, it's false. What I'm taking is the Newegg price. And I did it this morning, and I looked up a two terabyte disk drive, a NAS disk drive. I think it was $54. And if you look at the cost of NAND for two terabytes, it's about $200. So it's a four to one ratio. And that's coming down from what people saw last year, which was five or six, and every year that ratio has been coming down. >> So the cost delta: HDD is still cheaper. So Renan, one of the other things that Floyer has said is that because of the advantages of flash, not only performance but also data sharing, et cetera, which really drives other factors like TCO, it doesn't have to be at parity in order for customers to consume it. I certainly saw that on my laptop: I could have gotten more storage, and it could have been cheaper per bit for my laptop. I took the flash.
I mean, no problem. That was an intelligence test. But what are you seeing from customers? And by the way, Floyer, I think, is forecasting that by what, 2026, there will actually be a raw-to-raw crossover. So then it's game over. But what are you seeing in terms of what customers are telling you, or any evidence you have that it doesn't have to be at parity, that customers actually get more value from flash even if it's more expensive? What are you seeing? >> Yeah, in the enterprise space, customers aren't buying raw flash; they're buying storage systems. And so even if the raw numbers, flash versus hard drive, are still not there, there is a lot that can be done at the system level to equalize those two. In fact, a lot of our IP is based on that. We are taking flash, today, as David said, more expensive than hard drives, but at the system level it doesn't remain more expensive. And the reason for that is storage systems waste space. They waste it on metadata, they waste it on redundancy. We built our new metadata structures such that everything lives in Crosspoint and is so much smaller because of the way Crosspoint is accessible at byte-level granularity. We built our erasure codes in a way where you can sustain 10, 20, 30 drive failures, but you only pay one or two percent in overhead. We built our data reduction mechanisms such that they can reduce down data even if the application has already compressed it and already de-duplicated it. And so there's a lot of innovation that can happen at the software level as part of this new Disaggregated Shared Everything architecture, and that allows us to bridge that cost gap today without having customers do fancy TCO calculations. And of course, as prices of flash over the next few years continue to decline, all of those advantages remain, and it will just widen the gap between hard drives and flash. And there really is no advantage to hard drives once the price thing is solved. >> So thank you.
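Renan's wide-stripe erasure-coding claim, tolerating tens of drive failures at only a couple of percent overhead, follows from simple arithmetic: parity overhead is the ratio of parity strips to data strips, so surviving many failures cheaply requires very wide stripes. The stripe geometries below are illustrative assumptions, not Vast's actual parameters.

```python
# Back-of-the-envelope arithmetic for wide-stripe erasure coding.
# With an MDS code (e.g. Reed-Solomon), p parity strips tolerate
# any p strip failures, at an overhead of p / d relative to data.
def ec_overhead(data_strips, parity_strips):
    """Redundancy paid, as a fraction of data capacity."""
    return parity_strips / data_strips

# A classic narrow RAID-6-style stripe: survives 2 failures, 25% overhead.
print(ec_overhead(8, 2))      # 0.25
# A very wide stripe: survives 20 failures at only 2% overhead.
print(ec_overhead(1000, 20))  # 0.02
```

The trade-off is that wide stripes touch many devices per rebuild, which is part of why this approach pairs with a shared-everything design where every node can see all the media.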
So David, the other thing I've seen around these forecasts is the comment that you can't really data-reduce hard disk effectively, and I understand why, the overhead. And of course in flash you can use all kinds of data reduction techniques and not affect performance, or it's not even noticeable; the cloud guys do it upstream, others do it upstream. What's your comment on that? >> Yes, if you take sequential data and you do a lot of work upfront, you can write it out in very big blocks, and that's a perfectly good way of doing it sequentially. The challenge for the HDD people is, if they go for that sort of sequential type of application, the cheapest way of doing that is to use tape, which comes back to the discussion that the two things that are going to remain are tape and flash. So that part of the HDD market, in my assertion, will go towards tape and tape libraries. And those are serving very well at the moment. >> Yeah, I mean, the economics of tape are really attractive. I just feel like, and I've said this many times, that the marketing of tape is lacking. I'd like to see better thinking around how it could play, because I think customers have this perception of tape, but there's actually a lot of value there. I want to carry on. >> Small point there. Yeah, I mean, there's an opportunity, in the same way that Vast have created an architecture for flash, there's an opportunity out there for the tape people, with flash, to make an architecture that allows you to take that workload and really lower the price enormously. >> You've called it Flape. >> Flape, yes. >> There's some interesting metadata opportunities there, but we won't go into that. And then David, I want to ask you about NAND shortages. We saw this in 2016 and 2017. A lot of people are saying there's a NAND shortage again.
So is that a flaw in your forecast? You're assuming prices of flash continue to come down faster than those of HDD, but the shortages of NAND could be problematic. What do you say to that? >> Well, I've looked at that in some detail, and one of the big, important things is what's happening in the flash market. YMTC, the Chinese company, has introduced a lot more volume into the market. They're making 100,000 wafers a month for this year. That's around six to eight percent of the NAND market this year. As a result, Samsung, Micron, Intel, Hynix, they're all increasing their volumes of NAND, they're all investing. So I don't see that NAND itself is going to be a problem. There is certainly a shortage of processor chips, which drive the intelligence in the NAND itself. But that's a problem for everybody. That's a problem for cars. It's a problem for disk drives. >> You could argue that's going to create an oversupply, potentially. Let's not go there, but you know what, at the end of the day it comes back to the customer, and all this stuff, it's interesting, I love talking about the architecture, but it's really all about customer value. And so, Renan, I want you to sort of close there. What should customers be paying attention to? And what should observers of Vast Data really watch as indicators for progress for you guys, milestones and things in the market that we should be paying attention to? But start with the customers. What's your advice to them? >> Sure, for any customer that I talk to, I always ask the same thing: imagine where you'll be five years from now, because you're making an investment now that is at least five years long. In our case, we guarantee the lifespan of the devices for a decade, such that you know that it's going to be there for you, and imagine what is going to happen over those next five years.
What we're seeing in most customers is that they have a lot of dormant data, and with the advances in analytics and AI they want to make use of that data. They want to turn it from a cost center to a profit center, and to gain insight from that data and to improve their business based on that information that they have, the same way the hyperscalers are doing. In order to do that, you need one thing: you need fast access to all of that information. Once you have that, you have the foundation to step into this next generation type world where you can actually make money off of your information. And the best way to get very, very fast access to all of your information is to put it on Vast media like flash and Crosspoint. If I can give one example: hedge funds. Hedge funds do a lot of back-testing on Vast. And what makes sense for them is to test as much information back as they possibly can, but because of storage limitations, they can't do that. And the other thing that's important to them is to have a real-time experience, to be able to run those simulations in a few minutes and not as a batch process overnight, but because of storage limitations, they can't do that either. The third thing is, if you have many different applications and many different users on the same system, they usually step on each other's toes. And so the Vast architecture solves those three problems. It allows you a lot of information, very fast access and fast processing, and an amazing quality of service, where different users of the system don't even notice that somebody else is accessing the same piece of information. And so hedge funds is one example. Any one of these verticals that makes use of a lot of information will benefit from this architecture and this system. And if it doesn't cost any more, there's really no real reason to delay this transition into all flash. >> Excellent, very clear thinking. Thanks for laying that out. And what about, you know, how should we judge you?
What are the things that we should watch? >> I think the most important way to judge us is to look at customer adoption, and what we're seeing, and what we're showing investors, is a very high net dollar retention number. What that means is, basically, a customer buys a piece of kit today; how much more will they buy over the next year, over the next two years? And we're seeing them buy more than three times more within a year of the initial purchase. And we see more than 90% of them buying more within that first year. And that to me indicates that we're solving a real problem and that they're making strategic decisions to stop buying any other type of storage system and to just put everything on Vast. Over the next few years we're going to expand beyond just storage services and provide a full stack for these AI applications. We'll expand into other areas of infrastructure and develop the best possible vertically integrated system to allow those new applications to thrive. >> Nice, yeah. I think investors love that lifetime value story. If you can get above 3X of the customer acquisition cost, the IPO is on the way. Guys, hey, thanks so much for coming on theCUBE. We had a great conversation and really appreciate your time. >> Thank you. >> Thank you. >> All right, thanks for watching everybody. This is Dave Vellante for theCUBE. We'll see you next time. (gentle music)
Pure Storage Convergence of File and Object FULL SHOW V1
We're running what I would call a little mini-series, and we're exploring the convergence of file and object storage. What are the key trends? Why would you want to converge file and object? What are the use cases and architectural considerations? And importantly, what are the business drivers of UFFO, so-called unified fast file and object. In this program you'll hear from Matt Burr, who is the GM of Pure's FlashBlade business, and then we'll bring in the perspectives of a solutions architect, Garrett Belsner, who's from CDW, and then the analyst angle with Scott Sinclair of the Enterprise Strategy Group, ESG. He'll share some cool data on our power panel, and then we'll wrap with a really interesting technical conversation with Chris Bond, CB Bond, who is a lead data architect at Micro Focus, and he's got a really cool use case to share with us. So sit back and enjoy the program. From around the globe, it's theCUBE, presenting the convergence of file and object, brought to you by Pure Storage. We're back with the convergence of file and object, a special program made possible by Pure Storage and co-created with theCUBE. So in this series we're exploring that convergence between file and object storage. We're digging into the trends, the architectures and some of the use cases for unified fast file and object storage, UFFO. With me is Matt Burr, who's the vice president and general manager of FlashBlade at Pure Storage. Hello Matt, how you doing? >> I'm doing great. Morning Dave, how are you? >> Good, thank you. Hey, let's start with a little 101, you know, kind of the basics. What is unified fast file and object? >> Yeah, so look, I mean, I think you've got to start with first principles, talking about the rise of unstructured data. So when we think about unstructured data, you sort of think about the projections: 80% of data by 2025 is going to be unstructured data, whether that's machine generated data or, you know, AI and ML type workloads. You start to sort of see this, I don't want to say it's a boom, but it's
sort of a renaissance for unstructured data, if you will. We move away from, you know, what we've traditionally thought of as general purpose NAS and file shares to, you know, really things that focus on fast object, taking advantage of S3, cloud native applications that need to integrate with applications on site. You know, AI workloads and ML workloads tend to look to share data across, you know, multiple data sets, and you really need to have a platform that can deliver both highly performant and scalable fast file and object from one system. >> So talk a little bit more about some of the drivers that, you know, bring forth that need to unify file and object. >> Yeah, I mean look, you know, there's a real challenge in managing, you know, bespoke infrastructure or architectures around general purpose NAS and DAS, et cetera. So if you think about how an architect sort of looks at an application, they might say, well okay, I need to have, you know, fast DAS storage proximal to the application, but that's going to require a tremendous amount of DAS, which is a tremendous amount of drives, right? Hard drives are, you know, historically pretty unwieldy to manage, because you're replacing them relatively consistently at multi-petabyte scale. So you start to look at things like the complexity of DAS, you start to look at the complexity of general purpose NAS, and you start to just look at, quite frankly, something that a lot of people don't really want to talk about anymore, but actual data center space, right? Like consolidation matters. The ability to take, you know, something that's the size of a microwave, like a modern FlashBlade or a modern, you know, UFFO device, replaces something that might be, you know, the size of three or four or five refrigerators. >> So Matt, why is now the right time for this? I mean, for years nobody really paid much attention to object. S3 obviously changed, you know, that course. Most of the world's data is still stored in
file formats, and you get there with NFS or SMB. Why is now the time to think about unifying object and file? >> Well, because we're moving to things like a contactless society. You know, the things that we're going to do are going to just require a tremendous amount more compute power, network, and quite frankly, storage throughput. And, you know, I can give you two sort of real primary examples here, right? You know, warehouses are being, you know, taken over by robots, if you will. It's not a war, it's sort of a friendly advancement in, you know, how do I store a box in a warehouse. And, you know, we have a customer who focuses on large sort of big box distribution warehousing, and, you know, a box that carried an object two weeks ago might have a different box size two weeks later. Well, that robot needs to know where the space is in the data center in order to put it, but also needs to be able to process, hey, I don't want to put the thing that I'm going to access the most in the back of the warehouse; I'm going to put that thing in the front of the warehouse. All of those types of data, you know, sort of real time, and you can think of the robot as almost an edge device, it's processing unstructured data in real time, in its object form, right? So it's sort of the emergence of these new types of workloads. And I'll give you the opposite example. The other end of the spectrum is ransomware, right? You know, today, you know, we'll talk to customers and they'll say quite commonly, hey, anybody can sell me a backup device; I need something that can restore quickly. If you had the ability to restore something at 270 terabytes an hour or 250 terabytes an hour, that's much faster when you're dealing with a ransomware attack. You want to get your data back quickly. >> You know, so, I want to add, I was going to ask you about that later, but since you brought it up, what is the right, I guess call it architecture, for ransomware? I mean, how, and explain, like, how unified object and
file, I mean, I get the fast recovery, but how would you recommend a customer go about architecting a ransomware-proof, you know, system? >> Yeah, well, you know, with FlashBlade, and with FlashArray, there's an actual feature called SafeMode, and that SafeMode actually protects the snapshots and the data from sort of being a part of the ransomware event. And so if you're in a type of ransomware situation like this, you're able to leverage SafeMode. And, you know, what happens in a ransomware attack is you can't get access to your data, and so, you know, the bad guy, the perpetrator, is basically saying, hey, I'm not going to give you access to your data until you pay me, you know, X in Bitcoin or whatever it might be, right? With SafeMode, those snapshots are actually protected outside of the ransomware blast zone, and you can bring back those snapshots. Because what's your alternative? If you're not doing something like that, your alternative is either to pay and unlock your data, or you have to start restoring, excuse me, from tape or slow disk. That could take you days or weeks to get your data back. So leveraging SafeMode, you know, in either the FlashArray or the FlashBlade product, is a great way to go about architecting against ransomware. >> I've got to put my, I'm thinking like a customer now. So SafeMode, so that's an immutable mode, right? Can't change the data. Can an administrator go in and change that mode? Can you turn it off? Do I still need an air gap, for example? What would you recommend there? >> Yeah, so there are still, you know, sort of RBAC, role-based access control, policies around who can access that SafeMode and who can't. >> Right, okay. So, anyway, subject for a different day. I want to actually bring up, if you don't object, a topic that I think used to be really front and center and now is becoming front and center again. I mean, Wikibon just produced a research
note forecasting the future of flash and hard drives. Those of you who follow us know we've done this for quite some time. If you could bring up the chart here — we see this happening again. We originally forecast the death of, quote-unquote, high-spin-speed disk drives, which is kind of an oxymoron. You can see on this chart: the hard disk has had a magnificent journey, but it peaked in manufacturing volume in 2010, and the reason that's so important is that volumes are now steadily dropping. We use Wright's Law to explain why this is a problem. Wright's Law essentially says that as your cumulative manufacturing volume doubles, your cost to manufacture declines by a constant percentage. I won't go into too much detail, but suffice it to say that flash volumes are growing very rapidly and HDD volumes aren't, so flash, because of consumer volumes, can take advantage of Wright's Law and that constant cost reduction. That's what's really important for the next generation, which is always more expensive to build. So this kind of marks the beginning of the end. Matt, what do you think? What does the future hold for spinning disk, in your view? >> Matt: Well, I can give you the answer on two levels. On a personal level, it's why I come to work every day — the eradication, or extinction, of an inefficient thing. I like to say that inefficiency is the bane of my existence, and I think hard drives are largely inefficient. We've seen this transition in block, and we're starting to see it repeat itself in unstructured data, and I'm going to accept the long-standing argument that cost is a vector here — it most certainly is. HDDs have been considerably cheaper than flash storage, even to this day, up to this point. But we're starting to approach the point where you reach about a 3x differentiator between the cost of an HDD and an SSD, and that really is the point in time when you begin to pick up a lot of volume and velocity. That tends to map directly to what you're seeing here: a slow decline which I think is going to become even more rapid, probably starting around next year, where you start to see SSDs really replacing HDDs at a much more rapid clip, particularly on the unstructured data side. And it's largely around cost. The workloads we talked about — robots in warehouses, or other types of advanced machine learning and artificial intelligence applications and workflows — require a degree of performance that a hard drive just can't deliver. We are seeing the creative, innovative disruption of an entire industry right before our eyes. It's a fun thing to live through. >> Dave: Yeah, and we would agree. The premise is that it doesn't even have to be less expensive. We think it will be by the second half, or early second half, of this decade, but even at around a 3x delta, the value of SSD relative to spinning disk is going to overwhelm — just like with your laptop, where it got to the point that you said, why would I ever have a spinning disk in my laptop? We see the same thing happening here. And that's raw capacity; add in compression and dedupe and everything else you really can't do with spinning disk, because of the performance issues, but can do with flash. Okay, let's come back to UFFO. Can we dig into the challenges that this specifically solves for customers? Give us some examples. >> Matt: Yeah, if we think about the examples, the robotic one, I think, is the marker for
kind of the modern side of what we see here. But what we're seeing from a trend perspective — and not everybody's deploying robots; there are many companies that aren't going to be in the robotics business, or even thinking about future-oriented things — is that greenfield applications are being built on object, generally not on file and not on block. So the rise of object as, let's call it, the next great protocol for modern workloads — this is that modern application coming to the forefront. That could be anything from financial institutions right down through — we've even seen it in oil and gas, and we're seeing it across healthcare. As companies, as industries, take this opportunity to modernize, they're not modernizing on things that leverage archaic disk technology; they're really focusing on object. But they still have file workflows that they need to be able to support. So having the ability to deliver those things from one device, in a capacity orientation or a performance orientation, while at the same time dramatically simplifying the overall administration of your environment, both physically and non-physically, is a key driver. >> Dave: So the great thing about object is that it's simple — it's kind of a get/put metaphor — it scales out, because it's got metadata associated with the data, and it's cheap. The drawback is that you don't necessarily associate it with high performance, and, as well, most applications don't speak that language; they speak the language of file or, as you mentioned, block. So I see real opportunities here. If I have some data that's not necessarily frequently accessed every day, but at end of quarter, or whatever it is — or for machine learning, I want to apply some AI to that data — I want to bring it in and then apply a file format, for performance reasons. Is that right? Maybe you could unpack that a little bit. >> Matt: Yeah, I think you described it well, but I don't think object necessarily has to be slow. You brought up a good point with metadata: being able to scale to billions of objects is of value. People do traditionally associate object with slow, but it's not necessarily slow anymore. We did a sort of unofficial survey of our customers and our employee base, and when people described object, they thought of it as, like, law firms storing a Word doc. I think there's a lack of understanding, or a misnomer, around what modern object has become. Performant object, particularly at scale — when we're talking about billions of objects — that's the next frontier. Is it at pace, performance-wise, with the other protocols? No, but it's making leaps and bounds. >> Dave: Talk a little bit more about some of the verticals you see. When I think of financial services, I think transaction processing, but of course they have tons of unstructured data. Are there any patterns you're seeing by vertical market? >> Matt: We're not — and that's the interesting thing. As a company with a block heritage, a block DNA, those patterns were pretty easy to spot. There were a certain number of databases that you really needed to
support — Oracle, SQL, some Postgres work, et cetera — then the modern databases around Cassandra and things like that. You knew there were going to be VMware environments; you could see the trends and where things were going. Unstructured data is such a broader, horizontal thing. Inside of oil and gas, for example, you have specific applications and bespoke infrastructures for those applications; inside of media and entertainment, the same thing. The trend, the commonality, that we're seeing is the modernization of object as a starting point for all the net-new workloads within those industry verticals. That's the most common request we see: what's your object roadmap? What's your object strategy? Where do you think object is going? So there's no single path; it's really just a wide-open field in front of us, with common requests across all industries. >> Dave: The amazing thing about Pure — speaking as a kind of quasi armchair historian of the industry — is that Pure was really the only company in many, many years able to achieve escape velocity, to break through a billion dollars. 3PAR couldn't do it, Isilon couldn't do it, Compellent couldn't do it — I could go on — but Pure was able to achieve that as an independent company. And so you become a leader; you look at the Gartner Magic Quadrant, and you're a leader in there. If you've made it this far, you've got to have some chops. Of course it's very competitive, and there are a number of other storage suppliers that have announced products that unify object and file. So I'm interested in how Pure differentiates. Why Pure? >> Matt: It's a great question, and one that, having been a long-time Puritan, I take pride in answering. It's actually a really simple answer: business model innovation and technology — the technology that goes behind how we do what we do. And I don't mean just the product; innovation is product, but it's also having a better support model, for example, or, on the business model side, Evergreen storage, where we look at your relationship with us as a subscription: we're going to take the thing you've had and modernize it in place over time, so that you're not re-buying the same terabyte or petabyte of storage you've already paid for. So those are the three legs of the stool that have made Pure clearly differentiated, and I think the market has recognized that. You're right that it's hard to break through to a billion dollars, but I look forward to the day we have two billion-dollar products, and with the rise in unstructured data — growing to 80% by 2025 — and the massive transition you've noted in your HDD slide, I think it's a huge opportunity for us on the unstructured data side of the house. >> Dave: The other thing I'd add, Matt — I've talked to Coz about this — is that it's simplicity first. I've asked them, why don't you do this, why don't you do that, and the answer is always the same: that adds complexity, and we put simplicity for the customer ahead of everything else. I think that has served you very, very well. What about the economics of unified file and object? If you bring in additional value, presumably there's a cost to that, but there's also got to be a business case behind it. What kind of impact have you seen with customers? >> Matt: Yeah, look, I'll go back to something I mentioned earlier, which is just the reclamation of floor space and power and cooling.
People want to search for the sexier element, if you will, when it comes to looking at how you derive value from something, but the reality is that if you're reducing your power consumption by a material percentage, power bills matter in big data centers. Customers typically are facing a paradigm of: well, I want to go to the cloud, but the cloud is turning out to be more expensive than I thought it was going to be; or, I've figured out what I can use in the cloud — I thought it was going to be everything, but it's not — so hybrid is where we're landing, but I want to be out of the data center business, and I don't want a team of 20 storage people to administer my storage. So there's this very tangible value around: hey, if I could manage multiple petabytes with one full-time engineer, because the system — to your and Coz's point — was radically simpler to administer and didn't require someone running around swapping drives all the time, would that be a value? The answer is yes, 100% of the time. And then you start to look at the UFFO side from a product perspective: if I have to manage a bespoke environment for this application, and a bespoke environment for that application, and another, and another, I'm managing four different things. Can I actually share data across those four different things? There are ways to share data, but for most customers it just gets too complex. How do you even know what your gold.master copy of the data is if you have it in four different places — or you try to have it in four different places — and it's four different siloed infrastructures? So when you get to how you measure value in UFFO, it's actually being able to have all of that data concentrated in one place, so that you can share it from application to application. >> Dave: Got it. We've only got a couple minutes left, but I'm interested in the update on FlashBlade — generally, but I also have a specific question. Look, getting file right is hard enough, and you just announced SMB support for FlashBlade. I'm interested in how that fits in — I think it's kind of obvious, with file and object converging — but give us the update on FlashBlade, and maybe you could address that specific question. >> Matt: Yeah, look, we're tremendously excited about the growth of FlashBlade. We found workloads we never expected to find. The rapid-restore workload was one that was actually brought to us by a customer, and it has become one of our top two, three, four workloads. So we're really happy with the trend we've seen, and, mapping back to thinking about HDDs and SSDs, we're well on a path to building a billion-dollar business here. We're very excited about that. But to your point, you don't just snap your fingers and get there. We've learned that doing file and object is harder than block, because there are more things you have to go do. For one, you're basically focused on three protocols — SMB, NFS, and S3, not necessarily in that order. To your point about SMB, we are on the path to releasing full native SMB support in the system, which will allow us to service customers we have a limitation with today: they'll have an SMB portion of their NFS workflow, and we do great on the NFS side, but we didn't have the ability to plug into the SMB component of their workflow. So that's going to open up a lot of opportunity for us on that front.
And we continue to invest significantly across the board in areas like security, which has become more than just a hot button. Security has always been there, but it feels like it's blazing hot today, so over the next couple of years we'll be developing some pretty material security elements of the product as well. So, well on a path to a billion dollars is the net on that, and we're fortunate to have SMB here; we're looking forward to introducing it to those customers that have NFS workloads today with an SMB component. >> Dave: Yeah, a nice tailwind and a good TAM expansion strategy. Matt, thanks so much; really appreciate you coming on the program. >> Matt: We appreciate you having us. Thanks very much, Dave. Good to see you. (music) >> Dave: Okay, we're back with the convergence of file and object in a power panel. This is a special content program made possible by Pure Storage and co-created with theCUBE. In this series we're exploring the coming together of file and object storage, trying to understand the trends that are driving this convergence, the architectural considerations users should be aware of, and which use cases make the most sense for so-called unified fast file and object storage. With me are three great guests to unpack these issues: Garrett Belsner, data center solutions architect with CDW; Scott Sinclair, senior analyst at Enterprise Strategy Group, who has deep experience in enterprise storage and brings an independent analyst perspective; and Matt Burr, who is back with us. Gentlemen, welcome to the program. >> Thank you. >> Dave: Hey, Scott, let me start with you and get your perspective on what's going on in the market with object, the cloud, and the huge amount of unstructured data out there that lives in files. Give us your independent view of the trends you're seeing. >> Scott: Well, Dave, where to start? Surprise, surprise: data is growing. We've been talking about data growth for, what, decades now, but what's really fascinating — what's changed — is that because of the digital economy, digital business, digital transformation, whatever you call it, people are not just storing data; they actually have to use it. We see this in trends like analytics and artificial intelligence, and what that does is increase the demand not only for consolidation of massive amounts of storage, which we've seen for a while, but also for incredibly low-latency access to that storage. I think that's one of the things driving this need for convergence, as you put it: multiple protocols consolidated onto one platform, but also high-performance access to that data. >> Dave: Thank you for that — a great setup; I wrote down three topics we're going to unpack as a result. Garrett, let me go to you. Maybe you can give us the perspective of what you see with customers. Is this a push, where customers are saying, hey, listen, I need to converge my file and object? Or is it more a story where they're saying, Garrett, I have this problem — and then you see unified file and object as a solution? >> Garrett: Yeah, I think for us it's taking that consultative approach with our customers and really hearing the pain around some of their pipelines, the way they're going to market with data today, and the problems they're seeing. We're also seeing a lot of the change driven by the software vendors, so being able to support a disaggregated design, where you're not having to upgrade and maintain everything as a single block, has really been a place where we've seen a lot of customers pivot — they have more flexibility as they need to maintain larger volumes of data and higher-performance data. Having the ability to do that
separate from compute and cache and those other layers is really critical. >> Dave: So, Matt, I wonder if you could follow up on that. Garrett was talking about this disaggregated design — I like it, distributed cloud, et cetera — but then we're talking about bringing things together in one place. So square that circle: how does this fit in with this hyper-distributed cloud edge that's getting built out? >> Matt: Yeah, I could give you the easy answer on that, but I could also pass it back to Garrett, in the sense that — Garrett, maybe it's important to talk about Elastic and Splunk and some of the things you're seeing in that world. I think you can give a pretty qualified answer to Dave's question relative to what your customers are seeing. >> Garrett: That'd be great — absolutely, no problem at all. I think Splunk, moving from its traditional design — classic design, whatever you want to call it — up into SmartStore, was one of the first we saw make that move toward separating object out, and a lot of that comes from their own move to the cloud and updating their code to take advantage of object in the cloud. But we're starting to see, with Vertica Eon, for example, Elastic, and other folks, the same type of approach. In the past we were building out many 2U servers and jamming them full of SSDs and NVMe drives. That was great, but it doesn't really scale, and it runs into the same problem we see a little bit with hyperconvergence, where you're always adding something that maybe you didn't want to add. So again, being driven by software is really where we're seeing the world open up. But that whole idea of having a hub, a central place that you can then leverage out to other applications — whether that's out to the edge for machine learning or AI applications to take advantage of — I think that's where the convergence really comes back in. And, as Scott mentioned earlier, folks are now doing things with the data, where before they were really just storing it, trying to figure out what they were going to do with it when they needed to. This is making it possible. >> Matt: And Dave, if I could just tack on to the end of Garrett's answer there: in particular, Vertica with Eon Mode and the ability to leverage sharded subclusters gives you an advantage in terms of isolating performance hot spots, and a further advantage is being able to do that on a FlashBlade, for example. Sharded subclusters allow you to say: I'm going to give prioritization to this particular element of my application and my data set, but I can still share that data across those subclusters. So as you see Vertica advance with Eon Mode, or Splunk advance with SmartStore, these are all advancements — it's a chicken-and-egg thing: they need faster storage, and they need a consolidated data set, and that's what allows these things to drive forward. >> Dave: Yeah, so Vertica Eon Mode, for those who don't know, is the ability to separate compute and storage and scale them independently. I think Vertica, if not the only one, is one of the only ones — they might even be the only one — that does that both in the cloud and on-prem, and that plays into the distributed nature of this hyper-distributed cloud, as I sometimes call it. I'm interested in the data pipeline, and I wonder, Scott, if we could talk a little bit about that — maybe where unified object and file fit. I
mean, I'm envisioning this distributed mesh, with UFFO as a node on it that I can tap when I need it. Scott, what are you seeing as the state of infrastructure as it relates to the data pipeline, and the trends there? >> Scott: Yeah, absolutely, Dave. When I think data pipeline, I immediately gravitate to analytics or machine learning initiatives. One of the big things we see — and it's an interesting trend — is continued increased investment and interest in AI. As companies get started, they think: okay, what does that mean? Well, I've got to go hire a data scientist. Okay, that data scientist probably needs some infrastructure. And what often happens in these environments is that it ends up being a bespoke, one-off environment, and over time organizations run into challenges. One of the big ones is that the data science team — people whose jobs are outside of IT — spend way too much time trying to get the infrastructure to keep up with their demands, predominantly around data performance. So one of the ways organizations — especially those with artificial intelligence workloads in production, and we found this in our research — have started mitigating that is by deploying flash all across the data pipeline. >> Dave: We have data on this — sorry to interrupt, but if you could bring up that chart, that would be great. Take us through this, Scott, and share with us what we're looking at here. >> Scott: Yeah, absolutely, and Dave, I'm glad you brought this up. We did this study, I want to say, late last year, and one of the things we looked at was artificial intelligence environments. One thing you're not seeing on this slide is that we asked about the whole data pipeline, and we saw flash everywhere, but I thought this was really telling, because this is about data lakes. Many people think about a data lake as a repository, a place where you keep maybe cold data, and what we see here, especially within production environments, is pervasive use of flash storage: 69% of organizations say their data lake is mostly flash or all flash, and zero percent have no flash in that environment. So organizations are finding that flash is an essential technology for harnessing the value of their data. >> Dave: So, Garrett, and then Matt, I wonder if you could chime in as well. We talk about digital transformation — I sometimes call it the COVID forced march to digital transformation — and I'm curious about your perspective on things like machine learning and its adoption; Scott, you may have a perspective on this as well. We had to pivot: we had to get laptops, we had to secure the endpoints, VDI — those became super high priorities. What happened to injecting AI into my applications, and machine learning? Did that go on the back burner, or was it accelerated along with the need to digitally transform? Garrett, I wonder if you could share what you saw with customers last year. >> Garrett: Yeah, I think we definitely saw an acceleration. Folks in my market are still figuring out how to inject it into more of a widely distributed business use case, but again, this data hub is allowing folks to take advantage of data they've had in these data lakes for a long time. I agree with Scott: many of the data lakes we have were somewhat flash-accelerated, but they were typically made up of large-capacity, slower near-line drives accelerated with some flash. I'm really starting to see folks now look at some of those older Hadoop implementations and leverage new ways of consuming data, and many of those redesign customers are coming to us
wanting to look at all-flash solutions. So we're definitely seeing it. We're seeing an acceleration toward folks trying to figure out how to actually use it in more of a business sense now, where before it was a little more skunkworks — people dealing with it in a much smaller setting, maybe in the executive offices, trying to do some testing. >> Dave: Scott, you're nodding away; anything you can add in here? >> Scott: Yeah — first off, it's great to get confirmation that the stuff we're seeing in our research, Garrett's seeing out in the field, in the real world. As it relates to the past year, it's been really fascinating. One of the things we study at ESG is IT buying intentions: what are the initiatives companies plan to invest in? At the beginning of 2020 we saw heavy interest in machine learning initiatives. Then you transition to the middle of 2020, in the midst of COVID: some organizations continued on that path, but a lot of them had to pivot — how do we get laptops to everyone, how do we continue business in this new world? Now, as we enter 2021, and hopefully come out of the pandemic era, we're getting into a world where organizations are pivoting back toward these strategic investments around maximizing the usage of data — and actually accelerating them, because they've seen the importance of digital business initiatives over the past year. >> Dave: Yeah. Matt, when we exited 2019 we saw a narrowing of experimentation, and our premise was that organizations were going to start operationalizing all their digital transformation experiments — and then we had a 10-month petri dish on digital. What are you seeing in this regard? >> Matt: A 10-month petri dish is an interesting way to describe it. There was another candidate for pivot in there, around ransomware as well: security entered the mix, which also took people's attention away from some of this. But look, I'd like to bring this up a level or two, because what we're actually talking about here is progress, and progress is an inevitability. Whether you believe it's by 2025, or you think it's 2035 or 2050, it doesn't matter: we're on a forced march to the eradication of disk, and that is happening in many ways due to some of the things Garrett and Scott were referring to in terms of customers' demands for how they're actually going to leverage the data they have. That brings me to my final point: we see customers in three phases. The first says, I have this large data store and I know there's value in there, but I don't know how to get to it. The second says, I have this large data store and I started a project to get value out of it, and we failed — those could be customers that marched down the Hadoop path early on and got some value out of it, but realized that HDFS wasn't going to be a modern protocol going forward, for any number of reasons, the first being: if I have gold.master, how do I know that gold.4 is consistent with my gold.master? Data consistency matters. And then you have the third group, which says: I have these large data sets, I know how to extract value from them, and I'm already on to the Verticas, the Elastics, the Splunks, et cetera. That latter group kept their projects going because they were already extracting value from them. For the first two groups, we're seeing the second half of this year as when they'll really begin picking up on these types of initiatives again. >> Dave: Well, thank you,
matt by the way for for hitting the escape key because i think value from data really is what this is all about and there are some real blockers there that i kind of want to talk about you mentioned hdfs i mean we were very excited of course in the early days of hadoop many of the concepts were profound but at the end of the day it was too complicated we've got these hyper-specialized roles that are that are you know serving the business but it still takes too long it's it's too hard to get value from data and one of the blockers is infrastructure that the complexity of that infrastructure really needs to be abstracted taking up a level we're starting to see this in in cloud where you're seeing some of those abstraction layers being built from some of the cloud vendors but more importantly a lot of the vendors like pew are saying hey we can do that heavy lifting for you uh and we you know we have expertise in engineering to do cloud native so i'm wondering what you guys see uh maybe garrett you could start us off and other students as some of the blockers uh to getting value from data and and how we're going to address those in the coming decade yeah i mean i i think part of it we're solving here obviously with with pure bringing uh you know flash to a market that traditionally was utilizing uh much slower media um you know the other thing that i that i see that's very nice with flashblade for example is the ability to kind of do things you know once you get it set up a blade at a time i mean a lot of the things that we see from just kind of more of a you know simplistic approach to this like a lot of these teams don't have big budgets and being able to kind of break them down into almost a blade type chunk i think has really kind of allowed folks to get more projects and and things off the ground because they don't have to buy a full expensive system to run these projects so that's helped a lot i think the wider use cases have helped a lot so matt mentioned 
ransomware; using SafeMode as a way to help with ransomware has been a really big growth spot for us. We've got a lot of customers very interested and excited about that. The other thing I would say is that bringing DevOps into data is another thing we're seeing: that push towards DataOps, really using automation and infrastructure as code to drive things through the system, the way we've seen with automation through DevOps, is an area where we're seeing a ton of growth from a services perspective. >> Guys, any other thoughts on that? I'll tee it up: we are seeing some bleeding edge, which is somewhat counterintuitive, especially from a cost standpoint. Think of some of the internet companies that do music, for instance, and are adding podcasts, etc.; those are different data products. We're seeing them actually reorganize their data architectures to make them more distributed, and actually put the domain heads, the business heads, in charge of the data and the data pipeline. That may be less efficient, but again, it's bleeding edge. What else are you guys seeing out there that might be harbingers of the next decade? >> I'll go first. Specific to the construct you threw out, Dave, one of the things we're seeing is that the application owner, maybe through the DevOps person, is becoming more technical in their understanding of how infrastructure interfaces with their application. What we're seeing on the FlashBlade side is that we're having a lot more conversations with application people than just IT people. That doesn't mean the IT people aren't there; they're still there for sure, and they have to deliver the service,
etc. But the days of IT building up a catalog of services and a business owner subscribing to one of those services, picking whatever sort of fits their need, I think that's the construct that changes going forward. The application owner is becoming much more prescriptive about how they want the infrastructure to fit into their application, and that's a big change. For folks like Garrett and CDW, they do a good job with this: being able to get to the application owner and bring those two sides together. There's a tremendous amount of value there. For us it's been a bit of a retooling; we've traditionally sold to the IT side of the house, and we've had to teach ourselves how to talk the language of applications. So I think you pointed out a good construct there, and that application owner playing a much bigger role in what they're expecting from the performance of IT infrastructure is a key change. >> Interesting. That's definitely a trend that's put you guys closer to the business, where the infrastructure team is serving the business. By contrast, I sometimes talk to data experts, especially data owners or data product builders, who are frustrated that they feel like they have to beg the data pipeline team to get new data sources or get data out. How about the edge? Maybe, Scott, you can kick us off. We're seeing the emergence of edge use cases, AI inferencing at the edge, a lot of data at the edge. What are you seeing there, and how does this unified object, I'll bring us back to that, and file fit? >> Wow, Dave, how much time do we have? >> Two minutes. First of all, Scott, why don't you just tell everybody what the edge is? >> Yeah,
you've got it figured out, all right? How much time do you have, Matt? At the end of the day, that's a great question, right? If you take a step back, I think it comes back to something you mentioned: it's about extracting value from data. When you extract value from data, as Matt pointed out, the influencers or the users of data, the application owners, have more power, because they're driving revenue now. So from an IT standpoint it's not just, "Hey, here are the services you get; use them or lose them, and don't throw a fit." It's, "No, I have to adapt; I have to follow what my application owners need." Now, when you bring that back to the edge, what it means is that data is not localized to the data center. We just went through a nearly 12-month period where the entire workforce of most companies in this country went distributed, and business continued. So if business is distributed, data is distributed: in the data center, at the edge, in the cloud, in tons of places. What it also means is that you have to be able to extract and utilize data anywhere it may be, and I think that's something we're going to continue to see. It comes back to key characteristics: we've talked about things like performance and scale for years, but we need to start rethinking them, because on one hand we need performance everywhere, but also scale, and this ties back to some of the other initiatives around getting value from data. It's something I call the massive success problem. One of the things we see, especially with workloads like machine learning, is that businesses find success with them, and as soon as they do, they say, "Well, I need about 20 of these projects." Now, all of a sudden, that overburdens IT organizations, especially
across core, edge, and cloud environments. So when you look at environments, the ability to meet performance and scale demands wherever they need to be is something that's really important. >> You know, Dave, I'd like to tie together two things I heard from Scott and Garrett that I think are important, and it's around this concept of scale. Some of us are old enough to remember the day when a 10-terabyte blast radius was too big a blast radius for people to take on, or when a terabyte of storage was considered an exemplary budget environment. Now we think of terabytes the way we used to think of gigabytes. You don't have to explain to anybody what a petabyte is anymore, and what's on the horizon, and not far off, are exabyte-type data set workloads. You start to think about what could be in that exabyte of data. We've talked about how you extract that value, and we've talked about how you start, but if the scale is big, not everybody's going to start at a petabyte or an exabyte. To Garrett's point, the ability to start small and grow into these projects is a really fundamental concept here, because you're not going to just kick off a five-petabyte project; whether you do that on disk or flash, it's going to be expensive. But if you could start at a couple hundred terabytes, not just as a proof of concept but as something you know you could get predictable value out of, then you could say, "Hey, this either scales linearly or non-linearly in a way that lets me map my investments as I dig deeper into this." That's how these successful projects are going to start, because for the people starting with very large, expansive greenfield projects at multi-petabyte scale,
it's going to be hard to realize near-term value. >> Excellent. We've got to wrap, but Garrett, I wonder if you could close. When you look forward and talk to customers, do you see this unification of file and object as an evolutionary trend? Is it going to be a lever that customers use? How do you see it evolving over the next two, three years and beyond? >> Yeah, from our perspective, just from the numbers we're seeing within the market, the amount of growth happening with unstructured data is really just starting to hit this data deluge, or whatever you want to call it, that we've been talking about for so many years. It really does seem to be coming true as we start to see things scale out and folks settle into, "Okay, I'm going to use the cloud to start and maybe train my models, but then I'm going to get it back on-prem because of latency or security," or whatever the decision points are. This is something that is not going to slow down, and folks like Pure having the tools that they give us to use and bring to market with our customers is really key and critical for us. So I see it as a huge growth area and a big focus for us moving forward. >> Guys, great job unpacking a topic that's been covered a little bit, but I think we covered some ground that is new. Thank you so much for those insights and that data; really appreciate your time. >> Thanks, Steve. >> Yeah, thanks, Dave. >> Okay, and thank you for watching The Convergence of File and Object. Keep it right there; we'll be right back after this short break.
[Music] >> Okay, now we're going to get the customer perspective on object, and we'll talk about the convergence of file and object, but really focusing on the object piece. This is a content program that's being made possible by Pure Storage and co-created with theCUBE. Christopher "CB" Bohn is here. He's a lead architect for the Micro Focus enterprise data warehouse and a principal data engineer at Micro Focus. CB, welcome. Good to see you. >> Thanks, Dave. Good to be here. >> So tell us more about your role at Micro Focus. It's a pan-Micro Focus role. Of course, we know the company is a multinational software firm that acquired the software assets of HP, including Vertica. Tell us where you fit. >> Yeah, so Micro Focus, like I said, is a worldwide company that sells a lot of software products all over the place, to governments and so forth, and it also grows often by acquiring other companies. So there's the problem of integrating new companies and their data. What's happened over the years is that they've accumulated a number of discrete data systems, so the data is spread all over the place, and they've never been able to get a complete introspection on the entire business because of that. My role was to come in and design a central data repository, an enterprise data warehouse that all reporting could be generated against. That's what we're doing, and we selected Vertica as the EDW system and Pure Storage FlashBlade as the communal repository. >> Okay, so you obviously had experience with Vertica in your previous role, so it's not like you were starting from scratch, but paint a picture of what life was like before you embarked on this consolidated approach to your data warehouse. Was it just disparate data all over the place, a lot of M&A going on? Where did the data live? >> Right, so again, the data was all over the place, including under people's desks, in their own dedicated
private SQL Servers. A lot of Micro Focus runs on SQL Server, which has pros and cons: it's a great transactional database, but it's not really good for analytics, in my opinion. A lot of stuff was running on that. They had one Vertica instance doing some select reporting; it wasn't a very powerful system, and it was what they call Vertica Enterprise mode, with dedicated nodes where compute and storage live in the same place on each server. >> Okay, so Vertica Eon mode is a whole new world, because it separates compute from storage. You mentioned Eon mode and the ability to scale storage and compute independently. >> We wanted to have the analytics OLAP stuff close to the OLTP stuff, so they're co-located very close to each other. What's nice about this situation is that these are S3 objects; it's an S3 object store on the Pure FlashBlade. We could copy them over to AWS if we needed to, spin up a version of Vertica there, and keep going. It's like a tertiary DR strategy, because we're actually setting up a second FlashBlade-Vertica system, geo-located elsewhere, for backup. We can get into it if you want to talk about how the latest version of the Pure software for the FlashBlade allows synchronization of those FlashBlades across network boundaries, which is really nice: if a giant sinkhole opens up under our colo facility and we lose that thing, we just switch the DNS and we're back in business off the DR. And if that one were to go, we could copy those objects over to AWS and be up and running there. So we're feeling pretty confident about being able to weather whatever comes along. >> So you're using the Pure FlashBlade as an object store. Most people think object is simple but slow. Not the case for you, is that right? >> Not the case at all. It's ripping. Well, you have to understand about Vertica
and the way it stores data. It stores data in what they call storage containers, and those are immutable on disk, whether it's on AWS or an Enterprise-mode Vertica. If you do an update or delete, it actually has to go retrieve that storage container from disk, destroy it, and rebuild it, which is why you want to avoid updates and deletes with Vertica. It gets its speed by sorting, ordering, and encoding the data on disk so it can read it really fast, but if you delete or update a record in the middle of that, you've got to rebuild the entire container. That actually matches up really well with S3 object storage, because it works much the same way: objects get destroyed and rebuilt too. So that matches up very well with Vertica, and we were able to design this system so that it's append-only. Now, we had some reports running in SQL Server that were taking seven days. We moved them from SQL Server to Vertica and rewrote the queries, which had been written in T-SQL with a bunch of loops and so forth. This is amazing: it went from seven days to two seconds to generate this report, which has tremendous value to the company, because it used to take this long seven-day cycle to get a new introspection into what they call their knowledge base, and now it's almost on demand: two seconds to generate it. That's because of the way the data is stored. You asked about S3: is it slow? Well, not in this context, because what happens with Vertica Eon mode is that when you set up your compute nodes, they also have local storage, which is called the depot. It's basically a cache. The data is drawn from the flash and cached locally, and the thinking when they designed it was that that would cut down on latency. But it turns
out that if you have your compute nodes close, meaning minimal hops to the FlashBlade, you can actually tell Vertica, "Don't even bother caching that stuff; just read it directly, on the fly, from the FlashBlade," and the performance is still really good. It depends on your situation, but I know, for example, a major telecom company that uses the same topology we're talking about here did the same thing: they just dropped the cache, because the FlashBlade was able to deliver the data fast enough. >> So you're talking about speed-of-light issues, and the overhead of switching infrastructure gets eliminated, and as a result you can go directly to the storage array? >> That's correct. It's fast enough that it's almost as if it's local to the compute node. But every situation is different, depending on your needs. If you've got a few tables that are heavily used, then yes, put them in the cache, because that'll probably be a little bit faster. But if you have a lot of ad hoc queries going on, you may exceed the storage of the local cache, and then you're better off having Vertica read directly from the FlashBlade. >> Got it. Look, Pure's a fit. I sound like a fanboy, but Pure is all about simplicity, and so is object, so you don't have to worry about wrangling storage, and LUNs, and all that other nonsense. >> And I've been burned by hardware in the past, where they're building to a price, so they cheap out on components like fans, those components fail, and the whole thing goes down. But this hardware is super good quality, and I'm happy with the quality we're getting. >> So, CB, last question: what's next for you? Where do you want to take this initiative? >> Well, I
designed this system to combine the best of the Kimball approach to data warehousing and the Inmon approach. What we do is bring over all the data we've got and put it into a pristine staging layer. Like I said, because it's append-only, it's essentially a log of all the transactions happening in this company; they just appear. And then, from the Kimball side of things, we're designing the data marts now, which is what the end users actually interact with. We're examining the transactional systems to ask: how are these business objects created, and what's the logic there? And we're recreating those logical models in Vertica. We've done a handful of them so far, and it's working out really well. Going forward, we've got a lot of work to do to create just about every object the company needs. >> CB, you're an awesome guest. Always a pleasure talking to you. Thank you, congratulations, and good luck going forward. Stay safe. >> Thank you. [Music] >> Okay, let's summarize The Convergence of File and Object. First, I want to thank our guests: Matt Burr, Scott Sinclair, Garrett Belsner, and CB Bohn. I'm your host, Dave Vellante, and please allow me to briefly share some of the key takeaways from today's program. First, as Scott Sinclair of ESG stated: surprise, surprise, data's growing. And Matt Burr helped us understand the growth of unstructured data. Estimates indicate that the vast majority of data, 80 percent or so, will be considered unstructured by mid-decade, and unstructured data is growing very rapidly. Now, of course, your definition of unstructured data may vary across a wide spectrum. There's video, audio, documents, spreadsheets, chat; these are generally considered unstructured data, but of course they all have some type of structure to them. Perhaps it's not as strict as a relational database, but
there's certainly metadata and a certain structure to the types of use cases I just mentioned. Now, the key to what Pure is promoting is this idea of unified fast file and object, UFFO. Look, object is great: it's inexpensive and it's simple, but historically it's been less performant, so it's been good for archiving or cheap-and-deep types of examples. Organizations often use file for higher-performance workloads, and let's face it, most of the world's data lives in file formats. What Pure is doing is bringing together file and object by, for example, supporting multiple protocols: NFS, SMB, and S3. S3, of course, has really given new life to object over the past decade. The key here is to enable customers to have the best of both worlds, not having to trade off performance for object simplicity. A key discussion point we've had on the program has been the impact of flash on the long, slow death of spinning disk. Hard disk drives had a great run, but HDD volumes peaked in 2010, and flash, as you well know, has seen tremendous volume growth thanks to the consumption of flash in mobile devices and then its application into the enterprise; that volume is just going to keep growing. The price declines of flash are coming faster than those of HDD, so the writing's on the wall; it's just a matter of time. Flash is riding down that cost curve very aggressively, and HDD has essentially become a managed-decline business. Now, by bringing flash to object as part of the FlashBlade portfolio, and by allowing for multiple protocols, Pure hopes to eliminate the dissonance between file and object and simplify the choice. In other words, let the workload decide. If you have data in a file format, no problem: Pure can still bring the benefits of object simplicity at scale to the table. So again, let the workload inform what the right strategy is, not the technical infrastructure. Now, Pure of course is not alone; there are others
supporting this multi-protocol strategy, so we asked Matt Burr: why Pure, or what's so special about you? Not surprisingly, in addition to the product innovation, he went right to Pure's business model advantages, for example its Evergreen support model, which was very disruptive in the marketplace. Frankly, Pure's entire business disrupted the traditional disk array model, which was fundamentally flawed. Pure forced the industry to respond, and when it achieved escape velocity and went public, the entire industry had to react. A big part of the Pure value proposition, in addition to the business model innovation we just discussed, is simplicity. Pure's keep-it-simple approach coincided perfectly with the ascendancy of cloud, where technology organizations needed cloud-like simplicity for certain workloads that were never going to move into the cloud; they're going to stay on-prem. Now, I'm going to come back to this, but allow me to bring in another concept that Garrett and CB really highlighted, and that is the complexity of the data pipeline. What do I mean by that, and why is this important? Scott Sinclair implied that the big challenge is that organizations are data-full, but insights are scarce: a lot of data, not as many insights, and it takes too much time to get to those insights. We heard from our guests that the complexity of the data pipeline is a barrier to getting faster insights. CB Bohn shared how he streamlined his data architecture using Vertica's Eon mode, which allowed him to scale compute independently of storage; that brought critical flexibility and improved economics at scale, and FlashBlade, of course, was the back-end storage for his data warehouse efforts. The reason I think this is so important is that organizations are struggling to get insights from data, and the complexity associated with the data pipeline and data life cycles, let's face it, is overwhelming
organizations. The full answer to this problem is a much longer and different discussion than unifying object and file; I could spend all day talking about that. So let's focus narrowly on the part of the issue that is related to file and object. The situation here is that technology has not been serving the business the way it should; rather, the formula is twisted. In the world of data, big data, and data architectures, the data team is mired in complex technical issues that impact the time to insights. Part of the answer is to abstract the underlying infrastructure complexity and create a layer with which the business can interact, one that accelerates instead of impedes innovation. Unifying file and object is a simple example of this, where the business team is not blocked by infrastructure nuance like: does this data reside in a file or an object format? Can I get to it quickly and inexpensively, in a logical way, or is the infrastructure stovepiped and blocking me? If you think about the prevailing sentiment of how the cloud is evolving, incorporating on-premises workloads, hybrid configurations, workloads across clouds, and now out to the edge, this idea of an abstraction layer that essentially hides the underlying infrastructure is a trend we're going to see evolve this decade. Now, is UFFO the be-all-end-all answer to solving all of our data pipeline challenges? No, of course not. But by bringing the simplicity and economics of object together with the ubiquity and performance of file, UFFO makes things a lot easier; it simplifies life for organizations that are evolving into digital businesses, which, by the way, is every business. So we see this as an evolutionary trend that further simplifies the underlying technology infrastructure and does a better job supporting the data flows for organizations, so they don't have to spend so much time worrying about technology details that add little value to the business. Okay, so thanks for
watching The Convergence of File and Object, and thanks to Pure Storage for making this program possible. This is Dave Vellante for theCUBE. We'll see you next time. [Music]
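The "let the workload decide" idea from the summary, one copy of the data reachable through both a file-style path and an S3-style bucket/key, can be sketched in a few lines of Python. This is an illustrative toy, not FlashBlade's implementation; the class and method names (`UnifiedStore`, `put_object`, `read_file`, etc.) are invented for the example.

```python
# Toy sketch of unified file/object access: one store, two "protocols".
# All names here are hypothetical; a real system would speak NFS/SMB and S3.

class UnifiedStore:
    def __init__(self):
        self._blobs = {}  # key -> bytes, shared by both views

    # Object-style access (S3-like bucket/key)
    def put_object(self, bucket: str, key: str, data: bytes) -> None:
        self._blobs[f"{bucket}/{key}"] = data

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._blobs[f"{bucket}/{key}"]

    # File-style access (path-based), backed by the very same blobs
    def write_file(self, path: str, data: bytes) -> None:
        self._blobs[path.lstrip("/")] = data

    def read_file(self, path: str) -> bytes:
        return self._blobs[path.lstrip("/")]

store = UnifiedStore()
store.put_object("analytics", "models/v1.bin", b"weights")
# The same bytes are visible through the file interface:
print(store.read_file("/analytics/models/v1.bin"))  # b'weights'
```

The point of the sketch is the single `_blobs` dictionary: there is no copy or conversion step between the "file" and "object" views, which is the property that lets the workload, not the infrastructure, dictate the access method.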
Matt Burr, General Manager, FlashBlade, Pure Storage | The Convergence of File and Object
From around the globe, it's theCUBE, presenting The Convergence of File and Object, brought to you by Pure Storage. >> We're back with The Convergence of File and Object, a special program made possible by Pure Storage and co-created with theCUBE. In this series we're exploring the convergence between file and object storage, digging into the trends, the architectures, and some of the use cases for unified fast file and object storage, UFFO. With me is Matt Burr, vice president and general manager of FlashBlade at Pure Storage. Hello, Matt, how are you doing? >> I'm doing great. Morning, Dave, how are you? >> Good, thank you. Hey, let's start with a little 101, the basics. What is unified fast file and object? >> Yeah, look, I think you have to start with first principles, talking about the rise of unstructured data. When we think about unstructured data, you think about the projections: 80 percent of data by 2025 is going to be unstructured, whether that's machine-generated data or AI- and ML-type workloads. You start to see, I don't want to say a boom, but a renaissance for unstructured data, if you will, where we move away from what we've traditionally thought of as general-purpose NAS and file shares to things that focus on fast object, taking advantage of S3 and cloud-native applications that need to integrate with applications on site. AI and ML workloads tend to share data across multiple data sets, and you really need a platform that can deliver both highly performant and scalable fast file and object from one system. >> So talk a little bit more about some of the drivers that bring forth that need to unify file and object. >> Yeah, there's a real challenge in managing bespoke infrastructures or architectures around general-purpose NAS, DAS, etc. If you think about how an architect
looks at an application, they might say, "Well, okay, I need fast DAS storage proximal to the application," but that's going to require a tremendous amount of DAS, which is a tremendous number of drives. Hard drives are historically pretty unwieldy to manage, because you're replacing them relatively consistently at multi-petabyte scale. So you start to look at the complexity of DAS, you look at the complexity of general-purpose NAS, and you start to look at, quite frankly, something a lot of people don't really want to talk about anymore: actual data center space. Consolidation matters: the ability to take something the size of a microwave, like a modern FlashBlade or a modern UFFO device, and replace something that might be the size of three or four or five refrigerators. >> So, Matt, why is now the right time for this? For years nobody paid much attention to object. S3 obviously changed that course, but most of the world's data is still stored in file formats, and you get there with NFS or SMB. Why is now the time to think about unifying object and file? >> Because we're moving to things like a contactless society. The things we're going to do are going to require tremendously more compute power, network, and, quite frankly, storage throughput, and I can give you two real primary examples here. Warehouses are being taken over by robots, if you will. It's not a war; it's a friendly advancement in how do I store a box in a warehouse. We have a customer who focuses on large big-box distribution warehousing, and a box that carried an object two weeks ago might have a different box size two weeks later. That robot needs to know where the space is in the warehouse in order
to put it, but it also needs to be able to process: hey, I don't want to put the thing I'm going to access the most in the back of the warehouse, I'm going to put that thing in the front. All of those types of data, the robot, which you can think of as almost an edge device, is processing in real time, and it's unstructured data, and it's object. So it's the emergence of these new types of workloads. And I'll give you the opposite example, the other end of the spectrum: ransomware. Today we talk to customers and they'll say quite commonly: hey, anybody can sell me a backup device, I need something that can restore quickly. If you have the ability to restore at 270 terabytes an hour, or 250 terabytes an hour, that's much faster. When you're dealing with a ransomware attack, you want to get your data back quickly. >> I was going to ask you about that later, but since you brought it up: what is the right, I guess call it architecture, for ransomware? Explain how unified object and file would support me. I get the fast recovery, but how would you recommend a customer go about architecting a ransomware-proof system? >> Yeah, with FlashBlade and with FlashArray there's an actual feature called SafeMode, and SafeMode protects the snapshots and the data from being part of the ransomware event. What happens in a ransomware attack is that you can't get access to your data; the bad guy, the perpetrator, is basically saying: hey, I'm not going to give you access to your data until you pay me X in bitcoin or whatever it might be. With SafeMode, those snapshots are protected outside of the ransomware blast zone, and you can bring back
those snapshots. Because what's your alternative? If you're not doing something like that, your alternative is either to pay and unlock your data, or to start restoring from tape or slow disk, which could take you days or weeks to get your data back. So leveraging SafeMode in either the FlashArray or the FlashBlade product is a great way to go about architecting against ransomware. >> I'm thinking like a customer now. So SafeMode, that's an immutable mode, right? Can't change the data. Can an administrator go in and change that mode? Can he turn it off? Do I still need an air gap, for example? What would you recommend there? >> Yeah, there are still RBAC, role-based access control, policies around who can access that SafeMode. >> Right, okay. Anyway, subject for a different day. I want to bring up, if you don't object, a topic that I think used to be really front and center and is now becoming front and center again. Wikibon just produced a research note forecasting the future of flash and hard drives, and those of you who follow us know we've done this for quite some time. If you could bring up the chart here, you can see, and we see this happening again. We originally forecast the death of quote-unquote high spin speed disk drives, which is kind of an oxymoron. You can see on this chart that hard disk had a magnificent journey, but it peaked in manufacturing volume in 2010. The reason that's so important is that volumes are now steadily dropping, and we use Wright's Law to explain why this is a problem. Wright's Law essentially says that as your cumulative manufacturing volume doubles, your cost to manufacture declines by a constant percentage. I won't go into too much detail on that, but suffice it to say that flash volumes are growing very
rapidly and HDD volumes aren't. So flash, because of consumer volumes, can take advantage of Wright's Law and that constant cost reduction, and that's what's really important for the next generation, which is always more expensive to build. This kind of marks the beginning of the end. Matt, what do you think? What does the future hold for spinning disk, in your view? >> Well, I can give you the answer on two levels. On a personal level, it's why I come to work every day: the eradication, or extinction, of an inefficient thing. I like to say that inefficiency is the bane of my existence, and I think hard drives are largely inefficient. I'm willing to accept the long-standing argument that we've seen this transition in block, and we're starting to see it repeat itself in unstructured data, and I'm willing to accept the argument that cost is a vector here, and it most certainly is. HDDs have been considerably cheaper than flash storage, even up to this point. But we're starting to approach the point where you reach about a 3x differentiator between the cost of an HDD and an SSD, and that really is the point in time when you begin to pick up a lot of volume and velocity. That maps directly to what you're seeing here, which is a slow decline that I think is going to become even more rapid, probably starting around next year, where you start to see SSDs really replacing HDDs at a much more rapid clip, particularly on the unstructured data side. And it's largely around cost. The workloads we talked about, robots in warehouses, or other types of advanced machine learning and artificial intelligence applications and workflows, require a degree of performance that a hard drive just can't deliver.
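Wright's Law, which Dave invokes above, can be written as cost(x) = c₁ · x^(−b), where x is cumulative units produced and the exponent b corresponds to a fixed percentage cost decline per doubling of volume. A minimal sketch in Python; the 20% learning rate is an illustrative assumption, not a figure from this segment:

```python
import math

def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate=0.20):
    """Unit cost after producing `cumulative_units` total units, where cost
    falls by `learning_rate` (e.g. 20%) every time cumulative volume doubles."""
    b = -math.log2(1 - learning_rate)  # progress exponent
    return first_unit_cost * cumulative_units ** (-b)

# Each doubling of cumulative volume cuts unit cost by the learning rate:
c1 = wrights_law_cost(100.0, 1)  # 100.0
c2 = wrights_law_cost(100.0, 2)  # 80.0 at a 20% learning rate
c4 = wrights_law_cost(100.0, 4)  # 64.0
```

This is why the manufacturing-volume crossover matters: the technology whose cumulative volume keeps doubling keeps riding the curve down, while the one whose volumes flatten stops.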
The creative, innovative disruption of an entire industry is happening right before our eyes, and it's a fun thing to live through. >> And we would agree. The premise is that it doesn't even have to be less expensive. We think it will be by the second half, or early second half, of this decade, but even at around a 3x delta, the value of SSD relative to spinning disk is going to overwhelm. Just like with your laptop: it got to the point where you said, why would I ever have a spinning disk in my laptop? We see the same thing happening here. And that's raw capacity; add in compression and dedupe and everything else that you really can't do with spinning disk because of the performance issues, but can do with flash. Okay, let's come back to UFFO. Can we dig into the challenges specifically that this solves for customers? Give us some examples. >> Yeah, the robotic one, I think, is the marker for the modern side of what we see here. But from a trend perspective, not everybody's deploying robots, right? There are many companies that aren't going to be in the robotics business, or even thinking about future-oriented things. But what they are doing is building greenfield applications on object, generally not on file and not on block. So the rise of object as, let's call it, the next great protocol for modern workloads: this is that modern application coming to the forefront. And that could be anything from financial institutions right down through, well, we've even seen it in oil and gas.
We're also seeing it across healthcare. So as companies and industries take this opportunity to modernize, they're not modernizing on things that leverage archaic disk technology; they're really focusing on object. But they still have file workflows that they need to be able to support. So having the ability to deliver those things from one device, in a capacity orientation or a performance orientation, while at the same time dramatically simplifying the overall administration of your environment, both physically and non-physically, is a key driver. >> So the great thing about object is it's simple. It's kind of a get/put metaphor, it scales out because it's got metadata associated with the data, and it's cheap. The drawback is you don't necessarily associate it with high performance, and as well, most applications don't speak that language; they speak the language of file, or, as you mentioned, block. So I see real opportunities here. If I have some data that's not necessarily frequently accessed every day, but at end of quarter, or for machine learning, I want to apply some AI to that data, I want to bring it in and then apply a file format for performance reasons. Is that right? Maybe you could unpack that a little bit. >> Yeah, I think you described it well, but I don't think object necessarily has to be slow. You brought up a good point with metadata: being able to scale to billions of objects is of value. I think people do traditionally associate object with slow, but it's not necessarily slow anymore.
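The get/put metaphor Dave describes can be illustrated with a toy in-memory object store. This is a sketch of the programming model only, not Pure's or S3's actual API; the class and its method names are invented for illustration. There are no directories, just a flat namespace of bucket/key pairs mapping to an opaque blob plus metadata:

```python
class ToyObjectStore:
    """Illustrates the flat get/put object model:
    (bucket, key) -> (bytes, metadata), with no directory hierarchy."""

    def __init__(self):
        self._objects = {}

    def put(self, bucket, key, data, **metadata):
        # Store the blob together with its user-supplied metadata.
        self._objects[(bucket, key)] = (bytes(data), metadata)

    def get(self, bucket, key):
        data, _meta = self._objects[(bucket, key)]
        return data

    def head(self, bucket, key):
        # Metadata travels with the object itself -- this is what makes
        # object stores easy to scale, index, and query at billions of keys.
        return self._objects[(bucket, key)][1]

store = ToyObjectStore()
store.put("analytics", "2020/q4/report.parquet", b"...",
          content_type="application/parquet")
print(store.head("analytics", "2020/q4/report.parquet")["content_type"])
```

The simplicity of that interface is exactly the appeal; the historical trade-off, as the discussion notes, has been performance, not expressiveness.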
In an unofficial survey of our customers and our employee base, when people described object, they thought of it as law firms storing a Word doc, if you will. I think there's a lack of understanding, or a misnomer, around what modern object has become. Performant object, particularly at scale, when we're talking about billions of objects, is the next frontier. Is it at pace, performance-wise, with the other protocols? No, but it's making leaps and bounds. >> Talk a little bit more about some of the verticals that you see. When I think of financial services, I think transaction processing, but of course they have tons of unstructured data. Are there any patterns you're seeing by vertical market? >> We're not, and that's the interesting thing. As a company with a block heritage, a block DNA, those patterns were pretty easy to spot. There were a certain number of databases that you really needed to support: Oracle, SQL Server, some Postgres work, et cetera, then the modern databases like Cassandra. You knew there were going to be VMware environments; you could see the trends and where things were going. Unstructured data is such a broader, horizontal thing. Inside of oil and gas, for example, you have specific applications and bespoke infrastructures for those applications; inside of media and entertainment, the same thing. The commonality that we're seeing is the modernization of object as a starting point for all of the net new workloads within those industry verticals. That's the most common request we see: what's your object roadmap, what's your object strategy, where do you think
object is going? So there's no single path; it's really just a wide open field in front of us, with common requests across all industries. >> The amazing thing about Pure, just as a kind of quasi armchair historian of the industry, is that Pure was really the only company in many, many years to achieve escape velocity, to break through a billion dollars. 3PAR couldn't do it, Isilon couldn't do it, Compellent couldn't do it, I could go on. But Pure was able to achieve that as an independent company, and so you become a leader. You look at the Gartner Magic Quadrant, you're a leader in there. If you've made it this far, you've got to have some chops. And of course it's very competitive; there are a number of other storage suppliers that have announced products that unify object and file. So I'm interested in how Pure differentiates. Why Pure? >> It's a great question, and one that, having been a longtime Puritan, I take pride in answering. It's actually a really simple answer: it's business model innovation and technology. The technology that goes behind how we do what we do, and I don't just mean the product; innovation is product, but it's also having a better support model, for example. Or, on the business model side, Evergreen Storage, where we look at your relationship to us as a subscription: we're going to take the thing that you've had and modernize it in place over time, such that you're not re-buying that same terabyte or petabyte of storage that you've already paid for. So those are the three legs of the stool that have made Pure clearly differentiated, and I think the market has recognized that. You're right, it's hard to break through to a billion dollars, but I look forward to the
day that we have two billion-dollar products, and I think with that rise in unstructured data, growing to 80% by 2025, and the massive transition that you guys have noted in your HDD slide, it's a huge opportunity for us on the unstructured data side of the house. >> You know, the other thing I'd add, Matt, and I've talked to Coz about this, is it's simplicity first. I've asked them, why don't you do this, why don't you do that, and the answer is always the same: that adds complexity, and we put simplicity for the customer ahead of everything else. I think that's served you very, very well. What about the economics of unified file and object? If you're bringing additional value, presumably there's a cost to that, but there's also got to be a business case behind it. What kind of impact have you seen with customers? >> Yeah, look, I'll go back to something I mentioned earlier, which is the reclamation of floor space, power, and cooling. People want to search for the sexier element, if you will, when it comes to looking at how you derive value from something. But the reality is, if you're reducing your power consumption by a material percentage, power bills matter in big data centers. Customers typically are facing a paradigm of: well, I want to go to the cloud, but the cloud is turning out to be more expensive than I thought it was going to be; or, I've figured out what I can use in the cloud, I thought it was going to be everything, but it's not going to be everything, so hybrid is where we're landing. But I want to be out of the data center business, and I don't want a team of 20 storage people to administer my storage. So there's this very tangible value around: hey, if I could manage
multiple petabytes with one full-time engineer, because the system, to your and Coz's point, was radically simpler to administer and didn't require someone running around swapping drives all the time, would that be a value? The answer is yes, 100% of the time. And then you start to look at the UFFO side from a product perspective: hey, if I have to manage a bespoke environment for this application, and another bespoke environment for that application, and another and another, I'm managing four different things. Can I actually share data across those four different things? There are ways to share data, but for most customers it just gets too complex. How do you even know what your gold master copy of data is if you have it, or try to have it, in four different places, on four different siloed infrastructures? So when you get to how you measure value in UFFO, it's actually being able to have all of that data concentrated in one place, so that you can share it from application to application. >> Got it. We've just got a couple minutes left, and I'm interested in the update on FlashBlade generally, but I also have a specific question. Look, getting file right is hard enough, and you just announced SMB support for FlashBlade. I'm interested in how that fits in; I think it's kind of obvious with file and object converging, but give us the update on FlashBlade, and maybe you could address that specific question. >> Yeah, look, we're tremendously excited about the growth of FlashBlade. We found workloads we never expected to find. The rapid restore workload was one that was actually brought to us by a customer, and it has become one of our top three or four workloads.
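The rapid-restore figures quoted earlier in the conversation (250–270 terabytes an hour) translate directly into recovery windows. A quick back-of-the-envelope; the 1 PB dataset size and the 10 TB/hour slow-restore figure are illustrative assumptions, not numbers from the segment:

```python
def restore_hours(dataset_tb, throughput_tb_per_hour):
    """Wall-clock hours to restore a dataset at a sustained throughput."""
    return dataset_tb / throughput_tb_per_hour

# 1 PB (1000 TB) at the 270 TB/hour figure quoted in the segment:
fast = restore_hours(1000, 270)  # ~3.7 hours
# versus a hypothetical slower tape/disk pipeline at 10 TB/hour:
slow = restore_hours(1000, 10)   # 100 hours, i.e. roughly 4 days
print(f"flash restore: {fast:.1f} h, slow restore: {slow:.0f} h")
```

That gap, hours versus days of downtime, is why restore throughput shows up so prominently in the ransomware discussion above.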
So we're really happy with the trend we've seen in it. And mapping back to thinking about HDDs and SSDs, we're well on a path to building a billion-dollar business here, so we're very excited about that. But to your point, you don't just snap your fingers and get there. We've learned that doing file and object is harder than block, because there are more things you have to go do. For one, you're basically focused on three protocols: SMB, NFS, and S3, not necessarily in that order. To your point about SMB, we are on the path to releasing full native SMB support in the system, and that will allow us to service customers where we have a limitation today: they'll have an SMB portion of their NFS workflow, and we do great on the NFS side, but we didn't have the ability to plug into the SMB component of their workflow. So that's going to open up a lot of opportunity for us on that front. And we continue to invest significantly across the board in areas like security, which has become more than just a hot button; security has always been there, but it feels like it's blazing hot today. So going through the next couple of years we'll be developing some pretty material security elements of the product as well. So, well on a path to a billion dollars is the net on that, and we're fortunate to have SMB coming, and we're looking forward to introducing it to those customers that have NFS workloads today with an SMB component. >> Nice tailwind, good TAM expansion strategy. Matt, thanks so much. We're out of time, but really appreciate you coming on the program. >> We appreciate you having us. Thanks much, Dave, good to see you. >> All right, good to see you. And you're watching The Convergence of
File and Object. Keep it right there, we'll be back with more right after this short break. [Music]
Sam Grocott, Dell Technologies | Exascale Day
>> Narrator: From around the globe. It's theCUBE. With digital coverage of Dell Technologies World-Digital Experience. Brought to you by Dell Technologies. >> Hello everyone, and welcome back to theCUBE's continuing coverage of Dell Tech World 2020. This is Dave Vellante, and I'm here with Sam Grocott, who's the Senior Vice President of Product Marketing at Dell Technologies. Sam, great to see you. Welcome. >> Great to be here, Dave. >> All right, we're going to talk generally about cloud in the coming decade, and really how the cloud model is evolving. But I want to specifically ask Sam about the as a service news that Dell's making at DTW: what those solutions look like, how they're going to evolve. Maybe Sam, we can hit on some of the customer uptake and the feedback as well. Does that sound good? >> Yeah, sounds great. Let's dive right in. >> All right, let's do that. So look, you've come from the world of the disruptor. When you joined Isilon, they got acquired by EMC and then Dell, so you've been on both sides of the competitive table. And cloud is obviously a major force, actually I'd say the major disruptive force, in our industry. Let's talk about how Dell is responding to the cloud trend generally. Then we'll get into the announcements. >> Yeah, certainly. And you're right, I've been on both sides of this. There is no doubt, if you look over the last decade or so, that customers and partners are really evaluating how they can take advantage of the value of moving workloads to the cloud. We've seen it happen over the last decade, and it's happening at a more frequent pace. There's no doubt that's really what planted the seed of this new operating experience, kind of a new lifestyle so to speak, around as a service. Because when you go to the cloud, that's the only way they roll: you get an as a service experience. So that has really started to come into the data center.
As organizations move specific workloads or applications to the cloud, the question becomes: hey, how do I get that as an on-premise experience? Throwing gasoline on that is certainly the pandemic and COVID-19, which has really made organizations evaluate how to move much more quickly and agilely by moving some applications to the cloud, because frankly on-prem just wasn't able to move as fast as they'd like. We're seeing that macrotrend accelerate, and I think we're in good shape to take advantage of it as we go forward. >> Well, that brings us to the hard news of what you're calling Project Apex, i.e. your as a service initiative. What specifically are you announcing this week? >> Yeah. So, Project Apex is one of our big announcements, and that's really where we're targeting how we're bringing together and unifying our product development, our sales go-to-market, our marketing go-to-market, everything coming together underneath Project Apex, which is our as a service and cloud-like experience. Look, we know customers are constantly evaluating which applications stay on-prem and which applications and workloads should go to the cloud. I think the market has voted clearly that it's going to be both; it's going to be a hybrid, multicloud world. But what they're absolutely clear they want is a simple, easy to use as a service experience, regardless of whether they're on-prem or off-prem. And that's where the traditional on-prem solutions fall down, because it's just too darn complex still. They've got many different tools managing many different applications that oversee their cloud operations and their various infrastructure, whether it's server or compute or networking; they all run different tools. So it gets very, very complex. It's also very rigid to scale: you can't move as fast as the cloud, it can't deploy as fast, and it requires manual intervention to buy more.
You typically got to get a sales rep in-house to come in and extend your environment and grow your environment. And then of course, the traditional method is very CapEx heavy. In a world where organizations are really trying to preserve cash. Cash is king. It doesn't really give them the flexibility traditionally or going forward that they'd like to see on that front. So, what they want to see is a consistent operating experience for their on and off-prem environments. They want to see a single tool that can manage, report and grow and do commerce across that environment. Regardless of if it's on or off-prem. They want something that can scale quickly. Now look, when you're moving equipment on-prem, it's not going to be a click of a button. But you should be able to buy and procure that with a click of a button. And then very quickly, within less than a handful of days. That equipment should be stood up deployed and running in their environment. And then finally, it's got to deliver this more flexible finance model. Whether it's leveraging a flexible subscription models or OPEX friendly models. Customers are really looking for that more OPEX friendly approach. Which we're going to be providing with Project Apex. So very, very excited about kind of the goals and the aspirations of Project Apex. We're going to see a lot of it come to market early next year. I think we're well situated, as I said, to take advantage of this opportunity. >> So, when I was looking through the announcement and sort of squinting through it. The three things jumped out and you've definitely hit on those. One is choice. But sometimes you don't want to give customers too much choice. So, it's got to be simple and it's got to be consistent. So, it feels like you're putting this abstraction layer over your entire portfolio and trying to hit on those three items. Which is somewhat of a balancing act. Is that right? >> Yeah. No, you're exactly right. 
The pillars of the Project Apex value proposition, so to speak, are simplicity, choice, and consistency. We've got to deliver that simple, end to end journey view of their entire cloud and as a service experience. It needs to span our entire portfolio. So whether it's servers, storage or networking or PCs or cloud, all of that needs to be integrated into essentially a single web interface that gives you visibility across all of it. And of course the ease of scale up, and frankly scale down; you should be able to do that in real time through the system. Choice is a big, big factor for us. We've got the broadest portfolio in the industry, and we want to provide customers the ability to consume infrastructure any way they want. Clearly they can consume it the traditional way, but this more as a service, flexible consumption approach is fundamental to making sure customers only pay for what they use. So, a highly metered environment: pay as they go, leverage subscriptions, essentially give them that OPEX flexibility that they've been looking for. And then finally, I think the real key differentiator is that consistent operating experience. So whether you move workloads on or off-prem, it's got to be in a single environment that doesn't require you to jump around between different application and management experiences. >> All right, so I've got to ask you the tough question, and I want to hear your answer to it. I mean, we've seen the cloud model; everybody knows it very well. But why now? People are going to say, okay, you're just responding to HPE. What's different between what you're doing and what some of your competitors are doing? >> Yeah. So, I think it really comes down to the choice and breadth of what we're bringing to the table. We're not going to force our customers to go down one of these routes. We're going to provide that ultimate flexibility.
And I think what will really define us against them, and where we'll shine against them, is that consistent operating experience. We've got the opportunity to provide an on-prem, Edge, and cloud experience that doesn't require customers to move out of that operating experience or jump between different tools. So whether you're running a Storage as a service environment, which we'll have in the first half of next year, or looking through our new cloud console that is coming out early next year as well, you're going to have that single view of everything going on across your environment, and you'll be able to move workloads between on-prem and off-prem without breaking that consistent experience. I think that is probably the biggest differentiator we're going to have. When you ladder that onto the general Dell Technologies value of being able to deliver our solutions anywhere in the world, at any point of the data center, at the Edge, or even cloud-native, we've got the broadest portfolio to meet our customer needs wherever we need to go. >> So, my understanding is the offering is designed to encompass the entire Dell Technologies portfolio. >> That's right. >> From client solutions, ISG, et cetera. Not VMware specifically. It's really that whole Dell Technologies portfolio. Correct? >> Yeah, and look, over time we totally expect to be able to transact VMware through this; we do expect that to be part of the solution eventually. So yes, it is across PC as a service, Storage as a service, Infrastructure as a service, our cloud offers, all of our services, traditional services that are helping to deliver this as a service experience. Even our traditional flexible consumption financial models will be included in this. Because again, we want to offer ultimate choice and flexibility. We're not going to force our customers to go down any of these paths; what we want to do is present these paths and go wherever they want to go.
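The metered, pay-as-you-go consumption model described here can be sketched as a simple usage-based invoice. The rate, the committed base capacity, and the meter readings below are illustrative assumptions, not Dell pricing or Dell's actual billing logic:

```python
def monthly_charge(samples_tb, rate_per_tb_month, committed_tb=0.0):
    """Bill the month's average metered usage; a committed base capacity
    (if any) is charged even when usage dips below it."""
    avg_usage = sum(samples_tb) / len(samples_tb)
    billable = max(avg_usage, committed_tb)
    return billable * rate_per_tb_month

# Daily capacity meter readings (TB) for a month that scales up midway:
meter = [100.0] * 10 + [160.0] * 20
print(monthly_charge(meter, rate_per_tb_month=20.0, committed_tb=120.0))
# 140 TB average billed -> 2800.0
```

The contrast with the traditional CapEx model is the point: the customer's bill tracks metered usage up and down, rather than a one-time purchase sized for peak demand.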
We've got the breadth of the portfolio and the offers to get them there. >> Oh, okay. So, it's really a journey. You mentioned Storage as a service coming out first, and then more as well. If I understand it, the idea is, I'm going to have visibility and control over my entire estate, on-prem, cloud, Edge, kind of the whole enchilada. Maybe not right out of the chute, but that's the vision. >> Absolutely. You've got to be able to see all of that and we'll continue to iterate over time and bring more environments, more applications, more cloud environments into this. But that is absolutely the vision of Project Apex, to deliver that fully integrated core, Edge, cloud partner experience to all of the environments our customers could be running in. >> I want to put my customer hat on, my CFO, CIO hat. Okay, what's the fine print? What are the minimum bars to get in? What's the minimum commitment I need to make? What are some of those nuances? >> Yeah. So, both the Storage as a service, which will be our first offer of many in our portfolio, and the cloud console, which will give you that single web interface to kind of manage, report and kind of thrive in this as a service experience. All that will be released in the first half of next year. So, we're still frankly defining what that will look like. But we want to make sure that we deliver a solution that can span all segments. From small business to medium business, to the biggest enterprises out there. Global expansion through our channel partners. We're going to have geos and channel partners fully integrated as well. Service providers as well, as a fundamentally important piece of our delivery model and delivering this experience to our customers. So, the fine print, Dave, will be out early next year, as we GA these releases and bring them to market. But ultimate flexibility and choice, up and down the stack and geographically wide, is the goal and the intent, and we plan to deliver that.
>> Can you add any color to the sort of product journey, if you will? I even hesitate Sam, to use the word product. Because you're really sort of transferring your mindset into a platform mindset and a services mindset. As opposed to bolting services on top of a product. You sell a product and say okay, service guys you take it from here. You have to sort of rethink how you deliver. And so you're saying, you start with storage. So what can we expect over the midterm to long term? >> Yeah. I'll give you an example. Look, we sell a ton of as a service and flexible consumption today. We've been at it for 10 years. In fact, in Q2 our annual recurring revenue rate was 1.3 billion, growing at 30%, very, very pleased. So, this is not new to us. But how you described it Dave is right. We have products, customers then pick their product. They pick the service that they want to bolt on. Then they pick the financial payment model they bolt on. So, it's a very good, customized way to build it. That's great. And customers are going to continue to want that and we will continue to deliver that. But there is an emerging segment that wants more, just kind of think of it as the big easy button. They want to focus on an outcome. Storage as a service is a great example where they're less concerned about what individual product element is part of that. They want it fully managed by Dell Technologies or one of our partners. They don't want to manage it themselves. And of course they want it to be pay-for-use on an OPEX plan that works for their business and gives them that flexibility. So, when customers going forward want to go down this as a service, outcome driven path, they're simply going to say, hey, what data service do I want? Do I want file or block or object? They pick their data service based on their workload. They pick their performance and capacity tier. There is a term limit, right now we're planning one to five years.
Depending on the term length you want to do. And then that's it. It's managed by Dell Technologies. It's on our books from Dell Technologies and it's of course leveraging our great technology portfolio to bring that service and that experience to our customers. So, the service is the product now. It really is making that shift. We are moving into a services driven, services outcome driven set of portfolio and solutions for our customers. >> So, you actually have a lot of data on this. I mean, you talk about a billion dollar business. Maybe talk a little bit about customer uptake. I don't know what you can share in terms of numbers and a number of subscription customers. But I'm really interested in the learnings and the feedback and how that's informed your strategy? >> Yeah. I mean, you're right. Again, we've been at this for many, many years. We have over 2000 customers today that have chosen to take advantage of our flexible consumption and as a service offers that we have today. Never mind, as we move into these kind of turn-key, easy button as a service offers that are to come early next year. So, we've leveraged all of those learnings and we've heard all of that feedback. It's why that choice and flexibility are fundamental to the Project Apex strategy. There are some customers that want to build their own. They want to make sure they're running the latest PowerMax or the latest PowerStore. They want to choose their network. They want to choose how they protect it. They want to choose what type of service. They may want to cover some of the services themselves. They may want very little from us, or vice versa. And then they want to maybe leverage additional, more traditional means to acquire that based on their business goals. That feedback has been loud and clear. But there is that segment that is like, no, no, no. I need to focus more on my business and not my infrastructure.
And that's where you're going to see these more turn-key as a service solutions fit that need. Where they want to just define SLAs, outcomes. They want us to take on the burden of managing it for them. So, they can really focus on their applications and their business, not their infrastructure. So, things like metering. Tons of feedback on how we'll want to meter this. Tons of feedback on the types of configurations and scale they're looking for. The applications and workloads that they're targeting for this world are very different than the more traditional world. So, we're leveraging all of that information to make sure we deliver our Infrastructure as a service and then eventually Solutions as a service. Think about SAP as a service, VDI as a service, AI and machine learning as a service. We'll be moving up the stack as well to meet more of an application-integrated as a service experience as well. >> So, I want to ask you. You've given us a couple of data points there, billion dollar plus business. A couple thousand customers. You've got decent average contract values if I do my math right. So, it's not just the little guys. I'm sorry, it's not just the big guys, but there's some fat middle as well that are taking this up. Is that fair to say? >> Totally. I mean, I would say frankly in the enterprise space, it's the mid to larger size historically and we expect they'll continue to want to kind of choose their best of breed à la carte. Best of breed products, best of breed services, best of breed financial consumption. Great. And we're in great shape there. We're very confident and competitive, competing in that space today. I think going into the turn-key as a service space, that will play up-market. But it will really play down-market, mid-market, smaller businesses. It gives us the opportunity to really drive a solution there.
Where they don't have the resources to maybe manage a large storage infrastructure or a backup infrastructure or compute infrastructure. They're going to frankly look to us to provide that experience for them. I think our as a service offers will really play stronger in that mid and kind of lower end of the market. >> So, tell us again the sort of availability of, like, the console, for example. When can I actually get-- >> Yeah. >> I can do as a service today. I can buy subscriptions from you. >> Absolutely. >> This is where it all comes together. What's the availability and rollout details? >> Sure. As we look to move to our integrated kind of turn-key as a service offers. The console, we're announcing at Dell Technologies World, is in public preview now. So, for organizations, customers that want to start using it, they can start using it now. The Storage as a service offer is going to be available in the first half of next year. So, we're rapidly kind of working on that now. Looking to early next year to bring that to market. So, you'll see the console and the first as a service offer with Storage as a service available in the first half of next year. Readily available to any and everyone that wants to deploy it. We're not that far off right now. But we felt it was really, really important to make sure our customers, our partners and the industry really understand how important this transformation to as a service and cloud is for Dell Technologies. That's why frankly, externally and internally, Project Apex will be that north star to bring our end to end value together across the business. Across our customers, across our teams. And that's why we're really making sure that everybody understands Project Apex and as a service is the future for Dell. And we're very much focused on that. >> As the head of product marketing, this is really a mindset, a cultural change really. You're really becoming the head of service marketing in a way.
How are you guys thinking about that mindset shift? >> Well really, it's how am I thinking about it? How is the broader marketing organization thinking about it? How is engineering clearly thinking about it? How is finance thinking about it? How is sales? This is transformative; every single function within Dell Technologies has a role to play, to do things very differently. Now it's going to take time. It's not going to happen overnight. Various estimates have this as a fairly small percentage of business today in our segments. But we do expect that to start to accelerate, and it has started to ramp. We're preparing for a large percentage of our business to be consumed this way very, very soon. That requires changes in how we sell. Changes in how we market clearly. Changes in how we build products and so forth. And then ultimately, how we account for this has to change. So, we're approaching it I think the right way, Dave. Where we're looking at this truly end to end. This isn't a tweak in how we do things or an evolution. This is a revolution. For us to kind of move faster to this model. Again, building on the learnings that we have today with our strong customer base and experience we've built up over the years. But this is a big shift. This isn't an incremental turn of the crank. We know that. I think you expect that. Our customers expect that. And that's the mission we're on with Project Apex. >> Well, I mean, with 30% growth. I mean, that's a clear indicator and people like growth. No doubt. That's a clear indicator that customers are glomming onto this. I think many folks want to buy this way, and I think increasingly that's how they buy SaaS. That's how they buy cloud. Why not buy infrastructure the same way? Give us your closing thoughts Sam. What are the big takeaways? >> Yeah. The big takeaway, from a Dell Technologies perspective:
Project Apex is that strategic vision of bringing together our as a service and cloud capabilities into an easy to consume, simple, flexible offer that provides ultimate choice to our customers. Look, the market has spoken. We're going to be living in a hybrid multicloud world. I think the market is also starting to speak, that they want that to be an as a service experience, regardless of whether it's on or off-prem. It's our job. It's our responsibility to bring that ease, that simplicity and elegance to the on-prem world. It's certainly not going anywhere. So, that's the mission that we're on with Project Apex. I like the hand we've been dealt. I like the infrastructure and the solutions that we have across our portfolio. And we're going to be after this for the next couple of years, to refine this and build this out for our customers. This is just the beginning. >> Wow, it's awesome. Thank you so much for coming to theCUBE. We're seeing the cloud model. It's extending on-prem, cloud, multicloud, it's going to the Edge. And the way in which customers want to transact business is moving in the same direction. So, Sam good luck with this and thanks so much. Appreciate your time. >> Yeah, thanks Dave. Thanks everyone. Take care. >> All right and thank you for watching. This is Dave Vellante for theCUBE and our continuing coverage of Dell Tech World 2020. The virtual CUBE. We'll be right back right after this short break. (gentle music)
Bill Sharp
>> Announcer: From around the globe, it's theCUBE! With digital coverage of Dell Technologies World, digital experience. Brought to you by Dell Technologies. >> Welcome to theCUBE's coverage of Dell Technologies World 2020, the digital coverage. I'm Lisa Martin, and I'm excited to be talking with one of Dell Technologies' customers, EarthCam. Joining me is Bill Sharp, the senior VP of product development and strategy from EarthCam. Bill, welcome to theCUBE. >> Thank you so much. >> So talk to me a little bit about what EarthCam does. This is very interesting webcam technology. You guys have tens of thousands of cameras and sensors all over the globe. Give our audience an understanding of what you guys are all about. >> Sure thing. The world's leading provider of webcam technologies, you mentioned content and services, we're leaders in live streaming, time-lapse imaging, primary focus in the construction vertical. So with a lot of these, the most ambitious, largest construction projects around the world that you see these amazing time-lapse movies, we're capturing all of that imagery basically around the clock, these cameras are sending all of that image content to us and we're generating these time-lapse movies from it.
In terms of what type of data you're capturing from all of these thousands of edge devices, give us a little bit of an insight into how much data you're capturing per day, how it gets from the edge, presumably, back to your core data center for editing. >> Sure, and it's not just construction. We're also in travel, hospitality, tourism, security, architecture, engineering, basically any industry that needs high resolution visualization of their projects or their performance or their product flow. So high resolution documentation is basically our business. There are billions of files in the Isilon system right now. We are ingesting millions of images a month. We are also creating very high resolution panoramic imagery where we're taking hundreds and sometimes multiple hundreds of images, very high resolution images, and stitching these together to make panoramas that are up to 30 gigapixels sometimes. Typically around one to two gigapixels, but that composite imagery represents millions of images per month coming into the storage system and then being stitched together into those composites. >> So millions of images coming in every month, you mentioned Isilon. Talk to me a little bit about before you were working with Dell EMC and PowerScale, how were you managing this massive volume of data? >> Sure, we've used a number of other enterprise storage systems. Really, nothing was as easy to manage as Isilon is. There were a lot of problems with overhead, the amount of time necessary from a systems administrator resource standpoint, to manage that. And it's interesting with the amount of data that we handle, being billions of relatively small files. They're, you know, a half a megabyte to a couple of megabytes each. It's an interesting data profile which Isilon really is well suited for.
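The gigapixel figures quoted above follow directly from the tile-grid arithmetic of panorama stitching. A rough sketch of that arithmetic only — the grid size, tile resolution, and overlap fraction below are hypothetical, not EarthCam's actual capture settings:

```python
# Sketch: estimate the pixel count of a stitched panorama built from a
# grid of overlapping high-resolution tiles (all numbers illustrative).
def composite_pixels(cols, rows, tile_w, tile_h, overlap=0.2):
    """Approximate output pixels for a cols x rows grid of tiles,
    where adjacent tiles overlap by the given fraction."""
    step_w = tile_w * (1 - overlap)   # unique pixels each new column adds
    step_h = tile_h * (1 - overlap)   # unique pixels each new row adds
    width = tile_w + (cols - 1) * step_w
    height = tile_h + (rows - 1) * step_h
    return int(width) * int(height)

# e.g. a 20 x 10 grid of 24-megapixel tiles (6000 x 4000) at 20% overlap
px = composite_pixels(20, 10, 6000, 4000, 0.2)
print(f"{px / 1e9:.1f} gigapixels")  # prints "3.2 gigapixels"
```

Two hundred tiles of this hypothetical size already land in the low-gigapixel range, which makes the "hundreds of images per panorama, millions per month" data profile described here easy to believe.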
>> So if we think about some of the massive changes that we've all been through in 2020, what are some of the changes that EarthCam has seen with respect to the needs of organizations, or you mentioned other industries like travel, hospitality, since none of us can get to these great travel destinations, have you seen a big drive up in the demand and the need to process more data faster? >> Yeah, that's an interesting point with the pandemic. I mean, obviously we had to pivot and move a lot of people to working from home, which we were able to do pretty quickly, but there's also an interesting opportunity that arose from this where so many of our customers and other people had to do the same. And there is an increased demand for our technology. So people can remotely collaborate. They can work at a distance, they can stay at home and see what's going on in these project sites. So we really saw kind of an uptick in the need for our products and services. And we've also created some basically virtual travel applications. We have an application on the Amazon Fire TV which is the number one app in the travel platform, and people can kind of virtually travel when they can't really get out there. So we've been kind of giving back to people that are having some issues with being able to travel around. We've done the fireworks at the Washington Mall and around the Statue of Liberty for July 4th. And this year we'll be webcasting New Year's in Times Square for our 25th year, actually. So again, helping people travel virtually and maintain connectivity with each other, and with their projects. >> Which is so essential during these times where for the last six, seven months, everyone is trying to get a sense of community and most of us just have the internet. So I also heard you guys were available on the Apple TV, someone should fire that up later and maybe virtually travel.
But tell me a little bit about how working in conjunction with Dell Technologies and PowerScale. How has that enabled you to manage this massive volume change that you've experienced this year? Because as you said, it's also about facilitating collaboration which is largely online these days. >> Yeah, and I mean, the great things of working with Dell has been just our confidence in this infrastructure. Like I said, the other systems we've worked with in the past we've always found ourselves kind of second guessing. We're constantly innovating. Obviously resolutions are increasing. The camera performance is increasing, streaming video is, everything is constantly getting bigger and better, faster, more, and we're always innovating. We found ourselves on previous storage platforms having to really kind of go back and look at them, second guess where we're at with it. With the Dell infrastructure it's been fantastic. We don't really have to think about that as much. We just continue innovating, everything scales as we need it to do. It's much easier to work with. >> So you've got PowerScale at your core data center in New Jersey. Tell me a little bit about how data gets from these tens of thousands of devices at the edge, back to your editors for editing, and how PowerScale facilitates faster editing, for example. >> Well, basically you can imagine every one of these cameras, and it's not just cameras. It's also, you know, we have 360 virtual reality kind of bubble cameras. We have mobile applications, we have fixed position and robotic cameras. There's all these different data acquisition systems we're integrating with weather sensors and different types of telemetry. All of that data is coming back to us over the internet. So these are all endpoints in our network. So that's constantly being ingested into our network and saved to Isilon. 
The big thing that's really been a time saver working with the video editors is instead of having to take that content, move it into an editing environment where we have a whole team of award-winning video editors creating these time lapses. We don't need to keep moving that around. We're working natively on Isilon clusters. They're doing their editing there, and subsequent edits. Anytime we have to update or change these movies as a project evolves, that's all, can happen right there on that live environment. And the retention is there. If we have to go back later on, all of our customers' data is really kept within that one area, it's consolidated and it's secure. >> I was looking at the Dell Tech website, and there's a case study that you guys did, EarthCam did with Dell Tech saying that the video processing time has been reduced 20%. So that's a pretty significant improvement. I can imagine with the volumes changing so much now, not only is that huge for your business but for the demands that your customers have as well, depending on where those demands are coming from. >> Absolutely. And just being able to do that a lot faster and be more nimble allows us to scale. We've added actually, again, speaking of during this pandemic, we've actually added personnel, we've been hiring people. A lot of those people are working remotely as we've stated before. And it's just with the increase in business, we have to continue to keep building on that, and this storage environment's been great. >> Tell me about what you guys really kind of think about with respect to PowerScale in terms of data management, not storage management, and what that difference means to your business. >> Well, again, I mean, number one was really eliminating the amount of resources. The amount of time we have to spend managing it. We've almost eliminated any downtime of any kind. We have greater storage density, we're able to have better visualization on how our data is being used, how it's being accessed.
So as these things are evolving, we really have good visibility on how the storage system is being used in both our production and also in our backup environments. It's really, really easy for us to make our business decisions as we innovate and change processes, having that continual visibility and really knowing where we stand. >> And you mentioned hiring folks during the pandemic, which is fantastic, but also being able to do things in a much more streamlined way with respect to managing all of this data. But I am curious in terms of innovation and new product development, what have you been able to achieve? Because you've got more resources presumably to focus on being more innovative rather than managing storage. >> Well, again, it's, we're always really pushing the envelope of what the technology can do. As I mentioned before, we're getting things into, you know, 20 and 30 gigapixels, people are talking about megapixel images, we're stitching hundreds of these together. We're just really changing the way imagery is used both in the time lapse and also just in archival process. A lot of these things we've done with the interior, we have this virtual reality product where you can walk through and see in a 360 bubble, we're taking that imagery and we're combining it with these BIM models. So we're actually taking the 3D models of the construction site and combining it with the imagery. And we can start doing things to visualize progress, and different things that are happening on the site, look for clashes or things that aren't built like they're supposed to be built, things that maybe aren't done on the proper schedule or things that are maybe ahead of schedule, doing a lot of things to save people time and money on these construction sites. We've also introduced AI and machine learning applications directly into the workflow in the storage environment. 
So we're detecting equipment and people and activities on the site where a lot of that would have been difficult with our previous infrastructure. It really is seamless working with Isilon now. >> I imagine by being able to infuse AI and machine learning, you're able to get insights faster, to be able to either respond faster to those construction customers, for example, or alert them if perhaps something isn't going according to plan. >> Yeah, a lot of it's about schedule, it's about saving money, about saving time. And again, with not as many people traveling to these sites, they really just have to have constant visualization of what's going on day to day. We're detecting things like different types of construction equipment and things that are happening on the site. We're partnering with people that are doing safety analytics and things of that nature. So these are all things that are very important to construction sites. >> As we are rounding out the calendar year 2020, what are some of the things that you're excited about going forward in 2021, that EarthCam is going to be able to get into and deliver? >> Just more and more people really finally seeing the value. I mean I've been doing this for 20 years and it's just, it's amazing how we're constantly seeing new applications and more people understanding how valuable these visual tools are. That's just a fantastic thing for us because we're really trying to create better lives through visual information. We're really helping people with the things they can do with this imagery. That's what we're all about. And that's really exciting to us in a very challenging environment right now is that people are recognizing the need for this technology and really starting to put it on a lot more projects.
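EarthCam doesn't detail its detection pipeline here, but once a model emits per-image labels, the day-to-day site activity reporting described above is a straightforward aggregation step. A minimal sketch — the labels, dates, and record layout below are invented for illustration, not EarthCam's actual schema:

```python
# Sketch: roll per-image detections up into a daily equipment/activity
# report for a construction site (labels and records are hypothetical).
from collections import Counter
from datetime import date

detections = [
    {"day": date(2020, 10, 5), "label": "excavator"},
    {"day": date(2020, 10, 5), "label": "crane"},
    {"day": date(2020, 10, 5), "label": "excavator"},
    {"day": date(2020, 10, 6), "label": "worker"},
]

def daily_counts(records):
    """Count detected labels per day: {day: Counter({label: n})}."""
    report = {}
    for r in records:
        report.setdefault(r["day"], Counter())[r["label"]] += 1
    return report

for day, counts in sorted(daily_counts(detections).items()):
    print(day, dict(counts))
# 2020-10-05 {'excavator': 2, 'crane': 1}
# 2020-10-06 {'worker': 1}
```

Comparing a day's counts against the project schedule is then a simple lookup, which is one way the schedule and safety alerting mentioned here could be driven.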
>> Well, you can kind of consider it an essential service whether or not it's a construction company that needs to manage and oversee their projects, making sure they're on budget, on schedule, as you said, or maybe even just the essentialness of helping folks from any country in the world connect with a favorite travel location, or (indistinct) to help from an emotional perspective. I think the essentialness of what you guys are delivering is probably even more impactful now, don't you think? >> Absolutely. And again about connecting people when they're at home, and recently we webcast the president's speech from the Flight 93 9/11 observation from the memorial, there was something where only the immediate families were allowed to travel there. We webcast that so people could see that around the world. We've documented, again, some of the biggest construction projects out there, the new Raiders stadium was one of the recent ones, just delivering this kind of flagship content. Wall Street Journal has used some of our content recently to really show the things that have happened during the pandemic in Times Square. We have these cameras around the world. So again, it's really bringing awareness. So letting people virtually travel and share and really remain connected during this challenging time. And again, we're seeing a real increased demand in the traffic in those areas as well. >> I can imagine some of these things that you're doing that you're achieving now are going to become permanent not necessarily artifacts of COVID-19, as you now have the opportunity to reach so many more people and probably the opportunity to help industries that might not have seen the value of this type of video to be able to reach consumers that they probably could never reach before. >> Yeah, I think the whole nature of business and communication and travel and everything is really going to be changed from this point forward. 
It's really, people are looking at things very, very differently. And again, seeing that the technology really can help with so many different areas that it's just, it's going to be a different kind of landscape out there we feel. And that's really continuing to be seen as on the uptick in our business and how many people are adopting this technology. We're developing a lot more partnerships with other companies, we're expanding into new industries. And again, you know, we're confident that the current platform is going to keep up with us and help us really scale and evolve as these needs are growing. >> It sounds to me like you have the foundation with Dell Technologies, with PowerScale, to be able to facilitate the massive growth that you were saying and the scale in the future, you've got that foundation, you're ready to go. >> Yeah, we've been using the system for five years already. We've already added capacity. We can add capacity on the fly, really haven't hit any limits in what we can do. It's almost infinitely scalable, highly redundant. It gives everyone a real sense of security on our side. And you know, we can just keep innovating, which is what we do, without hitting any technological limits with our partnership. >> Excellent, well, Bill, I'm going to let you get back to innovating for EarthCam. It's been a pleasure talking to you. Thank you so much for your time today. >> Thank you so much. It's been a pleasure. >> For Bill Sharp, I'm Lisa Martin, you're watching theCUBE's digital coverage of Dell Technologies World 2020. Thanks for watching. (calm music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Bill Sharp | PERSON | 0.99+ |
Dell Tech | ORGANIZATION | 0.99+ |
20 years | QUANTITY | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Dell Technologies' | ORGANIZATION | 0.99+ |
2020 | DATE | 0.99+ |
New Jersey | LOCATION | 0.99+ |
Bill | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
Washington Mall | LOCATION | 0.99+ |
20 | QUANTITY | 0.99+ |
Times Square | LOCATION | 0.99+ |
COVID-19 | OTHER | 0.99+ |
July 4th | DATE | 0.99+ |
billions | QUANTITY | 0.99+ |
20% | QUANTITY | 0.99+ |
millions of images | QUANTITY | 0.99+ |
Fire TV | COMMERCIAL_ITEM | 0.99+ |
25th year | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Apple TV | COMMERCIAL_ITEM | 0.98+ |
Isilon | ORGANIZATION | 0.98+ |
New Years | EVENT | 0.98+ |
hundreds of images | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
both | QUANTITY | 0.98+ |
PowerScale | ORGANIZATION | 0.98+ |
Statue of Liberty | LOCATION | 0.98+ |
30 gigapixels | QUANTITY | 0.98+ |
tens of thousands of cameras | QUANTITY | 0.97+ |
Dell Technologies World | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.96+ |
theCUBE | ORGANIZATION | 0.96+ |
seven months | QUANTITY | 0.96+ |
two gigapixel | QUANTITY | 0.96+ |
Panama Canal | LOCATION | 0.96+ |
up to 30 gigapixel | QUANTITY | 0.96+ |
billions of files | QUANTITY | 0.96+ |
this year | DATE | 0.95+ |
Dell Technologies | ORGANIZATION | 0.94+ |
EarthCam | ORGANIZATION | 0.94+ |
360 bubble | QUANTITY | 0.93+ |
Dell EMC | ORGANIZATION | 0.91+ |
second | QUANTITY | 0.9+ |
tens of thousands of devices | QUANTITY | 0.9+ |
Dell Technologies World 2020 | EVENT | 0.9+ |
9/11 | EVENT | 0.89+ |
EarthCam | COMMERCIAL_ITEM | 0.87+ |
millions of images a month | QUANTITY | 0.87+ |
Saddle River, New Jersey | LOCATION | 0.85+ |
a half a megabyte | QUANTITY | 0.84+ |
pandemic | EVENT | 0.84+ |
couple of megabytes | QUANTITY | 0.82+ |
year 2020 | DATE | 0.82+ |
Wall Street Journal | ORGANIZATION | 0.81+ |
PowerScale | TITLE | 0.8+ |
thousands of edge devices | QUANTITY | 0.79+ |
one area | QUANTITY | 0.79+ |
Raiders stadium | LOCATION | 0.77+ |
360 virtual | QUANTITY | 0.77+ |
around one | QUANTITY | 0.73+ |
Eric Herzog, IBM | VMworld 2020
>> Announcer: From around the globe, it's theCUBE. With digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Welcome back, I'm Stu Miniman. This is theCUBE's coverage of VMworld 2020 of course, happening virtually. And there are certain people that we talk to every year at theCUBE, and this guest, I believe, has been on theCUBE at VMworld more than any others. It's actually not Pat Gelsinger, Eric Herzog. He is the chief marketing officer and vice president of global storage channels at IBM. Eric, Mr. Zoginstor, welcome back to theCUBE, nice to see you. >> Thank you very much, Stu. IBM always enjoys hanging with you, John, and Dave. And again, glad to be here, although not in person this time at VMworld 2020 virtual. Thanks again for having IBM. >> Alright, so, you know, some things are the same, others, very different. Of course, Eric, IBM, a long, long partner of VMware's. Why don't you set up for us a little bit, you know, 2020, the major engagements, what's new with IBM and VMware? >> So, a couple of things, first of all, we have made our Spectrum Virtualize software, software defined block storage work in virtual machines, both in AWS and IBM Cloud. So we started with IBM Cloud and then earlier this year with AWS. So now we have two different cloud platforms where our Spectrum Virtualize software sits in a VM at the cloud provider. The other thing we've done, of course, is V7 support. In fact, I've done several VMUGs. And in fact, my session at VMworld is going to talk about both our support for V7 but also what we're doing with containers, CSI, Kubernetes overall, and how we can support that in a virtual VMware environment, and also we're doing with traditional ESX and VMware configurations as well. And of course, out to the cloud, as I just talked about. >> Yeah, that discussion of hybrid cloud, Eric, is one that we've been hearing from IBM for a long time. 
And VMware has had that message, but their cloud solutions have really matured. They've got a whole group going deep on cloud native. The Amazon solutions have been something that they've been partnering, making sure that, you know, data protection, it can span between, you know, the traditional data center environment where VMware is so dominant, and the public clouds. You're giving a session on some of those hybrid cloud solutions, so share with us a little bit, you know, where do the visions completely agree? What's some of the differences between what IBM is doing and maybe what people are hearing from VMware? >> Well, first of all, our solutions don't always require VMware to be installed. So for example, if you're doing it in a container environment, for example, with Red Hat OpenShift, that works slightly different. Not that you can't run Red Hat products inside of a virtual machine, which you can, but in this case, I'm talking Red Hat native. We also of course do VMware native and support what VMware has announced with their Kubernetes based solutions that they've been talking about since VMworld last year, obviously when Pat made some big announcements onstage about what they were doing in the container space. So we've been following that along as well. So from that perspective, we have agreement on a virtual machine perspective and of course, what VMware is doing with the container space. But then also a slightly different one when we're doing Red Hat OpenShift as a native configuration, without having a virtual machine involved in that configuration. So those are both the commonalities and the differences that we're doing with VMware in a hybrid cloud configuration. >> Yeah. Eric, you and I both have some of those scars from making sure that storage works in a virtual environment. It took us about a decade to get things to really work at the VM level. 
Containers, it's been about five years, it feels like we've made faster progress to make sure that we can have stateful environments, we can tie up with storage, but give us a little bit of a look back as to what we've learned and how we've made sure that containerized, Kubernetes environments, you know, work well with storage for customers today. >> Well, I think there's a couple of things. First of all, I think all the storage vendors learned from VMware, and then the expansion of virtual environments beyond VMware to other virtual environments as well. So I think all the storage vendors, including IBM, learned through that process, okay, when the next thing comes, which of course in this case happens to be containers, both in a VMware environment, but in an open environment with the Kubernetes management framework, that you need to be able to support it. So for example, we have done several different things. We support persistent volumes in file, block, and object store. And we started with that almost three years ago on the block side, then we added the file side and now the object storage side. We also can back up data that's in those containers, which is an important feature, right? I am sitting there and I've got data now in a persistent volume, but I've got to back it up as well. So we've announced support for container based backup either with Red Hat OpenShift or in a generic Kubernetes environment, because we're realistic at IBM. We know that you have to exist in the software infrastructure milieu, and that includes VMware and competitors of VMware. It includes Red Hat OpenShift, but also competitors to Red Hat. And we've made sure that we support whatever the end user needs. So if they're going with Red Hat, great. If they're going with a generic container environment, great. If they're going to use VMware's container solutions, great. And on the virtualization engines, the same thing. We started with VMware, but also have added other virtualization engines. 
So I think the storage community as a whole, and IBM in particular, has learned, we need to be ready day one. And like I said, three years ago, we already had persistent volume support for block store. It's still the dominant storage and we had that three years ago. So for us, that would be really, I guess, two years from what you've talked about when containers started to take off. And within two years we had something going that was working at the end user level. Our sales team could sell our business partners. As you know, many of the business partners are really rallying around containers, whether it be Red Hat or in what I'll call a more generic environment as well. They're seeing the forest through the trees. I do think when you look at it from an end user perspective, though, you're going to see all three. So, particularly in the Global Fortune 1000, you're going to see Red Hat environments, generic Kubernetes environments, VMware environments, just like you often see in some instances, heterogeneous virtualization environments, and you're still going to see bare metal. So I think it's going to vary by application workload and use case. And I think all, I'd say midsize enterprise up, let's say, $5 billion company and up, probably will have at least two, if not all three of those environments, container, virtual machine, and bare metal. So we need to make sure that at IBM we support all those environments to keep those customers happy. >> Yeah, well, Eric, I think anybody, everybody in the industry knows, IBM can span those environments, you know, support through generations. And very much knows that everything in IT tends to be additive. You mentioned customers, Eric, you talk to a lot of customers. So bring us inside, give us a couple examples if you would, how are they dealing with this transition? For years we've been talking about, you know, enabling developers, having them be tied more tightly with what the enterprise is doing. 
So what are you seeing from some of your customers today? >> Well, I think the key thing is they'd like to use data reuse. So, in this case, think of a backup, a snap or replica dataset, which is real world data, and being able to use that and reuse that. And now the storage guys want to make sure they know who's, if you will, checked it out. We do that with our Spectrum Copy Data Management. You also have, of course, integration with the Ansible framework, which IBM supports, in fact, we'll be announcing some additional support for more features in Ansible coming at the end of October. We'll be doing a large launch, very heavily on containers. Containers and primary storage, containers in hybrid cloud environments, containers in big data and AI environments, and containers in the modern data protection and cyber resiliency space as well. So we'll be talking about some additional support in this case about Ansible as well. So you want to make sure, one of the key things, I think, if you're a storage guy, if I'm the VP of infrastructure, or I'm the CIO, even if I'm not a storage person, in fact, if you think about it, I'm almost 70 now. I have never, ever, ever, ever met a CIO who used to be a storage guy, ever. Whether I, I've been with big companies, I was at EMC, I was at Seagate Maxtor, I've been at IBM actually twice. I've also done seven startups, as you guys know at theCUBE. I have never, ever met a CIO who used to be a storage person. Ever, in all those years. So, what appeals to them is, how do I let the dev guys and the test guys use that storage? At the same time, they're smart enough to know that the software guys and the test guys could actually screw up the storage, lose the data, or if they don't lose the data, cost them hundreds of thousands to millions of dollars because they did something wrong and they have to reconfigure all the storage solutions. 
So you want to make sure that the CIO is comfortable, that the dev and the test teams can use that storage properly. It's a part of what Ansible's about. You want to make sure that you've got tight integration. So for example, we announced a container native version of our Spectrum Discover software, which gives you comprehensive metadata, cataloging and indexing. Not only for IBM's scale-out file, Spectrum Scale, not only for IBM object storage, IBM cloud object storage, but also for Amazon S3 and also for NetApp filers and also for EMC Isilon. And it's a container native. So you want to make sure in that case, we have an API. So the AI software guys, or the big data software guys could interface with that API to Spectrum Discover, let them do all the work. And we're talking about a piece of software that can traverse billions of objects in two seconds, billions of them. And is ideal to use in solutions that are hundreds of petabytes, up into multiple exabytes. So it's a great way that by having that API where the CIO is confident that the software guys can use the API, not mess up the storage because you know, the storage guys and the data scientists can configure Spectrum Discover and then save it as templates and run an AI workload every Monday, and then run a big data workload every Tuesday, and then Wednesday run a different AI workload and Thursday run a different big data. And so once they've set that up, everything is automated. And CIOs love automation, and they really are sensitive. Although they're all software guys, they are sensitive to software guys messing up the storage 'cause it could cost them money, right? So that's their concern. We make it easy. >> Absolutely, Eric, you know, it'd be lovely to say that storage is just invisible, I don't need to think about it, but when something goes wrong, you need those experts to be able to dig in. You spent some time talking about automation, so critically important. How about the management layer? 
You know, you think back, for years it was, vCenter would be the place that everything can plug in. You could have more generalists using it. The HCI waves were people kind of getting away from being storage specialists. Today VMware has, of course vCenter's their main estate, but they have Tanzu. On the IBM and Red Hat side, you know, this year you announced the Advanced Cluster Management. What's that management landscape look like? How does the storage get away from managing some of the bits and bytes and, you know, just embrace more of that automation that you talked about? >> So in the case of IBM, we make sure we can support both. We need to appeal to the storage nerd, the storage geek if you will. The same time to a more generalist environment, whether it be an infrastructure manager, whether it be some of the software guys. So for example, we support, obviously vCenter. We're going to be supporting all of the elements that are going to happen in a container environment that VMware is doing. We have hot integration and big time integration with Red Hat's management framework, both with Ansible, but also in the container space as well. We're announcing some things that are coming again at the end of October in the container space about how we interface with the Red Hat management schema. And so you don't always have to have the storage expert manage the storage. You can have the Red Hat administrator, or in some cases, the DevOps guys do it. So we're making sure that we can cover both sides of the fence. Some companies, this just my personal belief, that as containers become commonplace while the software guys are going to want to still control it, there eventually will be a Red Hat/container admin, just like all the big companies today have VMware admins. They all do. Or virtualization admins that cover VMware and VMware's competitors such as Hyper-V. They have specialized admins to run that. 
And you would argue, VMware is very easy to use, why aren't the software guys playing with it? 'Cause guess what? Those VMs are sitting on servers containing both apps and data. And if the software guy comes in to do something, messes it up, so what have the big entities done? They've created basically a virtualization admin layer. I think that over time, either the virtualization admins become virtualization/container admins, or if it's big enough for both estates, there'll be container admins at the Global Fortune 500, and they'll also be virtualization admins. And then the software guys, the devOps guys will interface with that. There will always be a level of management framework. Which is why we integrate, for example, with vCenter, what we're doing with Red Hat, what we do with generic Kubernetes, to make sure that we can integrate there. So we'll make sure that we cover all areas because a number of our customers are very large, but some of our customers are very small. In fact, we have a company that's in the software development space for autonomous driving. They have over a hundred petabytes of IBM Spectrum Scale in a container environment. So that's a small company that's gone all containers, at the same time, we have a bunch of course, Global Fortune 1000s where IBM plays exceedingly well that have our products. And they've got some stuff sitting in VMware, some stuff sitting in generic Kubernetes, some stuff sitting in Red Hat OpenShift and some stuff still in bare metal. And in some cases they don't want their software people to touch it, in other cases, these big accounts, they want their software people empowered. So we're going to make sure we could support both and both management frameworks. Traditional storage management framework with each one of our products and also management frameworks for virtualization, which we've already been doing. And now management frameworks for containers. 
We'll make sure we can cover all three of those bases 'cause that's what the big entities will want. And then in the smaller names, you'll have to see who wins out. I mean, they may still use three in a small company, you really don't know, so you want to make sure you've got everything covered. And it's very easy for us to do this integration because of things we've already historically done, particularly with the virtualization environment. So yes, the interstices of the integration are different, but we know here's kind of the process to do the interconnectivity between a storage management framework and a generic management framework, in, originally of course, vCenter, and now doing it for the container world as well. So at least we've learned best practices and now we're just tweaking those best practices in the difference between a container world and a virtualization world. >> Eric, VMworld is one of the biggest times of the year, where we all get together. I know how busy you are going to the show, meeting with customers, meeting with partners, you know, walking the hallways. You're one of the people that traveled more than I did pre-COVID. You know, you're always at the partner shows and meeting with people. Give us a little insight as to how you're making sure that, partners and customers, those conversations are still happening. We understand everything over video can be a little bit challenging, but, what are you seeing here in 2020? How's everybody doing? >> Well, so, a couple of things. First of all, I already did two partner meetings today. (laughs) And I have an end user meeting, two end user meetings tomorrow. So what we've done at IBM is make sure we do a couple things. One, short and to the point, okay? We have automated tools to actually show, drawing, just like the infamous walk up to the whiteboard in a face to face meeting, we've got that. We've also now tried to make sure everybody isn't being overly inundated with WebEx. 
And by the way, there's already a lot of WebEx anyway. I can think of meeting I had with a telco, one of the Fortune 300, and this was actually right before Thanksgiving. I was in their office in San Jose, but they had guys in Texas and guys in the East Coast all on. So we're still over WebEx, but it also was a two and a half hour meeting, actually almost a three hour meeting. And both myself and our Flash CTO went up to the whiteboard, which you could then see over WebEx 'cause they had a camera showing up onto the whiteboard. So now you have to take that and use integrated tools. One, but since people are now, I would argue, over WebEx. There is a different feel to doing the WebEx than when you're doing it face to face. We have to fly somewhere, or they have to fly somewhere. We have to even drive somewhere, so in between meetings, if you're going to do four customer calls, Stu, as you know, I travel all over the world. So I was in Sweden actually right before COVID. And in one day, the day after we had a launch, we launched our new Flash System products in February on the 11th, on February 12th, I was still in Stockholm and I had two partner meetings and two end user meetings. But the sales guy was driving me around. So in between the meetings, you'd be in the car for 20 minutes or half an hour. So it connects different when you can do WebEx after WebEx after WebEx with basically no break. So you have to be sensitive to that when you're talking to your partners, sensitive of that when you're talking to the customers sensitive when you're talking to the analysts, such as you guys, sensitive when you're talking to the press and all your various constituents. So we've been doing that at IBM, really, since the COVID thing got started, is coming up with some best practices so we don't overtax the end users and overtax our channel partners. 
>> Yeah, Eric, the joke I had on that is we're all following the Bill Belichick model now, no days off, just meeting, meeting, meeting every day, you can stack them up, right? You used to enjoy those downtimes in between where you could catch up on a call, do some things. I had to carve out some time to make sure that stack of books that normally I would read in the airports or on flights, everything, you know. I do enjoy reading a book every now and again, so. Final thing, I guess, Eric. Here at VMworld 2020, you know, give us final takeaways that you want your customers to have when it comes to IBM and VMware. >> So a couple of things, A, we were tightly integrated and have been tightly integrated for what they've been doing in their traditional virtualization environment. As they move to containers we'll be tightly integrated with them as well, as well as other container platforms, not just from IBM with Red Hat, but again, generic Kubernetes environments with open source container configurations that don't use IBM Red Hat and don't use VMware. So we want to make sure that we span that. In traditional VMware environments, like with Version 7 that came out, we make sure we support it. In fact, VMware just announced support for NVMe over Fibre Channel. Well, we've been shipping NVMe over Fibre Channel for just under two years now. It'll be almost two years, well, it will be two years in October. So we're sitting here in September, it's almost been two years since we've been shipping that. But they haven't supported it, so now of course we actually, as part of our launch, and I'll pre-announce something here, the last week of October at IBM's TechU, it'll be on October 27th, you can join for free. You don't need to attend TechU, we'll have a free registration page. 
So just follow Zoginstor or look at my LinkedIns 'cause I'll be posting shortly when we have the link, but we'll be talking about things that we're doing around V7, with support for VMware's announcement of NVMe over Fibre Channel, even though we've had it for two years coming next month. But they're announcing support, so we're doing that as well. So all of those sort of checkbox items, we'll continue to do as they push forward into the container world. IBM will be there right with them as well because we know it's a very large world and we need to support everybody. We support VMware. We supported their competitors in the virtualization space 'cause some customers have, in fact, some customers have both. They've got VMware and maybe one other of the virtualization elements. Usually VMware is the dominant of course, but if they've got even a little bit of it, we need to make sure our storage works with it. We're going to do the same thing in the container world. So we will continue to push forward with VMware. It's a tight relationship, not just with IBM Storage, but with the server group, clearly with the cloud team. So we need to make sure that IBM as a company stays very close to VMware, as well as, obviously, what we're doing with Red Hat. And IBM Storage makes sure we will do both. I like to say that IBM Storage is a Switzerland of the storage industry. We work with everyone. We work with all these infrastructure players from the software world. And even with our competitors, our Spectrum Virtualize software that comes on our FlashSystem arrays supports over 550 different storage arrays that are not IBM's. Delivering enterprise-class data services, such as snapshots, replication, data-at-rest encryption, migration, all those features, but you can buy the software and use it with our competitors' storage arrays. 
So at IBM we've made a practice of making sure that we're very inclusive with our software business across the whole company and in storage in particular with things like Spectrum Virtualize, with what we've done with our backup products, of course we backup everybody's stuff, not just ours. We're making sure we do the same thing in the virtualization environment. Particularly with VMware and where they're going into the container world and what we're doing with our own, obviously sister division, Red Hat, but even in a generic Kubernetes environment. Everyone's not going to buy Red Hat or VMware. There are people going to do Kubernetes industry standard, they're going to use that, if you will, open source container environment with Kubernetes on top and not use VMware and not use Red Hat. We're going to make sure if they do it, what I'll call generically, if they use Red Hat, if they use VMware or some combo, we will support all of it and that's very important for us at VMworld to make sure everyone is aware that while we may own Red Hat, we have a very strong, powerful connection to VMware and going to continue to do that in the future as well. >> Eric Herzog, thanks so much for joining us. Always a pleasure catching up with you. >> Thank you very much. We love being with theCUBE, you guys do great work at every show and one of these days I'll see you again and we'll have a beer. In person. >> Absolutely. So, definitely, Dave Vellante and John Furrier send their best, I'm Stu Miniman, and thank you as always for watching theCUBE. (relaxed electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Eric | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Eric Herzog | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Zoginstor | PERSON | 0.99+ |
Texas | LOCATION | 0.99+ |
Dave | PERSON | 0.99+ |
Stockholm | LOCATION | 0.99+ |
Sweden | LOCATION | 0.99+ |
20 minutes | QUANTITY | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
$5 billion | QUANTITY | 0.99+ |
San Jose | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
February | DATE | 0.99+ |
September | DATE | 0.99+ |
billions | QUANTITY | 0.99+ |
2020 | DATE | 0.99+ |
October 27th | DATE | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
VMworld | ORGANIZATION | 0.99+ |
two seconds | QUANTITY | 0.99+ |
half an hour | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Thursday | DATE | 0.99+ |
Wednesday | DATE | 0.99+ |
Red Hat | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
February 12th | DATE | 0.99+ |
Red Hat OpenShift | TITLE | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two years | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
end of October | DATE | 0.99+ |
twice | QUANTITY | 0.99+ |
two and a half hour | QUANTITY | 0.99+ |
tomorrow | DATE | 0.99+ |
October | DATE | 0.99+ |
Switzerland | LOCATION | 0.99+ |
hundreds of petabytes | QUANTITY | 0.99+ |
hundreds of thousands | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
Pat | PERSON | 0.99+ |
Seagate Maxtor | ORGANIZATION | 0.99+ |
telco | ORGANIZATION | 0.99+ |
three years ago | DATE | 0.99+ |
Phil Bullinger, Western Digital | CUBE Conversation, August 2020
>> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube conversation. >> Hey welcome back everybody, Jeff Frick here with theCUBE. We are in our Palo Alto studios, COVID is still going on, so all of the interviews continue to be remote, but we're excited to have a Cube alumni, he hasn't been on for a long time, and this guy has been in the weeds of the storage industry for a very very long time and we're happy to have him on and get an update because there continues to be a lot of exciting developments. He's Phil Bullinger, he is the SVP and general manager, data center business unit from Western Digital joining us, I think for Colorado, so Phil, great to see you, how's the weather in Colorado today? >> Hi Jeff, it's great to be here. Well, it's a hot, dry summer here, I'm sure like a lot of places. But yeah, enjoying the summer through these unusual times. >> It is unusual times, but fortunately there's great things like the internet and heavy duty compute and store out there so we can get together this way. So let's jump into it. You've been in the he business a long time, you've been at Western Digital, you were at EMC, you worked on Isilon, and you were at storage companies before that. And you've seen kind of this never-ending up and to the right slope that we see kind of ad nauseum in terms of the amount of storage demands. It's not going anywhere but up, and please increase complexity in terms of unstructure data, sources of data, speed of data, you know the kind of classic big V's of big data. So I wonder, before we jump into specifics, if you can kind of share your perspective 'cause you've been kind of sitting in the Catford seat, and Western Digital's a really unique company; you not only have solutions, but you also have media that feeds other people solutions. 
So you guys are really seeing and ultimately all this compute's got to put this data somewhere, and a whole lot of it's sitting on Western Digital. >> Yeah, it's a great intro there. Yeah, it's been interesting, through my career, I've seen a lot of advances in storage technology. Speeds and feeds like we often say, but the advancement through mechanical innovation, electrical innovation, chemistry, physics, just the relentless growth of data has been driven in many ways by the relentless acceleration and innovation of our ability to store that data, and that's been a very virtuous cycle through what, for me, has been 30 years in enterprise storage. There are some really interesting changes going on though I think. If you think about it, in a relatively short amount of time, data has gone from this artifact of our digital lives to the very engine that's driving the global economy. Our jobs, our relationships, our health, our security, they all kind of depend on data now, and for most companies, kind of irrespective of size, how you use data, how you store it, how you monetize it, how you use it to make better decisions to improve products and services, it becomes not just a matter of whether your company's going to thrive or not, but in many industries, it's almost an existential question; is your company going to be around in the future, and it depends on how well you're using data. So this drive to capitalize on the value of data is pretty significant. >> It's a really interesting topic, we've had a number of conversations around trying to get a book value of data, if you will, and I think there's a lot of conversations, whether it's accounting kind of way, or finance, or kind of good will of how do you value this data? 
But I think we see it intrinsically in a lot of the big companies that are really data based, like the Facebooks and the Amazons and the Netflixes and the Googles, and those types of companies where it's really easy to see, and if you see the valuation that they have, compared to their book value of assets, it's really baked into there. So it's fundamental to going forward, and then we had this thing called COVID hit, which I'm sure you've seen all the memes on social media: what drove your digital transformation, the CEO, the CMO, the board, or COVID-19? And it became this light switch moment where your opportunities to think about it are no more; you've got to jump in with both feet, and it's really interesting to your point that it's the ability to store this and think about it now differently as an asset driving business value versus a cost that IT has to accommodate to put this stuff somewhere. So it's a really different kind of a mind shift and really changes the investment equation for companies like Western Digital about how people should invest in higher performance and higher capacity and more unified storage, and kind of democratizing the accessibility of that data to a much greater set of people, with tools that can now start making much more business-line and in-line decisions than just the data scientist kind of on Mahogany Row. >> Yeah, as you mentioned, Jeff, here at Western Digital, we have such a unique kind of perch in the industry to see all the dynamics in the OEM space and the hyperscale space and the channel, really across all the global economies, about this growth of data. I have worked at several companies and have been familiar with what I would have called big data projects and fleets in the past. But at Western Digital, you have to move the decimal point quite a few digits to the right to get the perspective that we have on just the volume of data that the world is just relentlessly, insatiably consuming.
Just a couple examples: for our drive projects we're working on now, our capacity enterprise drive projects, you know, we used to do business case analysis and look at their lifecycle capacities and we measured them in exabytes, and not anymore; now we're talking about zettabytes, we're actually measuring capacity enterprise drive families in terms of how many zettabytes they're going to ship in their lifecycle. If we look at just the consumption of this data, the last 12 months of industry TAM for capacity enterprise compared to the 12 months prior to that, that annual growth rate was north of 60%. And so it's rare to see industries that are growing at that pace. And so the world is just consuming immense amounts of data, and as you mentioned, the COVID dynamics have been both an accelerant in some areas, as well as headwinds in others, but it's certainly accelerated digital transformation. I think a lot of companies were talking about digital transformation and hybrid models, and COVID has really accelerated that, and it's certainly driving, continues to drive, just this relentless need to store and access and take advantage of data. >> Yeah, well Phil, in advance of this interview, I pulled up the old chart with all the different bytes, kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, and zettabytes, and just per the Wikipedia page, what is a zettabyte? It's as much information as there are grains of sand on all the world's beaches. For one zettabyte. You're talking about thinking in terms of those units, I mean, that is just mind boggling to think that that is the scale at which we're operating. >> It's really hard to get your head wrapped around a zettabyte of storage, and I think a lot of the industry thinks when we say zettabyte scale era, that it's just a buzzword, but I'm here to say it's a real thing. We're measuring projects in terms of zettabytes now. >> That's amazing. Well, let's jump into some of the technology.
So I've been fortunate enough here at theCUBE to be there at a couple of major announcements along the way. We talked before we turned the cameras on about the helium announcement, and having the hard drive sit in the fish bowl to get all types of interesting benefits from this less dense air that is helium versus oxygen. I was down at the MAMR and HAMR announcement, which was pretty interesting; big heavy technology moves there to, again, increase the capacity of hard drive-based systems. You guys are doing a lot of stuff on RISC-V, which I know is an open source project, so you guys have a lot of things happening, but now there's this new thing, this new thing called zoned storage. So first off, before we get into it, why do we need zoned storage, and really what does it now bring to the table in terms of a capability? >> Yeah, great question, Jeff. So why now, right? Because as I mentioned, I've been in storage for quite some time. In the last, let's just say in the last decade, we've seen the advent of the hyperscale model, and certainly a whole nother explosion level of data, and just the velocity with which the hyperscalers can create and consume and process and monetize data. And of course with that has also come a lot of innovation, frankly, in the compute space around how to process that data, moving from what was just a general purpose CPU model to GPUs and DPUs, and so we've seen a lot of innovation on that side. But frankly, on the storage side, we haven't seen much change at all in terms of how operating systems, applications, file systems, how they actually use the storage or communicate with the storage. And sure, we've seen advances in storage capacities; hard drives have gone from two to four, to eight, to 10, to 14, 16, and now our leading 18 and 20 terabyte hard drives. And similarly, on the SSD side, now we're dealing with capacities of seven, and 15, and 30 terabytes. So things have gotten larger, as you'd expect.
And some interfaces have improved; I think NVME, which we'll talk about, has been a nice advance in the industry. It's really now brought a very modern, scalable, low latency, multi-threaded interface to NAND flash, to take advantage of the inherent performance of transistor-based persistent storage. But really when you think about it, it hasn't changed a lot. But what has changed is workloads. One thing that definitely has evolved in the space of the last decade or so is this: the thing that's driving a lot of this explosion of data in the industry is workloads that I would characterize as sequential in nature; they're serially captured and written. They also have a very consistent lifecycle, so you would write them in a big chunk, you would read them maybe in smaller pieces, but the lifecycle of that data we can treat more as a chunk of data. But the problem is applications, operating systems, file systems continue to interface with storage using paradigms that are many decades old. The old 512-byte or even 4K sector size constructs were developed in the hard drive industry just as convenient paradigms to structure what is an unstructured sea of magnetic grains into something structured that can be used to store and access data. But the reality is, when we talk about SSDs, structure really matters, and so what has changed in the industry is the workloads are driving very very fresh looks at how more intelligence can be applied to that application-OS-storage device interface to drive much greater efficiency.
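The constraint described here, data written sequentially in big chunks with a consistent lifecycle, is exactly what a zoned block device enforces. As a rough illustration (a toy model for this conversation, not Western Digital code and not the actual ZNS command interface), a zone behaves like an append-only region with a write pointer:

```python
class Zone:
    """Toy model of one zone on a zoned block device (SMR HDD or ZNS SSD).

    Writes must land exactly at the write pointer; reads may be random.
    A zone is reset (erased) as a whole before it can be rewritten.
    """

    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0               # next block that may be written
        self.data = [None] * size_blocks

    def write(self, lba, payload):
        # Sequential-write-required: only the write pointer is a legal target.
        if lba != self.write_pointer:
            raise ValueError("unaligned write: zones are append-only")
        self.data[lba] = payload
        self.write_pointer += 1

    def read(self, lba):
        # Random reads are unrestricted.
        return self.data[lba]

    def reset(self):
        # Whole-zone reset, analogous to erasing a NAND block.
        self.write_pointer = 0
        self.data = [None] * self.size

zone = Zone(4)
zone.write(0, "a")
zone.write(1, "b")
# zone.write(3, "d") would raise ValueError: writes must be sequential
```

The value of the restriction is that the device never has to support random overwrites in place, which is what forces shingled hard drives to leave guard bands and forces conventional SSDs into garbage collection.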
But NVME has changed that, and now forced kind of getting rid of some of those inefficient processes that you could live with, so it's just kind of the classic next-level step up in capabilities. One is you get the better media and you just kind of plug it into the old way. Now actually you're starting to put in processes that take full advantage of the speed that that flash has. And I think obviously prices have come down dramatically since the first introduction, where before it was always kind of cloistered off for super high-end, super low-latency, super high-value apps, and it just continues to spread and proliferate throughout the data center. So what did NVME force you to think about in terms of maximizing the return on the NAND flash? >> Yeah, NVME, which we've been involved in standardizing, I think has been a very successful effort, but we have to remember NVME is about a decade old, or even more when the original work started around defining this interface, but it's been very successful. The NVME standards body is a very productive cross-company effort; it's really driven a significant change, and what we see now is the rapid adoption of NVME in all data center architectures, whether it's very large hyperscale, to classic on-prem enterprise, to even smaller applications. It's just a very efficient interface mechanism for connecting SSDs into a server. So we continue to see evolution of NVME, which is great, and we'll talk about ZNS today as one of those evolutions. We're also very keenly interested in the NVME protocol over fabrics, and so one of the things that Western Digital has been talking about a lot lately is incorporating NVME over fabrics as a mechanism for connecting shared storage into multiple host architectures.
We think this is a very attractive way to build shared storage architectures of the future that are scalable, that are composable, that really have a lot more agility with respect to rack-level infrastructure and applying that infrastructure to applications. >> Right, now one thing that might strike some people as kind of counterintuitive with zoned storage is, in zoning off parts of the media and thinking of the data also kind of in these big chunks, it feels contrary to the kind of atomization that we're seeing in the rest of the data center, right? So smaller units of compute, smaller units of store, so that you can assemble and disassemble them in different quantities as needed. So what was the special attribute that you had to think about that actually comes back and provides a benefit in kind of re-chunking, if you will, in these zones versus trying to get as atomic as possible? >> Yeah, it's a great question, Jeff, and I think it's maybe not intuitive in terms of why zoned storage actually creates a more efficient storage paradigm when you're storing stuff essentially in larger blocks of data, but this is really where the intersection of structure and workload and sort of the nature of the data all come together. Turn back the clock maybe four or five years, when host-managed SMR hard drives first emerged on the scene. This was really taking advantage of the fact that the write head on a hard disk drive is larger than the read head, or the read head can be much smaller, and so with the notion of overlapping or shingling the data on the drive, giving the read head a smaller target to read but the writer a larger write pad to write the data, what we found was it actually increases areal density significantly. And so that was really the emergence of this notion of sequentially written larger blocks of data being actually much more efficiently stored when you think about physically how it's being stored.
What's very new now and really gaining a lot of traction is the SSD corollary to SMR on the hard drive: on the SSD side, we have the ZNS specification, which very similarly divides up the namespace of an SSD into fixed-size zones, and those zones are written sequentially, but now those zones are intimately tied to the underlying physical architecture of the NAND itself; the dies, the planes, the read pages, the erase blocks. So that, in treating data as a block, you're actually eliminating a lot of the complexity and the work that an SSD has to do to emulate a legacy hard drive, and in doing so, you're increasing performance and endurance and the predictable performance of the device. >> I just love the way that you kind of twist the lens on the problem, and on one hand, just looking at my notes here, the zoned storage devices, the ZSDs, introduce a number of restrictions and limitations and rules that are outside the full capabilities of what you might do. But in doing so, in aggregate, the efficiency and the performance of the system as a whole is much much better, even though when you first look at it, you think it's more of a limiter, but it actually opens things up. I wonder if there's any kind of performance stats you can share, or any kind of empirical data, just to give people kind of a feel for what that comes out as. >> So if you think about the potential of zoned storage in general, and again, when I talk about zoned storage, there's two components; there's an HDD component of zoned storage that we refer to as SMR, and there's an SSD version of that that we call ZNS. So when we think about SMR, the value proposition there is additional capacity. So effectively in the same drive architecture, with roughly the same bill of materials used to build the drive, we can overlap or shingle the data on the drive, and generally, for the customer, that means additional capacity.
Today with our 18 and 20 terabyte offerings that's on the order of just over 10%, but that delta is going to increase significantly going forward, to 20% or more. And when you think about a hyperscale customer that has not hundreds or thousands of racks, but tens of thousands of racks, a 10 or 20% improvement in effective capacity is a tremendous TCO benefit, and the reason we do that is obvious. I mean, the economic paradigm that drives large at-scale data centers is total cost of ownership, both acquisition costs and operating costs. And if you can put more storage in a square tile of data center space, you're going to generally use less power, you're going to run it more efficiently, and from an acquisition cost, you're getting a more efficient purchase of that capacity. And in doing that, with our innovation, we benefit from it and our customers benefit from it. So the value proposition for zoned storage in capacity enterprise HDD is very clear: it's additional capacity. The exciting thing is, on the SSD side of things, with ZNS, it actually opens up even more value proposition for the customer. Because SSDs have had to emulate hard drives, there's been a lot of inefficiency and complexity inside an enterprise SSD, dealing with things like garbage collection and write amplification reducing the endurance of the device. You have to over-provision; you have to insert as much as 20, 25, even 28% additional NAND bits inside the device just to allow for that extra space, that working space, to deal with deletes of data that are smaller than the erase block that the device supports. So you have to do a lot of reading and writing of data and cleaning up. It makes for a very complex environment. ZNS, by mapping the zone size to the physical structure of the SSD, essentially eliminates garbage collection and reduces over-provisioning by as much as 10x. And so if you were over-provisioning by 20 or 25% on an enterprise SSD, on a ZNS SSD that can be one or two percent.
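The over-provisioning figures quoted here (20-25% for a conventional enterprise SSD versus one or two percent for ZNS) translate directly into usable capacity. A quick sketch of the arithmetic, using the interview's illustrative numbers rather than any product specification:

```python
def usable_fraction(op_ratio):
    """Fraction of raw NAND that is user-visible, given over-provisioning.

    op_ratio is spare capacity expressed as a fraction of user capacity,
    so raw capacity = user capacity * (1 + op_ratio).
    """
    return 1 / (1 + op_ratio)

raw_tb = 100.0                                   # hypothetical raw NAND pool, TB
conventional = raw_tb * usable_fraction(0.25)    # ~25% OP, per the conversation
zns          = raw_tb * usable_fraction(0.02)    # ~2% OP with ZNS
gain_pct = (zns - conventional) / conventional * 100

print(f"conventional SSD: {conventional:.1f} TB usable")
print(f"ZNS SSD:          {zns:.1f} TB usable (+{gain_pct:.1f}%)")
```

On these assumed figures the same raw NAND yields roughly a fifth more sellable capacity, before counting the endurance and write-amplification benefits, which is why the TCO argument lands so hard at fleet scale.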
The other thing to keep in mind is enterprise SSDs typically incorporate DRAM, and that DRAM is used to help manage all those dynamics that I just mentioned. But with a much simpler structure, where the pointers to the data can be managed without all that DRAM, we can actually reduce the amount of DRAM in an enterprise SSD by as much as 8x. And if you think about the bill of materials of an enterprise SSD, DRAM is number two on the list in terms of the most expensive BOM components. So ZNS SSDs actually have a significant customer total cost of ownership impact. It's an exciting standard, and now that we have the standard ratified through the NVME working group, it can really accelerate the development of the software ecosystem around it. >> Right, so let's shift gears and talk a little bit less about the tech and more about the customers and the implementation of this. So you talked kind of generally, but are there certain types of workloads that you're seeing in the marketplace where this is a better fit, or is it just really the big heavy lifts where they just need more, and this is better? And then secondly, within these hyperscale companies, as well as just regular enterprises that are also seeing their data demands grow dramatically, are you seeing that this is a solution that they want to bring in for kind of the marginal, the next data center, extension of their data center, or their next cloud region? Or are they doing lift and shift and ripping stuff out? Or do they have enough data growth organically that there's plenty of new stuff that they can put in these new systems? >> Yeah, I love that. The large customers don't rip and shift; they ride their assets for a long lifecycle, 'cause with the relentless growth of data, you're primarily investing to handle what's coming in over the transom. But we're seeing solid adoption. And in SMR, you know, we've been working on that for a number of years.
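The DRAM reduction Bullinger describes follows from mapping granularity: a conventional SSD keeps a logical-to-physical entry per 4 KiB page (a common industry rule of thumb is roughly 1 GB of DRAM per 1 TB of flash), while a zoned device only needs coarse per-zone bookkeeping. A hedged back-of-envelope, where the 4-byte entry size and the 512 MiB zone size are illustrative assumptions, not Western Digital figures:

```python
def ftl_dram_bytes(capacity_bytes, map_unit_bytes, entry_bytes=4):
    """Rough DRAM needed for a flat logical-to-physical mapping table."""
    return capacity_bytes // map_unit_bytes * entry_bytes

TB = 10**12
cap = 16 * TB                                  # hypothetical 16 TB SSD
page_map = ftl_dram_bytes(cap, 4096)           # conventional: per-4KiB-page map
zone_map = ftl_dram_bytes(cap, 512 * 2**20)    # zoned: roughly per-zone entries

print(f"page-granular map: {page_map / 2**30:.1f} GiB of DRAM")
print(f"zone-granular map: {zone_map / 2**10:.0f} KiB of DRAM")
```

The flat page map wants gigabytes of DRAM while the per-zone table fits in kilobytes; real ZNS firmware keeps some finer-grained state too, so the quoted 8x BOM reduction is more conservative than this idealized gap.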
We've got significant interest and investment, co-investment of our engineering and our customers' engineering, adapting the application environments to take advantage of SMR. The great thing is, now that we've got the ZNS standard ratified in the NVME working group, we've got a very similar, all-approved situation: we've got SMR standards that have been approved for some time in the SATA and SCSI standards, and now we've got the same thing in the NVME standard. And the great thing is, once a company goes through the lift, so to speak, to adapt an application, file system, operating system, ecosystem to zoned storage, it pretty much works seamlessly between HDD and SSD, and so it's not an incremental investment when you're switching technologies. Obviously the early adopters of these technologies are going to be the large companies who design their own infrastructure, who have mega fleets of racks of infrastructure where these efficiencies really really make a difference in terms of how they can monetize that data and how they compete against the landscape of competitors they have. For companies that are totally reliant on kind of off-the-shelf standard applications, that adoption curve is going to be longer, of course, because there are some software changes that you need to adapt to enable zoned storage. One of the things Western Digital has done and taken the lead on is creating a landing page for the industry with zonedstorage.io. It's a webpage that's actually an area where many companies can contribute open source tools, code, validation environments, technical documentation. It's not a marketing website; it's really a website built to land actual open source content that companies can use and leverage and contribute to, to accelerate the engineering work to adapt software stacks to zoned storage devices, and to share those things.
>> Let me just follow up on that 'cause, again, you've been around for a while, and I want to get your perspective on the power of open source. It used to be that the best secrets, the best IP, were closely guarded and held inside, and now really we're in an age where that's not necessarily so. And with the brilliant minds and use cases and people out there, just by definition, there are more groups of engineers, more engineers outside your building than inside your building, and how that's really changed kind of the strategy in terms of development when you can leverage open source. >> Yeah, open source clearly has accelerated innovation across the industry in so many ways, and it's the paradigm around which companies have built business models and innovated on top of. I think it's always important as a company to understand what value-add you're bringing and what value-add the customers want to pay for. What unmet needs of your customers are you trying to solve for, and what's the best mechanism to do that? And do you want to spend your R&D recreating things, or leveraging what's available and innovating on top of it? It's all about ecosystem. I mean, the days where a single company could vertically integrate top to bottom a complete end solution, you know, those are few and far between. I think it's about collaboration and building ecosystems and operating within those. >> Yeah, it's such an interesting change, and one more thing, again, to get your perspective: you run the data center group, but there's this little thing happening out there that we see growing, IoT and the industrial internet of things, and edge computing, as we try to move more compute and store and power kind of outside the pristine world of the data center and out towards where this data is being collected and processed, when you've got latency issues and all kinds of reasons to start to shift the balance of where the compute is and where the storage is, and rely on the network.
So when you look back, from the storage perspective and your history in this industry, and you start to see basically everything is now going to be connected, generating data, and a lot of it is even open source. I talked to somebody the other day doing kind of open source computer vision on surveillance video. So the amount of stuff coming off of these machines is growing in crazy ways. At the same time, it can't all be processed at the data center, it can't all be kind of shipped back and then have a decision and then ship that information back out. So when you sit back and look at Edge from your kind of historical perspective, what goes through your mind, what gets you excited, what are some opportunities that you see that maybe the layman is not paying close enough attention to? >> Yeah, it's really an exciting time in storage. I get asked that question from time to time, having been in storage for more than 30 years, you know, what was the most interesting time? And there have been a lot of them, but I wouldn't trade today's environment for any other in terms of just the velocity with which data is evolving and how it's being used and where it's being used. A TCO equation may describe what a data center looks like, but data locality will determine where it's located, and we're excited about the Edge opportunity. We see that as a pretty significant, meaningful part of the TAM as we look out three to five years. Certainly 5G is driving much of that; I think any time you speed up the speed of the connected fabric, you're going to increase storage and increase the processing of data. So the Edge opportunity is very interesting to us. We think a lot of it is driven by low-latency workloads, so the concept of NVME is very appropriate for that, we think: in general, SSDs deployed in Edge data centers, defined as anywhere from a meter to a few kilometers from the source of the data. We think that's going to be a very strong paradigm.
The workloads you mentioned, especially IOT, just machine-generated data in general, now I believe, has eclipsed human generated data, in terms of just the amount of data stored, and so we think that curve is just going to keep going in terms of machine generated data. Much of that data is so well suited for zoned storage because it's sequential, it's sequentially written, it's captured, and it has a very consistent and homogenous lifecycle associated with it. So we think what's going on with zoned storage in general and ZNS and SMR specifically are well suited for where a lot of the data growth is happening. And certainly we're going to see a lot of that at the Edge. >> Well, Phil, it's always great to talk to somebody who's been in the same industry for 30 years and is excited about today and the future. And as excited as they have been throughout their whole careers. So that really bodes well for you, bodes well for Western Digital, and we'll just keep hoping the smart people that you guys have over there, keep working on the software and the physics, and the mechanical engineering and keep moving this stuff along. It's really just amazing and just relentless. >> Yeah, it is relentless. What's exciting to me in particular, Jeff, is we've driven storage advancements largely through, as I said, a number of engineering disciplines, and those are still going to be important going forward, the chemistry, the physics, the electrical, the hardware capabilities. But I think as widely recognized in the industry, it's a diminishing curve. I mean, the amount of energy, the amount of engineering effort, investment, that cost and complexity of these products to get to that next capacity step is getting more difficult, not less. And so things like zoned storage, where we now bring intelligent data placement to this paradigm, is what I think makes this current juncture that we're at very exciting. >> Right, right, well, it's applied AI, right? 
Ultimately you're going to have more and more compute power driving the storage process and how that stuff is managed. As more cycles become available and they're cheaper, and ultimately compute gets cheaper and cheaper, as you said, you guys just keep finding new ways to move the curve in. And we didn't even get into the totally new material science, which is also coming down the pike at some point in time. >> Yeah, very exciting times. >> It's been great to catch up with you, I really enjoy the Western Digital story; I've been fortunate to sit in on a couple chapters, so again, congrats to you and we'll continue to watch and look forward to our next update. Hopefully it won't be another four years. >> Okay, thanks Jeff, I really appreciate the time. >> All right, thanks a lot. All right, he's Phil, I'm Jeff, you're watching theCUBE. Thanks for watching, we'll see you next time.
Keith Bradley, Nature Fresh Farms | CUBE Conversation, June 2020
(upbeat music) >> From the Cube studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is the CUBE Conversation. >> Hey everybody, this is Dave Vellante and welcome to this special CUBE Conversation. I'm really excited to have Keith Bradley here; he's the Vice President of IT at Nature Fresh Farms. Keith, good to see you. >> Hey, good to see you too there Dave. >> All right, first of all I got to thank you for sending me these awesome veggies. I got these wonderful peppers. I got red, orange. I got the yellow. I got to tell you Keith, these tomatoes almost didn't make it. It's my last one on the vine. >> (Laughs) >> These guys are like candy. It's amazing. >> Yap. They are the tasty thing. >> Wonderful. >> You know what, I'll probably just join you right here now too. I'll have one right here right now and I'll join you right now. >> My kids love these but I'm not bringing them home. And then I got these other grape tomatoes and then I've got these mini pepper poppers that are so sweet. You know which one I'm talking about here. And then we've got the tomatoes on the vine. I mean, it's just unbelievable that you guys are able to do this in a greenhouse. Big cukes, little cukes. Wow. Thank you so much for sending these. Delicious. Really appreciate it. >> Yeah. Well thank you for having them. It's a great little treat and it's something that I know you're going to enjoy. And I'd love for everybody to have it, and there's not a person I've seen that hasn't enjoyed our tomatoes and peppers. >> Now tell me more about Nature Fresh Farms. Let's talk about your business; I want to spend some time on that. We've got IoT, we've got a data lifecycle. All kinds of cool stuff, scanners. Paint a picture for us. >> I like to even go... If you don't mind, I like to even go back to where our roots actually came from.
So Peter Quiring, our owner, was actually a builder by nature, and back in the year 2000 he really wanted to get into the greenhouse business because he was a manufacturer. And he built our phase one facility back in 2000 under the concept that he said, "there's computers out there." And Peter will be the first one to say, "I don't know how to use them, but I know that it can do a lot for us." So even back in 2000, we were starting to experiment with using the computers back then to control the greenhouse, to do much of the functionality. Then he built it under the concept, as our sister company, South Essex Fabricating, that he would sell the greenhouse turnkey to somebody else. Well, talking to him, and I've been around since about phase two, he basically said, "when I built phase three, which is our first 32 acre range, I realized I was actually in the pepper business now," and he realized he was a grower and then he fell in love with the industry. And again, kept pushing: how can we do things automated? How can we do things? How do we get more yield, more everything out of what we do? And as a lover of technology he made it a great environment for everybody, including the growers, to work in and to just do something new.
And right from that point in time, we start keeping track of everything. How much water we put in, how much water it doesn't take, what nutrients it takes, how much it weighs. We actually weigh the vines to know how much they are in real time. We do everything top to bottom. So we actually control the life cycle of the plant. On top of that, we also look and have a whole bio scout division. So it's a group of people that are starting to use AI to actually look at how the bugs are attacking the plants. And then at the same time, we release a good bug that will eventually die off to kill the bugs that are starting to harm the plant. So it basically allows us to do as close to a natural way of growing a plant as possible without spraying or doing anything like that at night. It's actually funny 'cause there's a lot of pictures out there and you think that a greenhouse, it's going to be wet in here. And actually for the most part, it is dry all the time. Like it's very hot, it's very dry and it's just how we work. We don't let anything inside. We control everything in that plant's life. And now with our newest range, we even control how much light it gets. So we basically give it light all night too. And even some nights when it's a little dark out, not like today, but when the sun's not up there, we'll actually make sure it gets more light to get that more yield out of it. So we can grow 24/7, 12 months a year.
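The per-plant telemetry Keith describes (water supplied, water not taken, nutrients, real-time vine weight) can be sketched as a tiny edge-aggregation example. This is purely illustrative: the interview does not describe Nature Fresh's actual schema or software, so every name and field below is a hypothetical stand-in.

```python
from dataclasses import dataclass


@dataclass
class VineReading:
    """One edge-sensor sample for a single vine (hypothetical schema)."""
    vine_id: str
    weight_kg: float       # real-time vine weight
    water_in_l: float      # water supplied to the plant
    water_runoff_l: float  # water the plant didn't take


def water_uptake(readings):
    """Water the plant actually took: supplied minus runoff, summed."""
    return sum(r.water_in_l - r.water_runoff_l for r in readings)


# Two samples for the same vine over a day
readings = [
    VineReading("row7-v12", 3.1, 1.2, 0.3),
    VineReading("row7-v12", 3.2, 1.1, 0.4),
]
print(round(water_uptake(readings), 2))  # → 1.6
```

In a real deployment this kind of reduction would run at the edge, close to the 2000+ sensors, with only the aggregates shipped back to the core for the growers to analyze.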
And we basically collect it from the day the plant is born to the day that we actually take it all out to be composted. We know how much light it got. Does it need to get light that day? We analyze everything in general and it allows us to take that data back in real time to make it better and to look at the past data to do better again. Like you hear, sometimes we actually have a cart going by here now. That data from that cart will go back to our growers and they will know how much weight they got out of that row in the next 15 to 20 minutes. So they can actually look, okay, how did that plant react to the sun, how's tomorrow? Does it need more nutrients? Does it need a little less? They take all that data from the core and make sure it's all accurate and as up to date as possible. >> So Keith, and maybe even you can give us approximations, but how much acreage do you have? And how much acreage would you need with conventional farming techniques to get the kind of yields and quality that you guys are able to achieve? >> So we own 160 acres of greenhouse that's actually under glass. It's 200 acres total of land, but it's approximately 160 acres of greenhouse that's actually under glass. And we're always constantly growing. Our demand is up, that's why we grow so fast. Usually you're looking at about 12 to one. So for every foot squared of space, the equivalent is about 12 feet squared for a conventional farm. That's the general average. Mostly because we can harvest year round, we can continually harvest. We maximize the harvest amount and everything total. >> I'm also interested in your regime, your team. So obviously you're supporting from an IT perspective, but you've got all this AI going on. You've got this data life cycle. So what does the data team look like? >> We're actually... I always laugh though. I like to call our growers basically data analysts.
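Keith's 12-to-1 figure lends itself to quick back-of-envelope arithmetic. A sketch using only the numbers from the interview; note the ratio is his stated general average, not an exact measurement:

```python
# Keith: ~12 ft² of conventional farmland per 1 ft² of greenhouse,
# largely because the greenhouse harvests year-round.
GREENHOUSE_ACRES = 160  # acreage under glass, per the interview
RATIO = 12              # conventional-to-greenhouse area ratio

equivalent_acres = GREENHOUSE_ACRES * RATIO
print(f"{GREENHOUSE_ACRES} greenhouse acres ≈ "
      f"{equivalent_acres} conventional acres")
# → 160 greenhouse acres ≈ 1920 conventional acres
```

In other words, by this rough average the 160 acres under glass stand in for roughly 1,920 acres of conventional field farming.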
They're not really part of my IT team, but they basically have learned the role of how to analyze data. So we'll have basically one or two junior growers per range. So probably about, I'd say about, we have about 10 to 12 junior growers and then one senior grower per whole farm. So probably about three or four senior growers at any one time. But my IT staff is actually about a team of four, five, including myself. And we are always constantly looking at how to improve data and how to automate the process. That's what drives us to do more. And that's where the robots even come in: every time we look at something, it's not even from an IT perspective, but even just from a picking perspective, how do we automate this? How do we do better tomorrow? How do we continually clean this up? And it just never ends. And every year we look back, okay, it cost us a dollar per meter squared, or per foot squared for the people down South in America there now. We look at that and how do we do that better next year? How do we do better the next day? And it's constant looking, and it's something we keep refining, and now that's why we're going so much into AI, 'cause we want to not look at the data and decide what to do. We want the data to tell us what to do. >> You guys are on the cutting edge. I mean this is the future of farming. I wonder if we could talk about the IT, what does the IT group look like in the future of farming? I mean you guys, what's your infrastructure look like? Are you all in the cloud, or can you not be in the cloud because this is really an extent of an IoT or an edge use case? Paint a picture of the IT infrastructure for us if you would.
And a lot of it is, you'll laugh though, because the farming and agriculture industry really was stagnant for a long time, and not really stagnant, but it just didn't really progress as fast as the rest of the world. So now they're just starting to catch up and realizing, wow, this is a growing industry. We can do a lot of cool things with technology in this range. And now it's just exploded. So I'm going to say in the next five to 10 years, you're going to see a lot more private clouds and things like that happening with us. I know we're right now starting to just look at creating, with the VxRail, a private cloud, and a concept like that to start to test that water again of how to analyze and how to do more things onsite and in the cloud and leverage everything top to bottom. >> So you've got your own servers at the edge... So Intel based servers, what's your storage infrastructure look like? Maybe describe the network a little bit. >> Yap. Okay. So we are basically, I'll admit here, we are a Dell factory. We're basically everything top to bottom. Right now we're on an FX2, Dell FX2 platform. It's basically our core platform we've been using for the last five years. It does all of our analytics and stuff like that. And we have just transformed our unstructured data to Isilon. It's been one of the best things for us to clean that up and make things move forward. It was actually one of those things that management actually looked at me and kind of looked at me and said, "what are you, nuts?" Because we basically bought our first Isilon and then four months later, I said, "I love this. I got to have more," because everybody loved it so much in the way it stored things. So we actually doubled the size of it within four months, which was a great... It was actually very seamless to do, but we're now also in a position where the FX2, in that stage type of situation, didn't quite work for us to expand it. It wasn't as easy to expand.
So we wanted to get to where we could expand at a moment's notice. We can change, we can scale out much faster and do things easier. So that's why we're transforming to a VxRail to basically clean that up and allow us to expand as we grow. >> So you're essentially trying to replicate the agility and speed of the cloud, but like you say, you're an edge use case. So you can't do everything in cloud. Is that the right way to think about it? You mentioned private cloud, but just sort of cloud experience at the edge. >> Yeah. We try to keep everything at the edge. It just makes it a lot easier to control. Because we're so big. Think about it: you are bringing all this information back from everywhere. It's a lot of data to come back to one spot. So we're trying to push that more, to keep it at the edge so that we can analyze it right there in the moment instead of having to come back and do it, but yeah. And I think you'll see in the next few years a lot of change to the cloud, I think it'll start to be there, but again, like I said, the private cloud will probably be the way most will go. >> Okay. So I got to ask you then, I mean, you've really tested that agility over the last 60 days with this COVID pandemic. How were you able to respond? What role did data play? You had supply chain considerations. Obviously, you got a lot of online ordering going on. You got to get produce out. You've got social distancing. How were you able to handle that crisis? >> Well it was a really great thing for our team. Our team really came together in a great way. We had a lot of people that did have to go home, and because we had so many ranges all over, already about a year and a half ago we started implementing an SD-WAN solution to allow us to connect to different areas and to do all kinds of stuff. So it was actually very quick for us to be able to send the others home. We used our VeloCloud SD-WAN to expand it.
It was very seamless and we just started sending people home left, right and center. The staff that had to stay here, like the workers out in the greenhouse here now, are offshore labor as we call it. They work great. They worked at every moment of the day and they dug right in. We haven't lost a heartbeat. Like actually our orders have gone up in the last... Through this COVID experience more than anything else. And it's really learned... It really helped from an IT perspective, and I laugh about this, and it's one of the greatest things about what I do, I love this moment, is where sometimes we were very hesitant to jump on this video collaboration. I said, "hey, that's a great way of doing this." But sometimes people are very stuck in their ways and they're like, "I don't know about this whole Teams, Zoom and all that fun stuff," but because of this, they've now embraced it and it's actually really changed the way they've worked. So in a way, it kind of sped up the process of us becoming more agile in a way that would've taken a long time. They now love Teams. They love being able to communicate that way. They love being able to just do a quick call. All that functionality has changed and even made us more efficient that way. (mumbles) >> How does this all affect your IT budget allocation? Did you get more budget? Was it flat budget? Did you have to shift budget to sort of work from home and securing the remote workers? Can you sort of describe that dynamic? >> So it did, I'll be true, there's no way around it to not up my budget. They basically said, "yep, you have to do what you have to do. We have to continue to function, we cannot let our greenhouse go down, and what do you need to do to make it happen?" So I quickly contacted Dell and got things coming to improve our infrastructure as much as we could to get ready. I contacted (mumbles).
I basically made it so that my team can support every single facet of our operation from home if they actually had to go home. So for example, if I had to get stuck at home, I could do every single part of my job from home, including the growers as much as possible. So say our senior grower had to get home. I locked him up. He has to be able to see everything and do everything. So we actually expanded that very quickly and it was a cost to us. But again, there's no technology we implemented that we hadn't talked about before. We just hadn't said, "you know what? It's just not the right time to try that." And now we just went ahead and we said, we got to do it now. And there's not one part of our aspect that we don't reuse. >> Was Dell able to deliver? Did they have supply constraint issues? I mean, I know there's been huge demand for that whole remote worker setup. Were you able to get what you needed in time? >> Yeah. You know what, I think that we hit it a little ahead of the scope of when things started to go bad. Our senior management, our president and all that, he basically said, "you know Keith, we got to get ready on this. We got to get some stuff coming." We never ran out of some things. The quirkiest thing, and it is just a reality, the biggest thing was webcams, kind of trying to get webcams. Other than that, there were issues with UPS and Purolator and FedEx because they were just inundated too. But for the most part, we kept everything moving. There wasn't a time that I was actually really waiting on something that we had to have. One of the other great things of our senior team here is they've really given me the latitude to say, "what do you need and how do you need to do it?" And so I have my own basically storage area of stuff everywhere. And my team does laugh at me 'cause they call me a hoarder and I basically have too much. And we were able to use either some older stuff or some newer stuff and combine it and we got everything running.
There were only a few little hiccups here and there, but nothing is ever going to go perfect. >> Yeah. But it's enabling business results. We've asked a lot of IT pros like yourself, what do you expect the shape of the recovery to be? And obviously our hearts go out to those small businesses that have been decimated. You're clearly seeing industries like airlines and hospitality and restaurants are obviously in rough shape, but there is a bifurcated story here. Some businesses, and it sounds like you're in this camp, where the pandemic was actually a tailwind, your online demand is up, food, vegetables, people... There were a lot of meat shortages. So people really turned to vegetables, is that right? Is that the shape of the recovery actually, maybe not even V-shaped, it's been a tailwind for Nature Fresh Farms? >> Yeah. You know what? It has been a tailwind and that's the right way to say it. We've just increased our yield. We've increased that, it's not new for us, that's been the biggest driving force for us, basically the demand for our product and building fast enough to keep up to that demand. Like we continually build and expand. We've got more ranges being built in the coming years, looking towards '21, '22, '23. It's just going to continue to expand and that is purely because of demand. And this COVID just again escalated that a little bit, 'cause everybody's like, I really want the peppers, and like you learned, we actually do have some tasty peppers and tomatoes. So it does make it a nice little treat to have at home for the kids.
Thank you for watching everybody this is Dave Vellante for the CUBE and we'll see you next time. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Keith Bradley | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
FedEx | ORGANIZATION | 0.99+ |
UPS | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
200 acres | QUANTITY | 0.99+ |
2000 | DATE | 0.99+ |
June 2020 | DATE | 0.99+ |
South Essex | ORGANIZATION | 0.99+ |
160 acres | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
Peter Quiring | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Purolator | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
12 | QUANTITY | 0.99+ |
four months later | DATE | 0.99+ |
five | QUANTITY | 0.99+ |
Peter | PERSON | 0.99+ |
next day | DATE | 0.99+ |
first | QUANTITY | 0.98+ |
one time | QUANTITY | 0.98+ |
tomorrow | DATE | 0.98+ |
four senior growers | QUANTITY | 0.98+ |
Nature Fresh Farms | ORGANIZATION | 0.98+ |
first one | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Cube | ORGANIZATION | 0.98+ |
over 2000 sensors | QUANTITY | 0.98+ |
12 junior growers | QUANTITY | 0.98+ |
two junior growers | QUANTITY | 0.98+ |
pandemic | EVENT | 0.97+ |
Isilon | ORGANIZATION | 0.97+ |
four | QUANTITY | 0.97+ |
four months | QUANTITY | 0.96+ |
about a year and a half ago | DATE | 0.96+ |
a dollar per meter | QUANTITY | 0.95+ |
CUBE Conversation | EVENT | 0.95+ |
both | QUANTITY | 0.95+ |
One | QUANTITY | 0.94+ |
one senior grower | QUANTITY | 0.93+ |
first 32 acre | QUANTITY | 0.93+ |
COVID pandemic | EVENT | 0.92+ |
20 minutes | QUANTITY | 0.92+ |
one spot | QUANTITY | 0.92+ |
21 | QUANTITY | 0.92+ |
15 | QUANTITY | 0.92+ |
12 months a year | QUANTITY | 0.92+ |
one part | QUANTITY | 0.92+ |
about 12 feet | QUANTITY | 0.91+ |
22 | QUANTITY | 0.9+ |
Intel | ORGANIZATION | 0.9+ |
America | LOCATION | 0.89+ |
next few years | DATE | 0.88+ |
23 year | QUANTITY | 0.88+ |
10 years | QUANTITY | 0.88+ |
last five years | DATE | 0.85+ |
CUBE | ORGANIZATION | 0.84+ |
Vice President | PERSON | 0.84+ |
last 60 days | DATE | 0.84+ |
phase three | OTHER | 0.83+ |
nature | ORGANIZATION | 0.82+ |
one of those | QUANTITY | 0.82+ |
about 10 | QUANTITY | 0.81+ |
about a foot | QUANTITY | 0.79+ |
about three | QUANTITY | 0.72+ |
every foot | QUANTITY | 0.72+ |
Nature Fresh Farms | TITLE | 0.69+ |
24/7 | QUANTITY | 0.69+ |
single part | QUANTITY | 0.67+ |
COVID | OTHER | 0.67+ |
phase two | OTHER | 0.67+ |
FX2 | COMMERCIAL_ITEM | 0.65+ |
Keith Bradley, Nature Fresh Farms
(upbeat music) >> From the Cube studios in Palo Alto in Boston connecting with thought leaders all around the world. This is the CUBE Conversation. >> Hey everybody this is Dave Vellante and welcome to the special CUBE Conversation. I'm really excited to have Keith Bradley here he's the Vice President of IT at Nature Fresh Farms. Keith good to see you. >> Hey, good to see you too there Dave. >> All right, first of all I got to thank you for sending me these awesome veggies. I got these wonderful peppers. I got red, orange. I got the yellow. I got to tell you Keith these tomatoes almost didn't make it. It's my last one on the vine. >> (Laughs) >> These guys are like candy. It's amazing. >> Yap. They are the tasty thing. >> Wonderful. >> You know what, I'll probably just join you right here now too. I'll have one right here right now and I'll join you right now. >> My kids love these but I'm not bringing them home. And then I got these other grape tomatoes and then I've got these mini pepper poppers that are so sweet. You know which one I'm talking about here. And then we've got the tomatoes on the vine. I mean, it's just unbelievable that you guys are able to do this in a greenhouse. Big cukes, little cukes. Wow. Thank you so much for sending these. Delicious. Really appreciate it. >> Yeah. Well thank you for having them. It's a great little tree and it's something that I know you're going to enjoy. And I love for everybody to have it and there's not a person I haven't seen that hasn't enjoyed our tomatoes and peppers. >> Now tell me more about Nature Fresh Farms. Let's talk about your business I want to spend some time on that. We've got IoT, we got a data lifecycle. All kinds of cool stuff, scanners. Paint a picture for us. >> I like to even go... If you don't mind. I like to even go back to where our roots actually came from. 
So Peter Quiring, our owner actually was a builder by nature and he was actually back in the year 2000 really wanted to get into the greenhouse because he was a manufacturer. And he built our phase one facility back in 2000 under the concept that he said, "there's computers out there." And Peter will be the first one to say, "I don't know how to use them, "but I know that it can do a lot for us." So even back in 2000, we were starting to experiment with using the computers back then to control the greenhouse, to do much of the functionality. Then he bought it under the concept as our sister company, South Essex Fabricating that he would sell the greenhouse turnkey to somebody else. Well, talking to him and I've been around since about phase two. He basically said, "when I built phase three, "which is our first 32 acre range, I realized that is actually in the pepper business now," and he realized he was a grower and then he fell in love with the industry. And again, kept pushing how we can do things automated? How do we can do things? How do we get more yield, more everything out of what we do? And as a lover of technology he made it a great environment for everybody including the growers to work in and to just do something new. >> Well, I mean the thing that we know that as populations grow we're not getting more land. Okay (laughs). So, you have to get better yield and the answer is not to just pound vegetables with pesticides. So maybe talk about how you guys are different from sort of a conventional farming approach, just in terms of maybe your yield, how you treat the plants, how you're able to pick throughout the year, give us some insight there. >> So basically I'll start with through the lifecycle of a pepper. So it's basically planted at a propagator and then it comes to our facility and it comes in the little white boxes here behind me. And they actually are usually about that tall. They're about a foot tall. Maybe a little more when they come to us. 
And right from that point in time, we start keeping track of everything. How much we put water, how much water it doesn't take, what nutrients it takes, how much it weighs. We actually weigh the vines to know how much they are in real time. We do everything top to bottom. So we actually control the life cycle of the plant. On top of that, we also look and have a whole bio scout division. So it's a group of people that are starting to use AI to actually look at how the bugs are attacking the plants. And then at the same time, we release a good bug that will eventually die off to kill the bugs that are starting to harm the plant. So it basically allows us to basically do as close to natural way of growing a plant as possible without spraying or doing anything like that at night. It's actually funny 'cause there's a lot pictures out there and you think that a greenhouse, it's going to be wet in here. And actually for the most part, it is dry all the time. Like I'm very hot, it's very dry and it's just how we work. We don't let anything inside. We control everything in that plant's life. And now with our newest range, we even control how much light it gets. So we basically give it light all night too. And even some nights when it's a little days out, not like today, but when it's a little dark out and the sun's not up there, we'll actually make sure it gets more light to get that more yield out of it. So we can grow 24/7 12 months a year. >> Okay Keith. So it sounds like you're using data and AI to really inform you as to nature's best formula for the good bugs, the bad bugs, the lighting to really drive yields and quality. >> Yeah, we analyze, like I said, everything from the edge that we collect, like I said, we have over 2000 sensors out in the greenhouse and we keep expanding it more and more every year to collect everything from the length of the vine, the weight of the vine in real time. 
And we basically collect it from the day the plant is born to the day that we actually take it all out to be composted. We know how much light it got. Does it need to get light that day? We analyze everything in general and it allows us to take that data back in real time to make it better and to look at the past data to do better again. Like you hear, some times we have actually have a cart going by here now. That data from that cart, we'll go back to our growers and they will know how much weight they got out of that row in the next 15 to 20 minutes. So they can actually look, okay, how did that plant react to the sun, how's tomorrow? Does it need more nutrients? Does it need a little less? They take all that data from the core and make sure it's all accurate and as up to date as possible. >> So Keith, and maybe even you can give us approximations, but so how much acreage do you have? And how much acreage would you need with conventional farming techniques to get the kind of yields and quality that you guys are able to achieve? >> So we own 160 acres of greenhouse that's actually under glass. It's actually 200 acres total of land but what's 160 acres approximately of greenhouse that's actually under glass. 'A' we're always constantly growing. Our demand is up that that's why we grow so fast. Usually you're looking at both 12 to one. So for every foot squared of space, you're looking for equivalent is about 12 feet squared for a conventional farm. That's the general average. Mostly because we can harvest year round, we can continually harvest. We maximize the harvest amount and everything total. >> I'm also interested in your regime, your team. So obviously you're supporting from an IT perspective, but you've got all this AI going on. You've got this data life cycle. So what does the data team look like? >> We're actually... I always laugh though. I like to call our growers are basically data analysts. 
They're not really part of my IT team, but they basically have learned the role of how to analyze data. So we'll have basically one or two junior growers, per range. So probably about, I'd say about, we have about 10 to 12 junior growers and then one senior grower per whole farm. So probably about three or four senior growers at any one time. But my IT staff is actually about a team of four, five, including myself. And we are always constantly looking at how to improve data and how to automate the process. That's what drives us to do more. And that's where the robots even come in is every time we look at something, it's not even from an IT perspective, but even just from a picking perspective, how do we automate this? How do we do a better tomorrow? How do we continually clean this up? And it just never ends. And every year we look back, okay, it cost us a dollar per meter squared or per foot square for the people down South in America there now. We look at that and how do we do that better next year? How do we do better the next day? And it's a constant looking and it's something we look at refining and now that's why we're going so much into AI 'cause we want to not look at the data and decide what to do. We want the data to tell us what to do. >> You guys are on the cutting edge. I mean this is the future of farming. I wonder if we could talk about the IT, what does the IT group look like in the future of farming? I mean you guys, what's your infrastructure look like? Are you all in the cloud or you can't be in the cloud because this is really an extent of an IoT or an edge use case. Paint a picture of the IT infrastructure for us if you would. >> So the IT infrastructure it's a very large amount at the edge. We take a lot of the information from the edge and we bring it back to our core to do our analyzing. But for the most part, we don't really leverage the cloud much yet and most of it is on-prem. We are starting to experiment with moving out to the cloud. 
And a lot of it is, you'll laugh though, is because the farming and agriculture industry really was stagnant for a long time and not really stagnant, but just didn't really progress as fast as the rest of the world. So now they're just starting to catch up and realizing, wow, this is a growing industry. We can do a lot of cool things with technology in this range. And now it's just exploded. So I'm going to say in the next five to 10 years, you're going to see a lot more private clouds and things like that happening with us. I know we're right now starting to just look at creating with the VxRail, a private cloud, and a concept like that to start to test that water again of how to analyze and how to do more things onsite and in the cloud and leverage everything top to bottom. >> So you've got your own servers at the edge... So Intel based servers, what's your storage infrastructure look like? Maybe describe the network a little bit. >> Yap. Okay. So we are basically, I'll admit here, we are a Dell factory. We're basically everything top to bottom. Right now we're on an FX2, Dell FX2 platform. It's basically our core platform we've been using for the last five years. It does all of our analitics and stuff like that. And we have just transformed our unstructured data to Isilon. It's been one of the best things for us to clean that up and make things move forward. It was actually one of those things that management actually looked at me and kind of looked at me and said, "what are you nuts?" Because we basically bought our first Isilon and then four months later, I said, "I love this. I got to have more," because everybody loved it so much in the way of store things. So we actually doubled the size of it within four months, which was a great... It was actually very seamless to do, but we're now also in a position where the FX2 in that stage type of situation didn't quite work for us to expand it. It wasn't as easy to expand. 
So we wanted to get to where we could expand at a moment's notice. We can change, we can scale out much faster and do things easier. So that's why we're transforming to a VxRail, to basically clean that up and allow us to expand as we grow. >> So you're essentially trying to replicate the agility and speed of the cloud, but like you say, you're an edge use case, so you can't do everything in cloud. Is that the right way to think about it? You mentioned private cloud, but sort of a cloud experience at the edge. >> Yeah. We try to keep everything at the edge. It just makes it a lot easier to control, because we're so big. Think about it: you're bringing all this information back from everywhere. It's a lot of data to come back to one spot. So we're trying to push more to keep it at the edge, so that we can analyze it right there in the moment instead of having to bring it back. And I think you'll see in the next few years a lot of change toward the cloud. I think it'll start to be there, but again, like I said, the private cloud will probably be the way most will go. >> Okay. So I've got to ask you then, I mean, you've really tested that agility over the last 60 days with this COVID pandemic. How were you able to respond? What role did data play? You had supply chain considerations. Obviously, you've got a lot of online ordering going on. You've got to get produce out. You've got social distancing. How were you able to handle that crisis? >> Well, it was a really great thing for our team. Our team really came together in a great way. We had a lot of people that did have to go home, and because we had so many ranges all over, already about a year and a half ago we started implementing an SD-WAN solution to allow us to connect to different areas and do all kinds of stuff. So it was actually very quick for us to be able to send the others home. We used our VeloCloud SD-WAN to expand it.
It was very seamless, and we just started sending people home left, right and center. The staff that had to stay here, like the workers out in the greenhouse, are offshore labor as we call it. They work great. They worked at every moment of the day and they dug right in. We haven't missed a heartbeat. Actually, our orders have gone up through this COVID experience more than anything else. And it really helped from an IT perspective. I laugh about this, and it's one of the greatest things about what I do, I love this moment: sometimes we were very hesitant to jump on this video collaboration. I said, "hey, that's a great way of doing this." But sometimes people are very stuck in their ways, and they're like, "I don't know about this whole Teams and Zoom and all that fun stuff." But because of this, they've now embraced it, and it's actually really changed the way they work. So in a way, it sped up the process of us becoming more agile, in a way that would've otherwise taken a long time. They now love Teams. They love being able to communicate that way. They love being able to just do a quick call. All that functionality has changed and even made us more efficient that way. (mumbles) >> How does this all affect your IT budget allocation? Did you get more budget? Was it flat budget? Did you have to shift budget toward work from home and securing the remote workers? Can you describe that dynamic? >> So it did. I'll be honest, there's no way around it, I had to up my budget. They basically said, "yep, you have to do what you have to do. We have to continue to function, we cannot let our greenhouse go down, so what do you need to do to make it happen?" So I quickly contacted Dell and got things coming, and improved our infrastructure as much as we could to get ready. I contacted (mumbles).
I basically made it so that my team can support every single facet of our operation from home if they actually had to go home. So for example, if I had to get stuck at home, I could do every single part of my job from home, and the same goes for the growers as much as possible. So say our senior grower had to stay locked up at home: he has to be able to see everything and do everything. So we actually expanded that very quickly, and it was a cost to us. But again, there's no technology we implemented that we hadn't talked about before. We just hadn't said, you know what, it's the right time to try it. And now we just went ahead and said, we've got to do it now. And there's not one part of it that we don't still use. >> Was Dell able to deliver? Did they have supply constraint issues? I mean, I know there's been huge demand for that whole remote worker setup. Were you able to get what you needed in time? >> Yeah. You know what, I think we hit it a little ahead of the curve, before things started to go bad. Our senior management, our president and all that, basically said, "you know Keith, we've got to get ready on this. We've got to get some stuff coming." We never ran out of some things. The quirkiest thing, and it is just a reality: the biggest issue was webcams, just trying to get webcams. Other than that, there were issues with UPS and Purolator and FedEx, because they were just inundated too. But for the most part, we kept everything moving. There wasn't a time that I was really waiting on something that we had to have. One of the other great things about our senior team here is they've really given me the latitude to say, "what do you need and how do you need to do it?" And so I have my own storage area of stuff everywhere. And my team does laugh at me, 'cause they call me a hoarder and say I basically have too much. And we were able to use either some older stuff or some newer stuff, combine it, and we got everything running.
There were only little hiccups here and there, but nothing is ever going to go perfect. >> Yeah. But it's enabling business results. We've asked a lot of IT pros like yourself what they expect the shape of the recovery to be. And obviously our hearts go out to those small businesses that have been decimated. You're clearly seeing industries like airlines and hospitality and restaurants in rough shape, but there is a bifurcated story here. For some businesses, and it sounds like you're in this camp, the pandemic was actually a tailwind. Your online demand is up: food, vegetables, people... There were a lot of meat shortages, so people really turned to vegetables, is that right? Is that the shape of the recovery, actually, maybe not even V-shaped, it's been a tailwind for Nature Fresh Farms? >> Yeah. You know what? It has been a tailwind, and that's the right way to say it. We've just increased our yield. And that's not new for us; the biggest driving force for us has been the demand for our product and building fast enough to keep up with that demand. We continually build and expand. We've got more ranges being built in the coming years, looking toward '21, '22, '23. It's just going to continue to expand, and that is purely because of demand. And this COVID just escalated that a little bit, 'cause everybody's like, I really want the peppers. And like you learned, we actually do have some tasty peppers and tomatoes, so it does make a nice little treat to have at home for the kids. >> Well, it's an amazing story of tech meets farming. And as you said, for years your industry was kind of quiet when it came to tech, but this is the future of farming, in my opinion. And Keith, thanks so much for coming on theCUBE and sharing the story of Nature Fresh Farms. >> Well, thank you for having me. It's been a great pleasure. >> Alright.
Thank you for watching everybody, this is Dave Vellante for theCUBE, and we'll see you next time. (upbeat music)
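The edge-first data pattern Keith describes, keeping raw sensor data at the edge, analyzing it in the moment, and sending only what the core needs, can be sketched in a few lines. This is purely illustrative: the sensor names, fields, and readings below are invented, not Nature Fresh Farms' actual telemetry.

```python
# A minimal sketch of edge-side aggregation: collapse raw per-sensor
# readings for one greenhouse range into a single summary record, so
# only the summary (not the raw stream) travels back to the core.
from statistics import mean

def summarize_range(readings):
    """Collapse raw per-sensor readings into one summary record."""
    temps = [r["temp_c"] for r in readings]
    return {
        "sensors": len(readings),
        "temp_min": min(temps),
        "temp_max": max(temps),
        "temp_avg": round(mean(temps), 2),
    }

# Raw readings stay at the edge; only the summary goes to the core.
raw = [
    {"sensor": "r1-s1", "temp_c": 21.5},
    {"sensor": "r1-s2", "temp_c": 22.1},
    {"sensor": "r1-s3", "temp_c": 20.9},
]
summary = summarize_range(raw)
print(summary)  # {'sensors': 3, 'temp_min': 20.9, 'temp_max': 22.1, 'temp_avg': 21.5}
```

The same shape applies whether the analysis is a simple average or a trained model; the point is that the decision about what to ship back is made at the edge.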
Travis Vigil, Dell Technologies | CUBE Conversation, May 2020
>> From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, and welcome to a CUBE Conversation. I'm Stu Miniman, coming to you from our Boston area studio, happy to welcome back to the program one of our longtime CUBE alums, Travis Vigil. He is the Senior Vice President of Product Management for storage and data protection at Dell Technologies. Travis, nice to see you. >> It's great to see you Stu, as always. >> Alright, so Travis, while we aren't at Dell Tech World, that May 2020 event is going to be held in the fall, there are a lot of things happening in the storage world. Your team announced PowerStore just a couple of weeks ago. >> Yeah. >> And, you know, many other things are happening in the storage world, and we're going to get to dig into it. So I guess let's start there, with the new midrange solution. I had a great conversation with Caitlin Gordon. What's the initial feedback you've been getting from the field, and from customers, of course? >> You know Stu, it's been a whirlwind of a couple of weeks with the public unveiling of PowerStore. This was a major release for us. Simply put, I've referred to it as probably the most important and strategic launch that we've had at Dell since the combination of Dell and EMC. I personally have been involved with the program for a little over two years. And we have had significant investment in bringing this product to fruition: over 1000 engineers across Dell Technologies, including engineers at VMware, put in tireless hours and innovation to bring this product to market, and we're extremely excited about it. The press has been very positive on the launch and the capabilities, but more importantly, customers and partners have been extremely positive on the launch and the capabilities.
Stu, we've been talking for multiple years now about the fact that we are going to be simplifying our product line, especially in the midrange. And the release of PowerStore is a major milestone in that simplification. And, you know, I can go into speeds and feeds and what differentiates it. But to me, the biggest part of this announcement is that we have a product in the market that is resonating with the field, resonating with customers, resonating with partners. And, you know, feedback from the largest beta program that we've done as part of a midrange storage launch ever gave us early indications that the feedback would come back this way. And we're very happy with the initial traction that we're receiving. >> Well Travis, you know, definitely a lot of work; you said over 1000 engineers working on that. >> Yeah, yeah. >> It's interesting, you think back to the storage world; you know, for a long time it was, you know, growth of product line through acquisitions. Of course, Dell made a couple of acquisitions you're quite familiar with over the years, and EMC over the years made many acquisitions. So tell me, what does it mean for 1000 engineers to work on something? You know, often you'll hear, you know, oh well, a startup of 50 people, they built some new thing, and, you know, that rocketed them to the next thing, and by the time they get to 1000 people, oftentimes, you know, they're talking about whether they've been acquired or, you know, whether they're going public. So, you know, why is that investment needed? And, you know, what's the outcome of that kind of, you know, starting-from-the-ground-up solution? >> Yeah, it's a good question, Stu. When I think about it, what we set out to do with PowerStore was something very ambitious, which is to simplify the product lines, to bring the best capabilities of the current shipping midrange products into a next generation architecture.
And when we looked at doing that, we quickly determined that in order to be flexible, and to be able to innovate quickly, to be able to provide the features and capabilities needed to bring all of those customers forward, we had to make significant investment. And, you know, I don't know of another example, to your point, of a company having done this internally, especially with a heritage of acquisition. And so getting to build something from the ground up that is optimized and modern for the workloads of today, but also able to bring customers from previous or current generation products forward, is something that's been really special. And it's something that I think will let Dell continue to innovate and lead the market in midrange, from the investment that we made, for, you know, well into the future. >> Alright, so Travis, one of the other discussions we've been having with the Dell team quite a bit over the last couple of years is how storage fits into the whole discussion of cloud. So there are some recent announcements, you've got recent products. How is Dell thinking about, you know, that world of storage, and how does that integrate into a customer's overall cloud discussion? >> Yeah, if I think about cloud, I think about a couple of things. One, there's sort of the cloud operating model, which is, you know, things need to be really simple, things need to be autonomous. There's this concept of being able to provide private cloud functionality on prem. And so, you know, when I look at some of the capabilities that we've built into PowerStore, for example, it's delivering that cloud-like, simple-to-use, simple-to-scale experience, but on prem. Then I think the other aspect of cloud, which is just as important, is, you know, how do on-prem products integrate with and leverage the cloud, and really allow the capabilities that they provide to be used both on prem, in a hybrid cloud environment, or, you know, directly as part of a service in a public cloud.
And we have seen that customers that are looking at us, customers that are in specific industries or looking at specific workloads, are really looking for that flexibility, that burst capability, if you will, to ever be able to leverage certain capabilities across on prem and in the cloud. And specifically, we've seen that demand with customers that utilize our Isilon products, our OneFS products. Those customers, some of them are corporations using them for kind of traditional file workloads, you know, enterprise file workloads, but there's a big chunk of customers that are in specific verticals like life sciences, like genomic sequencing, media and entertainment, things like collaborative video editing, or wanting the ability to burst to the cloud for video rendering, and use cases like autonomous driving, where you need the massive scalability that we have in OneFS. For those customers that are using it in an on-prem solution, they want to be able to also utilize it in a public cloud, as a public cloud capability. And so part of what we're announcing is the ability to utilize OneFS as a native capability in Google Cloud, and there's lots of interest from customers of the type that I just spoke about in being able to leverage that capability. And the capabilities we're bringing are like nothing else on the market, you know. I'm not exaggerating to say that the scalability, the performance, the capacity are orders of magnitude better than what competitors can provide with their cloud capabilities. >> Yeah, well, Travis, absolutely. If you think about the word scale, you know, Google's one of the first companies that comes to mind. >> Right, right. >> Who else has the global reach and the networking capability? It's been interesting to watch. Dell has partnerships, and the Dell family has partnerships, with, you know, pretty much all the cloud providers at this point.
So, you know, what's special about the Google solution? You said unparalleled there; you know, bring us underneath the covers a little bit and help us understand what really differentiates the OneFS solution that you're doing with Google Cloud. >> Yeah, so I'll take it back to the workloads: life sciences, media and entertainment, autonomous driving. They need massively scalable file solutions, from a performance perspective and a capacity perspective. And the solution that we've co-engineered with Google Cloud provides massive scalability and massive capacity. In particular, versus one of our closest competitors, it's 46 times higher in terms of read throughput, 96 times higher in terms of write throughput, and 500 times higher in terms of maximum file system capacity. And these workloads require it. I mean, these are massive file based repositories. And it's not just the capacity, but the performance of that capacity, that is extremely important. So much like the value we bring on prem with OneFS, we're bringing that value to the cloud, to Google. And because you're utilizing common OneFS, whether you're on prem or in the cloud, you have native replication capabilities available for customers that want to do that bursting. >> Yeah, it's been fascinating, Travis. You know, my background's a little bit more on the block side of the house than the file side of the house. >> Yeah. >> But, you know, from midrange storage, where we kind of, you know, put block and file together to get unified, then you saw the huge explosion of the scale-out NAS type solutions. And then one of the things, you know, the whole industry looked at is, you know, what's that gap between object, which is what, you know, typically underlies cloud storage, and file? So many of these solutions, you know, are really blurring those lines, pulling these things together.
So that, I mean, ultimately, you know, customers don't need to think about, you know, some of those underlying, you know, storage networking architectures. >> That's right. >> They can just solve their problems. So... >> Yeah, I think you're exactly right. And, you know, I actually come from a midrange background as well. And, you know, I've been associated with OneFS and Isilon for a couple of years now, and the amazing thing to me is the growth of data, right? You know this stat better than I do, Stu, which is that 80% of all the data generated in the next decade will be unstructured. And so that's not to say that, you know, traditional storage arrays aren't going to continue to grow. Those workloads are growing as well. But the massive growth, and the reason that there is such a desire to do things in or close to the clouds for file based workloads, comes from that underlying growth. >> Yeah, so Travis, I guess one of the other things that's interesting to look at is, with the global pandemic going on, you know, automation is front and center for customers. You know, I can't go touch my gear, so therefore, you know, I need to limit how often I would need to do that. So, you know, how does that overall usability, that autonomous nature of these types of solutions, fit into everything we've been talking about? >> Yeah, I mean, it's core to solutions going forward, whether you're talking about the OneFS Google solution or whether you're talking about PowerStore. More and more, customers need to be focused on the business outcomes or the IT outcomes, and not the care and feeding or tweaking of the equipment. And so, you know, I think the best example of that is, you know, what we've done with PowerStore in terms of the ground-up architecture that's really built with machine learning directly into the platform, and the ability to take multiple PowerStores and look at them as a single logical unit and make recommendations about where things should be located.
And best configured. And so, you know, we know, because of the fact that we're taking in all of this IO, whether it be on, you know, our file solution or block/unified solutions, what the underlying workloads are. And the fact that we're building this knowledge, this intelligence, into the system is part of why we designed the architecture from the ground up when we're talking about PowerStore. And if you go back and look at the original design of OneFS and that highly scaled-out architecture, you know, it's been a calling card for OneFS almost since day one, which is this concept of simplicity at scale, right? And, you know, the fact that you have petabytes and petabytes of data shouldn't mean that you need tens or hundreds of people to manage and feed it. You should, you know, be able to administer it with a small number of IT professionals. >> Alright, Travis, want to give you the final word. You know, bring us inside, you know, the customer conversations you're having, and what else customers should know about, really, Dell Technologies today. >> You know, it's a very interesting time to be part of a technology company, with everything that's going on. And I think, you know, the last several months have shown us that digital transformation is key to a company's success. I look at Dell Technologies and the fact that, you know, basically over the course of a week, we went from very few people, or, you know, a minority of people, working remotely to almost the entire company working remotely. And, you know, the thing that made that happen was our underlying IT systems, and the fact that they are built on capabilities that are resilient, that are autonomous, that are modern. And so, you know, I'm extremely bullish about the capabilities that we're bringing to market. I'm extremely bullish about the cloud capabilities that we're building into our solutions, especially on the unstructured side of the house.
And I think the thing that this pandemic has highlighted is the need to be a digital business going forward. And, you know, I think that speaks very well for the prospects of Dell Technologies going forward, and for infrastructure solutions going forward as well. >> Alright, well, Travis, a pleasure catching up. Thanks so much for joining us. >> Thank you Stu, it's always a pleasure. >> Alright, be sure to check out thecube.net for all the upcoming shows, as well as to search through the archives. I've got interviews, as Travis and I mentioned, on PowerStore and many of the other Dell announcements, so be sure to check those out. I'm Stu Miniman, and as always, thank you for watching theCUBE. (upbeat music)
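The autonomous placement idea Travis describes, treating multiple appliances as one logical pool and recommending where new workloads should land, can be sketched with a toy heuristic. To be clear, this is not PowerStore's actual algorithm; the appliance names, capacities, and the least-full-after-placement rule are all invented for illustration.

```python
# Toy sketch of a placement recommender over a pool of appliances.
# Names, numbers, and the heuristic are hypothetical, not PowerStore's logic.

appliances = {
    "ps-1": {"capacity_tb": 100, "used_tb": 80},
    "ps-2": {"capacity_tb": 100, "used_tb": 35},
    "ps-3": {"capacity_tb": 200, "used_tb": 150},
}

def recommend(new_volume_tb):
    """Pick the appliance that would be least full after placing the volume."""
    def projected_util(name):
        a = appliances[name]
        return (a["used_tb"] + new_volume_tb) / a["capacity_tb"]
    # Only consider appliances where the new volume actually fits.
    fits = [n for n, a in appliances.items()
            if a["used_tb"] + new_volume_tb <= a["capacity_tb"]]
    return min(fits, key=projected_util) if fits else None

print(recommend(20))  # ps-2: 55% full afterwards beats ps-1 (100%) and ps-3 (85%)
```

A real system would also weigh IO load, latency, and data reduction ratios, but the shape of the decision, a pool-wide view plus a scoring function, is the same.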
Chhandomay Mandal, Dell Technologies | CUBE Conversation, May 2020
>> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, I'm Stu Miniman, and welcome to a special CUBE Conversation, digging into some of the hottest topics in tech. Of course, multicloud has been one of the big things we've been talking about for a number of years: the maturation from just cloud in general to hybrid cloud and multicloud. Happy to welcome back to the program one of our CUBE alumni, Chhandomay Mandal. He's a director of marketing at Dell Technologies. Chhandomay, pleasure to see you. >> Happy to be here. >> All right, so last year we were together for Dell Technologies World and VMworld, and of course I've seen how these solutions have been expanding out partnerships, especially a lot of it from Dell's side, leveraging VMware technologies to extend and connect to what your customers are doing with their cloud strategies. So, give us the update as to, you know, what you're hearing from customers and how Dell is moving to meet them. >> Sure. Cloud adoption is really growing, and even just from the three hyperscalers, AWS, Azure and Google Cloud, there are over 500 different services today. And with this fast pace of innovation, I see customers adopting many different services from these public cloud vendors. And again, they want to adopt those services because they are differentiated; they have workloads that can leverage those services, sometimes even leveraging the same data set. One challenge that we're seeing is, how do customers move data around from one cloud to another, so that they can take advantage of the great innovation that is happening with cloud storage and cloud providers? Because moving the data comes with not only the migration risk, but also huge egress fees, and the time it takes. So, solving this customer challenge is our number one priority in the cloud offering. >> Great, Chhandomay, you brought up a bunch of really good points there.
Of course, nobody's solved the speed of light issue, so we know data has gravity; it's not easy to move it. And yeah, absolutely, you know, I've been saying for the last couple of years that data is one of those flywheels when it comes to the cloud. Once you've got it in there, it's not, you know, kind of the traditional lock-in, but I have access to the data, I have access to the services, and it's not easy to move it out, even if customers would want to take advantage of multiple services from multiple clouds. So, I'd love to hear, you know, what's Dell's role in this discussion? How are you helping us make our data more of a multicloud-enabled environment? >> Absolutely, it's true. So with Dell Technologies Cloud Storage for multicloud, we are delivering scalable, resilient cloud data storage with flexible multicloud access options, ideal for securely deploying or moving demanding applications to the cloud for many different use cases. The way we are doing it, effectively, is that customers can leverage block or file storage, consumed as a service, directly from the different clouds, AWS, Google Cloud, Azure, and we are providing very high speed, low latency connections to Dell EMC storage from our managed services provider locations, using a direct cloud connect option. And let me give you an example. We have Dell EMC Isilon, the industry's number one scale-out NAS. It has very high performance drives and large throughput, scales to multi-petabyte, and supports multiple different protocols simultaneously for many different applications. Now, that same Isilon today can be leveraged as Dell Technologies Cloud Storage, with direct access to Azure, Google Cloud and AWS, consumed in the cloud operating model. So now you can run your applications in any cloud while having the data sitting outside of your cloud, with the high performance, high speed access that you need. That's where we are bringing the innovation and the value.
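The egress-fee and data-gravity concern running through this exchange can be made concrete with some back-of-envelope math. The per-gigabyte rate below is an assumed placeholder for illustration, not any provider's actual price list; real rates vary by provider, region, and volume tier.

```python
# Rough sketch of why moving a large data set out of a cloud gets
# expensive. The $0.09/GB rate is an assumption, not a quoted price.

def egress_cost_usd(dataset_gb, rate_per_gb=0.09):
    """Estimate the one-time egress cost of pulling dataset_gb out of a cloud."""
    return dataset_gb * rate_per_gb

# Moving a 500 TB data set out once, at the assumed rate:
dataset_gb = 500 * 1024            # 500 TB expressed in GB
cost = egress_cost_usd(dataset_gb)
print(f"${cost:,.2f}")             # roughly $46,080 at the assumed rate
```

Keeping the data adjacent to the clouds and connecting into each of them, rather than copying it between them, is exactly the pattern that sidesteps this cost.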
>> Okay, and if I heard you right, Chhandomay, this is a managed service solution, because if I want that, you know, high speed, direct connection with Azure and with AWS, normally I need to be, you know, in some service provider; Dell of course has lots of partners that offer those services. I'm not just talking about, you know, connecting the array that I have in my data center over the internet, because that wouldn't necessarily give me the bandwidth and performance that I need. Did I get that correct? >> Yeah, absolutely, because again, you need this connection in locations alongside the hyperscalers to get the high speed connection. Say in the case of Microsoft Azure and ExpressRoute, it needs to be co-located in a facility right next to them, so that you have the high bandwidth, high performance that you need for these applications. >> Yeah, that makes a lot of sense. It's kind of, you know, hyperscaler adjacent; through that connection it's relatively close. It might help if you've got, you know, a customer or an industry example of, you know, the real life expectations and use cases for a solution like this. >> Sure, so let me give you the example of genomics analysis. A genome sequencer, in a single cycle for a human being, creates 100 gigabytes of data, and that's just the raw data. You need to run analysis, different types of analysis, to check the effects that a drug or something else is having on the DNA. Now, for example, NVIDIA Parabricks is a popular sequencing software that needs to be run on this data set. And again, it drives very high throughput; sometimes it needs 100 gigabytes per second of throughput to drive the performance. Now, we have worked with Microsoft Azure very closely, and using Microsoft ExpressRoute, you can actually get that bandwidth, that throughput, for running Parabricks or next-gen sequencing VMs in Azure, leveraging Isilon.
And in fact, we have worked with Azure to provide completely egress-fee-free data movement. So when this application is writing data back to the storage, as part of Dell Technologies Cloud Storage, there is zero fee associated with it. And it's not just Microsoft Azure, right? You can have the same data set and run these Parabricks or next-gen sequencing VMs in Google Cloud, AWS, and Azure simultaneously, thereby scaling up this process much faster. So, if you are a pharmaceutical company trying to find a cure for a disease spreading across the globe, and you need to run this on hundreds of thousands of patients, creating hundreds of terabytes to petabytes of data, then you can actually scale up the process across three or more different clouds very quickly. This truly shows you how you can leverage the power of Isilon, the scalable, high performance storage, in a multi-cloud world. >> Yeah, very, very interesting. You know, you talked about no cost for an egress fee, and that, you know, can be one of those architectural killers. You think you have a good solution for a cloud, you put things out, and then all of a sudden you start getting things on your bill that you weren't expecting. So today, is there something special that the customer needs to do for this service? Is that a partnership between Dell and Microsoft, or, you know, how does this differ from kind of the traditional egress fees that I'm used to getting, whether I'm using AWS, Azure or Google? >> So, this is a Dell and Microsoft Azure partnership. That's where, like, you do not get charged the egress fee when the applications running in Azure are connecting back to the EMC storage as part of that cloud storage service. >> Okay, excellent, 'cause yeah, I mean, Chhandomay, I'm sure you're well familiar.
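To see why a zero-egress arrangement matters at this scale, here is a rough cost sketch. The per-gigabyte rate is a hypothetical placeholder for illustration only, not any provider's actual price:

```python
HYPOTHETICAL_EGRESS_RATE = 0.08  # USD per GB, assumed for illustration

def egress_cost_usd(tb_moved_out, rate_per_gb=HYPOTHETICAL_EGRESS_RATE):
    """Metered cost of moving a data volume out of a cloud, in dollars."""
    return tb_moved_out * 1000 * rate_per_gb

# Writing 500 TB of results back out of a cloud at a metered rate,
# versus $0 under the zero-egress partnership described above:
print(f"${egress_cost_usd(500):,.0f}")  # $40,000
```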
A lot of times people look at cloud and they're saying, okay, when I look at the economics, if it's compute intensive, it makes a little bit more sense; if it's data intensive, there's lots of reasons that it might not make sense. So this is unlocking some of that data capability. I guess that leads to, you know, some of the opportunity around AI, where of course I need to think about my architecture. A lot of times data is not going to leave the edge environments, you know; autonomous vehicles is, you know, an obvious use case that we talk about. Usually there's training in a central location, but then I need to be able to actually do the work at the edge. So what does this, you know, cloud storage for multi-cloud... how does AI fit into this whole gap? >> So, yes, for AI, you need to train very large data sets, for a long time, to get to the results that you want. You gave the example of autonomous driving, right? The self-driving car needs to understand many different scenarios, whether it's an icy road, a kid on the road, a slippery condition, or you're running into a big wall, so on and so forth. Now, when it comes to dealing with these petabytes worth of data sets, you need to train these models, okay? You need very specific servers, GPU-powered servers, okay? Now, to scale, you'd think that you'd go to the cloud and then you will be able to get the compute you need. However, it turns out cloud is not an amorphous, homogeneous place. Between the vendors, there is a huge difference in terms of what GPU-powered servers you can get, and even within one particular cloud vendor, depending on the region, this varies widely. So it becomes critical that you have, like, a data set that can be connected from many different clouds, from many different regions, as you need it. And one more thing I want to highlight: AI is actually one area where these cloud providers are providing very differentiated services.
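The multi-region, multi-provider GPU point can be made concrete with a toy scheduler. Everything below (provider names, regions, GPU counts) is made up for the example; the idea is simply that when one data set is reachable from every cloud, a job can run wherever the right GPU configuration exists:

```python
# Hypothetical catalog of GPU-VM shapes by provider and region.
GPU_OFFERINGS = {
    ("cloud-a", "us-east"): {"gpus_per_vm": 8},
    ("cloud-a", "eu-west"): {"gpus_per_vm": 4},
    ("cloud-b", "us-west"): {"gpus_per_vm": 16},
}

def first_fit(required_gpus_per_vm):
    """Return the first provider/region offering enough GPUs per VM."""
    for location, spec in GPU_OFFERINGS.items():
        if spec["gpus_per_vm"] >= required_gpus_per_vm:
            return location
    return None

print(first_fit(16))  # ('cloud-b', 'us-west')
```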
So in the autonomous vehicle example, there are several stages of model training, depending on, like, what you are trying to achieve at one point in time. Now, on one day, or one part of the process, you can leverage AWS SageMaker for your model training. On the other part, you probably would like TensorFlow from Google Cloud; go to it. Now, when you have your data set outside of the cloud, and you have the fast connection from many different clouds, you can take advantage of not only the different GPU-powered servers, but also the differentiated, faster services that are available from these cloud providers. >> All right, so Chhandomay, how does the VMware Cloud solution fit into this discussion? I know it's been an important piece of the Dell Technologies Cloud piece. So how do the multi-cloud storage, VMware Cloud, and the multi-cloud pieces fit together? >> Sure, so VMware Cloud on AWS is one of the key offerings that we have, and it also fits into the multi-cloud story very well. Actually, let me explain that with a customer example, okay? We have one of the world's largest energy companies, down in Texas. They have a four-petabyte data lake on Isilon, okay? And this is all seismic data; they are running analytics workloads to figure out exactly which place in the ocean they should drill, and precision here can be, like, millions or billions of dollars of difference. Now, they wanted to set up a secondary data center in the case of a disaster. What we were able to do is to spin up a DR service for this customer, leveraging Dell Technologies Cloud Storage. So they replicate the data to the cloud, and then we spin up their DR environment with VMware Cloud on AWS, okay? And now the data is already in the cloud. So they got their DR service with VMware Cloud on AWS, but with the same data set. Now they are running those seismic analytics workloads from AWS, Google Cloud, and Azure, thereby speeding up the process of finding the next location to drill.
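For the DR example above, the gating factor on the initial replication of a four-petabyte data lake is simple bandwidth arithmetic. The link speed below is an assumption for illustration; real deployments vary:

```python
def initial_seed_days(dataset_pb, link_gbit_s=100):
    """Days to copy a data set once over a dedicated link, ignoring protocol
    overhead and assuming the link runs flat out."""
    total_bits = dataset_pb * 1e15 * 8        # petabytes -> bits
    seconds = total_bits / (link_gbit_s * 1e9)
    return seconds / 86400

print(f"~{initial_seed_days(4):.1f} days at 100 Gbit/s")  # ~3.7 days
```

Which is why ongoing replication of incremental changes, rather than repeated bulk copies, is the practical pattern once the data set is seeded in the cloud.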
So, you see the example where we leveraged VMware Cloud on AWS for DR as a service, and since the data set is already there, now they are running their analytics workloads for their regular operations. >> Great, well, definitely quite a bit of maturation in the Dell cloud solution and how that fits into multi-cloud. Help put a point on it, Chhandomay, if you would: the conversations you're having with customers, and Dell's role in the multi-cloud discussion. >> Sure, so there are two important things. First, the ability to scale to many different clouds, to leverage the different services, the compute infrastructure, so on and so forth. And the second part of it is, depending on the applications, right, you might need to leverage, for the same workload working on the same data set, different services from different providers. Dell Technologies Cloud Storage for multi-cloud is enabling that for our entire customer set. And I will close out with one more important aspect. If you are a customer who is just starting your cloud journey, or are with one single cloud provider, go and see your cloud expert today. But still, you want to architect your solution so that, when the need comes, you can actually leverage multi-cloud for compute or other services. So, if you decouple your services from, like, where your data is, while doing the cloud access, that actually makes your cloud architecture (mumbles). So, with Dell Technologies Cloud Storage for multi-cloud, we're helping customers not only for today, but also for the future. >> All right, well, Chhandomay Mandal, thanks so much for the updates. Congratulations to the team on the progress, and I look forward to talking to you again in the future. >> Thank you. >> All right, I'm Stu Miniman. Thank you so much for watching theCUBE. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dell | ORGANIZATION | 0.99+ |
Chhandomay | PERSON | 0.99+ |
May 2020 | DATE | 0.99+ |
DELL | ORGANIZATION | 0.99+ |
Texas | LOCATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
one day | QUANTITY | 0.99+ |
millions | QUANTITY | 0.99+ |
Chhandomay Mandal | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
First | QUANTITY | 0.99+ |
100 gigabytes | QUANTITY | 0.99+ |
three | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
today | DATE | 0.99+ |
last year | DATE | 0.99+ |
One challenge | QUANTITY | 0.99+ |
hundreds of terabytes | QUANTITY | 0.99+ |
over 500 different services | QUANTITY | 0.99+ |
Dell Technologies | ORGANIZATION | 0.98+ |
NVIDIA | ORGANIZATION | 0.98+ |
one | QUANTITY | 0.98+ |
three hyperscalers | QUANTITY | 0.98+ |
Dell Technologies World | ORGANIZATION | 0.97+ |
EMC | ORGANIZATION | 0.97+ |
one point | QUANTITY | 0.97+ |
one part | QUANTITY | 0.97+ |
theCUBE Studios | ORGANIZATION | 0.96+ |
one area | QUANTITY | 0.96+ |
hundreds of thousands of patients | QUANTITY | 0.96+ |
CUBE | ORGANIZATION | 0.96+ |
VMware Cloud | TITLE | 0.95+ |
second part | QUANTITY | 0.95+ |
two important things | QUANTITY | 0.93+ |
Azure | TITLE | 0.93+ |
one more thing | QUANTITY | 0.93+ |
billions of dollars | QUANTITY | 0.92+ |
one cloud | QUANTITY | 0.92+ |
Isilon | LOCATION | 0.9+ |
Azure | ORGANIZATION | 0.9+ |
VM World | ORGANIZATION | 0.88+ |
VMware cloud | TITLE | 0.88+ |
Isilon | ORGANIZATION | 0.87+ |
TensorFlow | TITLE | 0.86+ |
single cycle | QUANTITY | 0.86+ |
zero fee | QUANTITY | 0.85+ |
Joe CaraDonna, Dell Technologies & Rich Sanzi, Google Cloud | CUBE Conversation, May 2020
>> Announcer: From theCUBE studios (upbeat music) in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Hi, and welcome to a special CUBE Conversation. I'm Stu Miniman, coming to you from our Boston area studio, and really happy to welcome to the program two guests to dig into some of the latest on what's going on in the multi-cloud ecosystem. First of all, coming back to the program, not too far from where I'm sitting, Joe CaraDonna. He is the Vice President of Engineering Technologies with Dell Technologies, and joining him, someone he knows quite well, is Rich Sanzi, who's Vice President of Engineering at Google Cloud. Gentlemen, thanks so much for joining. >> Great to be here, Stu. >> Thank you. >> All right, so Joe, we've been watching Dell Technologies, how the cloud portfolio and solution has been maturing and working with the ecosystem. Maybe set the table for us: what's Dell doing with cloud, and why are we sitting here? >> Well, we're here to talk about our OneFS for Google Cloud offering. We did something really special with Google here. We brought together the power and scale of our OneFS file system, along with the economics and the simplicity of public cloud, and together, I think, what we did is define a new standard for scalable file in public cloud, where we have game-changing performance and capacity. We have a full range of enterprise-grade data management capabilities, and we enable real hybrid cloud, and open up new use cases for our customers. >> Excellent, thanks Joe for setting the table on that. Rich, let's pull you into the conversation. Before we go into the Google thing, give us a little bit about your background. You've been in storage, as I hinted at. You worked with Joe before, and tell us about your role inside of Google. >> Yeah, so I actually joined Google a few years ago, responsible for storage, and storage for all of Google, in addition to Google Cloud.
And then, you know, big company things. We've been growing rapidly, and an opportunity opened up where I could be much more engaged on the Compute side, and so I'm responsible for Compute, the IaaS infrastructure for Google Compute Engine. So it's my pleasure to be here and support Joe and Dell Technologies in the launch of OneFS on Google Cloud. >> Yeah, Rich, I'd like to come back to you on something, 'cause when you look at cloud, for many years it was cloud versus, you know, taking over the world, destroying everything before it. And especially when you look at Compute, or storage specifically, people have a little bit of a hard time wrapping their heads around where my application lives. Does it just live one place? Are my applications going a little bit hybrid there? I look back, you know, the disclosure, I worked at EMC for years. You know that storage is complicated and diverse; that's why we have file, block, and object. We have lots of different types of solutions out there. There's never been a silver bullet that says, "Okay, 90% of the people can use this one thing for everything." So Rich, let's start with you. Cloud definitely has changed the discussion of storage, but I feel like I've seen the enterprise solutions looking more like the hyperscalers, and the hyperscale solutions blurring the lines with what was traditionally happening in the data center. Do you agree with some of that? >> Oh yeah, absolutely. I think it's really nice when you control the horizontal and the vertical, and you can adapt your application stack, but that's just not the reality of where we are today. The reality is that a cloud vendor, working with customers who bring their workloads to the cloud, has to be able to support all of the best-in-class types of storage that people are using. You're absolutely right: we're using cloud, or sorry, we're using objects, we're using block, we're using file.
One of the great pieces of this is that in the file space, you really need scalable file to go along with your scalable compute. >> Excellent, so-- >> Yeah, and I'll just add-- >> Please, go ahead Joe. >> Yeah, I mean, our customers, our Isilon customers in particular, have been asking for a long time to bring this type of capability to the cloud. They want the scalability of the elastic compute and the GPUs. They also want the OpEx model, right? And they want to be able to bring the high performance compute workloads to the cloud, but they need a scalable file system that can keep up with the demand, and that's what we set out to solve for. >> Excellent, so Joe, you mentioned the Isilon piece. You know, we've watched what has happened with that. You know, Isilon has always been software at the core and highly scalable, so we'd like to hear from you both. Joe, you teed it up there, but Rich, why is this important for Google Cloud customers, and how's it different from, maybe, how they were doing things in the past? >> Well, I think one of the things that I'm really excited about is that this enables customers to leverage the cloud and not make a ton of changes on their server side. So it really allows them to preserve their investment, and their applications, and the way that they think about storage, and the way they think about how that scales and performs. So that, for me, is a "let's make it easy for customers to consume cloud, rather than make it a hurdle," and that's my view. >> Yeah, and Joe, help frame this for us a bit. You know, we watched... Dell Technologies recently had the PowerStore announcement, a lot of discussion about cloud native architectures, moving to microservices. Google's one of the earliest and most prominent examples of containerized architectures out there. So, where does the file solution fit in this whole discussion that customers have about modernization of their applications, and the journey that they're going on?
>> Yeah, well, not all applications lend themselves well to object. They need file semantics, as well as the performance characteristics that come along with that, in terms of throughput and latencies. But even beyond that, what our customers are looking for is the data management capability, right? Whether it's snapshots, or the multi-protocol data access for NFS, or SMB, or even HDFS. And they're looking for replication, native replication, so they can have their Isilon systems in the data center replicate their data directly into the file service of the cloud, so they can actually operate on that data. And then there are things that we take for granted now, at least in the data center, of that high availability and that high durability that storage arrays deliver. So, it's a combination of things that make it attractive for customers, that open up these new workloads, especially in terms of high performance compute. >> Excellent, you talked a bit about some of the reasons why customers would want file. Of course, scale is one of those things we've been talking about for many years. Scale means different things to many people. There are few companies that know scale better than Google, so Rich, talk a little bit about scalability, performance, what these types of evolutions mean, and what you're hearing from customers. >> Certainly, from a scale perspective, things like objects and object stores are super scalable. It also, you know, requires application changes to really make use of. Customers are really looking for scalable solutions that enable them to bring their existing applications to cloud and not have to make a ton of changes.
That's one of the things I think is great about the Dell offering: it is a full-fidelity solution that has the performance and scale of what customers are expecting from their on-premise environments, and then, when we wire that up with the Google network into our Google Cloud compute regions, we get very high performance and very high fidelity, low latency as a result. We think that that removes potential headaches that customers may have when they bring big applications in the HPC space, and the related high performance computing space, to the cloud. >> Great, and Joe, is all this available now? Tell us a little bit about availability. What do you expect the demand to be for this solution? >> Well, I expect the demand to be great, right? The kind of workloads we're talking about here cut across a wide range of verticals: everything from life sciences for genomics research, oil and gas for seismic data processing, media and entertainment for video editing and rendering, or even finishing, automotive telemetry data that requires processing at scale, and EDA. So, I think it hits upon a wide variety of use cases and verticals, and we've even structured our pricing and our tiers to make it more accessible for use cases from high performance all the way down to even archival. >> So, maybe just to clarify, this is GA today? >> Yeah, yes, it is GA. (laughs) >> Okay, excellent. >> Beta is behind 'em. >> Appreciate that. And you mentioned flexibility on pricing. How much of this is what's available from Google, and what's available from Dell? How does that relationship and go-to-market work together? >> Yeah, well, it's a native service in Google. You can provision directly from the Google Portal. You can manage your file systems directly from the Google Portal, and the billing is integrated. So you get one bill from Google, whether it's for our OneFS file service or any of Google's native services.
>> Excellent. Rich, we'd love to hear you talk about the ecosystem from the Google side. I know last year, I was at the Google Next event, and really saw strong demand from the partner community. They're looking to work with Google; many have worked with Google for many years. What kind of feedback have you been getting, and how does this fit into the overall solution? >> So, from a partner perspective, one of the things that we really want is to enable our partners to bring their services onto our platform, and to integrate them tightly as if they were a Google offering. So things like the integrated billing, the provisioning from the Google Portal, things like that are core tenets for us for helping our customers and our partners' customers easily consume services in the cloud. So, sort of one of the P-zero requirements, from my perspective, for our product offering here was that, in fact, it was just integrated into the Google Cloud platform, and that it would be discoverable and easily usable by customers. So I think that enables partners to deliver a first-class service on our platform. >> Yeah, I mean, Rich, absolutely. Some of the feedback I've gotten from the ecosystem is, how do they put it? They say, "Google kind of puts you through the wringer. By the time you get through that, it is going to work." And of course, we know Google's doing that to make sure that there are good, reliable, strong services by the time the end customer gets them. All right, Joe-- >> Yes, and-- >> (laughs) Go ahead, yeah. >> I was going to say, you know, delivering these services, and delivering them reliably, it's a multi-company partnership, but we understand that at the end of the day, the customer wants to be assured that they have one contact for problems with the service, and so that's where Google very much wants to be that primary contact, 'cause who knows where the issues could be.
Are they in the data center, or are they in the network, or are they on the customer side? We feel responsibility to front (audio distorts). >> Yeah, absolutely. So, Joe, I guess, final thing for you. Talk about the Dell Technologies Google Cloud relationship, why that's important, and what differentiates it from some of the many other partnerships that Dell has. >> Yeah, sure. Before I touch on that, I want to talk about... you mentioned scale, and scale means different things to different people. And when we're talking about scale here, capacity's one element of that, and we certainly scale that way, but performance is the other way. And ESG did a performance study on the OneFS file service that we're offering, and they fired up the IOzone benchmark, which fired up over 1000 cores in Google, running NFS load to the file system. They sized the file system at 2 petabytes, which seems large, and it is, but you can scale much larger than that with our service. And their result on throughput was 200 gigabytes per second on the read, and 100 gigabytes per second on the write. Now, these are game-changing numbers, right? It's numbers like that that enable compute-intensive, high performance workloads in Google Cloud, and we're opening that up. And it's also important to note that this is a scalable file system, so if you want to double those throughput numbers, you just double the capacity of your file system. So that's the power of scale that we're delivering here. And our file system can scale up to 50 petabytes, so a lot of runway there. As far as the partnership with Google goes, I mean, Google's been great. Their infrastructure is amazing. In order to hit those kinds of performance numbers, your head goes to compute and the file system, but there's also a network in there, and to hit those kinds of numbers, Google had to supply a two-terabit-per-second network, and they were able to supply the compute and the network with ease, and without hiccup.
So it's together that we're solving the compute, network, and storage equation, so that we can deliver a holistic solution. And lastly, I would just point out, the engineering teams worked great together, bringing that cloud native experience into the Google Portal, really simplifying the user experience. So customers can provision and manage the systems directly from the Portal, as well as having unified billing. So I think the partnership's been great, and it's going to be interesting to see how our customers use the service to accelerate their cloud journey. >> Well, Joe and Rich, thank you so much for the updates. Congratulations on the GA of this, and I definitely look forward to hearing the customer journeys as they go on. >> Thank you, Stu. >> All right, thank you. And Rich, thank you for your partnership. >> Yeah, you're welcome, Joe. Thank you, as well. >> All right, be sure to check out thecube.net for all the coverage, the virtual events that we're participating in, as well as the back catalog of interviews that we've done. I'm Stu Miniman, and as always, thank you for watching theCUBE. (upbeat music)
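The scaling claim in the ESG numbers Joe quotes (a 2 PB file system delivering roughly 200 GB/s reads and 100 GB/s writes, with throughput doubling as capacity doubles) can be written down as a simple linear model. This is a restatement of the claim as made in the conversation, not an independent benchmark:

```python
BASELINE_PB, BASELINE_READ_GBS, BASELINE_WRITE_GBS = 2, 200, 100

def projected_throughput(capacity_pb):
    """Projected (read, write) GB/s, assuming the stated linear scaling holds."""
    factor = capacity_pb / BASELINE_PB
    return BASELINE_READ_GBS * factor, BASELINE_WRITE_GBS * factor

print(projected_throughput(4))   # doubling capacity -> (400.0, 200.0)
print(projected_throughput(50))  # at the stated 50 PB ceiling -> (5000.0, 2500.0)
```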
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Joe | PERSON | 0.99+ |
Rich | PERSON | 0.99+ |
Rich Sanzi | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Palo Alto | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
90% | QUANTITY | 0.99+ |
Boston | LOCATION | 0.99+ |
Joe CaraDonna | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
May 2020 | DATE | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
2 petabytes | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
Isilon | ORGANIZATION | 0.99+ |
Stu | PERSON | 0.99+ |
two terabyte | QUANTITY | 0.99+ |
Google Portal | TITLE | 0.99+ |
ESG | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
one | QUANTITY | 0.98+ |
one element | QUANTITY | 0.98+ |
thecube.net | OTHER | 0.98+ |
one bill | QUANTITY | 0.97+ |
one contact | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
both | QUANTITY | 0.96+ |
over 1000 cores | QUANTITY | 0.96+ |
Google Next | EVENT | 0.94+ |
Google Cloud | TITLE | 0.94+ |
First | QUANTITY | 0.94+ |
one thing | QUANTITY | 0.93+ |
100 gigabytes per second | QUANTITY | 0.92+ |
OneFS | COMMERCIAL_ITEM | 0.91+ |
up to 50 petabytes | QUANTITY | 0.89+ |
200 gigabytes per second | QUANTITY | 0.87+ |
Google Cloud | ORGANIZATION | 0.86+ |
Caitlin Gordon, Dell Technologies | CUBE Conversation, May 2020
From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. [Music] >> Hi, I'm Stu Miniman, and welcome to a special CUBE Conversation. Normally, the first week of May, we would be at Dell Technologies World, but that event has been moved to the fall. One of the major announcements from the event is going forward, though, and joining me to talk about powering up the midrange of storage is Caitlin Gordon. She is the Vice President of Marketing at Dell Technologies. Caitlin, thanks so much for joining. >> Thank you so much for having me, Stu. It's great to be here. >> All right, so Caitlin, the last couple of years at DTW, different segments of the market, as I said, have been powered up as the marketing messaging. Usually you've got some good t-shirts, you've got a lot of the labs and demos. So tell us about the important announcement that you're sharing with us today. >> Yeah, I mean, unfortunately the show is not going on, but the product is still launching. It actually has already started shipping, and we are excited that we're still able to announce it this week. PowerStore is really probably the most exciting product we've ever gotten to help bring to market, and all those demos and labs that you've talked about, we're going to have them all; they're all going to be digital this year as well. And it's really important for us as a business, because it really changes what we're able to do for our customers. You know, we love speeds and feeds in storage, but PowerStore is so much more than that. We certainly have designed it to meet the needs of all the workloads, block and file, providing performance and efficiency. But even more importantly, what we built with this platform is something that will help our customers change the way that they're running their data centers, and maybe most importantly, can adapt with them as their businesses evolve. >> Yeah, it's so important, Caitlin. I'm glad you talked about that. You know, the storage industry, you know,
IT in general, we can really get wonky and dig down to the speeds and feeds, and yeah, we want to understand, you know, how does NVMe and storage class memory and all that fit in. But I want you to talk about, you know, what is that customer requirement that you're solving for in the age of AI and cloud? You know, what are the customers looking for? What are those things that you're solving for that maybe, you know, in previous generations, you go back to, like, the Unity days, weren't on the table for discussion? >> Yeah, I think one of the most interesting things that's happened for us in the past few years in our conversations with customers is, we do have the speeds and feeds, the end-to-end NVMe and Optane and all that wonderful goodness, but what they're really asking for help on is how they move towards this vision of having a truly autonomous data center. How do they move to a fully self-service model, so that all of their infrastructure can be treated like code, and so that you can automate all of those storage workflows, taking out all of the additional cost and time and, probably most importantly, risk of manual tasks? How do we have infrastructure that can be more intelligent and help them make more proactive and intelligent decisions? That's one part of the equation. The other piece is, what we've heard loud and clear, and this is now true more than ever before, is that infrastructure investments not only need to make sense for what the needs are today, but also need to have the flexibility to adapt with businesses as they're going through this rapid and unpredictable transformation, so that they can ensure that their infrastructure investments today don't become technical debt tomorrow. So that ability to have infrastructure that can adapt and evolve with the business is so important to our customers. >> Yes, so Caitlin, how is that done? You know, traditionally, with storage, you think about it, you know, I buy a box and, you know, I write it off over a number of years.
So what's different about, you know, the services, and I'm guessing there's some financial pieces, that make, you know, PowerStore and the rest of the Power family different than what I would have bought traditionally when buying a storage array? >> Yeah, really the whole dynamic changes, and it starts really foundationally with the flexible architecture. So the product itself is built with a flexible architecture. The fact that it's a container-based architecture means we're able to innovate on a container basis, which makes our data services across the portfolio more consistent and enables us to innovate faster. It also means that all of our innovation will be delivered to customers in a non-disruptive way; whether that's a hardware upgrade or a software upgrade, all of that will happen without impacting the business. That's really the flexible and adaptable architecture. But when you look at the deployment, that's an even bigger conversation. How can we help deliver infrastructure that gives you a solution that can support a small footprint at the edge, collapse that infrastructure at the edge, help with data center modernization, and connect into cloud? And the last piece, which you were just touching on, is that consumption model. More and more, and that's accelerated over the past month or so, the ability to consume this as a service is such an important part of what we're doing here. PowerStore is available in all of our Dell Technologies On Demand offerings, Flex On Demand, to give you that ability to really consume infrastructure in an OpEx model. >> Really interesting. You talked about, you know, underneath the covers, you know, a containerized architecture. You know, I think back to previous generations, when, you know, EMC moved on to an Intel-based architecture; you know, there's things where you say there's a major change in the code base, a major change in the architecture, and from a customer standpoint, they shouldn't have to think about it, but I know there's so much work that goes through to
make sure that things are rock-solid, that it's still going to provide X nines of capability, and make sure that you can run your business on it. Help us understand a little bit: you said a lot of things have changed, but we're still talking about things that you're running your business on, for midrange customers, small and midsize enterprises. What's still the same, I guess, is what I'm asking, for today's storage compared to what we were looking at before? >> Yeah, if you look at it, the architecture itself is built as an architecture that can serve the broadest set of needs, the biggest set of our customer base. Foundationally, it supports all physical databases and applications. It's got performance that's really incredible compared to our previous lead midrange all-flash solutions: seven times faster, three times better response times. The efficiency, of course, is critical, the ability to support that in a really small footprint with always-on inline data reduction, four-to-one guaranteed. The architecture not only scales up, of course, as a storage appliance, but also can independently scale compute, so you have the ability to scale up in an appliance and scale out into a cluster. And of course, you can't resist the buzzwords, and NVMe is important, of course: the ability to support NVMe-based flash drives or SCM, and specifically the dual-ported Optane drive for persistent storage. So when you look at it, it truly is a best-in-class all-flash midrange storage array, but it also does a lot more, and that's part of the fun dynamic of what we've built. >> Okay, so we talked about scaling up and scaling out. Of course, two things are critically important to customers: it's my data and my applications. Obviously there's a strong legacy at Dell EMC looking at the data. You touched a little bit about
the applications, but tell me more: how does this fit for my latest cloud-native type environments? How do applications fit into this environment? >> Yeah, it really builds on what we were starting to talk about with that container-based architecture. The fact that it's container-based is interesting and good for us, because we can innovate faster. It's even more important for customers, because we can deliver that to them faster and more consistently. What's more interesting is what we can then do for their workloads and their applications. Because we have this brand-new modular software operating system, of course we can deploy that as standard bare metal on purpose-built hardware, a storage appliance. What's even more interesting, and what's really different about what we can do with PowerStore, is that we can also abstract that storage OS from the underlying hardware, onboard VMware ESXi, and run both the storage operating system and applications natively on the appliance. So we're able to collapse the compute and storage layers into a single piece of infrastructure and run a handful of specialized applications on that one appliance, which really is game-changing in the data center and at the edge, to change the way that you can run and consolidate your operations. >> Okay, you say specialized applications, so let's build on that a little bit. I think back, obviously, Dell has a very strong position in hyper-converged infrastructure, which is scaling compute and storage and doing that in an entire environment. I remember there were a lot of efforts to say, well, with a virtualized environment, maybe I take storage and I can put applications on it. There was a use case with Isilon, to say, I've got a lot of general-purpose compute, and if I have some excess capacity, maybe I can do that. It wasn't something that I heard used a lot. So what sort of applications, and how do you compare and contrast this with other things like
HCI? >> Yeah, this is PowerStore's AppsON capability, and really what it's built for is these two classes of applications. The first is infrastructure apps, so think of these as any type of application that the infrastructure team themselves is leveraging and wants to use to simplify their operations: antivirus, data protection, things like that. The other category would be what we call data intensive. A data-intensive application really is more storage intensive: it either has a high demand for capacity and a small demand for compute, or it's one of these more latency-sensitive applications. Real-time analytics is a good example, things like Flink and Spark, where response time is really king. When we look at that in comparison to what HCI is, we have been, and we are, in a great position, right? VxRail has been leading the hyper-converged market, and we know that our customers are deploying that alongside three-tier architecture. When you look at what we've done with PowerStore and what we already have with VxRail, they're highly complementary. What we've done in HCI is we've taken storage and brought it into compute; what we've done with PowerStore is we've taken storage and we've brought compute into it. It really is optimized for different challenges, and we really think complementing those in the data center next to each other is going to be an increasingly common deployment model, to have the right architecture for the right workloads, and then you have VMware consistent operations across the top. So you have that consistent operations within your data center, to edge, and also to the cloud. >> All right, so end-to-end portfolio is what you're saying; there are options for the different applications. One of the big challenges for storage people always is, I always used to joke, it's the four-letter word: migration. There are very few greenfield deployments out there, so for the existing Dell customers, people out there that
have been doing things in previous ways, how do they get to PowerStore? And once they're on PowerStore, what does that mean for future growth, expansion, migration discussions? >> Yeah, I've heard this before, right? Forklifts are not a friendly thing, and the good news is, with PowerStore, it is truly the end of data migration. What we've built with PowerStore is an architecture that enables you to non-disruptively upgrade the controllers when new generations come out. You can non-disruptively operate those, keep all the capacity in place, and not have any impact to your business. We also know that customers need to get data to PowerStore now, and getting data to PowerStore is going to be really, really seamless. We have invested significantly in a number of different migration options, for our portfolio and for third parties, to get data to PowerStore. And what seamless means can be different to different customers: it can be non-disruptive, it could be agentless, it also could be host-based. We'll have all of those solutions from day one to enable that transition to happen as seamlessly as possible, on a customer's own time. We've actually optimized this to the point where we now enable you to move data from an existing platform to PowerStore in less than 10 clicks. >> Okay, that's great, Caitlin. I remember back when Dell first finished the acquisition of EMC, one of the things we heard loud and clear from Jeff Clarke was a simplification of the portfolio. It's something we've heard throughout the ranks. I remember talking to Jeff Boudreau about hinting at what was happening in the midrange. So what does this mean for existing midrange lines, and tell us what we should expect to see as this transition rolls out. >> Yeah, absolutely. So PowerStore is absolutely our lead midrange all-flash offering. We continue to have Unity XT as our lead hybrid midrange solution, and we have not announced end of life for any of our other existing midrange
platforms. What we know above anything else is that transformation and transitions in the data center and on storage arrays take time, and the important thing for us is that we enable our customers to do that on their own time and as seamlessly as possible. So we have not announced a new end of life; when we do, we're going to have a long service life, and we've built all of these different migration tools to help support that transition. So it's going to be very easy for our customers to make that move on their own time, and it still enables us to deliver on what we've promised you, which is a simplified portfolio. >> Great. Caitlin, the last thing I want to ask you: what's challenging for people is, number one, they've got the skill set and the roles that they have today, so there needs to be an easy migration to go from what they have to the new. On the other hand, sometimes you want to take a clean sheet of paper and say, boy, if you could just start over and do it this way, it's going to make your life so much easier. So tell us how you're balancing that, and how you can help both your install base as well as new people coming in that might not have been in the traditional storage industry. >> Yeah, I think the reality is that the specialized skill of a storage administrator is something that will not be a growing skill set, and we need to help our customers certainly support an operating model that does work like a storage array, but does so in a way that is extraordinarily simple and has a lot of intelligence built in. So first and foremost, this is a storage platform, and it has really been designed to have the most seamless and simple operating experience, from an element manager with PowerStore Manager for a storage admin. But at the same time, we know that, for a variety of reasons, a lot of customers have a single team that manages their infrastructure and is really moving into more of a cloud operating model, and for that we've built
in all of the integrations and tools with VMware, whether it's vSphere or VMware Cloud Foundation, to really help the VMware administrator also be able to operate the system as well. >> Excellent. So just on that, also, how do things like analytics fit into the entire monitoring discussion? Help us understand how that fits in with some of the rest of the Dell portfolio. >> Yeah, that's exactly where I was going to go. The last piece of this is CloudIQ, which is something that's really important and strategic for us. CloudIQ of course comes with PowerStore; it comes with all of our storage offerings today, and we're officially announcing it coming across our infrastructure portfolio as well. That's really game-changing for customers in a number of different ways. First, it really helps reduce risk in the environment, because it shows you a health score for your data center, and if it has an issue, it will quickly help you pinpoint that and troubleshoot it before it ever actually becomes a problem that impacts your business. It's also going to help you predict your future needs: things like predictive analytics built into CloudIQ help you do capacity forecasting and planning, so that you can see exactly when you're going to hit those thresholds of 80, 90, 100 percent capacity, and remedy that before it impacts the business. And with it now coming across the entire infrastructure portfolio, the value it can bring goes beyond just storage alone, to the entire data center. And one of the biggest things our customers and partners have loved about CloudIQ is the trusted advisor feature, which allows our reps or partners to be part of that CloudIQ experience. They can be invited in, from a mobile application or from a web browser, have that remote monitoring of the environment, and add that human intelligence to the machine intelligence to really manage that data center and help our customers stay on top of problems and stay ahead of them before they impact the business. >> Well, Caitlin,
congratulations to you and the whole PowerStore team. We understand a lot of hard work goes into building this, and we really look forward, by the time we get to Dell Technologies World in the fall, to talking to customers that are using it. Thanks so much for joining us, and I look forward to talking with you again. >> Thanks, Stu. Great to see you. >> All right, be sure to check out theCUBE.net for all the upcoming events that we're doing, right now of course a hundred percent remote. I'm Stu Miniman, and thank you for watching theCUBE. (upbeat music)
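The capacity forecasting Caitlin describes, predicting when utilization will cross the 80, 90, and 100 percent thresholds, can be illustrated with a toy model. This is a hedged sketch using a simple linear trend over daily utilization samples; CloudIQ's actual predictive analytics are proprietary, and the function and data below are hypothetical:

```python
def days_until_threshold(history_pct, threshold_pct):
    """Estimate how many days until capacity usage crosses a threshold,
    using a least-squares linear fit over daily utilization samples.

    history_pct: daily capacity-used percentages, oldest first.
    Returns days from the last sample, or None if usage is not growing.
    """
    n = len(history_pct)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(history_pct) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, history_pct))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    if slope <= 0:
        return None  # flat or shrinking usage: threshold never reached
    intercept = mean_y - slope * mean_x
    crossing = (threshold_pct - intercept) / slope  # day index at threshold
    return max(0.0, crossing - (n - 1))

# Usage growing about 1 point/day from 70%: the 80% threshold is 6 days out.
print(days_until_threshold([70, 71, 72, 73, 74], 80))  # -> 6.0
```

A production system would use far more history, seasonality-aware models, and confidence intervals, but the threshold-crossing idea is the same.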
Breaking Analysis: Storage...Continued Softness with Some Bright Spots
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now here's your host, Dave Vellante. >> Hello everybody and welcome to this week's CUBE Insights, powered by ETR. It is Breaking Analysis, but first I'm coming to you from the floor of Cisco Live in Barcelona, and I want to talk about storage. Storage continues to be soft but there are some bright spots. I've been reporting on this for awhile now and I want to dig in and share with you some of the reasons why, maybe give you some forecasts as to what I think is going to happen in the coming months. And of course, we want to look into some of the ETR spending data, and try to parse through that and understand who's winning, who's losing, who's got the momentum, where are the tailwinds and headwinds. So the first thing I want to show you is let's get right into it. What this slide is showing here is a storage spending snapshot of net score. Now remember, net score in the ETR parlance is an indicator of momentum or spending velocity. Essentially every quarter, what ETR does is they go out to, in this case, 1100 respondents out of the 4500 dataset, and they ask them are you spending more or are you spending less. Essentially they subtract the less from the more and that constitutes net score. It's not that simple but for this purpose, that's what we're showing. Now you can see here on the left hand side, I'm showing all respondents out of 1161. You see the January survey net scores. You've got Rubrik, Cohesity, Nutanix, and Pure, and VMware vSAN are the top five. So Rubrik and Cohesity, very strong, and interesting, Rubrik was very strong last quarter. Cohesity not as strong but really shooting up. It kind of surprised me last quarter, Cohesity being a little low but they were early into the dataset and now they're starting to show what I think is really happening in the marketplace. That's a good indicator. But you can see 75 percent, 72 percent. 
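The net score calculation described above, the percentage of respondents spending more minus the percentage spending less, can be sketched in a few lines. This is a simplified illustration; as noted, ETR's actual methodology is not that simple and uses more response categories and weighting:

```python
def net_score(responses):
    """ETR-style net score from survey responses.

    responses: list of strings, each "more", "less", or "flat".
    Returns (% of respondents spending more) - (% spending less).
    """
    n = len(responses)
    more = sum(1 for r in responses if r == "more")
    less = sum(1 for r in responses if r == "less")
    return 100.0 * (more - less) / n

# 7 of 10 respondents spending more, 1 spending less: net score 60.0
print(net_score(["more"] * 7 + ["less"] + ["flat"] * 2))  # -> 60.0
```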
Nutanix still very strong at 56 percent, driving that hyperconverge piece. You see Pure Storage at 44 percent, down a little bit, talk a little bit more about that in a moment. VMware vSAN, Veeam, et cetera, down the list. The thing about the left hand side and storage in general, you can see the softness. Only about one third of the suppliers are in the green, and that's a problem. If you compare this to security, probably three quarters are in the green. It's a much hotter segment. Now, look on the right hand side. The right hand side is showing what ETR calls GPP, giant, public, and private. You can see there's an N of 403. These are the largest, the very largest public and private companies, private company being a company like Mars Candy. And they say that they are the best indicators of spending momentum in the dataset. So really isolating on some of the large companies. Look what happens here. You can see Rubrik gets even stronger as does Cohesity, they're into the 80 percent range. That's really rarefied air, so very strong. You can see Nutanix drops down. It does better in the smaller companies, it appears. They drop down to 41 percent. Pure gets stronger in the GPP at 68 percent. You can see VMware's vSAN uptick to 45 percent. Nimble gets better, HPE's Nimble, to 54 percent. Dell drops down to 4.8 percent. HPE goes up to 33 percent. HPE was red in the left hand side. You can see Veeam drops, not surprising, Veeam in the biggest companies is not going to be as prevalent. We talked about that in our Breaking Analysis segment after the acquisition of Veeam. You can see NetApp bumps up a little bit but it's still kind of in that red zone. I also want to call your attention to Actifio. They're way down on the bottom in the left hand side, which kind of surprised me. And then I started digging into it because I know Actifio does better in the larger companies. In the right hand side, they pop up to 33 percent. 
It's only an N of three, but what I'm seeing in the marketplace is Actifio solving some really hard problems in database and copy data management. You're starting to see those results as well. But generally speaking, this picture is not great for storage, with the exception of a few players like Rubrik and Cohesity, Pure, Nutanix. And I'm going to get into that a little bit and try to explain what's going on here. The market's bifurcated. Primary storage has been on the back burner for awhile now, and I've been talking about that. The one exception to that is really been Pure. Little bit for Dell EMC coming back, we'll dig into that a little bit more but Pure has been the stand-out. They're even moderating lately, I'll talk about that some more. Secondary storage is where the market momentum is and you can see that with Rubrik and Cohesity. Again, we'll talk about that some more. Let me dig into the primary side. Cloud, as I've talked about in many Breaking Analysis segments is siphoning off demand from on-prem spend. The second big factor in storage has been there was such an injection of flash into the marketplace, it added headroom. Customers used to buy spindles to get performance, and they don't need to do that so much anymore because so much flash was pushed into the system. The third thing is you're still seeing in primary the consolidation dynamics play out with hyperconverge. So hyperconverge is the software defined bringing together of storage, compute, and networking into a single logical managed unit. That is taking share away from traditional primary storage. You're also seeing tactical NAND pricing be problematic for storage suppliers. You saw that with Pure again this past quarter. NAND pricing comes down, which you'd think would be a good thing from a component standpoint, which it is, but it also lowers prices of the systems. So that hurt Pure's revenue. 
Their unit volume was pretty good but you're seeing that sort of put pressure on prices, so ASPs are down, average system prices. Let's turn our attention to the secondary market for a moment. Huge injection of venture capital, like a billion dollars, half a billion dollars over the last year, and then another five billion just spent on the acquisition of Veeam. A lot of action going on there. You're seeing big TAM expansions where companies like Rubrik and Cohesity, who have garnered much of that VC spending, are really expanding the notion of data protection from back-up into data management, into analytics, into security, and things of that nature, so a much bigger emphasis on TAM expansion, and of course, as I talked about, the M and A. Let's dig into each of these segments. The chart that I'm showing now really digs into primary storage. You can see here the big players: Pure, Dell EMC, HPE, NetApp, and IBM. And look at that, there's only one company in the green, Pure. You can see they're trending down just a little bit from previous quarters but still far and away the company with the most spending momentum. Again, here I'm showing net score, a measure of spending velocity, back to the January '18 survey. You can see Dell EMC sort of fell and then is slowly coming back up. NetApp hanging in there, Dell EMC, HP, and NetApp kind of converging, and you can see IBM. IBM announced last quarter about three percent growth. I talked about that actually in September. I predicted that IBM storage would have growth because they synchronized their DS8000 high-end mainframe announcement to the z15, so you saw a little bit of uptick in IBM. Pure, as I said, 15 percent growth. I mean, if you're flat in this market or growing at three percent, you're doing pretty well, you're probably a share gainer. We'll see what happens in February when Dell EMC, HPE, and NetApp announce earnings. We'll update you at that time. So that's what you're seeing now.
Same story, Pure outpacing the others, everybody else fighting for share. Let's turn our attention now to secondary storage. What I'm showing here is net score for the secondary storage players. I can't isolate on a drill down for secondary storage, like the last slide I could do on storage overall, but what I can show is pure plays. What's showing here is Rubrik, Cohesity, Veeam, Commvault, and Veritas. Five pure plays, and you can argue Veritas isn't a pure play, but I consider it a pure play data protection vendor. Look at Rubrik and Cohesity really shooting up to the right, 75 percent and 72 percent net scores, respectively. You see Veeam hanging in there. This is again, all respondents, the full 1100 dataset. Commvault announced last quarter it beat earnings but it's not growing. You can see some pressure there, and you can see Veritas under some pressure as well. You can see a net score really deep in the red, so that's cause for some concern. We'll keep watching that, maybe dig into some of the larger accounts to see how they're doing there. But you can see clear standouts with Rubrik and Cohesity. I want to look at hyperconverged now. Again, I can't drill into hyperconverged, but what I can do is show some of the pure plays. So what this slide shows is the net score for some of the pure play hyperconverged vendors led by Nutanix. The relative newcomer here is vSAN with VMware. You can see Dell EMC, VxRail, and SimpliVity. I would say this: a lot of the marketing push that you hear out of Dell and out of VMware says Nutanix is in big trouble, they're dying and so forth. Our data definitely shows something different. The one caution is, you can see Nutanix in larger accounts, not as strong. And you can see both vSAN and Dell EMC stronger in those larger accounts. Maybe that's kind of their bias and their observation space, but it's something that we've got to watch. But you can see the net scores here. Everybody's in the green because overall, this is a strong market.
Everybody is winning. It's taking share as I said from primary. We're watching that very closely. Nutanix continues to be strong. Watching very carefully that competitive dynamic and the dynamics within those larger companies which are a bellwether. Now the big question that I want to ask here is can storage reverse the ten-year trend of the big cloud sucking sound that we have heard for the past decade. I've been reporting with data on how cloud generally has hurt that storage spend on-prem. So what I'm showing here in this slide is the net score for the cloud spenders. Many hundreds of cloud spenders in the dataset. What we're showing here is the net score, the spending velocity over the last 10 years for the leaders. You can see Dell EMC, the number one. NetApp, right there in terms of market share, IBM as well. I didn't show HPE because the slide got too busy but they'd be up there as well. So these are the big spenders, big on-prem players and you can see, well, it's up and down. The highs are lower and the lows tend to be lower. You can see on the latest surveys, maybe there's some upticks here in some of the companies. But generally speaking, the trend has been down. That siphoning away of demand from the cloud guys. Can that be reversed, and that's something that we're going to watch, so keeping an eye on that. Let me kind of summarize and I'll make some other comments here. One of the things we're going to watch here is Dell EMC, NetApp, and HPE earnings announcements in February. That's going to be a clear indicator. We'll look for what's happening with overall demand, what the growth trajectory looks like, and very importantly, what NAND pricing looks like. As a corollary to that, we're going to be watching elasticity. I firmly believe as prices go down, that more storage is going to bought. That's always been the case. Flash is still only about 20, 25, 30 percent of the market, about 30 percent of the spending, about 20 percent of the terabytes. 
But as prices come down, expect people to buy more. That's always been the case. If there's an elasticity of demand, it hasn't shown up in the earnings statements, and that's a bit of a concern. But we'll keep an eye on that. We're also going to watch the cloud siphoning demand from on-prem spend. Can the big players, and guys like Pure and others, new start-ups maybe, reverse that trend? Multi-cloud, there's an opportunity for these guys. Multi-cloud management, TAM expansion into new areas. Actually delivering services in the cloud. You saw Pure announce block storage in the cloud. So that's kind of interesting, and we'll watch that. Other players may be getting into the data protection space, but as it relates to the cloud, one of the things I'm watching very closely is the TAM expansion of the cloud players. What do I mean by that? Late last year, Amazon announced a broader set of products or services in its portfolio. Let's watch for Amazon's moves, and other big cloud players' moves, into the storage space. I fully expect they're going to want to get a bigger piece of that pie. Remember, much if not most of Amazon's revenue comes from compute. They really haven't awakened to the great storage opportunity that's out there. Why is that important? You saw this play out on-prem. Servers became a really tough market. Intel made all the money. Amazon is a huge customer of Intel, and Intel's getting a big piece of Amazon's EC2 business. That's why you see, in part, Amazon getting into its own chip design. I mean, in the server business, you're talking about a low gross margin business. If you're in the 20s or low 30s, you're thrilled. Pure last quarter had 70 plus percent gross margins. It's been a 60 plus percent gross margin business consistently. You're going to see the cloud guys wake up to that and try to grab even more share. It's going to be interesting to see how the traditional on-prem vendors respond to that.
Coming into last decade, you saw tons of start-ups, but only two companies really reached escape velocity: Nutanix and Pure. At the beginning of the century, you saw Data Domain, Isilon, Compellent, and 3PAR all go public. EqualLogic and LeftHand got taken out. There are a bunch of other companies that got acquired. Storage was really a great market. Coming into this decade, in the mid part of the decade, you had lots of VC opportunity here. You had Fusion-io and Violin, and Tintri went public. They all flamed out. You had a big acquisition with SolidFire, almost a billion dollars, but really Pure and Nutanix were the only ones to make it. So the question is, are you going to see anyone reach escape velocity in the next decade, and where's that going to come from? The likely players today would be Cohesity and Rubrik. Those unicorns would be the opportunity. You could argue Veeam, I guess, reached it, but it's hard to tell because Veeam's a private company. By escape velocity, we're talking large companies who go public, have a big exit in the public market, and become transparent so we really know what's going on there. Will it come from a cloud or a cloud-native play? We'll see. Are there others that might emerge, like a Nebulon or a Clumio? A company like Infinidat's doing well; will they hit escape velocity and do an IPO and, again, become more transparent? That's again something that we're watching, but you're clearly seeing moves up the stack, where there's a lot more emphasis in spending on cloud, cloud native. We clearly saw it with hyperconverged consolidation, but up the stack towards the apps, really driving digital transformations. People want to spend less on heavy lifting like storage. They're always going to need storage. But is it going to be the same type of market it has been for the last 30 or 40 years, of great investment opportunities? We're starting to see that wane, but we'll keep track of it.
Thank you for watching this Breaking Analysis, this is CUBE Insights powered by ETR. This is Dave Vellante. We'll see you next time.
Joe CaraDonna & Bob Ganley, Dell EMC | AWS re:Invent 2019
(upbeat music) >> Announcer: Live from Las Vegas, it's theCUBE. Covering AWS re:Invent 2019, brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Good morning, welcome back to theCUBE, Lisa Martin live at AWS re:Invent. Day two of theCUBE's coverage. I am with Stu Miniman, and Stu and I are pleased to welcome a couple of guests of our own from Dell EMC. To my left is Joe CaraDonna, the VP of engineering technology. Welcome to theCUBE. >> Good to be here. >> And then one of our alumni, we've got Bob Ganley, senior consultant, Cloud product marketing. Welcome back. >> Thank you. Glad to be here. >> So guys, here we are at AWS re:Invent, with 60 plus thousand people all over the strip here. We know Dell Technologies, Dell EMC well, big friends of theCUBE. Joe, Dell, AWS, what's going on? You guys are here. >> Apparently Cloud is a thing. >> Lisa: I heard that. I think I've seen the sticker. >> Yeah, you've seen the sticker. Over the last year, we've been busy rolling out new Cloud services. I mean, look around, right. It's important to our customers that we can deliver hybrid Cloud solutions to them that are meaningful to them, and to help them get their workloads to the Cloud, and to be able to migrate and move between Clouds and the data center. >> Yeah, Joe, maybe expand a little on this. So we watched when VMware made the partnership announcement with AWS a couple of years ago, which sent ripples through the industry. And VMware has had a large presence at this show; we've seen a lot of announcements and movements with Dell, Dell Technologies, Dell EMC over the last year or more, but this is the first year that Dell's actually exhibiting here, so help explain for our audience a little bit that dynamic with leveraging VMware and also what Dell is bringing to this ecosystem. >> Yeah, sure. I mean, the way we think about it is, it's really a multi-level stack: you have the application layer and you've got the data layer. 
So applications with VMware, we're focusing on enabling applications, whether they're VMs or containerized now, being able to move those to the Cloud, move them on-prem. Same is true for data. And data is actually the harder part of the problem, in my opinion, all right, because data has gravity. It's just big, it's hard to move, and the principles of data in the Cloud are the same as they are on-prem, where you still have to provide the high availability and the accessibility and the security and the capacity and scale in the Cloud as you would in the data center. And what we've been doing here, with our Cloud Storage Services, is bringing essentially our arrays as a service to the Cloud. >> You talked about some of those changes, and absolutely, data's at the center of everything. We've been saying for a long time, you talk about digital transformation, the outcome of that is, if you're not letting data drive your decisions, you really haven't been successful there. One of the biggest challenges beyond data is the applications. Customers have hundreds, if not thousands, of applications. They're building new ones, they're migrating, they're breaking them apart into microservices. Bob, help us understand where that intersects with what you're talking with customers about. >> Yeah, absolutely. So one of the reasons we're here is most organizations today are leveraging some public Cloud services, and at the same time, most organizations have investment in on-prem infrastructure. I think we heard Andy say in the keynote yesterday, 97% of all enterprise IT spend is on-prem right now. So organizations are trying to figure out how to make those work together. 
And that's really what we're here to do, is help organizations figure out how to make their big on-prem investment work well with their public Cloud investment, and AWS is clearly the leader there in that space, and so we're here to work with our customers in order to help them really bridge that gap between public Cloud and private Cloud and make them work together well. >> And Bob, where does that conversation start? Because one of the other things that Andy talked about is that his four essentials for transformation start at the senior executive level: a strategic vision that's aggressively pushed down throughout the organization. Are you now having conversations at that CEO level for them to really include this value of data and apps as part of an overall business transformation? >> Yeah, definitely. If you think about it, it's all about people, process and technology. And technology is only a small part of it. And I think that's the important thing about what Andy was saying in the keynote yesterday: it's about making sure that Cloud as an operating model, not as a place, but as an operating model, gets adopted across your organization. And that has to have senior leadership investment. Yeah, they have to be invested in this move, both from an applications and a data perspective. >> Yeah, and on the technology side of things, you want to be able to give the developers the tools they need so they can develop those Cloud native applications. So in the on-prem sphere, we have ECS, our object store technology, for bringing object storage to the data center. We're plugging into Kubernetes every which way. With VMware, we're developing CSI drivers across our storage portfolio to be able to plug in to these Kubernetes environments. And we're enabling data and application migration across environments as well. >> In many ways, Joe, we've seen a real disaggregation of how people build things. 
When I talk to the developer community, hybrid is the model that many of them are using, but it used to be nice in the old days: I bought a box and it had all the feature checklist that I wanted. Now, I need to put together all these microservices. So help us understand some of those services that you provide everywhere. >> It's a horror, right? What did Andy Jassy say yesterday, these are your father's data requirements, right? And he's right about that, because what's happening with data is it's sprawling. You have it in data centers, you have it in the Cloud, you have it in multiple Clouds, you have it in SaaS portals, you have it on file services and blob services, and how do you wrap your arms around that? And especially when you start looking at use cases like data analytics, and you start thinking about data sets, how do you manage data sets? Maybe I had my data born on-prem and I want to do my analytics in the Cloud; how do I even wrap my hands around data sets? So we have a product called ClarityNow that in fact does that. It indexes billions of files and objects across our storage, across our Cloud services, across Amazon S3, across third party NAS systems as well, and you can get a single pane of glass to see where your files and your objects reside. You can tag it, you can search upon it, you can create data sets based on search, on your tags and your metadata, to then operate on those data sets. So data's being used in new and different ways; people need new ways to manage it, and these are some of the solutions that we're bringing to market. >> You mentioned Multicloud, I wanted to chat about that. We know it's not a word that AWS likes. >> Joe: Can we say that here? >> Yeah. >> On theCUBE, absolutely. >> This is theCUBE, exactly. 
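The workflow Joe describes — index files and objects from many sources into one view, tag the results of a search, then operate on the tagged records as a data set — can be sketched in a few lines. This is a hedged illustration only: all class and field names here are invented for the example, not the ClarityNow API.

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    path: str          # file path or object key
    source: str        # e.g. "on-prem-nas", "s3", "azure-blob"
    size: int          # bytes
    tags: set = field(default_factory=set)

class Catalog:
    """Single-pane-of-glass index over records from multiple storage systems."""
    def __init__(self):
        self.records = []

    def index(self, path, source, size):
        self.records.append(Record(path, source, size))

    def tag(self, predicate, tag):
        # Tag every record matching a search predicate.
        for rec in self.records:
            if predicate(rec):
                rec.tags.add(tag)

    def dataset(self, tag):
        # A "data set" is simply the records carrying a given tag.
        return [r for r in self.records if tag in r.tags]

catalog = Catalog()
catalog.index("/genomics/run1.bam", "on-prem-nas", 10**9)
catalog.index("runs/run2.bam", "s3", 2 * 10**9)
catalog.index("logs/app.log", "azure-blob", 10**6)

# Search by name pattern, tag the hits, then operate on them as one set.
catalog.tag(lambda r: r.path.endswith(".bam"), "genomics")
analytics_set = catalog.dataset("genomics")
print([r.source for r in analytics_set])  # ['on-prem-nas', 's3']
```

The point of the sketch is the separation: the index spans sources, while tags and searches define data sets independently of where the bytes live.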
But the reality is, as we talked to, and Stu knows as well, most CIOs say, we've inherited this mess of Multicloud, often symptomatically, not as a strategic direction. Give us an overview of what Dell EMC, I'll ask you both the same question, and Joe we'll start with you, how are you helping customers address, whether they've inherited Multicloud through M&A, or developer choice, how do they really extract value from that data? They know there are business insights in there that can allow them to differentiate their business, but they have all of this sprawl. What's the answer for that? >> Well, some of that is ClarityNow, that I was talking about: the ability to see your data, because half the battle is being able to see it. Also, with Multicloud, whether you inherited it, or whether it was intentional or not, we're setting out our solutions to be Multicloud; you can run them anywhere. But not only that, the twist to Multicloud is, well, what if you made your data available to multiple Clouds simultaneously? And why would you want to do that? One reason you want to go that path is maybe you want to use the best services from each Cloud. But you don't want to move your data around, because again, it has gravity, and it takes time and money and resources to do that. Through our Cloud Storage Services, it's centralized, and you can attach to whatever Cloud you want. So some of that is around taking advantage of that, some of that's around data brokering, which we heard Andy talk a little bit about this morning, where you may have data sets that you want to sell to your customers, and they may be running in other Clouds. And some of that is, you may want to switch Clouds due to the services they have, the economics, or perhaps even the requirements of your applications. >> Yeah, from an application perspective, for us it's really about consistency, right. So we say it's consistency in two ways: consistent infrastructure and consistent operations. 
And so we talk about consistent infrastructure: we want to help organizations be able to take that virtual machine and move it. Where is the best place for it, right? So it's about right workload, right Cloud. And we talk about application portfolio analysis, and helping organizations figure out, what is that set of applications that they have? What should they do with those applications? Which ones are right to move to Cloud? Which ones should they not invest in and kind of let retire? And so that's another aspect of that people and process thing that we talked about earlier: helping organizations look at that application portfolio, and then take that consistent infrastructure, use multiple Clouds with that, and then consistent operations, which is a single management control plane that can help you have consistency between the way you run your on-prem and the way you run your public Cloud. >> Yeah, and give them the freedom to choose the Cloud they want for the workload they want. >> And is that the data level where the differences between, we'll say, the public Cloud providers are most exposed? Is it at the data layer where the differences in, we'll say, AWS versus its competitors, the features and the functionalities, are most exposed? >> I think so. I think that one place that we think public Cloud is weak is file. File workloads. And one of the things we're trying to do is bring consistent file, whether it's on-prem or across the Clouds, through our Cloud Storage Services, with Isilon and the scale and the throughput that those systems can provide: bringing consistent file services, whether it's NFS, SMB or even HDFS, or the snapshotting capabilities. And, equally important, the native replication capabilities across these environments. 
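The two file-service behaviors Joe closes on — point-in-time snapshots and replication between sites — can be shown with a toy model. Real systems do this with copy-on-write at the block level and incremental transfer; this sketch just illustrates the semantics, and every name in it is invented for the example.

```python
import copy

class FileService:
    """Toy file service with named snapshots and whole-state replication."""
    def __init__(self):
        self.files = {}       # path -> bytes
        self.snapshots = {}   # snapshot name -> frozen {path: bytes}

    def write(self, path, data):
        self.files[path] = data

    def snapshot(self, name):
        # Freeze the current state under a name (real systems use
        # copy-on-write instead of a full copy).
        self.snapshots[name] = copy.deepcopy(self.files)

    def replicate_to(self, other):
        # Push the current state to a peer site (on-prem -> cloud or back).
        other.files = copy.deepcopy(self.files)

onprem = FileService()
cloud = FileService()

onprem.write("/proj/model.bin", b"v1")
onprem.snapshot("before-training")
onprem.write("/proj/model.bin", b"v2")
onprem.replicate_to(cloud)

print(cloud.files["/proj/model.bin"])                          # b'v2'
print(onprem.snapshots["before-training"]["/proj/model.bin"])  # b'v1'
```

The snapshot keeps the old version addressable while the replica carries the latest state — which is what lets a cloud copy serve as a DR target without losing point-in-time history.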
>> I wonder if we could talk a little bit about some of the organizational changes; the transformation was one of the key takeaways that Andy Jassy was talking about in his three hour keynote yesterday. We've watched for more than a decade now the role of IT compared to the business, and we know that not only does IT need to respond to the business, but that data discussion we had better be driving the business, because if you're not leveraging your data, your competition definitely will. I want to get your opinion as to just the positions of power and who you're talking to, and what are some of the successful companies doing to help lead this type of change? >> I'll go. I think IT and business are coming together more; the lines are blurring there. And IT's being stretched in new directions now; they have to serve customers with new demands. So whether it's managing storage or AI or servers, or VMware environments, they're now being pushed into things like managing analytics environments, right? And all the tools associated with that. Whether it's Cassandra or TensorFlow, being able to stretch, and being able to provide the kind of services that the business requires. >> And up the stack too. >> Yeah. When you talk about the fact that business and IT need to work together, it's kind of like an obvious statement, right? What that really means is that there needs to be a way to help organizations get to responding more quickly to what the needs of the business are. It's about agility. It's about the ability to respond quickly. So you see organizations moving from waterfall processes for development to Agile, and you see that being supported by Cloud native architectures, and organizations need to be able to do that in a way that preserves the investments that they have today. 
So most organizations are on this journey from physical to virtual to infrastructure as a service, to container as a service and beyond, and they don't want to throw away those investments that they have in existing virtualization, in existing skill sets, and so what we're really doing is helping organizations move to that place where they can adopt Cloud Native while bringing forward those investments they have in traditional infrastructure. So we think that's helping organizations work better together, both from a technology and a business perspective. >> And as far as the kind of people we talk to, I mean, data science is growing and growing; data science is becoming more part of the conversation. CIOs as well, right? I mean, behind all this, again, is that data that we keep coming back to. You have to ensure the governance of that data, right? That it's being controlled and it's within compliance. >> So we started off the conversation talking about that this was Dell's first year. So 60, 65,000 here. There's a sprawling ecosystem. One of the largest ones here. What do you want to really emphasize? Give us the final takeaway as to how people should think about Dell Technologies in the Cloud ecosystem. >> Yeah, I think, we know our customers want to be able to leverage the Cloud. The kind of conversation we're having with customers is more around, how can I use the Cloud to optimize my business? And that's going to vary on a workload by workload basis. We feel it's our job to arm the customer with the tools they need, right? To be able to have hybrid Cloud architectures, to be able to have the freedom to run the applications wherever they want, consume infrastructure in a way they want it to be consumed, and we're there for them. >> Yeah, I think it's really about a couple of things. One is trust, and the other one is choice. So if you think about it, organizations need to move into this Cloud world in a way that brings forward those investments that they've made. 
Dell EMC is the number one provider of hyper-converged infrastructure, of servers, and we can help organizations understand that Cloud operating model, and how to bring the private Cloud investments that they have today forward to work well with the public Cloud investments that they're making, clearly. So it's really about trust and choice of how they implement. >> Trust is a big deal. >> Absolutely. >> I mean, we're the number one storage vendor for a reason. Our customers trust us with their data. >> Well Joe, Bob, thank you so much for joining me and Stu on theCUBE. >> Thank you. >> Thank you. >> And sharing with us what you guys are doing at Dell, AWS. The trust and the choice that you're delivering to your customers, we'll see you at Dell Technologies World. >> We'll see you here next year. >> All right. You got it. All right. For our guests and for Stu Miniman, I'm Lisa Martin and you're watching theCUBE, day two of our coverage of AWS re:Invent '19. Thanks for watching. (upbeat, title music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Bob Ganley | PERSON | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Stu | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Joe | PERSON | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Joe CaraDonna | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Lisa | PERSON | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Bob | PERSON | 0.99+ |
Isilon | ORGANIZATION | 0.99+ |
yesterday | DATE | 0.99+ |
97% | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
next year | DATE | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
one | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
One reason | QUANTITY | 0.98+ |
60, 65,000 | QUANTITY | 0.98+ |
Dell EMC | ORGANIZATION | 0.98+ |
both | QUANTITY | 0.98+ |
two ways | QUANTITY | 0.98+ |
Amazon | ORGANIZATION | 0.98+ |
each Cloud | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
Multicloud | ORGANIZATION | 0.97+ |
first year | QUANTITY | 0.97+ |
today | DATE | 0.96+ |
60 plus thousand people | QUANTITY | 0.96+ |
Cloud | TITLE | 0.95+ |
Cloud Native | TITLE | 0.95+ |
single | QUANTITY | 0.94+ |
this morning | DATE | 0.93+ |
four essentials | QUANTITY | 0.92+ |
ClarityNow | ORGANIZATION | 0.92+ |
Day two | QUANTITY | 0.91+ |
S3 | TITLE | 0.91+ |
billions of files | QUANTITY | 0.91+ |
Cassandra | TITLE | 0.9+ |
David Noy, Cohesity | Microsoft Ignite 2019
>>Live from Orlando, Florida. It's theCUBE, covering Microsoft Ignite. Brought to you by Cohesity. >>Welcome back everyone to theCUBE's live coverage of Microsoft Ignite here in Orlando, Florida. I'm your host, Rebecca Knight, along with my cohost Stu Miniman. We are joined by David Noy. He is the VP of cloud at Cohesity, which is where we are. We're in the Cohesity booth, so I should say thank you for welcoming us. >>Pleasure. They found me here. >>So you are pretty brand new to the company, a longtime tech veteran, but newish to Cohesity. Talk a little bit about what made you want to make the leap to this company? >>Well, you know, it was time for me to move from my prior company, and we won't go into the reasons there. But as I looked around to kind of see who were the real innovators, right, who were the ones who were disrupting, because my successes in the past have all been around disruption. And when I really looked at what these guys were doing, you know, at first it's kind of hard to figure out. Then it was like, oh my gosh, this is really something different. It's bringing kind of the cloud into the enterprise and using that model of simplification and then adding data services, and it's really groundbreaking. And the other thing was, I'll just throw this point out there: I read a lot of the white papers and the technology, and having been a tech veteran for a while, it looked to me like a lot of people who have done this stuff before got together and said, if I had to do it again and do it right, what are the things I wouldn't do and what are the things I would do? >>Right, right. So that was just fascinating. So David, yeah, I was reading a Q&A recently with Mohit, founder of Cohesity, and it really is about that data. You mentioned data services. Bring us insight a little bit.
You know, we in the storage and IT industry get so bogged down in the speeds and feeds, how fast you can do things, and the terabytes and petabytes and the like here, but we're talking about some real business issues that the product is helping to solve. >> I totally agree. Look, I've been in the storage industry for a while now, and you know, multi-petabytes of data, and the problem that you run into when you go and talk to people who use this stuff is, well geez, I start to lose track of it. I don't know what to do with it.
Oh by the way, people now have to make a decision between am I going to keep it on premise or keep it in the cloud? And so the data services, how to extend not just to the on-prem but the act to actually extend to Conde services as well, which is kind of why I'm here. I think, uh, you know, what we do with Azure is pretty fascinating in that data management space too. So we'll be doing more data management as a service in the cloud as well. >>So let's get into that a little bit and I'm sure a lot of announcements this week with Azure arc and another products and services, but let's dig into how you're partnering in the kinds of innovative things that go Cohesity and Microsoft are doing together. >>Well, we're doing a lot of things. First of all, we, we've a very rapid cadence of engineering to engineering conversations. We do everything from archiving data and sending longterm retention data into the cloud. But that's kind of like where people start, right? Which is just ship it all up there. You know, in Harvard it's held, right? But then think about doing migrations. How do you take a workload and actually migrated from on prem to the cloud in a hold? We can do wholesale migrations that people's environments who want to go completely cloud native, we can fail over and fail back if we want to as well. So we can use the cloud is actually a dr site. Now you start to think about disaster recovery as a service. That's another service that you start to think about, Oh, what about backing up cloud native workloads? >>Well, you don't just want to back up your own workloads that are in the on prem data center. 
You want to back them up also in the cloud, and that includes even Office 365. So you just look at all of what that means, and then the ability to crack that data open and provide all these additional, when I say services, I'm talking about classification, threat analysis, being able to go in and identify vulnerabilities and things of that nature. That's just a huge, tremendous value on top of just the basic infrastructure capabilities. >>David, you've been in the industry, you've seen a lot of what goes on out there. Help us understand really what differentiates Cohesity, because there are a lot of traditional vendors out there that are all saying many of the same words you hear here, "cloudifying," and there are even newer vendors than Cohesity out there. >>Totally get it. Look, here's kind of what I find really interesting and just attractive about the product. I've been in the storage industry for a long time. So many times people have asked me, can I move my applications to the storage? Because moving the data to the application, that's hard, but moving the application to the data, wow, that makes things a lot easier, right? And so that's one of the big things that we do that's different. It's the hyperconverged platform. It's a scale-out platform. It's one that really looks a lot more like some of the scale-out platforms that we've done in the past, but goes way beyond that. And then there's the ability to make it as simple as possible, so people don't have to worry about managing lots of different pools and lots of different products for, you know, service one versus service two versus service three, and then bringing applications to that data.
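One concrete form of the backup-side threat analysis mentioned above is flagging a backup run whose change rate spikes far beyond its recent baseline, since mass encryption rewrites most files at once. This is a toy sketch of that single signal, with invented thresholds; real detectors combine many signals.

```python
def looks_like_ransomware(change_rates, spike_factor=5.0):
    """change_rates: fraction of files changed per backup run, oldest first.
    Returns True if the latest run's change rate is a large multiple of
    the average of the earlier runs."""
    *history, latest = change_rates
    if not history:
        return False  # no baseline yet
    baseline = sum(history) / len(history)
    return latest > baseline * spike_factor

print(looks_like_ransomware([0.02, 0.03, 0.02, 0.9]))   # True
print(looks_like_ransomware([0.02, 0.03, 0.02, 0.04]))  # False
```

The appeal of running this on the backup platform, as the conversation suggests, is that the change-rate history is already there as a byproduct of incremental backups.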
That's one thing and that's great and that's important, but to actually bring the applications to the data, that's, you know, that's what all of the cloud guys do. I mean, look at Google, they put Gmail on top, they put a search on top, they put Google translate on top is all of these things are actually built on top of the data that they store such as Adela. This morning in the keynote talked about that there's going to be 500 million new at business applications built by 2023 how is Cohesity position to both partner with Microsoft and everyone out there to be ready for that cloud native future? That's a great question. Look, we're not going to put 500 million applications on the product, right? >>But we are going to pick some key applications that are important in the top verticals, whether it's healthcare, financial services, public sector, and so long I've sciences, oil and gas, but I'm in the same time we all will offer the API APIs extensions too. So if you think about going into Azure, if we can explore things as Azure blobs for example, now we can start to tie a lot of the Azure services into our storage and make it look like it's actually native Azure storage. Now we can put it on Azure cold storage, you know, hot storage, we can decide how we want to tier things from a performance perspective, but we can really make it look like it's native. Then we can take advantage of not just our own services but the services that the cloud provides as well. And that makes us extraordinarily powerful >>in terms of the differentiator of Cohesity from a services standpoint. But what about from a cultural standpoint? We had Satya Nadella on the main stage this morning talking a lot about trust. And I'm curious as particularly as a newer entrant into this technology industry, pow, how do you, uh, develop that culture and then also that reputation too? 
>>Here's one of the interesting things we did when I joined the company and I've been around for awhile and I've been in a couple very large brand names. I started walking down the halls and I'm like, Oh, you're here. Oh, you're here. Wait, you're here. And it's like an all star cast and a, when you go into, you know, some of the customer base and it's like, Hey, we know each other for a long time. That relationship is just there on top of that. I mean, the product works, it's solid. People love it. It's easy to use and it actually solves real problems for them. Um, and you know, we innovate extraordinarily fast. So when customers find a problem, we are in, uh, on such a fast release cadence, we can fix it for them in extraordinarily, uh, uh, in times I've never seen before. >>In fact, is a little bit scary how fast the engineering group works. It's, uh, probably faster than anything I've ever seen in the past. And I think that helps. They build the customer's trust cause they see that if we recognize there's a problem, we're going to be there to solve it for them. There's trust of the company. Uh, when we talk about our data, there's also the security aspect. Yes. How does Cohesity fit into the, there's a story with Microsoft and beyond. The security part is extraordinarily important. So look, we've already, as I said, built kind of an AR app marketplace and we're bringing a lot of applications to do things like ransomware detection, uh, um, vulnerability detection, data classification. But, uh, Microsoft is also developing similar API APIs. And you heard this morning that they're building capabilities for us to be able to go and interact with them and share information. >>So if we find vulnerabilities, we can share it with them, they can share with us and we could shut them down. So we have the native capabilities built in, they have capabilities that they're building of their own. Imagine the power of being able to tie those two together. 
I just think that that's extraordinarily powerful. What about growth for a company that is growing like gangbusters? Can you give us a roadmap you can expect from coaching? I've never seen growth like this. I mean, I joined, um, specifically to look at a lot of the cloud and, uh, the file and object services and you know, obviously I have a background in, in backup and data protection as well. Um, I haven't seen growth like this since my old days when I was in Iceland, when I started in Isilon back in the, you know, way, way old days. >>This is X. This is, you know, I can't give you exact numbers, but I'll tell you it's way in the triple digits, you know what I mean? And, and it's extraordinarily fast to see from an Azure perspective, we're seeing, you know, close to triple digit growth as well. So I, I love it. I mean, I'm just extraordinarily excited. All right. Uh, on the product side, give us a little bit of a look forward as to what we should be expecting from Cohesity. Absolutely. So from a look forward perspective, as I said, we protect a lot of on-premise workloads and um, you know, now and we protect obviously Azure workloads as well. We protect Azure VMs. But as we think about some of the Azure native services like sequel, um, and other services that are kind of built native within Azure, uh, we'll extend our application and to be able to actually do that as well, we'll extend kind of the ease of use and the deployment models to make it easier for customers to go and deploy and manage. It really seems like a seamless single pane of glass, right? So when you're looking at, uh, Cohesity, you should think of it as, even if it's in the cloud or if it's on premise, it looks the same to you, which is great. If I want to do search and index, I can do it across the cloud and I can do it across the on prem. So that integration is, is really what ties it together and makes it extraordinarily interesting. >>Finally, this is, this is not your first ignite. 
I'm interested to hear your impressions of this conference: what you're hearing from customers, the conversations you're having. >> You know, it's a lot of fun. I've been walking around the partner booths over here to see who we can partner with to add more of those data management services, because we don't do it all ourselves. Again, we started in the backup space, and we have an extraordinarily scalable storage infrastructure. I was blown away by the capabilities of the file and object services; I've been a file guy for a long time, and it was unbelievable. When you start to add those data management capabilities on top of that, people can either, to your point, make sure they can detect threats and vulnerabilities, or find what they're looking for, or run analytics, for example, right on the box. I've been asked to do that for so long, and it's finally happening. It's a dream come true for me. It's everything you ever wanted: software-defined, bringing the applications to the data. If I could take all of the things that I always wanted at previous companies and put them together, it's Cohesity. And I'm looking around here and seeing a lot of great technology that we can go and integrate with. >> Great. Well, David, thank you so much for coming on theCUBE. >> Thank you very much. I appreciate it. >> I'm Rebecca Knight, for Stu Miniman. You are watching theCUBE.