Trey Layton, Dell EMC PowerOne | CUBEConversation, November 2019
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Stu Miniman.
>> Hi and welcome to a special CUBE Conversation. Happy to welcome back to the program Trey Layton, who's the SVP of engineering with Dell EMC. Trey, great to see you.
>> Hi Stu, how are you?
>> I'm doing fantastic, thank you. So there's the Dell Technologies Summit happening in Austin, Texas. Let's not hide the lead, there's some news around things you've been working on for a while. Why don't you share the update with our audience?
>> Well, myself and my team have been working on a new product that we are announcing at Dell Technologies Summit called PowerOne, and we are positioning it in the market as autonomous infrastructure. It's a great combination of all the wonderful products in the Dell Technologies portfolio, combined with some very innovative automation that makes integrating the product an autonomous outcome.
>> All right, first of all, with the name Power in it, we know that that's the branding that Dell likes, something that's going to be with us for a while. You talk about all-in-one. You've got some history, we have some history, pulling various solutions together, talk about compute, network and storage, what back in the day we called converged infrastructure. Explain the all-in-one, you know, what is the "all" in the all-in-one?
>> So first of all, it's a system where you can get all of Dell Technologies in one package. The next thing is about building on that decade's worth of experience of building converged products and learning about the different intricacies of integrating those products, and instead of relying upon humans to integrate those technologies together to deliver an outcome for a customer, embedding that intelligence in software to make it easy for an operator to drive a configuration, to deliver an outcome for a customer to operate a modern data center environment.
>> So it's exciting stuff, Trey, 'cause you know, the design principle before was let's simplify as much as we can, let that entire rack, if you will, be the unit of infrastructure that people manage. But what I hear you talking about is the automation and software, and even, you know, we're not replacing the humans, we're augmenting what they're doing by having automation take over. That's powerful stuff. We've talked about intelligence and automation for, I'd say, all of our careers. So explain a little bit, you know, this autonomous, where really is that automation and why is it different today than it might have been five or 10 years ago?
>> Well, you think about all the things that we've learned in 10 years of building a packaged product to actually deliver an outcome for a customer, requiring some degree of manual intervention but with a significant amount of simplicity that we've built into those products to deliver an outcome. One of the things that's true about today is that as organizations are on a digital transformation journey, they are struggling with a high degree of intake of technology, while also maintaining the products that they manage on a daily basis to, quote-unquote, keep the lights on. What we have done is say, how can we take the innovations that we've built in our products, that are infrastructure as code, and how can we build software intelligence that understands, based on the operator's desired outcome for an integration, how we employ Dell engineering best practices to deliver that outcome.
So a key element of the product is housing this intelligence in software that drives this automated outcome through best practices for how we engineer products together.
>> All right, Trey, you've got engineering. Bring us inside the team a little, you know, building this now in 2019. What are the pieces that you had? What's different about the team that you had to build this, and is there unique IP that your team and this product bring beyond what was already available in the marketplace?
>> Yes, so first of all, the team is a global team that we've actually been in the process of hiring over the last year plus, a year and a half plus, and it's a very young team, a different skill set. We learned very early on that if we're going to build a product with embedded automation, you needed to have experience and understanding of what the best practices are for integrating the technologies in the product, but simultaneously you needed people who understood how to write code that made that outcome possible. And so it was really about bringing in and building a global team of DevOps-minded individuals who understood open source technologies, who understood our VMware ecosystem, who understood the Dell EMC ecosystem and, more importantly, the larger Dell Technologies ecosystem for bringing those products together, and I'll tell you, it's a diverse culture of individuals. What I'm most excited about is that while we're very much focused on delivering VMware outcomes in this first release, the product that we've built is capable of delivering any type of outcome, whether it be another type of virtualization environment or another type of application outcome. The software is designed to deliver an integration that supports a customer's production operation. The intelligence, or the product that we built to do that, is called the PowerOne controller, and embedded in that is software that a customer can drive either through a user interface, or they can use automation technologies that they have in-house to call on this controller programmatically to execute those outcomes, as opposed to being chained to a user interface that an operator has to learn as a new element of their environment.
>> Yeah Trey, it really reminds me of the conversations I've been having with customers over the last decade or more: that core understanding and building of my compute infrastructure, my storage infrastructure, my networking infrastructure. I still need to understand some of those pieces, but it is much more about the software, the operating model, and, as we know, we're living in a software world.
>> Well, it's interesting that you say that, because you and I both know, based on our history, that there are complexities that we've worked to make simpler to operate, but a customer today struggles to have expertise dedicated to how do I build an underlying network fabric, how do I deploy a software virtualization layer on top of that network fabric, how do I deploy storage arrays in a manner where the I/O is optimized not only for performance but also for survivability, how do I carve up my compute resources in a manner that most efficiently supports the virtualization or container outcome that I'm deploying.
There's a tremendous amount of skill that you need to have to employ the best practices to integrate all those technologies together, and what we are doing is merely bringing those capabilities into software, so that an operator can say, I want to deploy this many cores, with this much memory, associated with this much capacity of external storage, and all the underlying configuration dependencies happen, in order, through the intelligence that we've built into the automation to drive the right outcome for the customer.
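To make that idea of driving the controller programmatically a little more concrete, here is a minimal sketch of what an intent-style request against such a controller could look like. This is an illustration only: the endpoint path, payload fields, token handling, and response shape are assumptions invented for the example, not the actual PowerOne controller API, which isn't documented in this conversation.

```python
# Hypothetical sketch only: the controller address, endpoint path, payload
# fields, and auth scheme here are invented for illustration. They are not
# the actual PowerOne controller API.
import time
import requests

CONTROLLER = "https://powerone-controller.example.com"  # placeholder address
TOKEN = "REPLACE_WITH_API_TOKEN"                         # placeholder credential
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# Declarative intent: state the outcome (cores, memory, external storage),
# not the device-by-device configuration steps.
intent = {
    "name": "erp-prod-resource-group",
    "compute": {"cores": 256, "memory_gib": 2048},
    "storage": {"external_capacity_tib": 100, "service_level": "gold"},
    "virtualization": {"type": "vmware-cluster"},
}

# Submit the desired outcome; the controller is assumed to expand it into the
# underlying network, storage, and compute configuration.
resp = requests.post(f"{CONTROLLER}/api/v1/resource-groups",
                     json=intent, headers=HEADERS, timeout=30)
resp.raise_for_status()
job_url = resp.json()["job_url"]  # assumed response field

# Poll the resulting job until the automation reports the outcome is realized.
while True:
    job = requests.get(job_url, headers=HEADERS, timeout=30).json()
    if job.get("state") in ("succeeded", "failed"):
        print("provisioning finished:", job["state"])
        break
    time.sleep(15)
```

The same request could just as easily come from an in-house automation tool rather than a script; the point is that the operator expresses the outcome once and the controller owns the ordering of the underlying steps.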
>> Okay, so Trey, when I've been digging into the software world and you talk to the people that are building applications, observability is something that's been coming up a bunch. It's not just understanding what I have, but the flows of information. Ansible, New Relic, they're all talking about how, in a containerized, microservices world, there are different ways that I need to look at the entire system. How does that kind of mindset and thinking fit into the design of PowerOne?
>> Well, it's actually an age-old problem that we've had as we've begun to have shared infrastructure to run, whether they be containerized services or virtualized services, or containers running in virtualized services. It's how do we associate what's running to the underlying infrastructure, so that if we have a problem in the underlying infrastructure that we're managing, we target a resolution, and that resolution could be increased performance so that that service can run better, or it could be some type of underlying failure where we want to ensure that as survivability kicks in, we employ more resources to support expansion, or just a continuation and burst of capability that's needed. When we built PowerOne, we thought about it as a system. How do we give observability of that system, in the context of a system, to understand the associated dependencies, so that we could quickly guide the operator to identifying the area that they need to look at from an infrastructure perspective and either influence or simply respond to it, instead of the more traditional mode of on-premises management, which is let me go find where the problem is and see if this fixes it. We have given observability to specifically identify where the issue is and enable the operator to go target that.
>> All right, so Trey, you mentioned the traditional model of doing things. What does PowerOne mean for, say for example, VxBlock, which is something, you know, over a decade out there on the market; there's been lots of discussions forever, the Cisco stack, the Dell stack and VMware, you know, all those challenges. So tell us, what does this mean for VxBlock?
>> So first of all, I couldn't say enough good things about the VxBlock team. It's a part of the organization that I'm in. We are very much committed to VxBlock engineering going forward, and PowerOne is an expansion of our portfolio as opposed to a replacement for it. We value our partnership with Cisco significantly, customers are committed to acquiring Cisco technologies in concert with our storage and data protection products, and VxBlock is all about giving customers an ability to have a converged experience with our storage technologies and a very unique experience that surrounds the offers that we deliver in that space. I will tell you that the automation that we're building in PowerOne is also something that we're targeting at our entire portfolio, as opposed to just isolating it into this one product. The dawn of autonomous infrastructure, in our minds, is not about isolating that technology to one product; it's about bringing it to our entire portfolio of products to make our customers' experiences better in managing and consuming the technologies they buy from us.
>> Well, definitely something we've heard from Jeff Clarke, Jeff Boudreau and the team is that the portfolio inside Dell EMC is going through a lot of simplification. So the whole autonomous infrastructure, PowerOne, how should we be thinking about where this fits kind of in the overall market?
>> So it very much includes our purpose-built storage portfolio technologies, our data protection, it includes our networking technologies and some unique automation capabilities that we've built in to enable the IT operator to not have to worry about programming the fabric, in that we actually sense and understand the changes in the virtualization environment and deploy those configurations to the underlying network infrastructure, and it's all about using our PowerEdge portfolio of servers. So PowerOne is very much about consuming our data center technologies all in one package. That positioning in the market is complementary to customers who want to acquire VxBlock and are looking to pair Cisco technologies with Dell storage, and more importantly, our HCI portfolio is a key element of our total offer to customers; where customers are looking to deploy infrastructure with software-defined storage characteristics and a very unique management experience and simplified operations, the HCI portfolio is there as well. So I often engage, specifically as we talk about an exclusively Dell portfolio: it's not an "or" conversation, it's an "and." It's which applications are you deploying in your data center environment? What use cases are you deploying? How is the underlying infrastructure optimized to best address the goals that you have for that deployment? And so that's why we've taken a portfolio approach, as opposed to one product to address every use case that's in the market.
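The fabric automation described a moment ago, sensing a change in the virtualization layer and pushing the matching configuration down to the network, can be pictured as a small reconcile loop. The sketch below is purely illustrative: the event format and the switch client are invented stand-ins, not PowerOne internals or a real switch API.

```python
# Illustrative sketch only: PortGroupEvent and SwitchClient are invented
# stand-ins for whatever actually observes the virtualization layer and
# programs the fabric. This is not PowerOne's implementation.
from dataclasses import dataclass

@dataclass
class PortGroupEvent:
    """A change observed in the virtualization environment."""
    host: str        # hypervisor host whose uplinks are affected
    vlan_id: int     # VLAN required by the new or changed port group

class SwitchClient:
    """Stand-in for the mechanism that configures the network fabric."""
    def ensure_vlan_on_uplinks(self, host: str, vlan_id: int) -> None:
        # A real implementation would talk to the leaf switches serving `host`.
        print(f"fabric: trunking VLAN {vlan_id} to uplinks of {host}")

def reconcile(events, fabric: SwitchClient) -> None:
    """Translate virtualization-layer changes into fabric configuration."""
    for event in events:
        # 'Ensure' semantics keep the operation idempotent: apply the desired
        # state rather than a sequence of manual CLI steps.
        fabric.ensure_vlan_on_uplinks(event.host, event.vlan_id)

if __name__ == "__main__":
    observed = [PortGroupEvent("esxi-07", 210), PortGroupEvent("esxi-08", 210)]
    reconcile(observed, SwitchClient())
```

The design point being illustrated is simply that the operator never edits switch configuration by hand; the automation reacts to what the virtualization layer needs.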
>> All right, so Trey, we've talked a lot about operations and the way we design things. We haven't talked about cloud, you know, and very much we believe cloud is as much an operating model as it is a place. It's a journey, not a destination; hybrid cloud is what most customers have today. They have multiple clouds, but we think one of the challenges of the day is helping to get more value out of the sum of what you have than the individual pieces would be on their own. So where does PowerOne fit into the Dell Tech cloud story, and we'd love to also hear just where it fits into the kind of broader cloud discussions that we have when we're at a Dell show, a VMware show or beyond.
>> Yeah, so it's an interesting discussion, 'cause I think we begin to drift into saying a thing is cloud, and I think more outcomes are cloud, and it's a combination of software and infrastructure. PowerOne is an infrastructure element that is very much a part of the Dell Technologies Cloud strategy, but Dell Technologies Cloud is more about our entire portfolio of software and infrastructure participating in a common ecosystem to deliver that cloud outcome for customers, and so PowerOne is absolutely a part of the Dell Technologies Cloud, and we're excited about continuing down the automation enhancements path to make those outcomes more possible for customers as we go forward in time. So initially, PowerOne is very much an infrastructure resource in Dell Technologies Cloud. Over time, you're going to see even greater enhancements, just as you will see enhancements across our entire portfolio of technologies participating in the larger Dell Technologies Cloud ecosystem story.
>> Okay, and just to connect the dots, 'cause when I look at those pieces and what we talked about, as customers are doing hybrid cloud and multi-cloud, if they're a VMware shop, VCF is an important piece of that, and that is part of VMware Cloud on AWS, what they're doing with Azure, with Google. So this plugs in, if you will, you know, my words, into that broader multi-cloud, hybrid cloud discussion that customers are having.
>> Absolutely, you think about it in layers. We are building an infrastructure layer at Dell EMC that enables that Dell Technologies Cloud layer to be possible through the VMware ecosystem of technologies, making that multi-cloud, that private cloud functionality realized. The VMware ecosystem is robust in its approach to supporting multi-cloud environments, as well as deploying the virtualization and container technologies that are critical for building a modern enterprise, and so we are an element of that strategy, as opposed to the exclusive, pinpoint resource in the strategy. All of the infrastructure products in the portfolio will participate in the Dell Technologies Cloud, and we're excited about the innovation that we can bring in making the Dell Technologies strategy and vision more easily realized by our customers.
>> Okay, and Trey, when I think of PowerOne, what market segments do we think are going to kind of be the first customer for this, and any specific roles, you know, inside a customer that should be the ones looking at this?
>> Yeah, that's a great question. So as we look at markets, you look at organizations who are looking to deploy a data center resource. We go as small as four servers, but candidly, if you're deploying a data center with four servers, there are other items in our portfolio, like hyper-converged, that are better positioned to start in that place; but if you're looking to deploy a data center where you're looking to go tens, twenties, hundreds of servers, and you want external storage in the offer, then PowerOne is a great starting point. If you think about the scalability that we've built into PowerOne, and we haven't touched on it, at launch we're going to support 270 servers in the architecture. Very quickly, we will expand into supporting what's described as a multi-pod architecture, where we will get beyond 700 servers, and then move into thousands of servers, where the architecture is actually designed to support over 7,600 servers. In concert with that, at day one, we will support multiple storage arrays as well, so deploying multiple PowerMax storage arrays as a storage domain to support this. So when we talk about markets, we talk about the ability to address medium-sized organizations' data center use cases all the way up to the largest enterprises' or service providers' data center deployments in the world, in an all-Dell technology stack.
>> All right, Trey, give us the final word on this. One or two things you want people to understand and know about PowerOne as they walk away.
>> So I think the most important thing to take away is that this is a way to acquire Dell Technologies products all in one place, in one package, in an incredible user experience. The way we're going to sustain that user experience and maintain that value proposition to customers is around the autonomous infrastructure packaging that we've built in the software that we're delivering.
We're utilizing some of the most advanced automation characteristics that are out there on the market, combined with some of the brightest minds, to integrate these technologies together. Customers just need to get to production operations, and when you can acquire a product that houses the intelligence to get to that outcome faster, there's a greater return on your invested capital when you're buying this product, and that's the most important thing, I think, to walk away with. We are committed to helping our customers get to operational outcomes faster, and the technologies that we've built in this product are delivering on that promise.
>> Well, Trey, congratulations to you and the team. We always love to see, when we go behind the scenes, how you kind of rebuild from a clean sheet of paper, building on the history that you have, listening closely to your customers, and having something ready for today's modern era. Thanks so much.
>> Thanks, Stu.
>> All right, be sure to check out theCUBE.net for all our coverage. I'm Stu Miniman, as always, thanks for watching theCUBE. (light electronic music)
Dell EMC and The State of Data Protection 2020 | CUBE Conversation, February 2020
>> From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante.
>> Hello everyone and welcome to this CUBE Conversation. You know, data protection, it used to be so easy. You'd have apps, they'd be running on a bunch of servers, you'd bolt on a little backup and boom! One size fits all. It was really easy peasy. Now, business disruptions at the time, they were certainly not desired, but they were definitely much more tolerated and they were certainly fairly commonplace. Today, business disruptions are still a fairly common occurrence, but the situation is different. First of all, digital imperatives have created so much more pressure for IT organizations to deliver services that are always available with great consumer experiences. The risks of downtime are so much higher, but meeting expectations is far more complex. This idea of "one size fits all" really no longer cuts it. You've got physical, virtual, public cloud, on-prem, hybrid, edge, containers. Add to this cyber threats, AI, competition from digital disruptors. The speed of change is accelerating, and it's stressing processes and taxing the people skills required to deliver business resilience. These and other factors are forcing organizations to rethink how they protect, manage, and secure data in the coming decade. And with me, to talk about the state of data protection today and beyond, is a thought leader from one of the companies in data protection: Arthur Lent is the Senior Vice President and CTO of the Data Protection Division at Dell EMC. Arthur, good to see you again. Thanks for coming in.
>> Great to see you, Dave.
>> So, I'm going to start right off. This is a hot space and everybody wants a piece of your hide because you're the leader. How are you guys responding to that competitive threat?
>> Well, so the key thing that we're doing is we're taking our proven products and technologies, and we've recognized the need to transform and really modernize them and invest in a new set of capabilities for changing workloads. And a core part of that, with some changes in leadership, has been to shift our processes in terms of how we do stuff internally, and so we've moved from a very big-batch, waterfall-style approach, where things need to be planned one, two, three years out in advance, to a very small-batch, agile approach, where we're looking a couple of weeks, a couple of months in advance at what we're going to be delivering into product. And this is enabling us to be far more responsive to what we're learning in the market in very rapidly changing areas. And we're at the spot where we now have several successive releases that have taken place with our products in this new model.
>> So, that's a major cultural shift that you're really driving. I mean, that allows you to attract, you know, younger people; you guys are a global organization, so I mean, how is that sort of dynamic changing? You know, people sometimes maybe think of you as this stodgy, you know, company that's been around for 20-plus years. What's it like when you walk around the hallways? What's that dynamic like?
>> It's like we're the largest start-up in the data protection industry, but we've got the backing of a Fortune 50 company.
>> Nice. All right, well, let's get into it. I talked in my narrative upfront about business disruptions, and I said they're still, you know, kind of a common occurrence today. Is that what you're seeing?
>> Absolutely!
So, our latest Data Protection Index research shows 82% of the people we surveyed experienced downtime or data loss within the last 12 months, and this survey was just completed within the last month or two. So, this is still very much a real problem.
>> Why do you think it's still a problem today? What are the factors?
>> So I would say the problem's getting worse, and it's because complexity is only increasing in IT environments. Complexity around multi-platform, between physical servers, virtual servers, cloud, various flavors of hybrid cloud; data distribution between the core, edge and the cloud; growing data volumes, where the amount of data, and the data that companies need to run their business, is ever increasing; and a growing risk around compliance, around security threats. And many customers have multi-vendor environments, and multi-vendor environments also increase their complexity and risk and challenges.
>> What about cloud? Because you know, as we entered last decade, cloud was kind of this experimental, throw-some-dev-out-in-the-cloud thing, and now as we enter this decade it's kind of a fundamental part of IT strategies. Every CIO, he or she has a cloud strategy. But it's also becoming clear that it's a hybrid world. So, in thinking about data protection, how does hybrid affect how your customers are thinking about protecting their data in the coming decade?
>> So it produces a bunch of changes in how you have to think about things, and today we have over a thousand customers protecting over 2.5 exabytes of data in the public cloud. And it goes across a variety of use cases, from long-term retention in the cloud, backup to the cloud, disaster recovery to the cloud, a desire to leverage the cloud for analytics and dev/test, as well as production workloads in the cloud, and the need to protect data that is born in the cloud. And we're in an environment where IT is spanning from the edge to the core to the cloud, and there's the need to have a cohesive ability and approach to protect that data across its lifecycle, for where it's born and where it's being operated on and where value is being added to it.
>> Yeah, and people don't want to buy a thousand products to do that, or even a dozen products to do that, right? They want a single platform. I want to talk about containers, because Kubernetes specifically, and containers generally, are one of the hottest areas. It's funny, containers have been around forever (laughs), but now they're exploding, people are investing much more in containers. IT organizations and dev organizations see it as a way to drive some of the agility that you maybe talked about earlier. But I'm hearing a lot about, you know, protection, data protection for containers, and I'm thinking, "Well, wait a minute... You know, containers come and go. They're ephemeral. Why do I need to protect them?" Help me understand that.
>> So, first I want to say, yeah, we're seeing a lot of interest in enterprises deploying containers. Our latest survey says 57% of enterprises are planning on deploying them next year. And in terms of the ephemerality and the importance of protection, I have to admit, I started this job about a year ago, and I was thinking almost exactly the same thing you were. I came in, we had an advanced development project going on around how to protect Kubernetes environments, both to protect the data and the infrastructure. And I was like, "Yeah, I see this as an important advanced development priority, but why is this important to productize in the near future?"
And then I thought about it some more and was talking to folks, and with the Kubernetes technologies there are two key things. One: it's Kubernetes as a DevOps CI/CD environment; well, if that environment is down, your business is down in terms of being able to develop. So, you have to think about the loss of productivity and the loss of business value as you're trying to get your developer environment back up and running. But also, even though there might not be stateful applications running in the containers, there's generally production usage, in terms of delivering your service, that's coming out of that cluster. So, if your clusters go down or your Kubernetes environment goes down, you've got to be able to bring it back up in order to be able to get it up and running. And then the last thing is, in the last year or two there's been a lot of investment in the Kubernetes community around enabling Kubernetes containers to be stateful and to have persistence with them. And that will enable databases to run in containers and stateful applications to run in containers. And we see a lot of enterprises that are interested in doing that, but... now they can have that persistence, but it turns out they can't go into production with the persistence because they can't back it up. And so there's this chicken-and-egg problem: in order to go into production you need both the state and the data protection. And the nice thing about the transformation that we've done is, as we saw this trend materializing, we were able to rapidly take this advanced development project and turn it into productization. And we were able to get to a tech preview in the summer, and a joint announcement with Pat Gelsinger around our work together in the Kubernetes environment, and be able to get our first product release out to market a couple of weeks ago, and we're going to be able to really rapidly enhance the capabilities of that as we're working with our customers on where they need the features added most, and being able to rapidly integrate in with VMware's management ecosystem for container environments.
>> So, you've got a couple things going on there. You're kind of describing the dynamic of the developer, and developers as the key strategic linchpin now. Because the time between developing a function and getting it to market, I mean, it used to be weeks or months or sometimes even years. Today, it's like nanoseconds, right? "Hey, we need this function today. Something's happening in the market, go push it." And if you don't have your data, you don't have the containers; if the data and the containers are not protected, you're in trouble, right? Okay so, that's one aspect of it. The other is the technical piece, so help us understand, like, how you do that. What's the secret sauce conceptually behind, you know, protecting containers?
>> So, there's really two parts of what one needs to do for protecting the containers. There's the container infrastructure itself and the container configuration, knowing what's involved in the environment, so that if your Kubernetes cluster goes down, you're able to restart it and get your appropriate application environment up and running. So, the containers may not be stateful, but you've got to be able to get your CI/CD operating environment up and running again. And then the second part is, we are seeing people use stateful containers and put databases in containers in development, and they want to roll that into production.
And so for there, we need to back up not just the container definitions, but back up the data that's inside the container and be able to restore them. And those are some of the things that we're working on now.
>> One of the things I've learned from being around this industry for a while is that people who really understand technology will ask questions about, "What happens when something goes wrong?" So it's all about the recovery; that's really what you're talking about, that's the key. How does machine intelligence fit in... stay on containers for a minute. Is machine learning and machine intelligence allowing you to recover more quickly, does it fit in there?
>> So a key part of the container environment that's different from some of the environments in the past is just how dynamic it is, and just how frequently containers are going to come and go and workloads mix, expand, and contract their usage of IT resources and footprint. And that really increases the need for automation and for using some AI and machine learning techniques, so that one can discover what an application is as it's containerized and what all the resources are that it needs, so that in the event of an interruption of service, you know all of the pieces that you need to bring together to automate its recovery and bring it back. And in these environments you can no longer be in a spot of having people handcraft and tailor exactly what to protect and exactly how to bring it back after protection. You need these things to be able to protect themselves automatically and recover themselves automatically.
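The two parts Trey describes, capturing the container definitions and protecting the data inside the persistent volumes, map naturally onto standard Kubernetes building blocks. The following is a generic, minimal sketch using the Kubernetes Python client and CSI volume snapshots; it assumes a CSI driver and a VolumeSnapshotClass named "csi-snapclass" are installed in the cluster, and it is not a description of how the Dell EMC product itself works.

```python
# Generic Kubernetes illustration, not the Dell EMC product's internals:
# 1) capture the workload definitions, 2) snapshot the persistent volumes.
# Assumes a CSI driver and a VolumeSnapshotClass named "csi-snapclass" exist.
import yaml
from kubernetes import client, config

def protect_namespace(namespace: str) -> None:
    config.load_kube_config()          # or config.load_incluster_config()
    apps = client.AppsV1Api()
    core = client.CoreV1Api()
    crds = client.CustomObjectsApi()

    # Part 1: the container definitions, so the environment can be re-created
    # even if the cluster itself is lost.
    deployments = apps.list_namespaced_deployment(namespace)
    serializer = client.ApiClient()
    with open(f"{namespace}-deployments.yaml", "w") as f:
        yaml.safe_dump(
            [serializer.sanitize_for_serialization(d) for d in deployments.items], f)

    # Part 2: the data, by requesting a CSI snapshot of every PVC in the namespace.
    for pvc in core.list_namespaced_persistent_volume_claim(namespace).items:
        snapshot = {
            "apiVersion": "snapshot.storage.k8s.io/v1",
            "kind": "VolumeSnapshot",
            "metadata": {"name": f"{pvc.metadata.name}-snap"},
            "spec": {
                "volumeSnapshotClassName": "csi-snapclass",
                "source": {"persistentVolumeClaimName": pvc.metadata.name},
            },
        }
        crds.create_namespaced_custom_object(
            group="snapshot.storage.k8s.io", version="v1",
            namespace=namespace, plural="volumesnapshots", body=snapshot)

if __name__ == "__main__":
    protect_namespace("production")
```

A production-grade tool would also capture services, config maps, secrets, and custom resources, and would move the snapshot contents off the cluster, which is exactly the kind of discovery and orchestration the automation discussed here is meant to take off the operator's hands.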
>> So, I want to sort of double-click on that. Again, it's 2020, so I'm always going back to last decade and thinking about what's different. Beginning of last decade, people were afraid of automation, they wanted knobs to turn. Even exiting the decade recently, and even now, people are afraid about losing jobs. But the reality is things are happening so fast, and there's so much data, that humans just can't keep up. So, maybe you could make some comments about automation generally, and specifically applying to data protection and recovery.
>> Okay, so with the increasing amounts of data to be protected and the increasing complexity of environments, more and more of the instances of downtime, or challenges in performing a recovery, tend to be because of the complexity of having deployed them, and getting the recovery procedures right, and ensuring that the SLAs that are needed are met, and it's just no longer realistic to expect people to have to do all of those things in excruciating detail. And it's really just necessary, in order to meet the SLAs going forward, to have the environments be automatically discovered, automatically protected, and to have automated workflows for the recovery scenarios. And because of the complexities of change, we need to reach the point of having AI and machine learning technologies help guide the people owning the data protection on data criticality, and what's the right SLA for this and what's the right SLA for that, and really get a human-machine partnership. So, it's not people or machines, but rather the people and machines working together in tandem, with each doing what they do best, to get the best outcome.
>> Now that's great, you'd be helping people prioritize based on the criticality of applications... I want to change the conversation and talk about the edge a little bit. You sponsor, you know, IDC surveys on how big the market is in terms of just zettabytes, and it's really interesting, and thank you, from the industry standpoint, for doing that. I have no doubt edge is coming into play now, because so much data is going to be created at the edge, there's all this analog data that's going to be digitized, and it's just a big component of the digital future. In thinking about data at the edge, a lot of the data is going to stay at the edge; maybe it's got to be persisted at the edge. And obviously, if it's persisted, it has to be protected. So, how are you thinking about the evolution of edge, specifically around data protection?
>> Okay, so the... I think you kind of caught it in the beginning. There's going to be a huge amount of data in the edge. Our analysis has us seeing that there's going to be more data generated and stored in the edge than in all the public clouds combined. So, that's just a huge shift in that three- to five- to ten-year timeframe.
>> Lot of data.
>> Lot of data. You're not going to be able to bring it all back; you're just going to have the elements of physics to deal with. So, there's data that's going to need to be persisted there. Some of that data will be transitory. Some of that data is going to be critical and need to be recovered. And a key part of the strategy around the edge is really, again going back to that AI and machine learning intelligence, having centralized control and an understanding of what my data in the edge is, what the right triggers are, and an understanding of what's going on, of when an event has occurred where I really need to protect this data. You can't afford to protect everything all the time. You've got to protect the right things at the right time and then move them around appropriately. And so, a key part of being successful with the edge is getting that distributed intelligence and distributed control, and recognizing that applications are going to span from core to edge to cloud and have specific features and functions and capabilities that implement into various spots, and then that intelligence to do the right thing at the right time with central policy control.
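A toy version of that "protect the right things at the right time" idea might look like the sketch below. Everything in it, the criticality tiers, thresholds, and field names, is invented for illustration; it simply shows central policy deciding, per edge dataset, whether to replicate, snapshot locally, or do nothing.

```python
# Purely illustrative: a toy policy for deciding what to protect at the edge.
# Criticality tiers, thresholds, and actions are invented for the example.
from dataclasses import dataclass

@dataclass
class EdgeDataset:
    name: str
    criticality: str      # "high" | "medium" | "low", assigned by central policy
    changed_gib: float    # data changed since the last protection event

def protection_action(ds: EdgeDataset) -> str:
    """Central policy evaluated against one edge dataset."""
    if ds.criticality == "high":
        return "replicate-to-core"     # critical data leaves the edge quickly
    if ds.criticality == "medium" and ds.changed_gib > 50:
        return "snapshot-locally"      # keep a local recovery point for now
    return "skip"                      # transitory data: no copy needed

if __name__ == "__main__":
    datasets = [
        EdgeDataset("store-transactions", "high", 3.2),
        EdgeDataset("camera-archive", "medium", 120.0),
        EdgeDataset("factory-telemetry", "low", 400.0),
    ]
    for ds in datasets:
        print(f"{ds.name}: {protection_action(ds)}")
```

In practice the criticality labels and triggers would come from the centralized, ML-assisted policy layer described above rather than being hard-coded.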
>> So this is a good discussion. We've spanned a lot of territory, but let's bring it back to the practical, you know, uses for the IT person today saying, "Okay, Arthur, look. Yeah, I'm doing cloud. I'm playing around with AI. I've got my feet in containers and my dev staff is doing that. Yeah, edge, I see that coming. But I just got some problems today that I have to solve." So, my question to you is, how do you address those really tactical, day-to-day problems that your customers are facing today, and still help them, you know, plan for the future and make sure that they've got a platform that's going to be there for them, and they're not going to just have to rip and replace in three or four years?
>> Okay, and so that's like the $100,000 question as we look at ourselves in this situation. And the key is really taking our proven technologies and proven products and solutions, and taking the agile approach of adding the most critical modern capabilities for new workloads and new deployment scenarios alongside them as we modernize those solutions themselves, and really bringing our customers along on the journey with that, and having a very smooth path for that customer transition experience on the path to our powered-up portfolio.
>> I mean, that's key, because if you get that wrong and your customers get that wrong, then maybe what's a $100,000 problem now is going to be billions of dollars of problems.
>> Fair.
>> So, I want to talk a little bit about alternative use cases for data protection. We've kind of changed the parlance; we used to call it "backup." I've often said people want to get more out of their backup, they want to do other things with their backup, 'cause they don't want to just pay for insurance, the CFO wants ROI. What are you seeing in terms of alternative use cases and the sort of expanding TAM, if you will, of backup and data protection?
>> So, a core part of our strategy is to recognize that there is all of this data that we have as part of the data protection solutions, and there's a desire on our customers' parts to get additional business value out of it, and additional use cases from there. And we've explored and are investing in a variety of ways of doing that, and the one that we see that's really hit a key problem of the here and now is around security and malware. We are having multiple customers that are under attack from a variety of threats, and it's hitting front-page news. And a very large fraction of enterprises are having some amount of downtime due to malware or cyber attacks. And a key focus that we've had is around our cyber recovery solutions: really enabling a protected, air-gapped solution so that in the event of some hidden malware or an intrusion, you have a protected copy of that data to be able to restore from. And we've got customers who otherwise would have been brought down, but were able to be brought back up very, very quickly by recovering out of our cyber vault.
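The air-gap idea behind that cyber vault can be pictured as a short, controlled replication window into an otherwise unreachable copy, followed by locking the copy immutable. The sketch below is illustrative only; the VaultLink methods are invented stubs, not a real Dell EMC Cyber Recovery interface.

```python
# Illustrative sketch of the air-gap concept only. VaultLink and its methods
# are invented stubs, not a real Dell EMC Cyber Recovery API.
import datetime

class VaultLink:
    """Stand-in for the control path to an isolated vault copy."""
    def open(self) -> None:
        print("vault link opened for the replication window")
    def close(self) -> None:
        print("vault link closed; air gap restored")
    def copy(self, backup_set: str) -> str:
        copy_id = f"{backup_set}-{datetime.date.today():%Y%m%d}"
        print(f"copied {backup_set} into the vault as {copy_id}")
        return copy_id
    def retention_lock(self, copy_id: str, days: int) -> None:
        print(f"{copy_id} locked immutable for {days} days")

def vault_cycle(link: VaultLink, backup_set: str, lock_days: int = 30) -> None:
    """One protected copy per cycle; outside the window the vault is unreachable."""
    link.open()
    try:
        copy_id = link.copy(backup_set)
        link.retention_lock(copy_id, lock_days)   # immutable even to administrators
    finally:
        link.close()                              # re-establish the air gap

if __name__ == "__main__":
    vault_cycle(VaultLink(), "daily-prod-backups")
```

The design intent is that even a compromised administrator account on the production side cannot reach or alter the vaulted copies outside the replication window.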
>> Yeah, I mean, it's a huge problem. Cyber has become a board-level issue; people are, you know, scared to death of getting hit with ransomware and getting their entire data corpus encrypted, so that air gap is obviously critical, and increasingly it's becoming a fundamental requirement from a compliance standpoint. All right, I'll give you the last word. Bring us home.
>> Okay, so the most important thing about the evolving and rapidly changing space of data protection at this point is the need for enterprises to have a coherent approach across their old and new workloads, across their emerging technologies, across their deployments in core, edge, and cloud, to be able to identify and manage that data, protect and manage that data throughout its lifecycle, and to have a single, coherent way to do that and a single set of policies and controls across the data in all of those places. And that's one key part of our strategy: bringing that coherence across all of those environments, and not just in the data protection domain. There's also a need for this cross-domain coherence, and for getting your automation and simplification not just in the data protection domain but up into higher levels of your infrastructure. And so we've got automation taking place with our PowerOne Converged Infrastructure, and we're looking across our Dell Technologies portfolio at how we can, together with our partners in Dell Technologies, solve more of our customer problems by doing things jointly. And so, for example, doing data management that spans not just your protection storage but your primary storage as well; your AI and ML techniques for full-stack automation; working with VMware around the full end-to-end Kubernetes management for VMware environments. And those are just a couple of examples of where we're looking to both be complete across data protection and then expand into broader IT collaborations.
>> You're seeing this across the industry. I mean, Arthur, you mentioned PowerOne. You're talking about microservices, API-based platforms increasingly; we're seeing infrastructure as code, which means more speed, more agility, and that's how the industry is dealing with all this complexity. Arthur, thank you so much for coming on theCUBE. Really appreciate it.
>> Thank you.
>> And thank you for watching, everybody. This is Dave Vellante, and we'll see you next time. (electronic music)