Clint Wyckoff, Datrium | CUBEConversation, April 2018
(epic music)
>> Hi, I'm Peter Burris. Welcome to another Cube Conversation from our beautiful Palo Alto studios. Today we're here with Clinton Wyckoff, who is a senior global solutions engineer from Datrium. Welcome to the Cube, Clinton.
>> Well, thanks for having us, Peter. It's great to be here.
>> So Clint, there are a lot of things that we could talk about, but some of the things we specifically want to talk about today relate to how cloud use, as it becomes more broad-based, is now becoming more complex: concerns that, as we use more cloud, we still have things off-premises to manage; how we then ensure that we get more work done; and the crucial role that automation and human beings are still going to play as we try to achieve our overall goals with data. So why don't you tell us a little bit about some of these themes of simplicity, scalability and reliability.
>> Yeah, definitely, Peter. It's been a very interesting time over the last 12 months here at Datrium. We've been on a rapid release cycle; we actually released DVX 4.0 of our software just a few weeks ago, and maintaining focus around those three key talking points of simplicity, scalability and reliability is really what the Datrium DVX platform is all about. It's about solving the challenges customers have with their traditional on-premises workloads that they've virtualized, and we're also seeing an increase in customers trying to leverage the public cloud for several different use cases. So the biggest takeaway from our perspective, with relation to the latest release of our software, is how we can take what customers have grown to love on-premises with their Datrium DVX platform and integrate that into the public cloud. Our first endeavor into that area is Cloud DVX, and that integrates directly into the existing AWS subscription that they have. So they have on-premises Datrium running for all their mission-critical workloads, providing tier-one systems with all the performance, cloud backup, all those capabilities they've grown to love, but how can I get my data off-site? That's been a huge challenge for customers. How can I get my data off-site in an efficient fashion?
>> But in a way that doesn't look like an entirely different, new, or completely independent set of activities associated with AWS. So talk to us a little bit about, you said something interesting: you said it integrates directly into AWS. What does that mean?
>> Yes, we've taken a direct port of our software. On-premises, customers run ESX hosts; in AWS terms, that translates into EC2 instances. So the first thing that we do is instantiate an EC2 instance in the customer's AWS subscription.
>> That means my billing, my management, my console, everything now is the same.
>> Exactly, and then we're utilizing an S3 bucket to hold our cloud archive. So the first use case for Cloud DVX, in its current iteration, is off-site archives of Datrium snapshots. I run VMs on-premises, I want to take snapshots of these, maybe send them over to a secondary location, and then I want to get those off-site for more long-term archival purposes. S3 is a great target for that, and that's exactly what we're doing. So an existing customer can go into their Datrium console, say I want to add my AWS subscription, click next, next, next, finish, and it's literally that easy.
We have automated Lambda functions that automatically spin up the necessary EC2 instances, S3 buckets, all that stuff for the customers, so we completely simplify the entire process. I like to think of it almost like your iPhone: you go into your iCloud backup and there's literally just a little slider button that you turn on. For us it's literally that simple as well. How can we help customers get their data off-site efficiently? That's a key point for us here at Datrium, and the fact that we have a global deduplication pool means the only data that's ever going to go over the wire is truly unique. We have built-in, blockchain-type crypto hashing that goes on, so as data comes in we're going to do a comparison on-prem and off-prem and only send the unique data over the wire. That is truly game-changing from a customer perspective. That means I can now decrease my RPOs. I can get my data off-site faster, but then whenever I want to recover or retrieve those blocks, or those virtual machine snapshots, it's efficient as well. It's both ingress and egress, so from a customer perspective it's a win-win: I can get my data off-site fast, I can get it back fast as well, and it ultimately decreases their AWS charges too.
>> That's the point I was going to make. But it's within the envelope of how they want to manage their AWS resources, right?
>> Yep.
>> So this is not something that's going to come along and just blow up how you spend on AWS. We've heard what the Datrium console person can do; if you're the AWS person, you're now seeing an application and certain characteristics associated with it, performance characteristics, cost characteristics, and you're seeing what you need to see.
>> Exactly. We kind of abstract the AWS components out of it, so if I'm in the AWS console, yes, I see my EC2 instance, yes, I see an S3 bucket, but you can't make heads or tails of what it's doing, and you don't need to worry about all that stuff. We manage everything solely from the Datrium perspective, going back to that simplicity model the product was built upon: how can we make this as simple as possible? It's so simple that even an admin who has no experience with AWS can go in and stand this up very, very easily.
>> All right, so you've got some great things going on with being able to use cloud as a target. What about being able to orchestrate workloads across multiple different potential resources? How has that started? How does some of the new tooling facilitate that, or make it more difficult?
>> Well, that's a really great question, Peter. It's almost like you're looking into the crystal ball of the future, because the way that Datrium, the product itself and the platform, is architected, it's building blocks on top of each other. We started off on-premises. We built that out to have a scale-out architecture. Now we're going off-premises, out to the public cloud. Like I said, the first use case is just being able to leverage that for cloud archives. But what if I want to orchestrate that and bring workloads up inside of AWS? So I have a VMware snapshot, or a Datrium snapshot, that I've sent off-prem, and I want to now make that an EC2 instance, or I want to orchestrate that. That's the direction that we're going, so there's definitely more to come there. That's the direction and what the platform is capable of. This is just the beginning.
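To make the deduplication idea above concrete, here is a minimal Python sketch of content-addressed archiving: fingerprint each block of a snapshot, and only upload blocks whose fingerprints are not already in the cloud archive. The bucket name, block size, and the per-block S3 existence check are illustrative assumptions, not Datrium's implementation; the real Cloud DVX global deduplication pool would keep its own index rather than probing object storage block by block, and SHA-256 here simply stands in for whatever crypto hashing the product uses.

```python
# Minimal sketch (not Datrium's implementation): content-addressed dedup
# to an S3 cloud archive, so only unique blocks ever cross the wire.
import hashlib
import boto3
from botocore.exceptions import ClientError

BUCKET = "example-cloud-archive"      # hypothetical bucket name
BLOCK_SIZE = 4 * 1024 * 1024          # 4 MiB blocks, an arbitrary choice

s3 = boto3.client("s3")

def block_exists(digest):
    """Return True if a block with this fingerprint is already archived."""
    try:
        s3.head_object(Bucket=BUCKET, Key="blocks/" + digest)
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] == "404":
            return False
        raise

def archive_snapshot(path):
    """Split a snapshot into blocks and upload only the unique ones.

    Returns the ordered list of block fingerprints (the snapshot "recipe"),
    which is all that must be recorded to restore it later.
    """
    recipe = []
    with open(path, "rb") as snapshot:
        while True:
            chunk = snapshot.read(BLOCK_SIZE)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).hexdigest()
            recipe.append(digest)
            if not block_exists(digest):   # only unique data goes over the wire
                s3.put_object(Bucket=BUCKET, Key="blocks/" + digest, Body=chunk)
    return recipe
```

Retrieval works the same way in reverse: a restore fetches only the blocks named in the recipe, which is why egress benefits from the same deduplication as ingress.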
>> Now, the hyper-converged concept is very powerful, and it's likely going to be a major feature of being able to put the work where it needs to be put, based on where the data needs it.
>> Sure.
>> But hyper-converged has had some successes and it's had some weirdness associated with it. We won't get into all of it, but the basic notion of hyper-converged is that you can bring resources together and run them as a single unit, yet it still tends to be more of a resource focus. You guys are looking at this slightly differently. You're saying, let's look at this as a problem of data and how the data is going to need resources, so that you're not managing in the context of resources that are converged, you're managing in the context of the resources that the data needs to do what it needs to do for the business. Have I got that right?
>> Yeah, I mean, hyper-converged has done a lot of really good things. First and foremost, it moved flash to the host level, removing a lot of the latency problems that traditional SAN architecture has. We apply many of those same concepts to what Datrium is, but we also bring a lot of what a traditional SAN has as well, namely durability and reliability, on the back side of it. So we're basically separating out my performance tier from my durability and capacity tier on the bottom.
>> Based on what the data needs.
>> Exactly right. So now I've got these individual stateless compute hosts where all of my performance is, for ultra-low latency. Latency is a killer of any project, most notably VDI, for instance, or even SQL Server or Oracle. One of the other capabilities we actually just added to the product is full support for Oracle RAC running on Datrium in a virtualized instance. Latency, as I mentioned, has been a killer, especially for mission-critical applications. For us, we're enabling customers to virtualize more and more high-performance applications and rely on the Datrium platform to have the intelligence and simplicity behind the scenes to make sure that things are going to run the way that they need to.
>> Now, as you think about what that means to an organization, and you've been at Datrium for a while now, how are companies actually facilitating the process of doing this differently? Are they doing a better job of actually converging the way that the work is applied to these resources, or is that something that's still proving difficult? How are the simplicity and the automation and reliability making it easy for customers to actually realize value from tools like these?
>> It's truly amazing, because once our customers get a feel for Datrium and get it into their environment... I mean, we have customers all across the world, from Fortune 500 companies down to small and medium-sized businesses, financial, legal, all across the entire spectrum of verticals, that are benefiting from the simplicity model. You can go out to the Datrium website, where we have a whole list of customer testimonials, and the one resounding theme across them is: I no longer have to worry about managing this, the storage, the infrastructure. I'm now able to go back to my CIO or my CEO and provide value to the business. I'm doing what I'm supposed to do. I don't have to worry about managing knobs and dials: hmm, do I want to turn compression on, or maybe I want to turn it off, or what size volume do I need, what queue depth? Those are mundane tasks. Let's focus on simplicity.
Things are going to run the way that you need them to run. They're going to be fast and it's going to be simple to operate.
>> Well, we like to talk about the difference between business and digital business being data: a digital business treats data as an asset, and that has enormous implications for how you think about how your work is institutionalized, what resources you buy, how you think about investing. Now, it sounds as though you guys are thinking similarly. It's not the simple tasks you perform on the data that become important; it's the role the data plays in your business and how you turn that into a service for the business. Is that accurate?
>> That is very accurate, and you brought up a really good point there, the fact that the data is the business. That is a very key foundational component that we continue to build upon inside the product. So one of the big capabilities, and you've seen a lot of this in today's day and age with ransomware attacks and data breaches, I mean, it's almost every other week you go on CNN, or pick your favorite news channel that you care to watch, and you hear of breaches or data being stolen. So encryption and compliance, HIPAA, Sarbanes-Oxley, all that type of stuff is very important, and we've actually built into the product what we call blanket encryption. Data as it comes inbound is encrypted. We use FIPS 140-2, in either validated or approved mode, and the data is encrypted across the entire stack: in use, over the wire in flight, and at rest. That's very different from the way that some of the other, more traditional folks out there do it. If I look at a SAN, it does encryption at rest. Well, that's great, but what about while the data is in flight? What if I want to send it off-premises, out to the public cloud? With Datrium, all of that is built into the product.
>> And that's presumably because Datrium has greater visibility into the multiple levels at which the data is being utilized--
>> Absolutely.
>> Which is why you can apply it in that way, and so literally data becomes a service that applications and people call out of some Datrium-managed store.
>> Yeah, absolutely.
>> So think about what's next. You mentioned, for example, that when we had arrays and SANs we had a certain architectural approach to how we did things. But as we move to a world where we can literally look at data as an asset, and we start thinking not about the tasks you perform on the data but about the way you generate value out of your data, what types of challenges, not just at Datrium, is the industry going to take on next?
>> So that's an interesting question. In my opinion, and this is Clint's personal opinion, the way the industry is changing is that regular administrators are trying to orchestrate as much as they possibly can. I don't want to have to worry about the low-hanging fruit on the tree. How can I automate things so that whenever something happens, an action happens, a developer needs a virtual machine, or I want to send this off-site to DR, I can orchestrate that, automate it, and make it as simple as possible to consume? Because traditionally IT is a bottleneck for moving the business forward: I need to go out and procure hardware and networking and all that type of stuff that goes along with it. So what if I was able to orchestrate all of those components, leveraging API calls back to my infrastructure, from something like a webform that a user fills out?
Those challenges are the types of things that organizations, in my opinion, are looking to overcome.
>> Now, I want to build on that for a second, because a lot of folks immediately go to, oh, so we're going to use technology to replace labor. Well, some of that may happen, but the way I look at it, and the way we look at it, the real advantage is that new workloads are coming at these guys at an unprecedented rate. So it's not so much about getting rid of people; there may be an element of that, but it's about allowing people to perform more work with these new technologies.
>> Well, more work, but focused on what you should be focusing on. Of all the senior executives that--
>> That's what I mean.
>> All the senior executives that I talk to are looking to make better use of IT resources. Those IT resources are not only what's running in the racks in the data center, but also the gentleman or the lady sitting behind the keyboard. What if I want to make better use of the intellectual property that they have, to provide value back to the business? That's what I see with pretty much everybody that I talk to.
>> Clint, this has been a great conversation. Once again, this has been a Cube Conversation with Clint Wyckoff, who's a senior global solutions engineer at Datrium. Clint, thank you very much for being on The Cube, and we'll talk again.
>> All right, thanks Peter.
>> Once again, thanks very much for sitting in on this Cube Conversation. We'll talk to you again soon.
(epic music)
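As a brief aside on the blanket encryption Clint describes above, here is a minimal Python sketch of the encrypt-once pattern: seal each block on the host before it goes anywhere, so the same ciphertext is what crosses the wire and what sits at rest. AES-256-GCM (a FIPS-approved algorithm) and the simplified key handling shown here are illustrative assumptions, not Datrium's implementation.

```python
# Minimal sketch (not Datrium's implementation): encrypt a block once,
# up front, so it stays protected both in flight and at rest.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, from a managed key store
aead = AESGCM(key)

def seal_block(plaintext, block_id):
    """Encrypt a block; the result is safe to send and to store as-is."""
    nonce = os.urandom(12)                     # fresh 96-bit nonce per block
    ciphertext = aead.encrypt(nonce, plaintext, block_id.encode())
    return nonce + ciphertext                  # keep the nonce with the ciphertext

def open_block(sealed, block_id):
    """Decrypt a block retrieved from the wire or from the archive."""
    nonce, ciphertext = sealed[:12], sealed[12:]
    return aead.decrypt(nonce, ciphertext, block_id.encode())
```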