Sazzala Reddy, Datrium & Stuart Lewallen, Sonoma County | VMworld 2018
>> Live from Las Vegas, it's theCUBE. Covering VMworld 2018. Brought to you by VMware and its ecosystem partners.
>> Welcome back. This is theCUBE in Las Vegas, VMworld 2018. Three days of wall-to-wall coverage with two sets. We've got about 95 guests and so many sessions that people go to at this show; happy to have one of the sessions that just went on come give you a view into what people attending VMworld are talking about. I'm Stu Miniman with my cohost Justin Warren. Happy to welcome back to the program Sazzala Reddy, who's the Chief Technology Officer with Datrium. He's brought a customer along with him. His name is also Stuart like mine, spelled the proper Scottish way, S-T-U-A-R-T, Lewallen, who is the Data Center Team Lead with Sonoma County. Gentlemen, thanks so much for joining us.
>> Happy to be here.
>> Thanks for having us.
>> Stuart, we're going to get to the tech and your role, but first of all, Sonoma County. Some, I guess, interesting might not be the right word to use, but there's been a lot of activity going on. Maybe you can share what's been happening in your neck of the woods.
>> Last October, we had a little bit of excitement. We had some wildfires roll through. Burned about 140 square miles. Burned a little bit over 5,000 houses. Unfortunately, 42 people lost their lives in the disaster. A lot of lessons were learned from that.
>> Horrific. We've seen what's happened. I've got a lot of friends and some family in California. We've seen people far and wide that have been affected. How were you involved with this? I know you talked a little bit about it in your session.
>> I was wakened in the middle of the night by a page, somebody letting me know, hey, we've got a problem here. They were telling me they were already evacuating. At that point, I knew it was something serious, so I started getting my family ready for evacuation. I started trying to gather news about what was actually going on, and what I found was the fire had started in Napa County and was being driven by 16 mile an hour winds. It had moved 12 miles in the first three hours. Nobody was able to get a handle on it. Nobody really even knew which way it was going. That's what our emergency operations center was trying to track: where is it and where is it headed, to try to get people there.
>> We have a bit of familiarity with wildfires in Australia. It's well-known, it's horrific to be involved with. Tell us a little bit about how you were managing that situation day to day. What does that actually do to your normal day? It just goes out the window. What did that feel like, what was that like when you were in that situation?
>> That's a fantastic question. My entire team was scattered all over Northern California. I was in San Anselmo, one of my guys was in Fresno, one of the guys had packed up his trailer and gone to the beach. One of the guys was in an evacuation center and everybody was ready to go. Everybody was scattered. The county center, the fires had gotten within three blocks of our data center, so the county center had been evacuated and they wouldn't let us back. Everybody was working remote. That mostly worked OK, but again, we had a lot of learning points. From the after action, we learned a lot about what worked well and what didn't.
>> Sazzala, people often talk about the human things, but technology is a lot of times involved in a lot of these emergencies, disaster recovery.
I remember numerous times in my career, when I worked on the vendor side, where SWAT teams are helping and you've got the base product. But bring us in as to how technology plays in.
>> If you talk to anybody and say, what's your dream plan for DR, they can draw a nice picture, but the reality is it can be too expensive. Even if the money's not the problem, then it's painful to set up and it's fearful when you have a problem. You have fear, like, is it going to work for me? If you look at the innovation in the last decade, there was deduplication, VMware has changed infrastructure, cloud is here, AI is here, but DR still happens to be one of those things that has not moved forward in terms of innovation. That's where we see the opportunity for us to help customers take it to the next level.
>> That's true, and maybe you can bring that in: how did Datrium actually help you in this situation with that DR aspect? What did that look like?
>> During the event, there was really not a lot of involvement from Datrium, other than the fact that one of their field engineers emailed me and said, hey, do you need anything? Anything at all. I'll bring you a generator, water, food, whatever you need. Which was fantastic. You think, who does that? Datrium does. Sorry, I had to get a little plug in there for you guys. Very happy with that. But in the aftermath, when we were evaluating what we did good and what we did bad, what needs improvement and how do we do that, that's where they really came in and helped us. Helped us get an easy way to move our data offsite. That was a fantastic product, and that's one we just started using. It recently came out: the ability to back up local data to AWS in a very simplistic way.
>> If you have a data center, you also have a second data center most people set up so they can do DR for it. It's an expensive operation. It just sits there, does nothing, and then waits for one day to show up and be used magically. If you change anything here, you've got to go change something there. It is an untenable kind of model. It's a cost center for CIOs. A lot of people I talk to, that's an easy one to eliminate and get rid of. The cloud is here, let's take advantage of it. It's an on-demand infrastructure. Let's use that leverage for doing disaster recovery in the cloud. Because it's expensive, as you all know, cloud. There's an 80-page manual for AWS, just for pricing. It's expensive, but for a week or two weeks of disaster, it is a perfectly awesome use case. There are a few things you need. It has to work well, it has to be cost effective, and it has to be operationally consistent. What I mean by that is that if you move your workloads from your data center to the public cloud, it has to look the same. If it looks different to you, then you're not going to use it. Fundamentally, that's where we have helped: how do we bring that, how do you do backups to the cloud? How do you think about the orchestration software and how does that work? How do you bring up the workloads in the cloud so that it looks similar when you move from here to there? To some degree, cloud is a commodity, right? Let's use it that way. Let's take advantage of the hybrid cloud because it's already there. This is what Datrium is doing.
>> A few more things that came out of our experience: we realized that failover had to be simple. The reason it had to be simple was exactly what I said before.
You have no idea who you're going to have on your staff that's able to pull the trigger on this. It can't be some complex thing that only two people in your organization can do and it takes three days just to get it kicked off. It's got to be push button, it really does these days, to make it effective. And it's got to be able to be tested. You've got to be able to validate that it's going to work. You can't wait and just hope and pray that when that day comes it's going to work. I think, finally, it's got to be affordable. If it's not in your budget, it's not even a starter. You're going back to scripts and people running things.
>> (laughing) The idea that you have to hope that the script you wrote once is actually going to work in the middle of that disaster. You're going, oh yeah, that's right, I forgot to fix that bug. It's not something that you really want to do. Just being able to rely on something in that situation is really important. Stuart, you mentioned something before we went on camera that you were quite interested in, which is coming from Datrium, which is around that movement of data into the cloud. Maybe you could tell us a little bit more about what that feature is and why you find it interesting.
>> Think of it as like an offsite tape backup; that's basically what it replaced. We used to spend, back in the day when we had mainframes, we spent a bazillion dollars having tapes shipped offsite. That's what everybody did back in the day. Then you went to on-site tapes that got moved, and then you went to disk arrays and you went to a remote disk array. That's kind of how things have transitioned, and now, instead of having a disk array somewhere else, why not just put it up in the cloud? AWS is very money-efficient as far as putting data there. If you don't need to do anything with it, which is what you're describing with your offsite backup, it's a fantastic use case.
>> This feature's coming out soon, I believe?
>> It's coming out soon, we announced it--
>> And I'm sorry, I missed what it's called.
>> Sorry?
>> The feature?
>> Yes, it's going to come out, it's called CloudShift.
>> Thank you.
>> It's going to be happening pretty soon. We're announcing it today. We have some demos in our booth, you can come by and check it out. If you look at applications, most people think about the application life cycle. There is running the applications at high performance, there's backing them up, and then doing DR. That's how the life cycle is. But if you look at it, no company has solved it end to end. I don't know why, but everybody seems to be doing piecemeal solutions, so you end up with five different products in your data center and you hope they work together very well. Then you pray, like Murphy's Law, that it's all going to work together for you, when you actually have a problem, to get it resolved. That's kind of hoping for things to work well for you.
>> Stuart, now you like five different products, right?
>> No. (all laughing) I like one different product. The reality is everything's been cobbled together for years. Truly, if it was that simple, I'd be doing something else probably; they wouldn't pay me to do what I do. In this particular case, it's got to be simple. You can't rely on having your best or any particular people there in an emergency, so it's got to be simple. Has to be.
>> Yeah.
>> Having that (mumbles) platform really changes the game, basically.
>> Stuart, talk to us, what are you looking for from the vendor community going forward?
We talked about this one feature. Anything else on your wish list to make things simpler, as you've said, I think is one of the key criteria that you're looking for?
>> You see all the commercials these days. Make it simple. People have simple buttons and everybody wants push button, everybody wants it simple. They want to make technology simple for everybody, for the average person. I think it's a laudable effort. I think that's where it has to go. It can't be all complex and it can't be the old days where you had guys that they were the only guys that knew anything and they became indispensable. These days, everybody has to know how to do things. You can't rely on one person cause, God forbid, what if they get hit by a bus? What if they just go to a different company and then you're left with this big hole? Simplicity is the key to any organization, really.
>> You know what's simpler than one click? Zero clicks. Because one click requires you to read the manual. You'll just see what does it do for me? That's something, how we think about it, really try hard to do zero click. But it's very hard, though, because you have to build a lot more things into the system to imagine how this is going to work for the customer and imagine the best case scenario for the customer.
>> It's certainly something, we're seeing a trend in a lot of companies here is automation and actually taking all of that manual effort out of things and having that automation actually be baked into the product as well, rather than relying on customers to have to automate their own environment. It just comes with it, which goes to that we just want an easy button. We want to have something which I don't even have to press the button, it presses its own buttons.
>> We're living in the age of convenience.
>> Yeah.
>> (mumbles) Amazon to ship us products before we know it. (all laughing)
>> I'll subscribe to that.
>> I shudder to think what my house would fill up with there. (all laughing)
>> Excellent. Sazzala and Stuart, really appreciate you giving the update. Stuart, we hope that things with the wildfires settle down, we know it's been challenging to deal with there. Thanks so much for sharing the story.
>> Thanks for having me.
>> Thanks for having us.
>> Absolutely. Justin Warren, and I'm Stu Miniman, we'll be back with more coverage here from VMworld 2018. Thanks for watching theCUBE. (electronic tones)
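Sazzala's argument above, that a second data center is insurance you pay for all year while cloud DR only bills you during the week or two you actually need it, is easy to make concrete with a back-of-the-envelope model. The sketch below is only an illustration: every dollar figure, the dedupe ratio, and the event length are invented assumptions, not Datrium or AWS pricing.

```python
# Back-of-the-envelope comparison of an always-on secondary DR site versus
# just-in-time DR in the cloud. All figures below are invented assumptions
# for illustration only; they are not Datrium or AWS prices.

GB_PER_TB = 1024

def dedicated_site_annual_cost(protected_tb,
                               hw_sw_per_tb_per_year=300.0,   # assumed
                               facility_per_year=120_000.0,   # assumed
                               staff_per_year=80_000.0):      # assumed
    """A second data center is paid for all year, even while it sits idle."""
    return protected_tb * hw_sw_per_tb_per_year + facility_per_year + staff_per_year

def just_in_time_dr_annual_cost(protected_tb,
                                dedupe_ratio=4.0,               # assumed data reduction
                                object_storage_gb_month=0.023,  # assumed $/GB-month
                                dr_event_weeks=2,               # one test or real event
                                compute_per_week=6_000.0,       # assumed on-demand cost
                                changed_tb_on_failback=1.0,     # only changed data returns
                                egress_per_gb=0.09):            # assumed $/GB egress
    """Pay for deduplicated cloud backups all year; pay for compute only while
    failed over; pay egress only for the data that changed during the event."""
    backup_cost = (protected_tb * GB_PER_TB / dedupe_ratio) * object_storage_gb_month * 12
    event_cost = dr_event_weeks * compute_per_week
    failback_cost = changed_tb_on_failback * GB_PER_TB * egress_per_gb
    return backup_cost + event_cost + failback_cost

if __name__ == "__main__":
    tb = 100  # assumed size of the protected data set
    print(f"always-on DR site : ${dedicated_site_annual_cost(tb):12,.0f} per year")
    print(f"just-in-time DR   : ${just_in_time_dr_annual_cost(tb):12,.0f} per year (one two-week event)")
```

With these made-up inputs the always-on second site comes out roughly an order of magnitude more expensive per year; the specific numbers matter far less than the shape of the model, a small fixed backup cost plus compute that is only billed while a test or a real failover is running.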
Tushar Agrawal & Sazzala Reddy, Datrium | CUBEConversation, July 2018
(inspirational music)
>> Hi everybody, this is Dave Vellante, from our Palo Alto Cube studios. Welcome to this Cube conversation with two gentlemen from Datrium. Tushar Agrawal is the Director of Product Management, and Sazzala Reddy is the CTO and co-founder of Datrium. We're going to talk about disaster recovery. Disaster recovery has been a nagging problem for organizations and IT organizations for years. It's complex, it's expensive, it's not necessarily reliable, it's very risky to test, and Datrium has announced a product called CloudShift. Now, Datrium is a company that creates sets of data services, particularly for any cloud, and last year introduced backup and archiving on AWS. We've written about that, we've profiled that. Gentlemen, welcome to theCUBE.
>> Good to see you.
>> Thank you.
>> Good to be here.
>> Thank you (mumbles).
>> So tell us about CloudShift.
>> Yeah, sure, great. So if you kind of step back and look at our journey, starting with Cloud DVX, which was what we announced last year, our end goal has been to simplify infrastructure for customers and eliminate any excess infrastructure that they need, starting with Cloud DVX, which addressed the backup part of it, where customers do not need to keep a dedicated off-site backup anymore, and extending that with CloudShift, which now brings it to a DR context and makes the economics so phenomenal that they don't need to keep a DR site anymore just waiting for a disaster to happen. So CloudShift, at its very beginning, is sort of a multi-year journey where we bring the ability to do workload mobility orchestration across an on-premises DVX system to a DVX running in the cloud, leveraging Cloud DVX backups, so that customers can do just-in-time DR.
>> Sazzala, I talked earlier about some of the problems with DR, and let's talk about what you see. I mean, I've talked to customers who've set up three sites, put in a fireproof box, I mean, all kinds of just really difficult challenges and solutions. What are you seeing in terms of some of the problems and challenges that customers are facing, and how are you addressing this?
>> Yeah, so like you said, I don't think I've heard anybody saying, my DR plan is awesome (laughing), or it works, or I'm enjoying this thing. It's a very fearful situation, because when things go down, that's when everyone is watching you, and then that's when the fear comes in, right? So we built our service, the CloudShift service. It's very easy to use, firstly, step one. And the reason, the other goals, kind of, so, if you click a button, you want to just (mumbles) to some new place, right? But to make that really work well, what are the customers, I mean, if I was a customer, what would I think about? I want the same experience no matter where I moved, right? It has to be seamless, like, you know, I don't have to change my tool sets, I have the same operational consistency. That's goal number one. And number two is that, does it really work when I click the button, is it going to work? If you go to Amazon, it'll convert VMs. That's a completely different experience, right? So how do you make that experience be, like, foolproof? It will work fundamentally. So we've done a lot of things, like no conversion of VMs. And the second one is that we have built-in compliance checks. Every half an hour it checks itself to see that the whole plan is compliant. You know that when there actually is a problem, the compliance checking has caught the issues beforehand.
And the third one is that you can do scheduled testing. You can set up schedules and say, you know what, test it every month for me, so that you know. It tests it and gives you a report saying, okay, it's all looking good for you. So those are the kinds of things you do to make sure that it's going to be foolproof, guaranteed DR success, when you eventually have to hit the button.
>> Yeah, and just to add to that. I think, if you look at the DR equation for a customer, it's really two things. I'm paying a lot for it; what can I do to address that problem? And will it work when I need it to work, right? I think it's really fundamentally those two problems. And cloud gives us a great way to address the cost equation, because now you've got an infrastructure that is truly on tap, can be truly on demand. And so you don't really keep those resources running unless you have to, unless you have a test event or you have the actual DR event. On the "will it work when I want it to work" side, cloud has typically had a lot of challenges, and I'll outline a few, right? You have VMs that are going from a VMware infrastructure to an Amazon infrastructure, which means those virtual machines now need to be running in a different format. You don't have a simple, single user interface to manage those two environments, where you have an Amazon console at one end and a VMware vCenter on the other. And then thirdly, you have this data mobility problem, where you don't have the data going across a consistent, common architecture. And so we sort of solve all these problems collectively by making DR just in time, because we only spin up those resources when they need to be there in the cloud. There is no VM conversion, because we are building this leveraging the benefits of VMware Cloud on AWS. There is a common single pane of glass to manage this infrastructure. And there is a tremendous amount of speed in data mobility, and a tremendous amount of economics in the way that we store that data in a deduplicated, compressed way all the time. So it kind of checks off the cost equation, and it checks off the fact that it actually works when it needs to work.
>> So, let's unpack that a little bit. So normally what I would have is a remote site, and that site has resources there. It's got hardware and software and a building and infrastructure, hopefully far enough away, whether it's an earthquake zone or a hurricane or whatever it is, and it sits there as an underutilized asset. Now maybe there's some other things that I can do with it, but if it's my DR site, it's just sitting there as insurance.
>> Right.
>> That's one problem.
>> The other problem is testing. DR testing is oftentimes very risky. A lot of customers we talk to don't want to test, because they might fail over and then they go to fail back and, oops, there's a problem. And what am I going to do? Am I going to stop running my business? So maybe talk about how you address some of those challenges.
So all of this actually is a fundamental problem but if you go to the cloud, just-in-time on-demand thing is amazing because you are only paying for the backups which is you need to do. If you cannot lose it, there are backups. You need backups fundamentally to be on another site because if ransomware hits you, you need to be able to go back in time so you need copies of deep copies to be in another place. And so the thing about just-in-time DR is that you pay for the backups, sure. It's very cost-effective with us, but you only pay for the services for running your applications for the two weeks you have a problem and then when you're done with it, you're done with paying that. So it's a difference with paying everyday versus paying for insurance. Sometimes insurance pays for those kind of things. It's very cost effective. >> Okay, so I'm paying Datrium for the service. Okay, I get that. And I'm paying a little bit, let's say, for instance it's running on Amazon, a little bit for S3, got to pay for S3 and I'm only paying for the EC2 resource when I'm using that resource. (crosstalk) It's like serverless for DR. >> It actually goes beyond that, Dave, right? >> Actually I like that word that you used. You should probably use that. >> Absolutely because I think it's not just the EC2 part but if you look at a total cost of ownership equation of a data center, right, you're looking at networking, you're looking at software, you're looking at compute, you're looking at people managing that infrastructure all the time, you're looking at power cooling and so I think by having this just-in-time data center that gets spun up and you have to do nothing, literally, you just have to click a button. That saves you know a tremendous amount. That's a transformational economics situation right there where you can simply go ahead and eliminate a lot of time, a lot of energy, a lot of costs that customers pay and have to deal with to just keep that DR site running across the board. >> Mm hm. >> Let me give one more savings note. So let's say you had 100 terabytes and you failed over, so when you're done with two weeks' testing, only one terabyte changed. Are you going to bring back everything or are you going to bring only one terabyte? It's a fundamental underlying technology thing. If you don't have dedupe over the wire, you'll bring back everything 100 terabytes. You're going to pay for the digress cost and ultimately it'll be too slow for you to bring it all back. So what you really want is underlying technology which has dedupe over the wire. We call it global dedupe that you can only move back what's changed and it's fast. One terabyte moving there is not that bad, right? Otherwise you'd end up moving everything back which is kind of untenable again. So you have to make all these things happen to make DR really successful in the cloud. >> So you're attacking the latency issues. >> Latency and bestly 100 terabyte moving from one place to the other, it'll take a long time because the vanpipe is only that much and you're paying for the egress cost. >> We always joke the smartest people in Silicon Valley are working on solving the speed of light problem. >> That's right so if you look at data, if you're going to move from one place to the other. First of all, data has gravity, it doesn't want to move, right? So that's one fundamental problem. So how do you build a antigravity device to actually fix that problem, right? 
So if you leap forward, global dedupe is here where you can transfer only what's changed to the other side. That really defeats light speed, right? And then, both ways, moving it here and moving it there. Without having this van deduplication technology, I think you will be paying a significant amount of time and money, so then it becomes untenable. If you can't really move it fast, then it's like people don't do it anymore. >> And in the typical Datrium fashion, it's just there. It just works. (crosstalk) >> I think that's such a good point, Dave, because if you look at traditional DR solutions today, the challenge is that there are a collection of software and services and hardware from multiple vendors. And that's not such a bad thing. I think the challenge that that causes is the fact that you don't have the ability to do an end-to-end, closed loop verification of your DR plan. You know the DR orchestration software does not know whether the VM that I'm supposed to protect actually has a snapshot on the storage array on which its protecting it, right, and so that, in many ways, leads to a lot of risk to customers and it makes the DR plans very fragile because you know, you set a plan on day one and then let's say three months down the line, you know, something got changed in the system and that wasn't caught by the DR orchestration software because it's unlinked. It doesn't have the same visibility into the actual storage system. The advantage we get with the integrated, built-in backup in DR system is that we can actually verify that the virtual machine that you're supposed to protect actually has all the key ingredients that are needed for a successful DR across the stack as well as in target fader ware site. >> It's kind of the perfect use case, a perfect use case for the cloud and I think, you know, there's something even more here is that because of the complexity of the IT infrastructure around DR and the change management challenges that you talked about, the facilities management challenges that all of the sudden an organization becomes, they're in the DR business and they don't want to be in the DR business. (crosstalk) >> Show no value, I mean, really it's not really adding significantly. It's not improving organization. >> That's actually true and I think the way we have tried to tackle that problem, Dave, is kind of going back to the whole premises of this multi-cloud data services. We will make DR, you know, as simple as possible and what we really enable for them to do is to not have to worry about installing any software, not have to worry about upgrading any software, managing any software. It's a, you know, service that they can just enter their DR plans into. It's very intelligent because it's integrated very well with the DVX system. And they can schedule testing. They don't even have to click a button to actually do a plan failover and in case of an actual event, it's just a single click. It's conveniently checked all the time so you kind of take away a lot of the hassles and a lot of the worry and a lot of the risks and make it truly simple, give them a (mumbles) software as a service experience. >> So I'm kind of racking my brain here. Is there anything out there like this that provides an on-demand DR SaaS? >> I don't know of any actually. 
>> Yeah, I think, so if you kind of look at the landscape, Sazzala is right, actually there is none, and there are a few solutions from leading providers that focus on instantiation of a virtual machine on native AWS, but they run into the challenge that they have to convert a virtual machine from a VMware virtual machine to an Amazon AMI, and that doesn't always work. Secondly, you know, if you run into that kind of a problem, can you really call it true DR? Because in case of a DR, you want that virtual machine to come up and run and be a valid environment, as against just a test use case.
>> So the other one is that backup vendors can't do this. Generally, they traditionally can, probably, but I think because they are one day behind, they back up once a day, so you can't do DR if you are one day behind. DR wants to be like, okay, I am five minutes behind, I can recover my stuff, right? And then primary vendors like Pure, for example, the all-flash vendors, they focused on just running it, not on backup, but you need the backups to actually make it successful, so that you can go back in time if you have ransomware. So you need a combination of both primary and backup, and the ability to have it running in the service in the cloud. That's why you need all these pieces to work together.
>> So you talked about ransomware a couple of times. Obviously, DR, ransomware, maybe talk a little bit more about some of the other use cases beyond DR.
>> So I think that kind of goes back to why we decided to name this feature CloudShift, right? If you think about a traditional DR solution, you would call it something like DR Orchestrator, right? But that's not really the full vision for this product. DR is one of the very important use cases, and we talked about how we do that phenomenally better than other solutions out there, but what this solution really enables customers to do is actually look at true workload mobility between on-prem and cloud, and look at interesting use cases such as ransomware protection. And the reason why we are so great at ransomware protection is because we are an integrated primary and backup from a restore points perspective, and in a ransomware situation, you can't really go back to a restore point that's, you know, a day before or two days before. You really want to go down to as many points as you want, and because we have this very efficient way of storing these restore points, or snapshots, in Cloud DVX, you have the ability to instantiate or run a backup which is from a sufficiently long time ago, which gives you a great amount of ransomware protection, and it's completely isolated from your on-prem copy of that data.
>> Let me add one more point to that. So if you just go beyond the DR case, from a developer perspective, right, from a company perspective, developers want a flexible infrastructure to, like, try new stuff and try new experiments in terms of building new applications for the business. They can try it in the cloud with our platform. And when they're done, after three months, like, you know, because they figured out, okay, this is how it's going to work, this is how much (mumbles) I need, it's more elastic there. When they're done testing whatever they built, they can click a button with our CloudShift and move it all back on-prem, and then now you kind of have it more secure and in the environment you want.
>> Alright, guys, love to see the evolution of your data services, you know, from backup, now DR, other use cases.
Congratulations on CloudShift and thanks for explaining it to us.
>> Thank you very much.
>> Pleasure being here.
>> Okay, thanks for watching, everybody. This is Dave Vellante from our Palo Alto Cube studios. We'll see you next time. (inspiring music)
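Sazzala's "global dedupe over the wire" argument, that a failback after a two-week test moves roughly the one changed terabyte rather than all 100, boils down to content-addressed transfer: fingerprint each block and only ship blocks the other side has never seen. Below is a minimal sketch of that idea, assuming fixed-size blocks and SHA-256 fingerprints; real systems use variable-length chunking, fingerprint indexes, and compression, and this illustrates the general technique, not Datrium's actual protocol.

```python
# Minimal "dedupe over the wire" sketch: only blocks whose fingerprints the
# receiver has never seen are sent, so re-replicating after a small change
# moves only the changed data. Fixed-size blocks and an in-memory dict stand
# in for real chunking and a fingerprint index; this is not Datrium's protocol.

import hashlib
import os
from typing import Dict, Iterable, List, Tuple

BLOCK_SIZE = 4096  # bytes; real systems typically use variable-length chunks

def blocks(data: bytes) -> Iterable[bytes]:
    """Split a byte stream into fixed-size blocks."""
    for off in range(0, len(data), BLOCK_SIZE):
        yield data[off:off + BLOCK_SIZE]

def fingerprint(block: bytes) -> str:
    """Content hash used as the block's global identity."""
    return hashlib.sha256(block).hexdigest()

def replicate(source: bytes, receiver_store: Dict[str, bytes]) -> Tuple[List[str], int]:
    """Replicate a stream to the receiver, sending only unknown blocks.

    Returns the recipe (ordered fingerprints needed to rebuild the stream)
    and the number of bytes that actually crossed the wire."""
    recipe: List[str] = []
    wire_bytes = 0
    for block in blocks(source):
        fp = fingerprint(block)
        if fp not in receiver_store:    # receiver only asks for blocks it lacks
            receiver_store[fp] = block  # "send" the block over the WAN
            wire_bytes += len(block)
        recipe.append(fp)
    return recipe, wire_bytes

def rebuild(recipe: List[str], receiver_store: Dict[str, bytes]) -> bytes:
    """Reassemble the stream on the receiving side from its local block store."""
    return b"".join(receiver_store[fp] for fp in recipe)

if __name__ == "__main__":
    remote: Dict[str, bytes] = {}

    original = os.urandom(4_000_000)             # stand-in for the protected data
    _, seed_bytes = replicate(original, remote)  # initial seeding sends everything

    # A small edit during the "two-week test": only one block's content changes.
    changed = original[:1024] + b"EDITED!!" + original[1032:]
    recipe, failback_bytes = replicate(changed, remote)

    assert rebuild(recipe, remote) == changed
    print(f"seed sent {seed_bytes:,} bytes; failback sent only {failback_bytes:,} bytes")
```

Run it and the failback transfer is a single 4 KB block instead of the full 4 MB stream, the same effect, at toy scale, as moving one terabyte back instead of one hundred.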