Sazzala Reddy, Datrium | CUBEConversation, September, 2019
(upbeat music) >> Announcer: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Hi and welcome to theCUBE Studios for another CUBE Conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. Any business that aspires to be a digital business has to invest in multiple new classes of capabilities required to ensure that their business operates as they're promising to their customers. Now, we've identified a number of these, but one of the things we think is especially important here in theCUBE is data protection, data assurance. If your data is going to be a differentiating asset within your business, you have to take steps to protect it and make sure that it's where it needs to be, when it needs to be there and in the form it's required. Now, a lot of companies talk about data protection, but they kind of diminish it down to just backup. Let's just back up the data, back up the volume. But increasingly, enterprises are recognizing that there's a continuum of services that are required to do a good job of taking care of your data, including disaster recovery. So, what we're going to talk about today is one of the differences between backup and restore, and disaster recovery and why disaster recovery is becoming such an important element of any meaningful and rational digital business strategy. Now, to have that conversation, today we're here with Sazzala Reddy who's the CTO at Datrium. Sazzala, welcome back to theCUBE. >> Happy to be here, Peter. >> So, before we go on this question of disaster recovery and why it's so important, let's start with a quick update on Datrium. Where's Datrium today? You've been through a lot of changes this year. >> Yes, right. We kind of have built a bunch of services as a platform. It includes primary storage, backup, DR orchestration, encryption and mobility.
So that last piece of the puzzle was DR orchestration; we kind of finished that a few months ago, and that's the update, and also now what we're offering concretely is DR services to the Cloud with the VMware Cloud on Amazon. It is transformational, and people really are adopting it quite heavily with us, because it simplifies what you just said about business continuity, and it gives them a chance to shut down the second data center and leverage the Cloud in a very cost-effective way, to have that option for them. >> So, let's talk about that, because when you think about the Cloud, typically you think about, especially as you start to bring together the hybrid cloud notion of an on-premise versus a Cloud orientation, you think in terms of an on-premise set of resources and you think in terms of effectively mirroring those resources in the Cloud, and a lot of people have pointed out that that can be an extremely expensive way of doing things. So, historically we had a site, we had a disaster recovery site, maybe we even had a third site, and we had to replicate hardware, we had to replicate networking, we had to replicate software and often a sizeable percentage of staff across all those services, so we've been able to do it more effectively by having the Cloud be the target, but still, having to reserve all that CPU, all that network, seemed like an extremely expensive way of doing things if you only need it, when you need it, and ideally, it's not often. >> That's correct, so, Cloud offers us a new way of doing elastic, on-demand pricing, especially for disaster recovery, it is really useful to think about it that way. In a data center to data center DR like you mentioned, you have to buy all the different products for managing your data, you'll buy primary storage, backup and DR orchestration, all these different pieces. Then you replicate the same thing somewhere else, all these pieces are kind of just complicated.
It's called Murphy's Law, you know, imagine that when there's a disaster, everybody's watching you and you're trying to figure out how this is going to work for you, that's when the challenges arise, and the danger is that until now, disaster recovery has mostly been a disaster. It's never really worked for anybody. So, what Cloud offers you is an opportunity to simplify that and basically get your disaster recovery to be fail proof. >> Well, so, we have the ongoing expense that we're now ameliorating, we're getting rid of, because we are not forcing anyone to reserve all those resources. >> Yeah. >> But one of the biggest problems in disaster recovery has always been, as you said, it's been a disaster. The actual people processes associated with doing or recovering from a disaster in a business continuity sense often fail. So, how does doing it in the Cloud, does it mean we can now do more automation in the Cloud from a disaster recovery standpoint? Tell us a little bit about that. >> There are multiple things, not just that the Cloud offers simplicity in that way, you do have to imagine how are you going to build software to help the customer on their journey. Like you mentioned, there are three things people do in disaster planning. One is that they have to do planning, make all these notes, keep it down somewhere, and things change. The moment you make these plans, they're broken because somebody did something else. And the second thing is they have to do testing, which is time consuming and they're not sure it's going to work for them, and finally when there's a disaster there's panic, everybody's afraid of it. So, to solve that problem, you need to imagine a new software stack, running in the Cloud, in the most cost-efficient way so you can store your data, you can have all these backups there in a steady state and not paying very much.
And S3 costs are pretty low, and if you do dedupe on that it's even lower, so that really brings down the cost of steady-state behavior, but then, when you push the button, we can bring up VMware servers on the Amazon Cloud on-demand. So you only pay for the VMware servers' compute services when you really need them. And when you don't need them anymore, you fix your data center, you push a button, you bring all the data back and shut down the VMware servers. So, it's like paying for insurance after you have an accident. That changes the game. The cost efficiencies of doing DR, it suddenly becomes affordable for everybody, and you can shut down a second data center, cut down the amount of work you have to do, and it gives you an opportunity to actually now have a chance to get that fail proof-ness and actually know it's going to work for you or not going to work for you. >> But you're shutting down the other data center, but you're also not recreating in the Cloud, right? >> Yeah. >> So, you've got the data stored there, but you're not paying for all the resources that are associated with that, you're only spinning them up-- >> That's correct. >> in VM form, when there's actually a problem. But I also want to push this a little bit, it suggests also that if you practice, you said test, I'll use the word practice-- >> I did say that. >> As one of the things you need to do. You need to practice your DR. Presumably if you have more of that automated as part of this cloud experience, then pushing that button, certainly there's going to be some human tasks to be performed, but it increases the likelihood that the recovery process in the business continuity sense is more successfully accomplished, is that right? >> Yeah, correct, there are two things in this DR, one is that, do you know it's going to work for you when you actually have a disaster. That's why you think of doing testing, or the, what did you call it, planning-- >> Practice.
>> Practice once in a while. The challenge with that is, why even practice? It takes time and energy for you to do that. You can do it, no problem, but how can we, with software, transform that in such a way that you get notified when something is actually going to go wrong for you. Because we own primary, backup and DR, all three legs of the stool in terms of how the DR should be working, we run continuous compliance checks every half an hour so that we can detect if something is going wrong, you have changed some plans, or you have added some new things, or networking is bad, whatever, we will tell you right away, proactively, within half an hour, that hey, there's a problem, you should go fix it now. So you don't have to do that much planning, that much testing, continuously anymore, because we are telling you right now there's a problem. That itself is such a game changer, in the sense that it's proactive, versus being reactive when you're doing something. >> Yeah, it dramatically increases the likelihood that the actual recovery process itself is successful. >> Sazzala: Yes, right. >> Where if you have a bunch of humans doing it, could be more challenging -- >> Sazzala: More fragile. >> And so, as you said, a lot of the scripts, a lot of that automation is now in the solution and also proactive, so if something is no longer in compliance, it does not fit the scheme and the model that you've established within the overall DR framework, then you can alert the business that something is no longer in compliance or is out of bounds, fix it so that it stays within the overall DR framework, have I got that right? >> Yes, correct, and you can only do this if you own all the pieces, otherwise, again, it's back to Murphy's Law, you're testing. So every customer is testing DR in different environment topologies, everybody's different, right?
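The continuous, proactive compliance checking Sazzala describes above — detect drift every half hour rather than discover it at failover time — can be sketched roughly as follows. This is an illustrative sketch, not Datrium's implementation; every class, field and function name here is invented for the example:

```python
import time
from dataclasses import dataclass

@dataclass
class DRPlan:
    """A hypothetical, radically simplified DR plan."""
    protected_vms: set       # VMs the plan promises to recover
    replicated_vms: set      # VMs that actually have replicas at the DR site
    network_mappings: dict   # source network -> DR-site network

def check_compliance(plan, inventory):
    """Return human-readable problems; an empty list means the plan still holds."""
    problems = []
    # VMs created after the plan was written are silently unprotected.
    for vm in sorted(inventory - plan.protected_vms):
        problems.append(f"VM {vm} exists but is not in the DR plan")
    # A protected VM with no replica at the DR site cannot be recovered.
    for vm in sorted(plan.protected_vms - plan.replicated_vms):
        problems.append(f"VM {vm} has no replica at the DR site")
    if not plan.network_mappings:
        problems.append("no source-to-DR network mapping defined")
    return problems

def watch(plan, get_inventory, interval_s=1800, alert=print):
    """Re-check every half hour and alert proactively, not at failover time."""
    while True:
        for p in check_compliance(plan, get_inventory()):
            alert(f"DR compliance: {p}")
        time.sleep(interval_s)
```

The point of the sketch is the shape of the loop: the check is cheap enough to run constantly, which is only possible when one system owns the primary, backup and DR plan and can compare them directly.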
So then the customer is not the tester of all these pieces fitting together, in different combinations and permutations. Because we have all the three pieces, we are the ones testing it all the time, and everybody is testing the same thing, so it's the same software running everywhere, and that makes the probability of success much higher. >> So it's a great story, Sazzala, but where are you? Where is Datrium today in terms of having these conversations with customers, enacting this, turning this into solutions, changing the way that your customers are doing business? >> Right, we have simplified by converging a lot of services into one platform. That itself is a big deal for a lot of customers, nobody wants to manage stuff anymore, they don't have time and patience. So, we give this platform called DVX on-prem, it runs VMware RCLI, it's super efficient. But the next thing, what we're offering today, which is actually very attractive to our customers, is that we give them a path to use the Cloud as a DR site without having to pay the cost of it and also without having to worry about it working for them or not working for them. The demos are super simple to operate because once it all works together, there's no complexity anymore, it's all kind of gone away. >> And, there are a lot of companies, as we mentioned upfront, that are talking about back-up and restore-- >> Yeah. >> As an approximation of this, but it seems like you've taken it a step further. >> Yeah, so, having been in the business for a while, back-up, yes, back-up can live in the Cloud, you can have long-term back-ups, whatever, but remember that back-up is not DR. If you wanted to have DR, what DR means is that you're recovering from it, if you have back-up only-- >> Back-up's a tier. >> Back-up is a tier. With back-up, you have to do rehydration. There's two problems with that.
Firstly, rehydration will take you two days, everybody's watching you while the data center is down and the business wants to be up and running, two days to recover, maybe 22 days. I recently was with a customer, they have a petabyte of data, takes 22 days to do recovery of the data. That's like, okay, I don't know what business -- >> 22 days? >> 22 days. And then another 100 days to bring the data back. So that's the problem with back-up as a topic itself. And secondly, a lot of those back-up vendors are converting VMs into Amazon VMs, nothing wrong with Amazon, it's just that, suddenly in a disaster, you're used to all your vCenter, you're used to your VMware environment, and now you're learning some new platform? It's going to re-factor your VMs into something else. That is a different disaster waiting to happen for you. >> Well, to the point, you don't want disaster recovery in three years when you figure it all out, you want disaster recovery now-- >> Now. >> With what you have now. >> That's correct, that's exactly right. So those conversions of VMs lead to a path of, it's a one-way migration, there's no path out of that, it's like Hotel California, you're getting in, not coming out. It may be good for Amazon, but the customers want to solve a problem, which is a DR problem. So by working with VMware Cloud, they have been very friendly with us, we're super good partners with them and they've enabled us access to some of the things there to enable us to be able to work with them, use their APIs and launch VMware servers on-demand. That to me, is a game changer, and that's why it's such a highly interesting topic for a lot of customers. We see a lot of success with it, we're leading with it now, a lot of people are just dying to get away from this DR problem, and have business continuity for their business, and what we're giving them is the simplicity of one product, one bill, and one support call.
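The multi-week recovery window quoted above is consistent with simple throughput arithmetic. A quick sketch; only the petabyte figure comes from the conversation, and the sustained restore rate is an assumption chosen for illustration:

```python
def restore_days(data_bytes, throughput_bytes_per_s):
    """Days to rehydrate a backup at a sustained restore rate."""
    return data_bytes / throughput_bytes_per_s / 86400  # 86400 seconds per day

PB = 10**15
GB = 10**9

# At an assumed ~0.5 GB/s of sustained rehydration throughput,
# a petabyte takes roughly three weeks to pull back:
print(round(restore_days(PB, 0.5 * GB), 1))  # → 23.1
```

Which is why "backup is a tier, not DR": at realistic restore rates, rehydrating at petabyte scale is measured in weeks unless the data is already live where the compute will run.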
You can call us for anything, including Amazon, VMware and Datrium, all the pieces, and we'll answer all the questions. >> Now I really like the idea that you pay for it only during, or after, the disaster has been recovered from. >> It's like paying for insurance after the-- >> I like that a lot. All right, Sazzala Reddy, CTO of Datrium, once again thanks for being on theCUBE. >> Oh, thank you very much for having me. >> And thank you for joining us for another CUBE Conversation. I'm Peter Burris, see you next time. (lively brass band music)
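The "insurance you pay for after the accident" economics discussed in this segment can be made concrete with back-of-the-envelope numbers. Everything below is a hypothetical sketch: the rates, dedupe ratio and sizes are placeholders, not Datrium or AWS pricing:

```python
def always_on_dr_monthly(data_tb, hosts,
                         storage_per_tb=25.0, host_month=2000.0):
    """A second data center (or reserved cloud capacity) running 24/7."""
    return data_tb * storage_per_tb + hosts * host_month

def on_demand_dr_monthly(data_tb, hosts, failover_days=0,
                         dedupe_ratio=3.0, object_per_tb=23.0,
                         host_day=70.0):
    """Deduped backups in cheap object storage; compute only during failover."""
    storage = (data_tb / dedupe_ratio) * object_per_tb
    compute = hosts * host_day * failover_days
    return storage + compute

# 100 TB, 10 hosts: steady state vs. a month containing a 3-day DR event.
print(always_on_dr_monthly(100, 10))               # → 22500.0
print(round(on_demand_dr_monthly(100, 10), 2))     # → 766.67
print(round(on_demand_dr_monthly(100, 10, 3), 2))  # → 2866.67
```

Whatever the real rates turn out to be, the structure of the comparison holds: the steady-state term shrinks to deduped object storage, and the expensive compute term is multiplied by failover days, which is zero in most months.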
Datrium V2
(light music) >> Hi, I'm Peter Burris and welcome to another CUBE Conversation. This one is part of a very, very special digital community event sponsored by Datrium. What are we gonna be talking about today? Well, Datrium's here with a special product announcement that's intended to help customers do a better job at matching their technology needs with the speed and opportunities to use their data differently within their business. This is a problem that every single customer faces, every single enterprise faces and it's one that's become especially acute as those digital natives increasingly hunt down and take out some of those traditional businesses that are trying to better understand how to use their data. Now, as we have with all digital community events, at the end of this one, we're gonna be running a crowd chat, so stay with us. We'll go through a couple of Datrium and Datrium customer conversations and then it'll be your turn to weigh in on what you think is important, ask the questions of Datrium and others in the community that you think need to be addressed. Let's hear what you have to say about this increasingly special relationship between data, technology and storage services. So, without further ado, let's get it kicked off. Tim Page is the CEO of Datrium. Tim, welcome to theCUBE. >> Thank you, Peter. >> So, Datrium, give us a quick take on where you guys are. >> Yeah, Datrium's formulated as a software defined converged infrastructure company that takes that convergence to the next level, and the purpose of us is to give the user the same experience whether you're working on-prem or across multicloud. >> Great, so let's start by saying that's the vision, but you've been talking to a lot of customers. What's the problem that you keep hearing over and over that you're pointing towards? 
Yeah, it's funny, meeting with a number of CIOs over the years and specifically as related to Datrium, they'll tell you we're in an on-demand economy that expects instant outcomes, which means you have to digitally transform, and to do that, you've gotta transform IT, which means it's gotta be easy, it's gotta be consistent. You've gotta get rid of a lot of the management issues and it's gotta feel like, and take advantage of, the services that the cloud has to offer. >> All right, so that's the nature of the problem. You've also done a fair amount of research looking into the specifics of what they're asking for. Give us some insight into what Datrium's discovering as you talk to customers about what the solutions are gonna look like. >> It's interesting, if you look at how to resolve that, you've gotta converge to transform in some form or fashion. If you look at the first level of convergence a lot of people have done, it's been directly as it relates to hardware architecture. We've taken that to a whole new level, to a point where we're saying, how do you actually automate those mundane tasks that take multiple groups to solve? Specifically, primary, backup, disaster recovery, all the policies involved in that. There's a lot of work that goes into that across multiple groups, and we set out to solve those issues.
Specifically, we asked how important it is to have your platform include built-in backup and policy services with encryption built in, et cetera, and 70% of those interviewed said it's really important for that to be part of a platform. >> Now, it sounds like you're really talking about something more than just a couple of products. You're really talking about forcing customers, or you're not forcing, but customers are starting the process of rethinking their data infrastructure. Have I got that right? >> That's right. If you look at how infrastructure's grown over the last 20 years, 20 years ago, SAN technology was related and every time you threw up an app, you had to put different policies to that app, or different LUN-type management for how much of my resources can go to certain things. We set out to actually automate that, which is why it took us four years to build this platform with 100 programmers: how do we actually make it so you don't have to think about how you're gonna back up? How do you set a policy and know disaster recovery is gonna run? And to do that, you gotta have it in one code base. And we know we're on to something, even based on our survey, because the old array vendors are all buying bolt-ons, because they know users want an experience, but you can't have that experience with a bolt-on. You have to have it in your fundamental platform. >> Well, let me step in here. I've been around for a long time, Tim, and heard a lot of people talk about platforms, and if I have one rule, companies that introduce platforms that just expand typically fail. Companies that bring an opinion and converge more things so it's simpler tend to be more successful. Which direction is Datrium going? >> Yeah, definitely, that's why we took time. If you wanna be an enterprise class company, you can't build a cheap platform in 18 months and hit the market, 'cause where you architect, you stay.
Our purpose from the beginning was to spend four years building an enterprise platform that did away with a lot of the mundane tasks: SAN management, that's 20-year-old technology, LUN management. If you're buying your multi-cloud type technology experience in cages, you're just buying old stuff. We took an approach saying we want that consistent experience, whether you're running your services on prem or in any type of cloud, you could instantly take advantage of that and it feels the same. That's a big task 'cause you're looking to run the speed of storage with the resiliency of backup, which is a whole different type of technology, which is how our founders, who have built the first version of this, went to the second and almost third version of that type of instantiation of a platform. >> All right, so we know what the solution's gonna look like. It's gonna look like a data platform that's rethought to support the needs of data assets and introduces a set of converged services that really focus the value proposition to what the enterprise needs. So, what are you guys announcing? >> That's exactly right. So, we've finalized what we call our AutoMatrix platform. AutoMatrix inherently will have primary storage, backup, a disaster recovery solution, all the policies within that, and encryption built in from the very beginning. To have those five things, we believe, to actually have the next generation experience across true multicloud, you're not bolting on hardware technologies, you're bolting on software technologies that operate in the same manner. Those five things have to be inherent or you're a bolt-on type company. >> So, you're not building a platform out by acquisition. You're building a platform out by architecture and development. >> That's right, and we took four years to do it with 100 guys building this thing out. It's released, it's out and it's ready to go.
So the first thing we're announcing is that first instantiation of that: a product we're calling Control Shift, which is really a data mobility orchestrator, true SaaS based. You can orchestrate prem to prem, prem to cloud, cloud to cloud, and our first iteration of that is disaster recovery. So, truly, to be able to set up your policies, check those policies and make sure you're gonna have true disaster recovery with an RTO of zero. It's a tough thing. We've done it. >> That's outstanding. Great to hear, Tim Page, CEO of Datrium, talking about some of the announcements that we're gonna hear more about in a second. Let's now turn our attention to a short video. Let's hear more about it. (light music) >> Lead Bank is focused on small businesses and helping them achieve their success. We went through and redesigned the customer engagement in defining the bank of the future. This office is our first implementation of that concept. As you can see, it's a much more open floor plan design that increases the interaction between our Lead Bank associates and our clients. With Datrium's split provisioning, all of our data is now on the host. So, we have seen 80 times lower application latency. This gives our associates instant responses to their queries, so they can answer client questions in real-time. Downtime is always expensive in our business. In the past, we had a 48 hour recovery plan, but with Datrium, we were able to far exceed that plan. We've been able to recover systems in minutes now. Instead of backing up once per day, with that backup time taking 18 hours, now we're doing full system snapshots hourly and we're replicating those offsite. Datrium is the only vendor I know of that can provide this end-to-end encryption. So, any cyber attacks that get into our system are neutralized. With the Datrium solution, we don't have to have storage consultants anymore. We don't have to be storage experts.
We're able to manage everything from a storage perspective through vCenter, obviously spending less time and money on infrastructure. We continue to leverage new technologies to improve application performance and lower costs. We also wanna automate our DR failover, so we're looking forward to implementing Datrium's product that'll allow us to orchestrate and automate our DR failover process. (light music) >> It is always great to hear from a customer. Once again, I'm Peter Burris, and this is a CUBE Conversation, part of a digital community event sponsored by Datrium. We've been talking about the mismatch between new digital business outcomes, which are highly dependent upon data, and the technology available to support those new classes of outcomes. It's causing problems in so many different enterprises. So, let's dig a little bit more deeply into some of Datrium's announcements to try to find ways to close those gaps. We've got Sazzala Reddy, who's the CTO of Datrium with us today. Sazzala, welcome to theCUBE. >> Hey Peter, good to see you again. >> So, AutoMatrix, give us a little bit more detail and how it's creating value for customers. >> Yeah, if you go to any data center today, you notice that for the amount of data they have, they have five different vendors and five different products to manage that data. There is the primary storage, there is the backup and there is the DR, and then there's mobility, and then there is the security you have to think about. So, these five different products are causing friction for you. If you wanna be in the on-demand economy and move fast in your business, these things are causing friction. You cannot move that fast. What we have done is we took a step back and we built this AutoMatrix platform. It has these data services, which are gonna provide autonomous data services. The idea is that you don't have to do much for it.
By converging all these functions into one simple platform, we remove all the friction you need to manage all your data, and that's what we call the AutoMatrix platform. >> As a consequence, I gotta believe then, your customers are discovering that not only is it super easy to use, perhaps a little bit less expertise required, but they also are more likely to be operationally successful with some of the core functions, like DR, that they have to work with. >> Yeah, so the other thing about these five different functions and products you need is that if you wanna imagine a future where you're gonna leverage the cloud for a simple thing like DR, for example, the thing is that if you wanna move this data to a different place, with five different products, how does it move? 'Cause all these five products must move together to some other place. That's not how it's gonna operate for you. So, by having these five different functions converged into one platform, when the data moves to any other place, the functions move with it, giving you the same exact consistent view of your data. That's what we have built, and on top of all this stuff we have these global data management applications to control all the data you have in your enterprise. >> So, how are customers responding to this new architecture of AutoMatrix, converged services and a platform for building data applications? >> Yeah, so our customers consistently tell us one simple thing: it's the easiest platform they ever used in their entire enterprise life. So, that's what we aimed for, simplicity of the customer experience. Autonomous data services give you exactly that experience. So, as an example, last quarter, we had about 40 proof of concepts out in the field. Out of them, about 30 have adopted it already and we're waiting on the other 10 for results to come out this quarter.
So, generally we found that our proof of concepts don't come back, because once you touch it, you experience the simplicity of it and how you get all this service and support, then people don't tend to send it back. They like to keep it and operate it that way. >> So, you mentioned earlier and I summarized the notion of applications, data services applications. Tell us a little bit about those and how they relate to AutoMatrix. >> Right, so once you have data in multiple places, people are adopting multi-cloud, and we are going to also be in all these different clouds and provide that uniform experience, you need these global data management applications to extract value out of your data, and that's the reason why we built some global data management applications as SaaS products. Nothing to install, nothing to manage; they sit outside and they help you manage globally all the data you have. >> So, as a result, the I&O people, the infrastructure and operations administrators, do things in terms of AutoMatrix's platform, and the rest of the business can look at it in terms of services and applications that you're using in support. >> That's exactly right, so you get the single dashboard to manage all the data you have in your enterprise. >> Now, I know you're introducing some of these applications today. Can you give us a little peek into those? >> Yeah, firstly, our AutoMatrix platform is available on prem as a software defined converged infrastructure, and you can get that. We call it DVX. And then we also offer our services in the cloud. It's called Cloud DVX. You can get these. And we're also announcing the release of Control Shift. It's one of our first data management applications, which helps you manage data in two different locations. >> So, go into a little bit more detail on Control Shift. Specifically, which of those five data services you talk about is Control Shift most clearly associated with?
>> Right, so to go back again to this question about having five different services: you have to think about DR. DR is a necessity for every business. It's digital protection, you need it. But there are three or four challenges you generally run into. One is that you have to have a proper plan, and it's challenging to plan something. Then you have to think about the fire drill you have to run when there's a problem. And lastly, when you eventually push the button to fail over, does it really work for you? How fast is it gonna come up? Those are three problems we wanted to solve really solidly, so we call our DR services failproof DR. It actually takes a little courage to say failproof. ControlShift is our service which does this DR orchestration. It does mobility across two different places. It could be on-prem to on-prem, or on-prem to the cloud, and because we have these end-to-end data services ourselves, it's easy to do compliance checks all the time. So, we do compliance checks every few minutes. That gives you the confidence that your DR plan is gonna work for you when you need it. Secondly, when you push the button, because we have both primary storage and backup, it's easy to bring up all your services at once. And the last one is that because we're able to work across the clouds and provide a seamless experience, when you've moved the data to the cloud with some backups there and you push the button to fail over, we'll bring up your services in VMware Cloud. The idea is that it looks exactly the same no matter where you are, in DR or not in DR. Watch the video, watch some demos; I think you'll see that you can't tell the difference.
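The continuous compliance checking described above can be sketched in a few lines. Everything here, the function names, the plan structure, the `cloud-dvx` target name, is invented for illustration and is not Datrium's actual API; it just shows the kind of drift a periodic check catches.

```python
# Hypothetical sketch of continuous DR-plan compliance checks;
# the plan format, names and thresholds are all invented.

def check_plan_compliance(plan, snapshots, targets, max_age_s=1800):
    """Return a list of problems found in a DR plan.

    plan: list of VM names the plan promises to protect
    snapshots: dict of VM name -> age (seconds) of its newest snapshot
    targets: set of failover sites currently reachable
    """
    problems = []
    for vm in plan:
        age = snapshots.get(vm)
        if age is None:
            problems.append(f"{vm}: no snapshot exists")
        elif age > max_age_s:
            problems.append(f"{vm}: newest snapshot is {age}s old")
    if "cloud-dvx" not in targets:
        problems.append("failover target unreachable")
    return problems

# A check run every few minutes would flag drift like this:
issues = check_plan_compliance(
    plan=["db01", "web01"],
    snapshots={"db01": 300},      # web01 lost its snapshot schedule
    targets={"cloud-dvx"},
)
print(issues)                     # ['web01: no snapshot exists']
```

The point of running this on a schedule rather than at failover time is exactly the one made above: the problem is caught months before anyone pushes the button.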
>> Well, that's great. So give us a little visibility into how Datrium intends to extend these capabilities; give us a little visibility into your road map. What's up next? >> We are already on Amazon with the cloud. The next thing we're gonna deliver is Azure, that's the next step. But if you step back a little bit, how do we think about ourselves? If you look at Google as an example, Google federates all the internet data and, with instant search, provides that instant click and access to all the data at your fingertips. We wanna do something similar for enterprise data. How do we federate, how do we aggregate data and give the customer that instant management of all the data they have? How do you extract value from the data? This set of applications is building toward that. Some examples: we're building deep search. How do you find the things you want to find in a very nice, intuitive way? How do you do compliance, GDPR, and how do you think about deep analytics on your data? We also wanna extend ControlShift not just to manage the data on our platform, but to manage data across different platforms. Those are the kinds of things we're thinking about for the future. >> Excellent stuff. Sazzala Reddy, CTO of Datrium, thanks so much for talking with us about AutoMatrix, ControlShift and the direction that you're taking with this. Very, very interesting new vision about how data and business can more easily be brought together. You know, I'll tell you what, let's take a look at a demo. Hi and welcome back to another CUBE Conversation. Once again, I'm Peter Burris, and one of the biggest challenges that every user faces is how to get more out of their technology suppliers, especially during periods of significant transformation. So, to have that conversation, we've got Bryan Bond, who is Director of IT Infrastructure at eMeter, A Siemens Business. Bryan, welcome to theCUBE.
>> Thanks for having me. >> So, tell us a little bit about eMeter and what you do there. >> So, eMeter is a developer and supplier of smart grid infrastructure software for enterprise-level clients: utilities, water, power, energy. My team is charged with managing infrastructure for that entire business unit, everything from dev, test, QA and sales. >> Well, intelligent infrastructure as it pertains to the electric grid, that's not a small set of applications or a small set of use cases. What kinds of pressure is that putting on your IT infrastructure? >> A lot of it is the typical pressure you would see: do more with less, do more faster. But a lot of it is wrapped around our customers and our other end users needing more storage, needing more app performance and needing things delivered faster. On a daily basis, things change, and keeping up with the Joneses gets harder and harder to do as time moves on. >> So, as you think about Datrium's AutoMatrix, how is it creating value for you today? Give us a peek into what it's doing to alleviate some of these scaling and other sorts of pressures. >> So, the first thing it does is allow us to do a lot more with less. We get two times the performance, five times the capacity, and we spend zero time managing our storage infrastructure. And when I say zero time, I mean zero time. We do not manage storage anymore with the Datrium product. We can deploy things faster, we can recover things faster. Our RTO and RPO metrics are down to seconds instead of minutes or hours. And those types of things really allow us to provide a much better level of service to our customers. >> And especially for infrastructure like the electric grid, it's good to hear that the RTO and RPO are getting as close to zero as possible, but that's the baseline today.
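Since RTO and RPO come up repeatedly in this conversation, a tiny illustrative sketch of what each metric actually measures may help; the timestamps below are made up, not eMeter's numbers.

```python
# Illustrative only: RPO is how much recent data you can lose (time
# back to the last usable recovery point); RTO is how long it takes
# to get the service running again after a failure.

def rpo_seconds(failure_time, last_snapshot_time):
    """Worst-case data-loss window for this failure."""
    return failure_time - last_snapshot_time

def rto_seconds(service_restored_time, failure_time):
    """Outage duration from failure to restored service."""
    return service_restored_time - failure_time

# Timestamps in seconds since epoch (invented for the example)
failure, last_snap, restored = 1_000_000, 999_940, 1_000_030
print(rpo_seconds(failure, last_snap))   # 60 (one-minute RPO)
print(rto_seconds(restored, failure))    # 30 (thirty-second RTO)
```

"Down to seconds" in the interview means both of these numbers shrink: snapshots are frequent enough that the loss window is tiny, and failover is fast enough that the outage is too.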
Looking out, as you envision where the needs of these technologies are going, improving protection, consolidating and converging data services, and overall providing a better experience for how a business uses data, how do you anticipate you're going to evolve your use of AutoMatrix and related Datrium technologies? >> Well, we fully intend to expand our use of the existing piece that we have, but this new AutoMatrix piece is going to help us not just with deployments; it's also gonna help us with compliance testing, data recovery and disaster recovery, and with being able to deploy into any type of cloud or any type of location without having to change what we do on the back end, being able to use one tool across the entire set of infrastructure that we're using. >> So, what about the tool set? You're using the whole thing consistently, but what part of the tool set went in easiest for you within your shop? >> Installing the infrastructure pieces themselves, in their entirety, was very, very easy. So, fitting that into what we had already and where we were headed was very, very simple. We were able to do that on the fly, in production, without having to make a whole lot of changes to the environments we were running at the time. The operational pieces within the DVX, which is the storage part of the platform, were seamless as far as vCenter and the other tools we were using went, and allowed us to just extend what we were doing already and apply it as we went forward. And we immediately found that, again, we just didn't manage storage anymore. That wasn't something we were expecting, and it made our ROI just go through the roof. >> So, it sounds like time to value for the platform was very quick, and it fit into your overall operational practices. You didn't have to do a whole bunch of unnatural acts to get there. >> Right, we did not have to change a lot of policies, we did not have to change a lot of procedures.
A lot of times, we just shortened them; we took a few steps out in a lot of cases. >> So, how is being able to do things like that changing your conversation with the communities you're serving as they ask for more capabilities? >> First off, it's making me say no a lot less, and that makes them very, very happy. The answer usually is yes, and the answer to the question of how long will it take changes from, oh, we can get that done in a couple of days, or, oh, we can get that done in a couple of hours, to, I did that while I was sitting here in the meeting with you, it's been handled, and you're off to the races. >> So, it sounds like you're placing a pretty big bet on Datrium. What's it like working with them as a company? >> It's been a great experience. From the start, in the initial conversations and going through the POC process, they were very helpful, with very knowledgeable SEs, and since then, they've been very helpful in letting us tell them what our needs are rather than telling us what our needs are, and in working through the new processes and procedures within our own environments. They've been very instrumental in performance testing and deployment testing, things that a lot of other storage providers didn't have any interest in talking with us about, so they've been very helpful and very knowledgeable with that. The people there are actually really smart, which is not surprising, but the fact that they can translate that into solutions for my actual problems and give me something I can push forward onto my business, with a positive impact from day one, has been without question one of the better things. >> Well, that's always one of the biggest challenges when working with a company that's just getting going: how do you get the smarts of that organization into the business outcomes and really succeed? It sounds like it's working well.
>> Absolutely. >> All right, Bryan Bond, Director of IT Infrastructure at eMeter, A Siemens Business. Thanks again for being on theCUBE. >> Bryan: It's been great. >> And once again, this has been a CUBE Conversation. Now, don't forget, this is your opportunity to participate in the crowd chat immediately after this video ends, so let's hear your thoughts. What's important in your world as you think about new classes of data platforms, new roles for data, and new approaches to taking greater advantage of the data assets that are differentiating your business? Have those conversations, make those comments, ask those questions. We're here to help. Once again, Peter Burris, let's crowd chat. (light music)
Tushar Agrawal & Sazzala Reddy, Datrium | CUBEConversation, July 2018
(inspirational music) >> Hi everybody, this is Dave Vellante, from our Palo Alto Cube studios. Welcome to this Cube Conversation with two gentlemen from Datrium. Tushar Agarwal is the Director of Product Management, and Sazzala Reddy is the CTO and co-founder of Datrium. We're going to talk about disaster recovery. Disaster recovery has been a nagging problem for organizations and IT organizations for years. It's complex, it's expensive, it's not necessarily reliable, and it's very risky to test, and Datrium has announced a product called CloudShift. Now, Datrium is a company that creates sets of data services for any cloud, and last year introduced backup and archiving on AWS. We've written about that, we've profiled that. Gentlemen, welcome to the Cube. >> Good to see you. >> Thank you. >> Good to be here. >> Thank you (mumbles). >> So tell us about CloudShift. >> Yeah, sure, great. So if you kind of step back and look at our journey, starting with Cloud DVX, which we announced last year, our end goal has been to simplify infrastructure for customers and eliminate any excess infrastructure they need. It started with Cloud DVX, which addressed the backup part of it, where customers no longer need to keep a dedicated off-site backup, and extends with CloudShift, which now brings it to a DR context and makes the economics so phenomenal that they don't need to keep a DR site anymore just waiting for a disaster to happen. So, CloudShift is the beginning of a multi-year journey where we bring the ability to do workload mobility orchestration from an on-premises DVX system to a DVX running in the cloud, leveraging Cloud DVX backups, so that customers can do just-in-time DR. >> Sazzala, I talked earlier about some of the problems with DR; let's talk about what you see. I mean, I've talked to customers who've set up three sites, put in a fireproof box, I mean all kinds of really difficult challenges and solutions.
What are you seeing in terms of some of the problems and challenges that customers are facing, and how are you addressing this? >> Yeah, so like you said, I don't think I've heard anybody saying my DR plan is awesome. (Laughing) Or it works, or I'm enjoying this thing. It's a very fearful situation, because when things go down, that's when everyone is watching you, and that's when the fear comes in, right? So, we built our CloudShift service to be very easy to use, firstly; that's step one. And then the other goals: if you click a button, you want to just (mumbles) to some new place, right? But to make that really work well, what would the customer think about? If I were a customer, I'd want the same experience no matter where I moved, right? It has to be seamless: I don't have to change my tool sets, I have the same operational consistency. That's goal number one. And number two is, does it really work when I click the button? Is it going to work? If you go to Amazon, it'll convert VMs; that's a completely different experience, right? So how do you make that experience truly foolproof, so it will fundamentally work? We've done a lot of things, like no conversion of VMs. The second one is that we have built-in compliance checks. Every half an hour it checks itself to see that the whole plan is compliant, so when there actually is a problem, the compliance checks have already caught the issues beforehand. And the third one is that you can do scheduled testing. You can set up schedules and say, you know what, test it every month for me, and it gives you a report saying okay, it's all looking good for you. Those are the kinds of things you do to make sure it's going to be foolproof, guaranteed DR success when you eventually have to hit the button. >> Yeah and just to add to that.
I think, if you look at a DR equation for a customer, it's really two things. I'm paying a lot for it; what can I do to address that problem? And will it work when I need it to work, right? I think it's really fundamentally those two problems. And cloud gives us a great way to address the cost equation, because now you've got an infrastructure that can be truly on-demand. So you don't really keep those resources running unless you have to, unless you have a test event or the actual DR event. On the will-it-work-when-I-want-it-to-work side, cloud has typically had a lot of challenges, as Sazzala outlined, right? You have VMs that are going from a VMware infrastructure to an Amazon infrastructure, which means those virtual machines now need to be running in a different format. You don't have a simple, unified interface to manage those two environments, where you have an Amazon console on one end and a VMware vCenter on the other. And then thirdly, you have this data mobility problem, where you don't have the data going across a consistent, common architecture. And so we solve all these problems collectively by making DR just-in-time, because we only spin up resources when they need to be there in the cloud. There is no VM conversion, because we are building this leveraging the benefits of VMware Cloud on AWS. There is a common single pane of glass to manage this infrastructure. And there is a tremendous amount of speed in data mobility, and a tremendous amount of economics in the way we store that data in a de-duplicated, compressed way all the time. So it checks off the cost equation, and it checks off the fact that it actually works when it needs to work. >> So, let's unpack that a little bit. So normally what I would have is a remote site, and that site has resources there.
It's got hardware and software and a building and infrastructure, hopefully far enough away from an earthquake zone or a hurricane or whatever it is, and it sits there as an underutilized asset. Now maybe there are some other things I can do with it, but if it's my DR site, it's just sitting there as insurance. >> Right. >> That's one problem. >> The other problem is testing. DR testing is oftentimes very risky. A lot of customers we talk to don't want to test, because they might fail over, and then they go to fail back and oops, there's a problem. And what am I going to do? Am I going to stop running my business? So maybe talk about how you address some of those challenges. >> So I think yes, that's true. We've heard of people spending half a million dollars testing DR and never being able to come back from it. That's a lot of money and a lot of (mumbles), and not being able to come back is a completely different business problem. So, more than just having the DR site, there's the expense and maintenance, but the other problem is that when you add something, new workloads, you have to add more. Things keep changing; it begets new licenses and more and more things. So all of this is a fundamental problem, but if you go to the cloud, the just-in-time, on-demand thing is amazing, because you're only paying for the backups, which you need anyway. You can't afford to lose data, so you need backups. You fundamentally need backups to be at another site, because if ransomware hits you, you need to be able to go back in time, so you need deep copies to be in another place. And so the thing about just-in-time DR is that you pay for the backups, sure, and it's very cost-effective with us, but you only pay for the services running your applications for the two weeks you have a problem, and when you're done with it, you're done paying for it.
So it's the difference between paying every day versus paying for insurance, and sometimes insurance pays off for exactly those kinds of events. It's very cost-effective. >> Okay, so I'm paying Datrium for the service. Okay, I get that. And I'm paying a little bit, let's say, for instance, if it's running on Amazon, a little bit for S3, and I'm only paying for the EC2 resources when I'm using them. (crosstalk) It's like serverless for DR. >> It actually goes beyond that, Dave, right? >> Actually I like that word you used. You should probably use that. >> Absolutely, because it's not just the EC2 part. If you look at the total cost of ownership equation of a data center, you're looking at networking, software, compute, people managing that infrastructure all the time, power and cooling. So by having this just-in-time data center that gets spun up where you have to do nothing, literally, you just have to click a button, that saves a tremendous amount. That's a transformational economics situation right there, where you can simply eliminate a lot of time, energy and cost that customers pay and have to deal with just to keep that DR site running. >> Mm hm. >> Let me give one more savings note. Let's say you had 100 terabytes and you failed over; when you're done with two weeks' testing, only one terabyte has changed. Are you going to bring back everything, or only that one terabyte? It's a fundamental underlying technology thing. If you don't have dedupe over the wire, you'll bring back everything, all 100 terabytes. You're going to pay for the egress cost, and ultimately it'll be too slow for you to bring it all back. So what you really want is underlying technology which has dedupe over the wire. We call it global dedupe: you can move back only what's changed, and it's fast.
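The dedupe-over-the-wire idea just described, send only the chunks the other side doesn't already hold, can be sketched with content hashes. This is a drastic simplification (fixed-size chunks, in-memory stores), not Datrium's implementation, but it shows why a failback with one changed terabyte needn't move all one hundred.

```python
import hashlib

# Simplified sketch of dedupe over the wire: fingerprint fixed-size
# chunks, then transmit only the chunks whose fingerprints the
# destination doesn't already hold. Real systems use variable-size
# chunking and persistent indexes; this is just the core idea.

CHUNK = 4  # absurdly small chunk size, purely for the demo

def fingerprints(data):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def chunks_to_send(data, dest_store):
    """Return only the (hash, chunk) pairs the destination lacks."""
    return [(h, c) for h, c in fingerprints(data) if h not in dest_store]

# The destination already holds yesterday's copy of the data...
dest = {h: c for h, c in fingerprints(b"AAAABBBBCCCC")}

# ...so failing back today's copy moves one chunk, not three.
wire = chunks_to_send(b"AAAABBBBDDDD", dest)
print(len(wire))   # 1
```

Scale the same idea up and only the changed terabyte crosses the WAN, which is what makes the failback time and egress bill tractable.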
One terabyte moving there is not that bad, right? Otherwise you'd end up moving everything back, which is kind of untenable again. So you have to make all these things happen to make DR really successful in the cloud. >> So you're attacking the latency issues. >> Latency, and basically 100 terabytes moving from one place to the other will take a long time, because the WAN pipe is only so big and you're paying for the egress cost. >> We always joke the smartest people in Silicon Valley are working on solving the speed-of-light problem. >> That's right. So if you look at data, if you're going to move it from one place to the other, first of all, data has gravity; it doesn't want to move, right? So that's one fundamental problem. So how do you build an antigravity device to actually fix that problem? If you leap forward, global dedupe is here, where you can transfer only what's changed to the other side. That really defeats light speed, right? And that works both ways, moving it here and moving it there. Without this WAN deduplication technology, I think you'll be paying a significant amount of time and money, and then it becomes untenable. If you can't really move it fast, then people just don't do it anymore. >> And in the typical Datrium fashion, it's just there. It just works. (crosstalk) >> I think that's such a good point, Dave, because if you look at traditional DR solutions today, the challenge is that they're a collection of software, services and hardware from multiple vendors. And that's not such a bad thing in itself. The challenge it causes is that you don't have the ability to do an end-to-end, closed-loop verification of your DR plan.
You know, the DR orchestration software does not know whether the VM that I'm supposed to protect actually has a snapshot on the storage array that's protecting it, and so that, in many ways, leads to a lot of risk for customers, and it makes the DR plans very fragile: you set a plan on day one, and then, say, three months down the line, something gets changed in the system, and that isn't caught by the DR orchestration software because it's unlinked; it doesn't have the same visibility into the actual storage system. The advantage we get with the integrated, built-in backup and DR system is that we can actually verify that the virtual machine you're supposed to protect has all the key ingredients needed for a successful DR, across the stack as well as on the target failover site. >> It's kind of the perfect use case, a perfect use case for the cloud, and I think there's something even more here: because of the complexity of the IT infrastructure around DR, the change management challenges you talked about, the facilities management challenges, all of a sudden an organization finds they're in the DR business, and they don't want to be in the DR business. (crosstalk) >> It shows no value. I mean, it's not really adding anything significant; it's not improving the organization. >> That's actually true, and I think the way we've tried to tackle that problem, Dave, goes back to the whole premise of these multi-cloud data services. We make DR as simple as possible, and what we really enable them to do is not have to worry about installing, upgrading or managing any software. It's a service that they can just enter their DR plans into. It's very intelligent, because it's integrated very well with the DVX system. And they can schedule testing.
They don't even have to click a button to do a planned failover, and in case of an actual event, it's just a single click. It's continuously checked all the time, so you take away a lot of the hassles, the worry and the risks and make it truly simple; you give them a (mumbles) software-as-a-service experience. >> So I'm kind of racking my brain here. Is there anything out there like this that provides an on-demand DR SaaS? >> I don't know of any, actually. >> Yeah, I think if you kind of look at the landscape, Sazzala is right, there actually is none. There are a few solutions from leading providers that focus on instantiating a virtual machine on native AWS, but they don't address the challenge that they have to convert a VMware virtual machine to an Amazon AMI, and that doesn't always work. Secondly, if you run into that kind of problem, can you really call it true DR? Because in case of a DR, you want that virtual machine to come up, run and be a valid environment, as against just a test use case. >> So the other one is that backup vendors can't do this. Traditionally they probably could, but they are one day behind; they back up once a day, so you can't do DR if you are one day behind. DR wants to be, okay, I'm five minutes behind, I can recover my stuff, right? And then primary vendors, like Pure for example, the all-flash vendors, are focused on just running it, not on backup, but you need the backups to actually make it successful, so that you can go back in time if you have ransomware. So you need a combination of both primary and backup, and the ability to have it running as a service in the cloud. That's why you need all these pieces to work together. >> So you talked about ransomware a couple of times. Obviously DR, ransomware; maybe talk a little bit more about some of the other use cases beyond DR.
>> So I think that kind of goes back to why we decided to name this feature CloudShift, right? If you think about a traditional DR solution, you would call it something like DR Orchestrator, but that's not really the full vision for this product. DR is one of the very important use cases, and we talked about how we do that phenomenally better than other solutions out there, but what this solution really enables customers to do is look at true workload mobility between on-prem and cloud, and at interesting use cases such as ransomware protection. And the reason we are so great at ransomware protection is that we are integrated primary and backup from a restore-point perspective, and in a ransomware situation, you can't just go back to a restore point from a day or two before. You really want to go back to as many points as you need, and because we have this very efficient way of storing these restore points, or snapshots, in Cloud DVX, you have the ability to instantiate or run a backup from sufficiently long ago, which gives you a great amount of ransomware protection, and it's completely isolated from your on-prem copy of that data. >> Let me add one more point to that. If you go beyond the DR case, from a developer perspective, from a company perspective, developers want a flexible infrastructure to try new stuff, new experiments in terms of building new applications for the business. They can try it in the cloud with our platform, and when they're done, after three months, say, they've figured out okay, this is how it's going to work, this is how much (mumbles) I need; it's more elastic there. When they're done testing whatever they built, they can click a button with our CloudShift and move it all back on-prem, and now you have it more secure and in the environment you want.
>> Alright, guys, love to see the evolution of your data services, you know, from backup, now DR, other use cases. Congratulations on CloudShift and thanks for explaining it to us. >> Thank you very much. >> Pleasure being here. >> Okay, thanks for watching, everybody. This is Dave Vellante from our Palo Alto Cube studios. We'll see you next time. (inspiring music)
Craig Nunes, Datrium & James Stock | Dell Technologies World 2018
>> Narrator: Live from Las Vegas, it's theCube. Covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. (light music) >> Welcome back to Las Vegas, everybody, you're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my co-host Keith Townsend. Craig Nunes is here, he's the CMO of Datrium. >> Yeah. >> Dave: Longtime CUBE alum, it's great to see you again. >> Great to be back, awesome. >> Dave: And James Stock is a Datrium customer, he's the Vice President of IT at Grow Financial. James, welcome, first time on theCUBE, looking good man. >> It is, yes, thank you very much. >> All right, Craig, Datrium-- >> Yeah. >> You guys are smoking hot, changing the storage world, give us the quick update, we'll get into it. >> Look, we are filling a huge gap, bigger, I think, than we had imagined. Because, a lot of, it's no secret, the array market is in decline. And hyperconverged has tried to reinvent that market. And it has to a degree on the low end, VDI, that kind of stuff. But data centers need an answer that scales. They need an answer that's got resilience. And it turns out, after all these years, backup is still a problem. Figuring out the cloud is still a problem. And so we put together a system that really takes a tier one approach to HCI, a full on scale-out backup system and a cloud DR approach built into one converged system. And customers love it. From cloud to backup to performance in primary, it's been an awesome reception. >> Well, let's see if they really love it, I guess. So James, first of all, so let's start with Grow Financial, your role, you heard the pitch, and then we'll get into how you're applying it to new business. But, tell us about your company. >> So we started in 1955 in a broom closet in MacDill Air Force Base's headquarters, there in Tampa. And over the years, we've grown. We're now at $2.4 billion in assets.
We have over 200,000 members, and we do lending throughout the southeastern United States. Offices in Tampa, and in South Carolina. >> So in your role, head of IT-- >> Basically, what I tell people, is that if it plugs in, I'm responsible for it. >> (laughs) okay. All right, so, take us through the Datrium project, before and after, what was the motivation? >> So, really, the issue that we were running into is that our existing storage solution, which was the Dell SE, was that our trays were running end of life, and if we only had a couple of them, it probably wouldn't have been a problem. We might not have even entertained it, but we had probably two dozen. So, we started looking around and said, "all right, "well, what does it cost to replace what we've got? "and what else is on the market?". And we started to find out that just replacing what we had, like for like, was going to cost almost 200 grand more than what our full Datrium replacement cost. So, it started making financial sense, right away. But, we met up with Datrium probably, might've been summer of 2016, when they were on version one. And it looked good, you could see the promise, the whole idea of having that backend storage, that was really intriguing, because none of the other players had anything like that at the time. And we said, "All right, we're not ready." And then when they came back out in May of last year, whoa, the difference in what they've done in such a short period of time is what really kind of blew us away. >> Okay, but, we're here at Dell Technologies World, where you guys are a partner of Dell's, right? So you're using Dell servers, right? >> James: Yep. >> That's part of the deal here, so, they let you in.
They let us in, in fact, our compute nodes, it's no secret, are Dell-branded compute nodes, and in fact we have partnered with Dell in one of their data centers to set a world record IOmark on Dell here, just to prove a lot of the performance specs that we've shared in the market, proved it out. And we've proved it out on Dell here. >> Cool, so James, talk to me a little bit about your perception of open convergence. Because I've talked to Craig about open convergence versus hyperconvergence versus converged infrastructure; at the end of the day, you just want a reliable, fast system, however, what about the open convergence story drew you in? >> So, I didn't have to replace any of the nodes I had, if I really didn't want to. So I've got Cisco nodes around my call center, I've got Dell nodes, I've got Datrium nodes now. But at the time, it wouldn't have mattered. I could've just, like, in my Cisco environment, I actually had to add a RAID controller to the UCS box and then I could throw any solid state drives that I wanted into the device. So that was where it really got compelling, and I'm like wait a minute, so you're telling me, I don't have to buy enterprise flash drives, and stick these into each of my servers. I could just go down to Best Buy, or wherever local, grab something off the shelf, and throw it in there, as long as the server supported it? And, okay, where do I sign up? >> So we've heard that story, and one of the things that some of the hyperconverged infrastructure players say is, you know what, we could do that, but it's almost impossible to support. Because of firmware issues, et cetera, et cetera. Did you guys run into any of those issues? >> Nope, that's been the greatest thing. When we first started to do our reference calls, it was like everybody I talked to, I said, well, where's the catch? >> Keith: Right. Because that really seemed too good to be true.
And customer after customer that I called, they said, "we ran into it with our backups." They finished in a third of the time. I said, "how is that even possible?", and so we didn't believe it either. We actually had to go back and check because some of our backup jobs finished so fast, we thought it was an error or something like that. They were fine, it was just, you're backing up from flash now, instead of backing up from old spinning disks. >> Okay, so you put the system in, talk about the business impact. It sounds like there was some residual impact from the initial motivation? >> Right, right, so from the business impact, that's a tough story to sell. Because, really, where we saw it, it was on the backend. And the way our systems were before, there really wasn't a huge deal of impact on the business with our old system, until it came back to backup times. Now, where I will say that we still have reductions is, if I have to reboot a server today. Our call center application, before putting it on Datrium, took anywhere from 15 to 20 minutes to boot up. Well, 15 to 20 minutes while our call center's down is like an eternity. Now, that time's down to about five to seven minutes. So, like overnight, you've more than halved that time. And the same thing with web servers, or anything else that would be member facing, those times have been greatly reduced. So, if I do have to reboot something, because everybody knows it happens, it's sped up the process tremendously for us. >> And what's the secret sauce here? We're talking architecture, just sort of modern approach? Software design? >> So the secret sauce, if you will, is this split design that runs your workloads, especially read intensive workloads, on flash, on the host, with powerful software, Datrium software. All of your durable data does not live on those hosts; those hosts are not stateful, they can fail at any time, and you still have data availability.
So you've got that bulletproof availability, and on the back end, your data's kept secure, it is shared so we don't have any network traffic between hosts; your network doesn't blow up when you install, like it does with a hyperconverged approach. And that split provisioning, that split architecture is the breakthrough, and that's why we talk about beyond HCI, we took a good step there. The scale-out attributes, VM-centric admin, but then we really built in tier one capabilities, full on backup, and of course, we haven't talked about it, but access to AWS for offsite backups. >> So, James, let's talk about day two operations. What are the advantages of hyperconverged? There's this idea of, like, I'm one pane of glass. Like, firmware updates, I can streamline my operations. Do you guys see similar advantages, day two, versus your previous infrastructures? >> Yeah, I mean, one of the things that saves us a lot of time now is the fact that there's just one big pool of data out there, instead of having to provision LUNs. We were setting up our Exchange conversion, so we're building out four or five servers for that. Well, normally, that'd be about a two hour process, not that we were sitting there waiting the whole time, but, all right, we'll carve out some space in this one, twiddle your thumbs, go do something else. Come back, and maybe they'll be done. Well, now, that's like an instant process. So those sort of things are like, "wow, you know what, "I'm saving tons of time", just in admin experiences. In terms of pane of glass, it is a single pane of glass. One of the cool things that we've run into is every now and then, of course, we've got to do our disaster recovery testing; we're a financial institution. Well, Datrium's approach is really unique, and a problem that we used to have is, if I failed over to our DR facility, well, now I've got to bring that data back. Because if you fail over, it's not a problem, you've already seeded that data.
Well, it doesn't work the other way around. It does with Datrium. So with Datrium, when I go to bring that data back, it's now doing a differential copy back, so I'm not sitting there for days and days and days, waiting to finish my DR testing anymore. So, there's just so many different benefits that have just been great for us. >> I mean, that's huge, because a lot of times, organizations, they can't test DR, it's too risky, or they just don't have the time, or even the resources. >> James: Right. >> Did you have that problem beforehand? Or are you guys-- >> Well, yeah, because what you would run into is that it took so much to do it before, that I had to run my guys ragged for two or three weeks. I'm like, "All right, stay up overnight, make sure "it all copies" and then once it's copied, okay bring it back up. So, I mean, yeah, that was a challenge before that's not a problem anymore. >> Burning the team out, right. And/or missing your window. >> Well, and because of the way that it's architected with the production groups, I no longer need to use third-party recovery tools to do the transitions back and forth. I can do that, natively, inside their application. >> I always like to ask practitioners, if you had to do it all again, what would you do over. And it sounds like nothing, or what kind of advice would you give to your peers embarking on a similar journey? >> Do all of your reference calls. See it for yourself, I mean, I take quite a number of reference calls because people are in the same boat I was. Is it true, does it really work the way that you say it does? Yeah, it does. I'll screen share with them, if they want to see our numbers, I'll show them. >> All right, last word, what are we looking for? >> What are we looking for? >> Dave: Looking forward.
And take that in to full on disaster recovery, orchestration. And not in the too distant future, you'll get the whole run down, so stay tuned. >> Awesome, Craig, thanks for coming on. James, pleasure meeting you. >> Likewise, thank you. >> Good luck with everything. Thanks for hanging out with me. >> Always. >> All right, Keith, good job, good questions. All right, keep it right there everybody, we will be back with our next guest, right after this short break. You're watching theCUBE live, from Dell Technologies World 2018. We'll be right back. (light music)
Clint Wyckoff, Datrium | CUBEConversation, April 2018
(epic music) >> Hi, I'm Peter Burris, welcome to another Cube Conversation from our beautiful Palo Alto studios, and today we're here with Clinton Wyckoff, who is a senior global solutions engineer from Datrium. Welcome to the Cube, Clinton. >> Well thanks for having us Peter, it's great to be here. >> So Clint, there's a lot of things that we could talk about, but specifically some of the things that we want to talk about today relate to how cloud use, as it becomes more broad-based, is now becoming more complex. There are concerns that as we use more cloud, we still have off-premise systems. How do we then ensure that we get more work done, and that crucial role that automation and human beings are still going to play as we try to achieve our overall goals with data. So why don't you tell us a little bit about some of these themes of simplicity, scalability and reliability. >> Yeah definitely Peter. It's been a very interesting time over the last 12 months here at Datrium. We've been on a rapid release cycle. We actually released DVX 4.0 of our software just a few weeks ago, and maintaining focus around those three key talking points of simplicity, scalability and reliability, that's really what the Datrium DVX platform is all about, and it's about solving customer challenges that they have with their traditional on-premises workloads that they've virtualized, and we're also seeing an increase in customers trying to leverage the public cloud for several different use cases. So kind of the biggest takeaway from our perspective with relation to the latest release of our software is how can we integrate what the customers have grown to love on-premises with their Datrium DVX platform, and how can we integrate that into the public cloud. So our first endeavor into that area is with Cloud DVX, and that integrates directly into their existing AWS subscription that they have.
So now that they have on-premises Datrium running for all their mission-critical tier one systems, providing all the performance, cloud backup, all those capabilities that they've grown to love, how can I get my data off-site? That's been a huge challenge for customers. How can I get my data off-site in an efficient fashion? >> But in a way that doesn't look like an entirely new or completely independent set of activities associated with AWS. So talk to us a little bit about, you said something interesting. You said it integrates directly into AWS. What does that mean? >> Yes, we've taken a direct port of our software. So on-premises, customers run ESX hosts. In AWS terms, that translates into EC2 instances. So the first thing that we do is we instantiate an EC2 instance out in an AWS subscription. >> That means my billing, my management, my console, everything now is the same. >> Exactly, and then we're utilizing an S3 bucket to hold our cloud archive. So the first use case for Cloud DVX, in its current iteration, is for off-site archives of Datrium snapshots. I run VMs on-premises, I want to take a snapshot of these, maybe send them over to a secondary location, and then I want to get those off-site for more long-term archival purposes. S3 is a great target for that, and that's exactly what we're doing. So an existing customer can go into their Datrium console, say I want to add my AWS subscription, click next, next, next, finish, and it's literally that easy. We have automated Lambda functions that automatically spin up the necessary EC2 instances, S3 buckets, all that stuff for the customers, so we completely simplify the entire process. I like to think of it almost like if you look at your iPhone and you go into your iCloud backup, there's literally just a little slider button that says turn it on. For us it's literally that simple as well. How can we help customers get their data off-site efficiently.
That's a key kind of point for us here at Datrium, and the fact that we have a global deduplication pool. That means the only data that's ever going to go over the wire is truly unique, so we have built-in, blockchain-type crypto hashing that goes on, so as data comes in we're going to do a comparison on-prem, off-prem, and only send the unique data over the wire. That is truly game-changing from a customer perspective. That means I can now decrease my RPOs. I can get my data off-site faster, but then whenever I want to recover or retrieve those blocks or virtual machine snapshots, it's efficient as well, so it's both ingress and egress. So from a customer perspective it's a win-win. I can get my data off-site fast and I can get it back fast as well, and it ultimately decreases their AWS charges as well. >> That's the point I was going to make. But it's within the envelope of how they want to manage their AWS resources, right? >> Yep. >> So this is not something that's going to come along and just blow up how you spend on AWS. So we've heard what the Datrium console person can do. If you're an AWS person, you're now seeing an application and certain characteristics, performance characteristics associated with it, cost characteristics associated with it, and now you're seeing what you need to see. >> Exactly. We kind of abstract the AWS components out of it, so if I'm in the AWS console, yes, I see my EC2 instance, yes, I see an S3 bucket, but you can't make heads or tails of what it's kind of doing. You don't need to worry about all that stuff. We manage everything solely from a Datrium perspective, going back to that simplicity model that the product was built upon: how can we make this as simple as possible. It's so simple that even an admin that has no experience with AWS can go in and stand this up very, very easily. >> All right, so you've got some great things going on with being able to use cloud as a target.
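The global dedupe comparison Clint describes, fingerprint incoming data and ship only the chunks the far side has never seen, can be sketched like this. Fixed-size chunks and SHA-256 are simplifying assumptions for illustration, not details of Datrium's actual implementation:

```python
import hashlib

def chunk_fingerprints(data: bytes, size: int = 4096):
    """Map SHA-256 fingerprint -> chunk for fixed-size chunks of data."""
    return {hashlib.sha256(data[i:i + size]).hexdigest(): data[i:i + size]
            for i in range(0, len(data), size)}

def chunks_to_upload(data: bytes, remote_fingerprints: set):
    """Only chunks whose fingerprints the cloud side lacks need to
    cross the wire; everything already known dedupes away."""
    return {h: c for h, c in chunk_fingerprints(data).items()
            if h not in remote_fingerprints}
```

Running the same comparison in the other direction is what makes retrieval efficient too, which is the ingress-and-egress point made above, and sending fewer bytes is what ultimately trims the AWS bill.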
What about being able to orchestrate workloads across multiple different resources? How has that started? How does some of the new tooling facilitate that, or make it more difficult? >> Well that's a really great question, Peter. It's almost like you're looking into the crystal ball of the future, because the way that Datrium, the product itself and the platform, is architected, it's kind of building blocks on top of each other. We started off on premises. We've built that out to have a scale-out architecture. Now we're going off premises, out to the public cloud. Like I said, the first use case is just being able to leverage that for cloud archives. But what if I want to orchestrate that and bring workloads up inside of AWS? So I have a VMware snapshot that I've sent, or a Datrium snapshot that I've sent off-prem; I want to now make that an EC2 instance, or I want to orchestrate that. That's the direction that we're going, so there's definitely more to come there. So that's kind of the direction and what the platform is capable of. This is just the beginning. >> Now the hybrid converged concept is very powerful, and it's likely going to be a major feature of being able to put the work where it needs to be put, based on where the data needs it. >> Sure. >> But hyperconverged has had some successes, it's had some weirdness associated with it. We won't get into all of it, but the basic notion of hyperconverged is that you can bring resources together and run them as a single unit, but it still tends to be more of a resource focus. You guys are looking at this slightly differently. You're saying let's look at this as a problem of data and how the data is going to need resources, so that you're not managing in the context of resources that are converged, you're managing in the context of the resources that the data needs to do what it needs to do for the business. Have I got that right?
First and foremost that smooth flashed the host level. Removing a lot of the latency problems that traditional sand architecture has. We apply many of those same concepts to what Datrium is but we also bring a lot of what traditional sand has as well being durability, reliability on the backside of it so we're basically separating out my performance tier from my durability capacity tier on the bottom. >> Based on what the data needs. >> Exactly right so now that I've got these individuals stateless compute hosts where all of my performances for ultra-low latency, latency is a killer of any project. Most notably like VDI for instance or even sequel serve or Oracle. One of the other capabilities we actually just added to the product as well is now full support for Oracle RAC running on Datrium in a virtualized instance so latency as I mentioned has been a killer especially for mission-critical applications. For us we're enabling customers to be able to virtualize more and more high-performance applications and rely on the Datrium platform to have the intelligence and simplicity behind the scenes to make sure that things are going to run the way that they need to. >> Now as you think about what that means to an organization, so you've been at Datrium for a while now. How are companies actually facilitating the process of doing this differently? Are they doing a better job of actually converging the way that the work is applied to these resources or is that something that's still becoming difficult? How is the simplicity and the automation and reliability making it easy for customers to actually realize value of tools like these? 
It's actually truly amazing, because once our customers get a feel for Datrium and get it into their environment, I mean, we have customers all across the world, from Fortune 500 customers down to more small and medium-sized businesses, financial, legal, all across the entire spectrum of verticals, that are benefiting from the simplicity model. I don't have to worry about managing this anymore. You can go out to the Datrium website and we have a whole list of customer testimonials, and the one resounding theme that goes across that is: I no longer have to worry about managing the storage, the infrastructure. I'm now able to go back to my CIO or my CEO and I can provide business value to the business. I'm doing what I'm supposed to do. I don't have to worry about managing knobs and dials and, hmm, do I want to turn compression on, or maybe I want to turn it off, or what size volume do I need, what queue depth. Those are kind of mundane tasks. Let's focus on simplicity. Things are going to run the way that you need them to run. They're going to be fast and it's going to be simple to operate. >> Well, we like to say the difference between a business and a digital business is data. A digital business treats data as an asset, and that has enormous implications for how you think about how your work is institutionalized, what resources you buy, how you think about investing. Now it sounds as though you guys are thinking similarly. It's not the simple tasks you perform on the data that become important. It's the role the data plays in your business and how you turn that into a service for the business. Is that accurate? >> That is very accurate, and you brought up a really good point there, in the fact that the data is the business. That is a very key foundational component that we continue to build upon inside the product.
So one of the kind of big capabilities, and you've seen a lot of this in today's day and age with ransomware hacks and data breaches, I mean, it's almost every other week you go on CNN, or pick your favorite news channel that you care to watch, and you hear of breaches or data being stolen. So encryption, compliance, HIPAA, Sarbanes-Oxley, all that type of stuff is very important, and we've actually built into the product what we call blanket encryption. So data as it comes inbound is encrypted. We use FIPS 140-2 in either validated or approved mode, and it is encrypted across the entire stack: in use, over the wire in flight, and at rest. That's very different than the way that some of the other more traditional folks out there do it. If I look at SAN, it does encryption at rest. Well that's great, but what about while the data is in flight? What if I want to send it off premise, out to the public cloud? With Datrium, all that is built into the product. >> And that's presumably because Datrium has a greater visibility into the multiple levels at which the data is being utilized-- >> Absolutely. >> Which is why you can apply it in that way, and so literally data becomes a service that applications and people call out of some Datrium-managed store. >> Yeah, absolutely. >> So think about what's next. If we think about, you mentioned for example that when we had arrays with SANs, we had a certain architectural approach to how we did things. But as we move to a world where we can literally look at data as an asset, and we start thinking not of the tasks you perform on the data but the way you generate value out of your data, what types of things, not just at Datrium, but what types of challenges is the industry going to take on next? >> So that's an interesting question. In my opinion, and this is Clint's personal opinion, the way that the industry is changing is regular administrators are trying to orchestrate as much as they possibly can.
I don't want to have to worry about the low-hanging fruit on the tree. How can I automate things so that whenever something happens, or an action happens, or a developer needs a virtual machine, or I want to send this off-site to DR, what if I can orchestrate that, automate it, make it as simple to consume as possible? Because traditionally IT is a bottleneck for moving the business forward. I need to go out and procure hardware and networks and all that type of stuff that goes along with it. So what if I was able to orchestrate all of those components, leveraging API calls back to my infrastructure, like a user has a webform that they're going to fill out. Those challenges are the types of things that organizations, in my opinion, are looking to overcome. >> Now I want to build on that for a second, because a lot of folks immediately then go to, oh, so we're going to use technology to replace labor. Well, some of that may happen, but the way I look at it, and the way we look at it, is the real advantage is that new workloads are coming at these guys at an unprecedented rate, and so it's not so much about getting rid of people. There may be an element of that, but it's allowing people to be able to perform more work with these new technologies. >> Well, more work, but focused on what you should be focusing on. Of all the senior executives that-- >> That's what I mean. >> All the senior executives that I talk to, they're looking to make better use of IT resources. Those IT resources are not only what's running in the racks in the data center, but also the gentleman or the lady sitting behind the keyboard. What if I want to make better use of the intellectual property that they have, to provide value back to the business? And that's what I see with pretty much everybody that I talk to. >> Clint, this has been a great conversation. So once again, this has been a Cube Conversation with Clint Wyckoff, who's a senior global solutions engineer at Datrium.
Clint, thank you very much for being on theCUBE, and we'll talk again. >> All right, thanks, Peter. >> Once again, thanks very much for joining us for this CUBE Conversation. We'll talk to you again soon. (epic music)
Craig Nunes, VP of Marketing, Datrium - #theCUBE
(upbeat techno music) >> Welcome to theCUBE. It's a wonderful Tuesday, and we're here talking to Craig Nunes, who's the VP of marketing at Datrium. >> Good to be here. >> And Craig, you guys had an announcement today, and the announcement particularly refers to further convergence, the opportunity to converge not only hardware but now, increasingly, operating environments, specifically bringing some of the Red Hat ecosystem over to the Datrium product set. So why don't you tell us what happened? >> Sure. We've been building a great business with customers in the VMware environment. We debuted our new generation of convergence back last year, and as we were picking up customers in vSphere, we were running into a number of them who were saying, "You know, gosh, this is awesome. I do have some Linux stuff going on. Can you guys help me out there? I can't seem to find a modern converged platform to really take on both environments." And so that's precisely what we've done. We are announcing today that we've partnered with Red Hat to use their stack, Red Hat Enterprise Linux and their full Red Hat virtualization stack, and run that on our DVX, on our compute nodes, alongside vSphere servers. Beyond that, we observed there is a lot of activity going on in the container space. >> Peter: Just a little bit. >> CI/CD is becoming something that more and more folks are moving to, so we've also partnered up with Docker, and we're going to provide bare-metal container support with a persistent volume plug-in for the platform. So, all in one go, you now have, really for the first time, a modern converged system that can handle what you're doing today with vSphere, handle the Red Hat environment you're probably already involved in and looking for a way to bring together, and then, more importantly, you're set up for where you're going with containers.
>> So, when you say handle, Datrium has made some interesting decisions regarding how to solve some of the engineering problems associated with convergence. >> Craig: Yeah, yeah. >> Take us through a little bit of what it means to handle. >> Craig: Sure. >> What were you doing on VMware that you're now also doing on the Red Hat ecosystem, and will be doing as you move more closely towards containers? >> In the world of converged infrastructure, of course, we started with kind of packaging convergence with arrays and servers. Hyper-convergence came along, really bringing storage into the x86 architecture, a super cool idea in principle. The challenge with that is, because storage is now part of your server, everything is stateful. Every server is a storage node, and it's tougher to scale, tougher to service. Taking nothing away from the hyper-converged guys, it's great for a single use case, great for edge, but we're really aiming at what people are trying to get done in the private cloud data center. So for that, we found that by separating the persistence, the durable capacity, from the IO processing on the server, we could provide this wonderful converged platform that scales, where you can use any server you like: you can bring your blades, you can use our own compute nodes, whatever. It gives folks just a lot more freedom to get the job done. Servers are stateless like they were with your arrays, but have all the benefits you're desiring from converged infrastructure. So we brought that to vSphere, and what folks have taken away is, "Wow, since everything runs local on the server in flash, it's faster than an all-flash array." Sure, 'cause there's no SAN, but it's all VM-based and brings all the simplicity you would expect from a hyper-converged platform, only at scale. And so what we're doing is taking that model to Linux and containers.
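The split Craig describes, IO processing on stateless servers with persistence in a separate durable pool, can be sketched in miniature. This is a conceptual toy, not the DVX implementation: reads are served from host-local flash when possible, writes always land in the shared pool first, so losing any one server loses no data.

```python
class HostNode:
    """Sketch of the split: fast reads from host-local flash, durability
    from a shared capacity pool. The host itself holds no state of record."""

    def __init__(self, durable_pool: dict):
        self.flash = {}            # local read cache: performance only
        self.pool = durable_pool   # shared persistence layer

    def write(self, key: str, value: bytes) -> None:
        self.pool[key] = value     # durability first, in the shared pool
        self.flash[key] = value    # then warm the local read cache

    def read(self, key: str) -> bytes:
        if key in self.flash:
            return self.flash[key]                    # local flash hit
        value = self.flash[key] = self.pool[key]      # miss: fetch and cache
        return value
```

Because every server is stateless, any node can be added or retired freely and still see the same data, which is the scaling and serviceability argument in the answer above.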
Now, one relatively new thing we did, in addition to taking on VM consolidation and acceleration: we built all the data management capabilities you would need for backup and instant recovery, disaster recovery, archive, compliance, search, analytics, and copy data management right into the platform. So really the virtualization guy, the DevOps guy or gal, whoever is running the applications, can not only run them but protect them, share them, et cetera, from one cockpit, one UI. So we're taking a whole load of stuff that folks have had to deal with and replacing it with one very simple platform that scales as you grow. >> So are you bringing new services to the basic management console of Datrium and expanding that set of services across platforms? >> Exactly, that's correct. >> So talk to us about how you see this evolving as the whole world of containers comes out. Containers mean more of them, new security models. Today, most communication takes place through the VM. When you start talking about adding the storage flexibility, the data flexibility you guys are providing, it suggests that you've got some new ways of looking at containers. You've cooked up some new stuff. >> Craig: Yeah, absolutely, yeah. >> Talk to us a little bit about that. >> Here is where a modern platform really is important. Again, not to knock hyper-converged, but five or six years ago when that was born, it was pretty cool to manage things at a VM level; server virtualization was hot and heavy. As we move into containers, VMs are just not granular enough. In fact, folks want to be able to manage at the per-container level. Arrays, we're talking about LUNs there. Hyper-converged is going to stop short at VMs. What we're bringing folks is a way to manage, on the VM side, VMs, vdisks, and the files that make up VMs, plus individual container persistent volumes, so you can protect and share the way you need to.
What we do, 'cause it's kind of a double-edged sword (you can manage everything at that level, but now you've got thousands and thousands of them), is give you an opportunity to group those into what we call protection groups. Think of it as a policy group, and you set it up around your applications. You set your policies per group. Through naming conventions, if you spin up a new VM or container, it's going to get included as part of that group without you having to manually go in and assign it. So we're effectively putting the capabilities in so you can manage tens of thousands of objects very simply. That is the world of containers, right? If you thought there were a lot of VMs, there's a whole lot more in the way of containers that will be there. >> One of the things that Datrium has done, correct me if I have this wrong but I believe I got it right, is facilitate a kind of any-to-any addressability between storage or compute resources and data resources. >> Craig: Right. >> You know, the various types of nodes that are in there. You used to have all the data inside of your server, and that created some segmentation along those lines. In many respects, you created networks of resources that Datrium would manage in that way. Are you doing something similar now, as we think about containers, where you're literally describing a network of containers as part of that resource mix and being able to add things to it? Is that effectively what the group becomes? >> Yeah, the group of containers is completely independent of the servers that are hosting them, so you can literally group a collection of containers across all of your Linux servers and treat that in a special way. You've got great flexibility. It's something that's really intended to scale. We've got some very powerful search tools as a part of that, so if you do need to find things quickly, you can get rolling.
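The naming-convention mechanics behind protection groups can be sketched roughly as follows. The group names, patterns, and policy fields here are all invented for illustration; the point is that a newly created VM or container volume falls into the right policy group automatically, with no manual assignment.

```python
import fnmatch

class ProtectionGroup:
    """Policy group: objects join by naming convention, not by hand."""
    def __init__(self, pattern: str, snapshot_every_min: int):
        self.pattern = pattern                    # shell-style name pattern
        self.snapshot_every_min = snapshot_every_min

def assign(obj_name: str, groups: dict) -> list:
    """Return every group a new VM or container volume falls into."""
    return [name for name, pg in groups.items()
            if fnmatch.fnmatch(obj_name, pg.pattern)]

# Hypothetical policy setup, organized around applications.
groups = {
    "prod-sql": ProtectionGroup("sql-prod-*", snapshot_every_min=15),
    "dev":      ProtectionGroup("dev-*",      snapshot_every_min=240),
}
```

Spinning up `sql-prod-08` tomorrow would inherit the 15-minute snapshot policy with no further action, which is how tens of thousands of objects stay manageable.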
When it comes to containers, it's all about speed, keeping up the pace. Part of what we bring to the party is great data reduction capability. So when you're doing development in, let's pick on a Jenkins development environment, and you've got master/slave and you are collecting data as part of every object, all of that stuff has to move through the master. The better you are at handling data efficiency, the faster your runtime is going to be. We're observing about a 30% faster runtime for developers in that Jenkins environment, and capacity-wise, we're probably consuming 95% less capacity than you otherwise would in your more traditional storage environments, so-- >> A 95% reduction? >> It is a 20-to-one reduction, 'cause there are so many copies in development and we can dedupe all of that away. It's fundamentally a breakthrough for folks thinking about development and test, DevOps, et cetera. >> So you talked about the capacity improvements that you get and the (mumbles) improvements, but as you said, when we start going to containers, we increasingly start thinking about how fast we can add new function, how fast we can bring new capabilities together. One of the things we're fascinated about in this world, and you tell me if this is a benefit that you see, is that it dramatically accelerates the entire process of doing development: four, five, seven, twelve times the speed in the development process. You not only get better runtime and dramatically better utilization of resources, but you are also accelerating the productivity of the people actually doing the work. Are you seeing that as well? >> Yeah, absolutely. In fact, there are two things going on here. One is, as part of the platform, when you clone a container, you do that on your dev server or wherever, and that clone is immediately available to all other servers in the cluster. There is no copying and moving around. It is immediately available for the developers, who can just go.
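A toy content-addressed store shows where a number like 20-to-1 can come from, and why a clone is available instantly: identical chunks are stored once, and cloning copies only the chunk-reference list, never the data. This is a conceptual sketch under made-up parameters, not Datrium's actual data path.

```python
import hashlib

class DedupeStore:
    """Toy content-addressed store: identical chunks are kept exactly once,
    and a clone duplicates only metadata (the chunk-reference list)."""
    CHUNK = 4096

    def __init__(self):
        self.chunks = {}    # sha256 hex -> chunk bytes, stored once
        self.objects = {}   # object name -> list of chunk hashes

    def put(self, name: str, data: bytes) -> None:
        refs = []
        for i in range(0, len(data), self.CHUNK):
            chunk = data[i:i + self.CHUNK]
            h = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(h, chunk)   # dedupe: write only if unseen
            refs.append(h)
        self.objects[name] = refs

    def clone(self, src: str, dst: str) -> None:
        """Instant clone: a metadata copy, immediately usable by any server
        sharing the store; the source stays pristine."""
        self.objects[dst] = list(self.objects[src])

    def get(self, name: str) -> bytes:
        return b"".join(self.chunks[h] for h in self.objects[name])

    def physical_bytes(self) -> int:
        return sum(len(c) for c in self.chunks.values())
```

Twenty dev/test copies of the same image cost roughly the physical capacity of one, which is the reduction the interview attributes to deduping away copies.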
The other interesting thing is that in development environments, depending on the number of developers and executors involved, you can have problems maintaining the state that you desire. Part of what we are doing here with these very efficient cloning capabilities is spinning up a new environment for folks that has pristine state, which means that down the line, quality is better and you're not going to thrash on those iterations in your QA cycle. From end to end, it's all faster: runtime, QA, the whole nine yards. >> Datrium's a relatively new company? >> We began shipping in February '16. We've had a great 2017, in fact, well, of course it was great. We had a wonderful fundraising round in December '16, one of the largest of last year, so that's really propelled us in the market. We had a wonderful set of announcements just about a quarter ago with the data management capabilities, and we added these Datrium compute nodes. And just last quarter alone, our install base, which had already been showing record adoption, grew a whopping 50% in a single quarter. One of the most interesting statistics that-- >> Peter: Sequentially or year-to-year? >> Sequentially. >> Sequentially, that is whopping. >> Sequentially. The end of Q1 to the end of Q2, boom. Not only that, one out of every three of our customers already has multiple DVXs deployed. That's a huge testimony that they like what they've got. Yeah, so it's been a sprint, and like I say, we've been very vSphere-focused. Our founders are a couple of Diane Greene's early principal engineers at VMware. But customer demand, customer is king, and they're looking for the same kind of capability in their Linux and container environments, so here we are. >> Hey, speed is important to infrastructure people too. >> Craig: Right on, yeah. >> So, Craig, thanks very much for joining us here on theCUBE. >> My pleasure.
>> Once again, it's been great to have Datrium talk a little bit about the announcement they made today, adding the Red Hat environment to the great work they've been doing in VMware and vSphere, and about how container technology will start getting folded into that whole thing. >> Yep. >> Great results, a good early start; keep it up. >> Thank you, all right, see you, Peter. >> I'm Peter Burris, good to have you once again with theCUBE. We've been talking to Datrium about their new announcement. Craig Munes, er, Craig Nunes (laughs). Craig Nunes of Datrium, Vice President of Marketing, thanks for being here, Craig. >> Craig: My pleasure. (techno music)