Ashish Palekar & Cami Tavares, AWS | AWS Storage Day 2022
(upbeat music)
>> Okay, we're back covering AWS Storage Day 2022 with Ashish Palekar, who's the general manager of AWS EBS Snapshot and Edge, and Cami Tavares, who's the head of product at Amazon EBS. Thanks for coming back in theCube, guys. Great to see you again.
>> Great to see you as well, Dave.
>> Great to see you, Dave.
>> Ashish, we've been hearing a lot today about companies moving all kinds of applications to the cloud and AWS and using their data in new ways. Resiliency is always top of mind for companies when they think about their workloads generally and the cloud specifically. How should customers think about data resiliency?
>> Yeah, when we think about data resiliency, it's all about making sure that your application data, the data that your application needs, is available when it needs it. It's really the ability for your workload to mitigate disruptions or recover from them. And to build that resilient architecture, you really need to understand what kinds of disruptions your applications can experience, how broad the impact of those disruptions is, and then how quickly you need to recover. A lot of this is a function of what the application does and how critical it is. And the thing that we constantly tell customers is, this works differently in the cloud than it does in a traditional on-premises environment.
>> What's different about the cloud versus on-prem? Can you explain how it's different?
>> Yeah, let me start with the on-premises one. In the on-premises environment, building resilient architectures is really the customer's responsibility, and it's very challenging. You start thinking about what your single points of failure are. To avoid those, you have to build in redundancy; you might build in replication for storage, as an example, and doing this now means you have to provision more hardware. And depending on what your availability requirements are, you may even have to start looking for multiple data centers, some in the same region, some in different geographical locations. And you have to ensure that you're fully automated, so that your recovery processes can take place. As you can see, that's a lot of onus being placed on the customer. One other thing that we hear about is elasticity and how elasticity plays into resiliency for applications. As an example, if you experience a sudden spike in workloads in an on-premises environment, that can lead to resource saturation. So really you have two choices. One is to sort of throttle the workload and experience resiliency challenges, or your second option becomes buying additional hardware, securing more capacity, and keeping it fallow in case you experience such a spike. So your two propositions are either experiencing resiliency challenges or really paying to have infrastructure that's lying around. And both of those are different when you start thinking about the cloud.
>> Yeah, there's a third option too, which is lose data, which is not an option. Go ahead-
>> Which is not, yeah. As a storage person, that is pretty much not an option, and not a risk that we think is reasonable for customers to take. The big contrast in the cloud really comes with how we think about capacity. Fundamentally, the cloud gives you that access to capacity, so you are not managing that capacity. The infrastructure complexity and the cost associated with that are also just a function of how infrastructure is built in the cloud.
But all of that really starts with the bedrock of how we design for avoiding single points of failure. The best way to explain this is really to start thinking about our Availability Zones. Typically these Availability Zones consist of multiple data centers, located in the same regional area to enable high throughput and low latency for applications. But the Availability Zones themselves are physically independent. They have independent connections to utility power, standalone backup power resources, independent mechanical services, and independent network connectivity. We take Availability Zone independence extremely seriously, so that when customers are building for the availability of their workload, they can architect using these multiple zones. And that is something that, when I'm talking to customers or Cami is talking to customers, we highly encourage them to keep in mind as they're building resiliency for their applications.
>> Right, so within an Availability Zone you can have, you know, instantaneous capture when you're doing it right. You've captured that data, and you can asynchronously move it outside of that zone in case there's, the very low probability, but it does happen, some disaster. You're minimizing that RPO. And I don't have to worry about that as a customer, or about figuring out how to do three-site data centers.
>> That's right. Take that even further: now imagine you're expanding globally. All those things that we described, like creating a new footprint, creating a new region, and finding new data centers; as a customer in an on-premises environment, you take that on yourself. Whereas with AWS, because of our global presence, you can expand to a region and bring those same operational characteristics to those environments. So again, bringing resiliency as you're thinking about expanding your workload, that's another benefit that you get from the Availability Zone and Region architecture that AWS has.
>> And as Charles Phillips, former CEO of Infor, said, "Friends don't let friends build data centers," so I don't have to worry about building the data center. Let's bring Cami into the discussion here. Cami, thinking about Elastic Block Store: it gives customers persistent block storage for EC2 instances, so it's foundational for any mission-critical or business-critical application that you're building on AWS. I always ask the question, what happens if something goes wrong? So how should we think about data resiliency in EBS specifically?
>> Yeah, you're right, Dave, block storage is a really foundational piece when we talk to customers about building in the cloud or moving an application to the cloud, and data resiliency is something that comes up all the time. EBS is a very large distributed system with many components, and we put a lot of thought and effort into building resiliency into EBS. We design those components to operate and fail independently. So when customers create an EBS volume, for example, we'll automatically choose the best storage nodes to address the failure domain and the data protection strategy for each of our different volume types. And part of our resiliency strategy also includes separating what we call the volume lifecycle control plane, which handles things like creating a volume or attaching a volume to an EC2 instance.
So we separate that control plane from the storage data plane, which includes all the components that are responsible for serving IO to your instance and then persisting it to durable media. What that means is that once a volume is created and attached to the instance, the operations on that volume are independent from the control plane functions. So even in the case of an infrastructure event, like a power issue, for example, you can recreate an EBS volume from a snapshot. And speaking of snapshots, that's the other core pillar of resiliency in EBS. Snapshots are point-in-time copies of EBS volumes that we store in S3. Snapshots are actually a regional service, and that means internally we use multiple of the Availability Zones that Ashish was talking about to replicate your data, so that the snapshots can withstand the failure of an Availability Zone. And so thanks to that Availability Zone independence, and this built-in component independence, customers can use that snapshot and recreate an EBS volume in another AZ, or even in another region if they need to.
>> Great, okay, so you touched on some of the things EBS does to build resiliency into the service. Now, thinking about, over your right shoulder there, you know, joie de vivre, what can organizations do to build more resilience into their applications on EBS so they can enjoy life without anxiety?
>> (laughs) That is a great question, and also something that we love to talk to customers about. The core thing to think about here is that we don't believe in a one-size-fits-all approach. So what we are doing in EBS is giving customers different tools so that they can design a resiliency strategy that is custom tailored for their data. And to do this resiliency assessment, you have to think about the context of the specific workload and ask questions like: what other critical services depend on this data, what will break if this data's not available, and how long can those systems withstand that, for example. And so the most important step, I'll mention it again, is snapshots; that is a very important step in a recovery plan. Make sure you have a backup of your data. We actually recommend that customers take snapshots at least daily, and we have features that make that easier for you. For example, Data Lifecycle Manager, which is a feature that is entirely free: it allows you to create backup policies, and then you can automate the process of creating the snapshots, so it's very low effort. And then when you want to use that backup to recreate a volume, we have a feature called Fast Snapshot Restore that can expedite the creation of the volume. So if you have a shorter recovery time objective, you can use that feature to expedite the recovery process. So that's backup. The other pillar we talk to customers about is data replication, another very important step when you're thinking about your resiliency and your recovery plans. With EBS, you can use replication tools that work at the level of the operating system, so that's something like DRBD, for example, or you can use AWS Elastic Disaster Recovery, and that will replicate your data across Availability Zones or to nearby regions too. So we talked about backup and replication, and then the last topic that we recommend customers think about is having a workload monitoring solution in place. You can do that in EBS using CloudWatch metrics, so you can monitor the health of your EBS volume using those metrics.
We have a lot of tips in our documentation on how to measure that performance, and then you can use those performance metrics as triggers for automated recovery workflows that you can build using tools like Auto Scaling groups, for example.
>> Great, thank you for that advice. Just a quick follow-up: you mentioned your recommendation of at least daily. What kind of granularity, if I want to compress my RPO, can I go to a more granular level?
>> Yes, you can go more granular, and you can use, again, Data Lifecycle Manager to define those policies.
>> Great, thank you. Before we go, I want to just quickly cover what's new with EBS. Ashish, maybe you could talk about, I understand you've got something new today. You've got an announcement, take us through that.
>> Yeah, thanks for checking in, and I'm so glad you asked. We talked about how snapshots help resilience and are a critical part of building resilient architectures. Customers like the simplicity of backing up their EC2 instances using multi-volume snapshots, and what they're looking for is the ability to exclude specific volumes from the backup, especially those that don't need backup. So think of applications that have cache data, or applications that have temporary data that really doesn't need backup. Today we are adding a new parameter to the CreateSnapshots API, which creates a crash-consistent set of snapshots for the volumes attached to an EC2 instance, where customers can now exclude specific volumes from an instance backup. Customers using Data Lifecycle Manager, which Cami touched on, can automate their backups, and again, they also get to exclude these specific volumes. So really the feature is not just about convenience; it's also to help customers save on cost, as many of these customers are managing tens of thousands of snapshots, and we want to make sure they can take them at the granularity that they need. So super happy to bring that into the hands of customers as well.
>> Yeah, that's a nice option. Okay, Ashish, Cami, thank you so much for coming back in theCube, helping us learn about what's new and what's cool in EBS. Appreciate your time.
>> Thank you for having us, Dave.
>> Thank you for having us, Dave.
>> You're very welcome. Now, if you want to learn more about EBS resilience, stay right here, because coming up we've got a session which is a deep dive on protecting mission-critical workloads with Amazon EBS. Stay right there, you're watching theCube's coverage of AWS Storage Day 2022. (calm music)
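To make the recovery path Cami describes concrete, recreating an EBS volume from a snapshot in a different Availability Zone, here is a minimal boto3 sketch. The region, snapshot ID, target AZ, and volume type are placeholder assumptions, not values from the interview.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Recreate a volume from an existing snapshot in a different AZ than the
# original volume. Snapshot ID, AZ, and volume type are placeholders.
volume = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",
    AvailabilityZone="us-east-1b",
    VolumeType="gp3",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
print("Recovered volume:", volume["VolumeId"])
```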
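Cami's backup guidance, daily snapshots via Data Lifecycle Manager plus Fast Snapshot Restore for tighter recovery times, maps to a couple of API calls. The sketch below shows roughly what that looks like with boto3; the IAM role ARN, tag scheme, retention count, snapshot ID, and Availability Zone are assumptions for illustration.

```python
import boto3

dlm = boto3.client("dlm")
ec2 = boto3.client("ec2")

# Daily snapshot policy for volumes tagged Backup=daily (assumed tag scheme),
# retaining the last 7 snapshots. The execution role ARN is a placeholder.
policy = dlm.create_lifecycle_policy(
    ExecutionRoleArn="arn:aws:iam::123456789012:role/AWSDataLifecycleManagerDefaultRole",
    Description="Daily EBS snapshots",
    State="ENABLED",
    PolicyDetails={
        "ResourceTypes": ["VOLUME"],
        "TargetTags": [{"Key": "Backup", "Value": "daily"}],
        "Schedules": [
            {
                "Name": "DailySnapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
                "RetainRule": {"Count": 7},
                "CopyTags": True,
            }
        ],
    },
)
print("DLM policy:", policy["PolicyId"])

# For a shorter recovery time objective, pre-warm a snapshot in the AZ where
# you would restore (snapshot ID and AZ are placeholders).
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],
)
```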
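For the monitoring pillar, the per-volume EBS metrics Cami mentions live in the AWS/EBS CloudWatch namespace, and an alarm on one of them can feed an automated recovery workflow. A hedged example is below; the metric choice, threshold, volume ID, and SNS topic are assumptions, and the alarm action could just as easily invoke a Lambda or an Auto Scaling policy.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on a per-volume EBS metric; IDs and threshold are placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="ebs-queue-length-high",
    Namespace="AWS/EBS",
    MetricName="VolumeQueueLength",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=32.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:storage-ops"],
)
```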
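The announcement Ashish describes adds an exclusion option to the existing CreateSnapshots API. A minimal sketch of how that might be called from boto3 follows; the instance and volume IDs are placeholders, and the exact exclusion parameter name (shown here as ExcludeDataVolumeIds) is an assumption that should be checked against the current EC2 API reference.

```python
import boto3

ec2 = boto3.client("ec2")

# Crash-consistent, multi-volume snapshot of everything attached to the
# instance, skipping the scratch/cache volume that doesn't need backup.
# IDs are placeholders; the exclusion parameter name is an assumption.
response = ec2.create_snapshots(
    Description="Nightly instance backup, cache volume excluded",
    InstanceSpecification={
        "InstanceId": "i-0123456789abcdef0",
        "ExcludeBootVolume": False,
        "ExcludeDataVolumeIds": ["vol-0aaa1111bbbb2222c"],  # assumed parameter
    },
    CopyTagsFromSource="volume",
)
for snap in response["Snapshots"]:
    print(snap["SnapshotId"], snap["VolumeId"])
```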
Danny Allan, Veeam | VeeamON 2022
>> Hi, this is Dave Vellante. We're winding down day two of theCube's coverage of VeeamON 2022. We're here at the Aria in Las Vegas. Myself and Dave Nicholson have been going for two days. Everybody's excited about the VeeamON party tonight. It's always epic, and, uh, it's a great show in terms of its energy. Danny Allan is here. He's CTO of Veeam. He gave the keynote this morning. I say, Danny, you know, you look pretty good up there with two hours of sleep.
>> I had three.
>> (laughs) You don't look that good, but your energy was very high. And I got to tell you, the story you told was amazing. It was one of the best keynotes I've ever seen. Even the technology pieces were outstanding, but you weaving in that story was incredible. I'm hoping that people will go back and watch it. We probably don't have time to go into it, but wow. Um, can you give us the one-minute version of that long story?
>> Sure. Yeah. I read a book back in 2013 about a ship that sank off Portsmouth, Maine, and I thought, I'm gonna go find that ship. And so it's a long, complicated process, five years in the making, but we used data, and the data that found the ship was actually from 15 years earlier. In 2018, we found the bow of the ship, we found the stern of the ship, but what we were really trying to answer was: was it torpedoed, or did the boilers explode? Because the Navy said the boilers exploded, and two survivors said, no, it was torpedoed, there was a German U-boat there. And so our goal was: find the ship, find the boiler. So in 2019... sorry, uh, it was 2018 we found the bow and the stern, and then in 2019 we found both boilers perfectly intact, and in fact the rear end of that torpedo. Wasn't much left of it, of course, but data found that wreck. And so it, um, it exonerated essentially any implication that somebody screwed up in the boiler system, and the survivors, or the children of the survivors, obviously appreciated that.
>> I'm sure.
>> Yes. Several outcomes to it. So the chief engineer was one of the 13 survivors, and he lived with the weight of this for 75 years: 49 sailors dead because of myself. But I had the opportunity of meeting some of the children of the victims and also attending ceremonies. The families of those victims received Purple Hearts because they were killed due to enemy action.
>> And then, you actually knew how to do this. I wasn't aware you had experience finding wrecks. You've discovered several of them prior to this one. But the interesting connection, the reason why this keynote was so powerful, is we're at Veeam, it's a data conference. You connected that to data, because you went out and bought a, how do you say this, magnanimous... magnetometer? Magnetometer, magnetometer, I don't know what that is. And a side-scan sonar, right? I got that right?
>> That was easy.
>> But then you know what this stuff is. And then you built the model, TensorFlow. You took all the data and you found anomalies, and then you went right to that spot. Found the wreck with 12,000 pounds of dynamite, which made your heart beat. But then you found the boilers. That's incredible. But the point was, this is data, uh, let's see, a lot of years after, right?
>> Yeah. Two sets of data were used. One was the original set of side-scan sonar data by the historian who discovered there was a U-boat in the area; that was 15 years old. And then we used, of course, the wind and weather and wave pattern data that was 75 years old, to figure out where the boilers should be, because they knew that the ship had continued to float for eight minutes. And so you had to go back and determine, from the models, where should the boilers be if it exploded, the boilers dropped out, and it floated along for eight minutes and then sank.
>> Where was that data? Was it scanned, was it electronic, was it paper? How did you get that data?
>> So the original side-scan sonar data was just hard drive data by the historian. I wish I could say he used Veeam to back it up, but I don't know that I can say that. But he still had the data 15 years later. The weather and wind and wave data, that was all public information, and we actually used that extensively. We find other wrecks; a lot of wrecks off Boston, sunk in World War Two. So we were used to that model of tracking what happened.
>> Wow. So, yes, imagine if that data weren't available, and it probably shouldn't have been, right, by all rights. So now fast forward to 2022. Let's talk about just cloud data. I think you said a couple of hundred petabytes in the cloud in 2019, 500 in, uh-
>> No. Yeah. In 2020, 242 petabytes, in 2021, 500 petabytes, last year. And we've already done the same as 2020, so 240 petabytes, in Q1. I expect this year to move an exabyte of data into the public cloud.
>> Okay, so you got all that data. Who knows what's in there, right? And if it's not protected, who's going to know in 50, 60, 100 years? Right. So that was your tie-in?
>> Yes.
>> To the importance of data protection, which was just really, really well done. Congratulations. Honestly, one of the best keynotes I've ever seen; keynotes are often really boring, but you did a great job, again, on two hours' sleep. So much to unpack here. The other thing, I mean, we can talk about the demos, we can talk about the announcements. Um, so, well, yeah, let's see. Salesforce, uh, data protection is now public; I almost spilled the beans yesterday in theCube, caught myself. The version 12. Obviously, you guys gave a great demo showing the island cloud, with, I think it was just four minutes. It was super fast recovery, and four minutes of data loss. I was so glad you didn't say zero minutes, because that would have been a live demo thing, okay, which I appreciate and also think is crazy. So some really cool demos, um, and some really cool features. But the insights that you can provide through them, it's Veeam ONE, uh, was actually something that I hadn't heard you talk about extensively in the past; maybe I just missed it. But I wonder if you could talk about that layer and why it's a critical differentiator for Veeam.
>> It's the hidden gem within the Veeam portfolio, because it knows about absolutely everything. And what determines the actions that we take is the context in which data is surviving. So in the context of security, which we are showing, we look for CPU utilisation, memory utilisation, data change rate. If you encrypt all of the data in a file server, it's going to blow up overnight. And so we're leveraging heuristics in the reporting. But even more than that, one of the things in Veeam ONE people don't realise: we have this concept of the intelligent diagnostics. It's machine learning, which we drive on our end and we push out as packages into Veeam ONE. There's up to 200 signatures, but it helps our customers find issues before they become issues.
>> Okay, so I want to get into, because I oftentimes don't geek out with you and don't take advantage of your technical knowledge, and you've triggered a couple of things, especially when, on the analyst call, you said it again today, that modern data protection has meaning to you. We talked a little bit about this yesterday, but back in the days of virtualisation, you shunned agents and took a different approach, because you were going for what was then modern. Then you went to bare metal, cloud, hybrid cloud, containers, Super Cloud; at the analyst meeting, you didn't use the term. Come on, say Super Cloud. And then we'll talk about the edge. So I would like to know specifically, if we can go back to virtualised, because I didn't know this, exactly how you guys defined modern back then, and then let's take that to modern today. So what did you do back then? And then we'll get into cloud.
>> Sure. So if you go back to when Veeam started, everyone was using agents. You'd install something in the operating system, and it would take 10%, 15% of your CPU, because it was collecting all the data and sending it outside of the machine. When we went to a virtual environment, if you put an agent inside that machine, what happens is you would have 100 operating systems all on the same server, consuming resources from that hypervisor. And so we said, there's a better way of capturing the data: instead of capturing the data inside the operating system, and by the way, managing thousands of agents is no fun, what we did is we captured a snapshot of the image at the hypervisor level, and then over time we just leveraged changed block tracking from the hypervisor to determine what had changed. And so that was modern, because there was no more managing agents, there was no impact on the operating system, and it was a far more efficient way to store data.
>> You leverage CBT through the APIs, is that correct?
>> Yeah. We use the vSphere API for data protection.
>> Okay, so I said this to Michael earlier: fast forward to today. Your data protection competitors aren't as fat, dumb and happy as they used to be, so they can do things in containers, and we talked about that. So now let's talk about cloud. What's different about cloud data protection? What defines modern data protection? And where are the innovations that you're providing?
>> Let me do one step in between those, because one of the things that happened between hypervisors and cloud was offloading the capture of the data to the storage system. Because even better than doing it at the hypervisor cluster is to do it on the storage array, because that can capture the data instantly, right? So as we go to the cloud, we want to do the same thing, except we don't have access to either the hypervisor or the storage system. But what they do provide is an API. So we can use the API to capture all of the blocks, all of the data, all of the changes on that particular operating system. Now, here's where we've kind of gone full circle: on a hypervisor, you can use the vSphere agent to reach into the operating system to do things like application consistency. What we've done in modern data protection is create specific cloud agents that say, forget about the block changes, make sure that I have application consistency inside that cloud operating system. Even though you don't have access to the hypervisor or the storage, you're still getting the operating system consistency while getting the really fast capture of data.
>> So that gets into you talking on stage about how snapshots don't equal data protection. I think you just explained it, but explain why.
>> Let me highlight something that Veeam does that is important. We manage both snapshots and backup, because if you can recover from your storage array snapshot, that is the best possible thing to recover from, right? So we manage both the snapshots, and we convert it into the Veeam portable data format. And here's where the Super Cloud comes into play, because if I can convert it into the Veeam portable data format, I can move that OS anywhere. I can move it from physical to virtual, to cloud, to another cloud, back to virtual, I can put it back on physical if I want to. It actually abstracts the cloud layer. There are things that we do when we go between clouds, some use BIOS, some use, um, UEFI, but we have the data in backup format, not snapshot format; that's theirs. We have it in backup format that we can move around and abstract workloads across all of the infrastructure.
>> And your catalogue has control of that. Is that right?
>> That is, about that, 100%. And you know what's interesting about our catalogue, Dave? The catalogue is inside the backup. And so historically, one of the problems with backup is that you had a separate catalogue, and if it ever got corrupted, all of your data is meaningless. Because the catalogue is inside the backup for that unique VM or that unique instance, you can move it anywhere and power it on. That's why people say we're so reliable: as long as you have the backup file, you can delete our software, you can still get the data back.
>> So I love this. Fast forward, so that enables what I call Super Cloud, we now call Super Cloud. Because now, take that to the edge. If I want to go to the edge, I presume you can extend that, and I also presume the containers play a role there.
>> Yes, so here's what's interesting about the edge: two things on the edge. You don't want to have any state if you can help it, and so containers help with that. You can have a stateless environment, some persistent data storage, but we not only provide the portability in operating systems, we also do this for containers. And that's true if you go to the cloud and you're using, say, EKS with Relational Database Service, RDS, for the persistent data; later we can pick that up and move it to GKE, or move it to OpenShift on premises. And so that's why I call this the Super Cloud: we have all of this data. Actually, I think you termed the term Super Cloud.
>> Thank you for that. I'm looking for confirmation from a technologist that it's technically feasible.
>> It is technically feasible, and you can do it today.
>> And that's, I think, a winning strategy, personally. Will there be such a thing as edge native? You know, there's cloud native. Will there be edge native, new architectures, new ways of doing things, new workloads, use cases? We talk about hardware, new hardware architectures, Arm-based stuff that are going to change everything. Is there edge native?
>> Yes and no. There's going to be small tweaks that make it better for the edge. You're gonna see a lot of Arm at the edge, obviously, for power consumption purposes, and you're also going to see different constructs for networking. We're not going to use the traditional networking, probably a lot more software-defined stuff. Same thing on the storage: they're going to try and minimise the persistent storage to the smallest footprint possible. But ultimately, I think we're gonna see containers lead the edge. We're seeing this now. We have a, I probably can't name them, but we have a large retail organisation that is running containers in every single store, with a small persistent footprint of the point of sale and local data, but what is running the actual system is containers, and it's completely ephemeral.
>> So we were at Red Hat, I was saying, earlier last week, and I'd say half, 40, 50% of the conversation was edge. OpenShift obviously playing a big role there. I know you're doing work with Rancher and Tanzu, and so there's a lot of options there, but obviously OpenShift has strong momentum in the marketplace. I've been dominating; you want to chime in?
>> No, I'm just, no, I, yeah, I know. Sometimes I'll sit here like a sponge, which isn't my job, absorbing stuff. I'm just fascinated by the whole concept of a portable format for data that encapsulates virtual machines and/or instances that can live in the containerised world. And once you create that common denominator, that's really the secret sauce for what you're talking about as a Super Cloud. And what's been fascinating to watch, because I've been paying attention since the beginning: you go from simply VMFS, and here it is, and by the way, the pitch to EMC about buying VMware was all about reducing servers to files that can be stored on storage arrays. All of a sudden the light bulbs went off: we can store those things, and it just began; it became a marriage afterwards. But to watch that progression that you guys have gone through, from that fundamental to all of the other areas, where now you've created this common denominator layer, has been amazing. So my question is, what's the thing that doesn't work? Where are the holes? You don't want to look at it from a glass-half-empty perspective: what's the next opportunity? We've talked about edge, but what are the things that you need to fill in to make this truly ubiquitous?
>> Well, there's a lot of services out there that we're not protecting, to be fair, right? We do Microsoft 365, we announced Salesforce, but there's a dozen other PaaS services that people are moving data into, and until we have data protection for those SaaS and PaaS services, you know, you have to figure out how to protect them. Now, here's the kicker about those services: most of them have the ability to dump data out. The trick is, do they have the APIs needed to put data back into it, right? Which is a gap. As an industry, we need to address this. I actually think we need a common framework for how to manage the export of data, but also the import of data, not at a system level, but at an atomic level of the elements within those applications. So there are gaps there as an industry, but we'll fill them. If you look on the infrastructure side, we've done a lot with containers and Kubernetes. I think there's a next wave around serverless. There's still servers in these microservices, but we're making things smaller and smaller and smaller, and there's going to be an essential need to protect those services as well. So modern data protection is something that, we're gonna need modern data protection five years from now; the modern will just be different.
>> Do you ever see the day, Danny, where governance becomes an adjacency opportunity for you guys?
>> It's clearly an opportunity even now. If you look, we spent a lot of time talking about security, and what you find is, when organisations go, for example, for ransomware insurance or for compliance, they need to be able to prove that they have certifications, or they have security, or they have governance. We just saw the transatlantic privacy pact, only to be able to prove what type of data they're collecting, where are they storing it, where are they allowed to recover it. And yes, those are very much adjacencies for our customers; they're trying to manage that data.
>> So given that, I mean, am I correct that architecturally you are, are you location agnostic?
>> Right. We are location agnostic, and you can actually tag data to an allowable location.
>> So the big trend that I think is happening, is going to happen in this decade, I think we're scratching the surface, is this idea that, you know, leave data where it is: whether it's an S3 bucket, it could be in an Oracle database, it could be in a Snowflake database, it can be a data lake that's, you know, Databricks or whatever, and it stays where it is, and it's just a node on the data mesh. Not my term; Zhamak Dehghani coined that term, and it puts data in the hands of, closer to, the domain experts. The problem with that scenario is you need self-service infrastructure, which really doesn't exist today anyway, but it's coming, and the big problem is federated computational governance. How do I automate that governance so that the people who should have access to that data can have access to it? That, to me, seems to be an adjacency. It doesn't exist except in a proprietary platform today. There needs to be a horizontal layer that is more open, that anybody can use, and I would think that's a perfect opportunity for you guys.
>> Just strategically, it is, there's no question. And I would argue, Dave, that it's actually valuable to take snapshots and to keep the data out at the edge, wherever it happens to be collected, but then federate it centrally. It's why I get so excited by an exabyte of data this year going into the cloud, because then you're centralising the aggregation, and that's where you're really going to drive the insights. You're not gonna be writing TensorFlow and machine learning and things on premises unless you have a lot of money and a lot of GPUs and a lot of capacity. That's the type of thing that is actually better suited for the cloud, and, I would argue, better suited for not your organisation; you're gonna want to delegate that to a third party who has expertise in privacy, data analysis or security forensics, or whatever it is that you're trying to do with the data.
>> But today, when you think about AI, we talked about, we haven't had a tonne of talk about AI, some appropriate amount. Most of the AI today, correct me if you think this is not true, is modelling that's done in the cloud. It's dominant. Don't you think that's gonna flip when edge really starts to take off, where it's more real-time inferencing at the edge, in new use cases at the edge? Now, how much of that data is going to be persisted is a point of discussion. But what are your thoughts on that?
>> I completely agree. So my expectation of the way that this will work is that the true machine learning will happen in the centralised location, and what it will do is, someone will push out to the edge the signatures that drive the inferences. So my example of this is always the Tesla driving down the road. There's no way that that car should be figuring out, sending up to the cloud, is that a stop sign, is it not? It can't. It has to be able to figure out what the stop sign is before it gets to it, so we'll do the inferencing at the edge. But when it doesn't know what to do with the data, then it should send it to the core to determine, to learn about it, and send signatures back out, not just to that edge location, but all the edge locations within the ecosystem.
>> So I get what you're saying. They might send data back when there's an anomaly, or, I always use the example of a deer running in front of the car; David Floyer gave me that one. That's when I do want to send the data back to the cloud, because Tesla doesn't persist a ton of data, I presume, right?
>> Right, less than 5% of it.
>> You know, usually I'm here to dive into the weeds; I want to kind of up-level this to sort of the larger picture, from an IT perspective. There's been a lot of consolidation going on. If you divide the world into vendors and customers, on the customer side there's a finite number of seats at the table for truly strategic partners. Those get gobbled up often by hyperscale cloud providers. The challenge there, and I'm part of a CEO accreditation programme, so this is aimed at my students who are CEOs and CIOs: the challenge is that a lot of CEOs and CIOs on the customer side don't exhaustively drag out of their vendor partners, like a Veeam, everything that Veeam can do for them. Maybe they're leveraging a point solution, but I guarantee you they don't all know that you've got Kasten in the portfolio, not every one of them does yet, let alone this idea of a Super Cloud and how much of a strategic role you can play. So I don't know if it's a blanket admonition to folks out there, but you have got to leverage the people who are building the solutions that are going to help you solve problems in the business. And I guess, in the form of a question, uh, do you see that as a challenge now, the limited number of seats at the table for strategic partners? Challenge and opportunity?
>> If you look at the types of partners that we've partnered with: storage partners, because they own the storage of the data, at the end of the day we actually just manage it, and the cloud partners. So I see that as the opportunity, and my belief is, I thought that the storage doesn't matter, but I think the organisation that can centralise and manage that data is the one that can rule the world, and so clearly, at Veeam, I think we can do amazing things. But we do have key strategic partners: HPE, Amazon, you heard them on stage yesterday, 18 different integrations with AWS. So we have very strategic partners. Azure, I go out there all the time. So you don't need to be in the room at the table, because your partners are, and they have a relationship with the customer as well.
>> Fair enough.
>> But the key to this, it's not just technology. It is these relationships and what is possible between our organisations.
>> So I'm sorry to be so dense on this, but when you talk about centralising that data, are you talking about physically centralising it, or can it actually live across clouds, for instance, but you've got visibility and your catalogues have visibility on all that? Is that what you mean by centralised?
>> We have understanding of all the places that it lives, and we can do things with it. We can move it from one cloud to another. We can take, you know, everyone talks about data warehouses; they're actually pretty expensive. You got to take data and stream it into this thing, and there's massive computing power. On the other hand, we're not like that. We can ephemerally spin up a database when you need it for five minutes and then destroy it. We can spin up an image when you need it and then destroy it. And so, irrespective of location, it doesn't have to be in a central place, and that's been a challenge. The extract, transform and load, moving the data to the central location, has been a problem. We have awareness of all the data everywhere, and then we can make decisions as to what you do based on where it is and what it is.
>> And that's a metadata innovation, I guess, that comes back to the catalogue, right? Is that correct? You have data about the data that informs you as to where it is and how to get to it.
>> Yes, so metadata within the data that allows you to recover it, and then data across the federation of all that to determine where it is.
>> And machine intelligence plays a role in all that?
>> Not yet, not yet in that space. Now, I do think there's opportunity in the future to be able to distribute storage across many different locations, and that's a whole conversation in itself. But our machine learning is more just on helping our customers address the problems in their infrastructures, rather than determining right now where that data should be.
>> These guys, they want me to break, but I'm refusing. So, your Hadoop back end, um, that's, well, that scale. A lot of customers, I talked to Renee Dupuis, hey, we, there was heavy lift. You know, we're looking at new ways, new approaches, and of course it's all in the cloud anyway. But what's that look like, that future look like?
>> We haven't reached a bottleneck yet on our Hadoop clusters, and we do continuously examine them for anomalies that might happen. Not saying we won't run into a bottleneck like we always do at some point, but we haven't yet.
>> Awesome. We've covered a lot. We've certainly covered extensively the research that you did on cybersecurity and ransomware, um, your kind of vision for modern data protection, I think we hit on that pretty well, Kasten, you know, we talked to Michael about that, and then, you know, the future product releases, the Salesforce data protection. You guys, I think, were the first there, I think, as you were with Microsoft 365.
>> No, there are other vendors in the Salesforce space. But what I tell people: we weren't the first to do data capture at the hypervisor level; there were two other vendors, I won't tell you who they were, no one remembers them. Microsoft 365, we weren't the first one for that, but we're now the largest. So there are other vendors in the Salesforce space, but we're looking at, we're going to be aggressive.
>> Danny, thanks so much for coming to theCube and letting us pick your brain like that. Really great job today. And congratulations on being back in semi-normal.
>> Thank you for having me. I love being on.
>> All right. And thank you for watching. Keep it right there, more coverage. Dave Vellante for Dave Nicholson. By the way, check out siliconangle.com for all the written coverage, all the news. Thecube.net is where all these videos will live. Check out wikibon.com; I publish every week. I think I'm gonna dig into the cybersecurity research that you guys did this week, if I can get my hands on those charts, which Dave Russell promised me. We'll be right back right after this short break.
Pete Robinson, Salesforce & Shannon Champion, Dell Technologies | Dell Tech World 2022
>>The cube presents, Dell technologies world brought to you by Dell. >>Welcome back to the cube. Lisa Martin and Dave Vale are live in Las Vegas. We are covering our third day of covering Dell technologies world 2022. The first live in-person event since 2019. It's been great to be here. We've had a lot of great conversations about all the announcements that Dell has made in the last couple of days. And we're gonna unpack a little bit more of that. Now. One of our alumni is back with us. Shannon champion joins us again, vice president product marketing at Dell technologies, and she's a company by Pete Robinson, the director of infrastructure engineering at Salesforce. Welcome. Thank >>You. >>So Shannon, you had a big announcement yesterday. I run a lot of new software innovations. Did >>You hear about that? I heard a little something >>About that. Unpack that for us. >>Yeah. Awesome. Yeah, it's so exciting to be here in person and have such a big moment across our storage portfolio, to see that on the big stage, the boom to announce major updates across power store, PowerMax and power flex all together, just a ton of innovation across the storage portfolio. And you probably also heard a ton of focus on our software driven innovation across those products, because our goal is really to deliver a continuously modern storage experience. That's what our customers are asking us for that cloud experience. Let's take the most Val get the most value from data no matter where it lives. That's on premises in the public clouds or at the edge. And that's what we, uh, unveil. That's what we're releasing. And that's what we're excited to talk about. >>Now, Pete, you, Salesforce is a long time Dell customer, but you're also its largest PowerMax customer. The biggest in the world. Tell us a little bit about what you guys are doing with PowerMax and your experience. >>Yeah, so, um, for Salesforce, trust is our number one value and that carries over into the infrastructure that we develop, we test and, and we roll out and Parex has been a key part of that. Um, we really like the, um, the technology in terms of availability, reliability, um, performance. And it, it has allowed us to, you know, continue to grow our customers, uh, continue needs for more and more data. >>So what was kind of eye popping to me was the emphasis on security. Not that you've not always emphasized security, but maybe Shannon, you could do a rundown of, yeah. Maybe not all the features, but give us the high level. And at Pete, I, I wonder how I, if you could comment on how, how you think about that as a practitioner, but please give us that. >>Sure. Yeah. So, you know, PowerMax has been leading for, uh, a long time in its space and we're continuing to lean into that and continue to lead in that space. And we're proud to say PowerMax is the world's most secure mission, critical storage platform. And the reason we can say that is because it really is designed for comprehensive cyber resiliency. It's designed with a zero trust security architecture. And in this particular release, there's 19 different security features really embedded in there. So I'm not gonna unpack all 19, but a couple, um, examples, right? So multifactor authentication also continuous ransomware anomaly detection, a leveraging cloud IQ, which is, uh, huge. Um, and last but not least, um, we have the industry's most granular cyber recovery at scale PowerMax can do up to 65 million imutable snapshots per array. 
So just, uh, and that's 30 times more than our next nearest competitor. So, you know, really when you're talking about recovery point objectives, power max can't be beat. >>So what does that mean to you, Pete? >>Uh, well, it's it's same thing that I was mentioning earlier about that's a trust factor. Uh, security is a big, a big part of that. You know, Salesforce invests heavily into the securing our customer data because it really is the, the core foundation of our success and our customers trust us with their data. And if we, if we were to fail at that, you know, we would lose that trust. And that's simply not, it's not an option. >>Let's talk about that trust for a minute. We know we've heard a lot about trust this week from Michael Dell. Talk to us about trust, your trust, Salesforce's trust and Dell technologies. You've been using them a long time, but cultural alignment yeah. Seems to be pretty spot on. >>I, I would agree. Um, you know, both companies have a customer first mentality, uh, you know, we, we succeed if the customer succeeds and we see that going back and forth in that partnership. So Dell is successful when Salesforce is successful and vice versa. So, um, when we've it's and it goes beyond just the initial, you know, the initial purchase of, of hardware or software, you know, how you operate it, how you manage it, um, how you continue to develop together. You know, our, you know, we work closely with the Dell engineering teams and we've, we've worked closely in development of the new, new PowerMax lines to where it's actually able to help us build our, our business. And, and again, you know, continue to help Dell in the process. So you've >>Got visibility on the new, a lot of these new features you're playing around with them. What I, I, I obviously started with security cuz that's on top of everybody's mind, but what are the things are important to you as a customer? And how do these features the new features kind of map into that? Maybe you could talk about your experience with the, I think you're in beta, maybe with these features. Maybe you could talk about that. >>Yeah. Um, probably the, the biggest thing that we're seeing right now, other than OB the obvious enhancements in hardware, which, which we love, uh, you know, better performance, better scalability, better, and a better density. Um, but also the, some of the software functionality that Dells starting to roll out, you know, we've, we've, we've uh, implemented cloud IQ for all of our PowerMax systems and it's the same thing. We continue to, um, find features that we would like. And we've actually, you know, worked closely with the cloud IQ team. And within a matter of weeks or months, those features are popping up in cloud IQ that we can then continue to, to develop and, and use. >>Yeah. I think trust goes both ways in our partnership, right? So, you know, Salesforce can trust Dell to deliver the, you know, the products they need to deliver their business outcomes, but we also have a relationship to where we can trust that Salesforce is gonna really help us develop the next generation product that's gonna, you know, really deliver the most value. Yeah. >>Can you share some business outcomes that you've achieved so far leveraging power max and how it's really enabled, maybe it's your organization's productivity perspective, but what are some of those outcomes that you've achieved so far? 
>> There are so many to choose from, but I would say probably the biggest thing that we've seen is that as we roll out new infrastructure, we have various generations that we deploy. When we went to the new PowerMax, initially we were concerned about whether our storage infrastructure could keep up with the new compute systems that we were rolling out. When we went through and began testing it, we came to realize that the performance improvements alone that we were seeing were able to keep up with the compute demand, making that transition from the older VMAX platforms to the PowerMax practically seamless, and we were able to just deploy the new SKUs as they came out. >> Talk about the portfolio that you apply to PowerMax. I mean, it's the highest of the high end, mission critical, the toughest workloads on the planet. Salesforce has made a lot of acquisitions. Do you throw everything at PowerMax? Are you selective? What's your strategy there? >> It's selective. In other words, there's no square peg that meets every need. Acquisitions take some time to ingest; some run in the cloud, some run in first party. So we try to take a very intentional approach to where we deploy that technology. >> So 10 years ago, someone in your position, or maybe someone who works for you, probably spent a lot of time managing LUNs and tuning performance. How has that changed? >> We don't do that. <laugh> >> So what do you do, right? Talk more, double-click on that. Talk about how that transition occurred from really non-productive activities, managing storage boxes, to where you are today. What are you doing with those resources? >> It all comes out of automation. Hardware is hardware to a point, but you reach a point where the manageability scale just goes exponential, and we're way, well past that. The only way we've been able to meet that need is to automate and really develop our operations, to be able to manage not just at a LUN level or even at the system level, but at the data center level, at the geographical location level, and then be able to manage from there. >> Okay, really stupid question, but I'm gonna ask it because I wanna hear your answer. Why can't you just take a software-defined storage platform and run everything on that? Why do you need all these different platforms, and why do you gotta spend all this money on PowerMax? Why can't you just do that? >> That's the million-dollar question. I ask that all the time. <laugh> I think software-defined is on its way. It's come a long way just in the last decade, but in terms of supporting what I consider mission-critical, large-scale applications, it's just simply not on par yet with what we do with PowerMax, for example. >> And that's exactly how we position it in our portfolio, right? PowerMax runs in 95% of the Fortune 100 companies, the top 20 healthcare companies, and the top 10 financial services companies in the world. So it's really mission-critical, high end, and it has all of the enterprise-level features and capabilities to really deliver that availability that's so important to a lot of companies like Salesforce. And Pete's right, software-defined is on its way, and it provides a lot of agility there.
But at the end of the day, for mission-critical storage it's all about PowerMax. >> I wonder if we're ever gonna get there. It was an interesting answer, because I inferred from it that you're hopeful and even optimistic that someday we'll get to parity. But I wonder, because you can't be just close enough; it almost has to be there. >> I think the key answer to that is that software-defined gets you halfway there. The other side of the coin is that the application ecosystem has to change to be able to solve the other side of it. Because if you simply take an application that runs on a PowerMax and try to just forklift it over to software-defined, you're not gonna have very much luck. >> Recovery has to be moved up the stack. >> Operations, recovery, the whole works. >> Shannon, can you comment on customers like Salesforce? What's your process for involving them in testing, in the roadmap, and in the strategic direction that you guys are going? >> Great question. Sure, yeah. Customer feedback is huge. You've heard it, I'm sure this is not new, right? Product development and engineering, we love to hear from our customers, and there are multiple ways. You heard about beta testing, and we're really fortunate that Salesforce can help us provide that feedback for our new releases. But we have user groups, we have forums, we hear directly from our sales teams. Our customers aren't shy, they're willing to give us their feedback. And at the end of the day, we take that feedback and make sure that we're prioritizing the right things in our product management and engineering teams, so that we're delivering the things that matter most first. >> We've heard a lot of that this week, so I would agree. Guys, thank you so much for joining Dave and me, talking about what Salesforce is doing with PowerMax and all the stuff that you announced yesterday alone. Hopefully you get to go home and get a little bit of rest. >> Yes. >> I'm sure there's never a dull moment. >> Never. >> Can't wait, guys. Great to have you. >> Thank you. >> For our guests and Dave Vellante, I'm Lisa Martin, and you're watching theCUBE. We are live, day three of our coverage of Dell Technologies World 2022. Dave and I will be right back with our final guest of the show.
SUMMARY :
Shannon Champion of Dell Technologies and Pete Robinson of Salesforce join Lisa Martin and Dave Vellante at Dell Technologies World 2022 to discuss the software-driven storage announcements across PowerStore, PowerMax and PowerFlex. They cover PowerMax's security and cyber-recovery capabilities, including up to 65 million immutable snapshots per array, Salesforce's experience as the world's largest PowerMax customer, the role of automation and CloudIQ in managing storage at data center scale rather than at the LUN level, and how customer feedback through beta programs, user groups and forums shapes Dell's roadmap.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa Martin | PERSON | 0.99+ |
Shannon | PERSON | 0.99+ |
Pete Robinson | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Dave Vale | PERSON | 0.99+ |
30 times | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Jenny | PERSON | 0.99+ |
95% | QUANTITY | 0.99+ |
Michael Dell | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Pete | PERSON | 0.99+ |
Salesforce | ORGANIZATION | 0.99+ |
One | QUANTITY | 0.99+ |
yesterday | DATE | 0.99+ |
million dollar | QUANTITY | 0.99+ |
19 different security features | QUANTITY | 0.99+ |
Shannon Champion | PERSON | 0.99+ |
both companies | QUANTITY | 0.98+ |
this week | DATE | 0.98+ |
third day | QUANTITY | 0.97+ |
10 years ago | DATE | 0.97+ |
PowerMax | ORGANIZATION | 0.97+ |
2019 | DATE | 0.97+ |
Parex | ORGANIZATION | 0.96+ |
Dell Technologies | ORGANIZATION | 0.96+ |
zero trust | QUANTITY | 0.94+ |
PMAX | ORGANIZATION | 0.94+ |
Dells | ORGANIZATION | 0.94+ |
first | QUANTITY | 0.93+ |
20 healthcare companies | QUANTITY | 0.93+ |
2022 | DATE | 0.92+ |
power flex | ORGANIZATION | 0.91+ |
10 financial services companies | QUANTITY | 0.91+ |
today | DATE | 0.91+ |
both ways | QUANTITY | 0.91+ |
last decade | DATE | 0.89+ |
VMAX | ORGANIZATION | 0.87+ |
100 companies | QUANTITY | 0.87+ |
Dell technologies | ORGANIZATION | 0.86+ |
day three | QUANTITY | 0.85+ |
first mentality | QUANTITY | 0.85+ |
up to 65 million imutable snapshots | QUANTITY | 0.84+ |
first party | QUANTITY | 0.82+ |
PowerMax | COMMERCIAL_ITEM | 0.81+ |
19 | QUANTITY | 0.77+ |
first live in | QUANTITY | 0.73+ |
double | QUANTITY | 0.68+ |
days | DATE | 0.65+ |
Dell Tech World 2022 | EVENT | 0.62+ |
world 2022 | EVENT | 0.59+ |
Pete | ORGANIZATION | 0.59+ |
Ranga Rajagopalan, Commvault & Stephen Orban, AWS | Commvault Connections 2021
>> We're here with theCUBE covering Commvault Connections 21. We're gonna look at the data protection space and how cloud computing has advanced the way we think about backup, recovery and protecting our most critical data. Ranga Rajagopalan, who is the vice president of products at Commvault, and Stephen Orban, who's the general manager of AWS Marketplace and Control Services. Gents, welcome to theCUBE. Good to see you. >> Thank you, always a pleasure to see you, Dave. >> Dave, thanks for having us. >> You're very welcome. Stephen, let's start with you. Look, the cloud has become a staple of digital infrastructure. I don't know where we'd be right now without being able to access enterprise services, IT services, remotely. But specifically, how are customers looking at backup and recovery in the cloud? Is it a kind of replacement for existing strategies? Is it another layer of protection? How are they thinking about that? >> Yeah, great question, Dave, and again, thanks for having me. And I think, you know, look, if you look back to 15 years ago, when the founders of AWS had the hypothesis that many enterprises, governments and developers were gonna want access to on-demand, pay-as-you-go IT resources in the cloud, none of us would have been able to predict that it would have matured and, you know, become the staple that it has today over the last 15 years. But the reality is that a lot of these enterprise customers, many of whom have been doing their own IT infrastructure for the last 10, 20 or multiple decades, do have to kind of figure out how they deal with the change management of moving to the cloud. And while a lot of our customers will initially come to us because they're looking to save money or costs, almost all of them decide to stay and go big because of the speed at which they are able to innovate on behalf of their customers, and when it comes to storage and backup, that just plays right into where they're headed. And there's a variety of different techniques that customers use, whether it be, you know, a lift and shift for a particular set of applications, or a data center, where they do very much look at how they can replace the backup and recovery that they have on premises in the cloud, using solutions like what we're partnering with Commvault to do, or completely reimagining their architecture for net new developments so that they can really move quickly for their customers, and completely developing something brand new, where it is really a brand new replacement and innovation for what they've done in the past. >> Great, thank you, Stephen. Ranga, I want to ask you about the D word, digital. Look, if you're not a digital business today, you're basically out of business. So my question to you is, how have you seen customers change the way they think about data protection during what I call the forced march to digital over the last 18, 19 months? Are customers thinking about data protection differently today? >> Definitely, Dave, and thank you for having me, and Stephen, pleasure to join you on this CUBE interview. First, going back to Stephen's comments, can't agree more. Almost every business that we talk with today has a cloud-first strategy, a cloud transformation mandate, and you know, the reality is, back to your digital comment, there are many different paths to the hybrid multi-cloud, and different customers, you know, are at different parts of the journey.
So as Stephen was saying, most often customers, at least from a data protection perspective, start the conversation by thinking, hey, I have all these tapes, can I start using cloud as my air-gap, long-term retention target? And before they realize it, they start moving their workloads into the cloud, and none of the backup and recovery needs are going to change. So you need to continue protecting the cloud, which is where cloud-native data protection comes in. And then they start innovating around DR: can I use cloud as my DR site so that, you know, I don't need to maintain another site? So this is all around us, cloud transformation is all around us, and the real essence of this partnership between AWS and Commvault is essentially to drive and simplify all the paths to the cloud, regardless of whether you're going to use it as a storage target or, you know, your production data center or your DR, disaster recovery, site. >> Yeah, it really is about providing that optionality for customers. I talk to a lot of customers who said, hey, our business resilience strategy was really too focused on DR. I've talked to customers at the other end of the spectrum who said, we didn't even have a DR strategy; now we're using the cloud for that. So it's really all over the map, and you want that optionality. So Stephen... go ahead, please. >> Sorry, ransomware plays a big role in many of these considerations as well. It's unfortunately not a question of whether you're going to be hit by ransomware, it's almost become, what do you do when you're hit by ransomware? And the ability to use the cloud scale to immediately bring up the resources and use the cloud backups has become a very popular choice, simply because of the speed with which you can bring the business back to normal operations, the agility and the power that cloud brings to the table. >> Yeah, ransomware is scary. You don't even need a high school diploma to be a ransomwareist; you can just go on the dark web and buy ransomware as a service and do bad things, and hopefully you'll end up in jail. Stephen, we know about the success of the AWS Marketplace, and you guys are partnering here. I'm interested in how that partnership, you know, kind of where it started and how it's evolving. >> Yeah, happy to highlight on that. So look, when we started AWS, or when the founders of AWS started AWS, as I said, 15 years ago, we realized very early on that while we were going to be able to provide a number of tools for customers to have on-demand access to compute, storage, networking, databases, many, particularly enterprise and government customers, still use a wide range of tools and solutions from hundreds, if not in some cases thousands, of different partners. I mean, I talk to enterprises who literally use thousands of different vendors to help them deliver their solutions for their customers. So almost 10 years ago, and we're almost at our 10-year anniversary for AWS Marketplace, we launched the first instantiation of AWS Marketplace, which allowed builders and customers to find, try, buy and then deploy third-party software solutions running on Amazon Machine Instances, also known as AMIs, natively, right in their AWS cloud accounts, to complement what they were doing in the cloud.
And over the last nearly 10 years we've evolved quite a bit, to the point where we support software in multiple different packaging types, whether it be Amazon Machine Instances, containers, machine learning models and, of course, SaaS and the rise of software as a service, so customers don't have to manage the software themselves. But we also support data products through the AWS Data Exchange, and professional services for customers who want help integrating the software into their environments. And we now do that across a wide range of procurement options. So what used to be pay-as-you-go Amazon Machine Instances now includes multiple different ways to contract directly; a customer can do that directly with the vendor, with their channel partner, or using kind of our public e-commerce capabilities. And we're super excited: over the last couple of months we've been partnering with Commvault to get their industry-leading backup and recovery solutions listed on AWS Marketplace, which is available for our collective customers now. So not only do they have access to Commvault's awesome solutions to help them protect against ransomware, as we talked about, and to manage their backup and recovery environments, but they can find and deploy that directly in one click, right into their AWS accounts, and consolidate their billing relationship right on the AWS invoice. And it's been awesome to work with Ranga and the product teams at Commvault to really expose those capabilities, where Commvault is using a lot of different AWS services to provide a really great native experience for our collective customers as they migrate to the cloud. >> Yeah, the Marketplace has been amazing. We've watched it evolve over the past decade, and it's a key characteristic of cloud. Everybody has a cloud today, we're a cloud too, but Marketplace is unique in that it's the power of the ecosystem versus the resources of one. And Ranga, I wonder, from your perspective, if you could talk about the partnership with AWS from your view, and then specifically, you've got some hard news. I wonder if you could talk about that as well. >> Absolutely. So the partnership has been extending for more than 12 years, right? So AWS and Commvault have been bringing together solutions that help customers solve their data management challenges, and everything that we've been doing has been driven by the customer demand that we see, right? Customers are moving their workloads into the cloud. They're finding new ways of deploying their workloads and protecting them. You know, earlier we introduced cloud-native integration with the EBS APIs, which has driven almost 70% performance improvements in backups and restores. And when you look at huge customers like Coca-Cola, who have standardized on AWS and Commvault, that is the scale that they want to operate at. They manage around 150,000 snapshots and 1,200 EC2 instances across six regions, but with just one resource dedicated to the data management strategy, right? So that's where the real built-in integration comes into play, and we've been extending it to make use of the cloud efficiencies like power management and auto-scale and so on. Another aspect is our commitment to a radically simple customer experience, and that's, you know, I'm sure Stephen would agree, a big mantra at AWS as well. That's really, together with the customer demand, what brought us together to introduce Commvault into the AWS Marketplace, exactly the way Stephen described it.
Now the hot announcement is that Commvault Backup and Recovery is available in AWS Marketplace. So the exact four steps that Stephen mentioned, find, try, buy and deploy, everything is simplified through the Marketplace, so that our AWS customers can start using Commvault backup software in less than 20 minutes. A 60-day trial version is included in the product through Marketplace, and you know, it's a single-click buy; we use CloudFormation templates to deploy. So it becomes a super simple approach to protect AWS workloads, and we protect a lot of them, starting from EC2, RDS, DynamoDB, DocumentDB, you know, containers, the list just keeps going on. So it becomes a very natural extension for our customers, making it super simple to start using Commvault data protection for their AWS workloads. >> Well, the Commvault stack is very robust; you have an extremely mature stack. I'm curious as to how this sort of came about. It had to be customer driven, I'm sure, with your customers saying, hey, we're moving to the cloud, we have a lot of workloads in the cloud, we're a Commvault customer. That intersection between Commvault and AWS customers. So again, I presume this was customer driven, but maybe you can give us a little insight and add some color to that. >> Everything in this collaboration has been customer driven. We were earlier talking about the multiple paths to cloud, and a very good example, and Stephen might probably add more color from his own experience at Dow Jones, but I'll bring in a reference, Parsons, who's a civil engineering leader. They started with the cloud-first mandate saying, we need to start moving all our backups to the cloud, but we were worried that bad actors might find it easy to go and access the backups. AWS came together with the security features and Commvault brought in its own authorization controls, and now we have moved more than 14 petabytes of backup data into the cloud, and it's so robust that not even the backup administrators can go and touch the backups without multiple levels of authorization, right? So the customer needs, whether it is from a security perspective, a performance perspective or, in this case, a simplicity perspective, are really what is driving this. And the need came exactly like that. There are many customers who have now standardized on AWS because they want to find everything through the AWS Marketplace. They want to use their existing AWS contracts and also bring their data strategy as part of that. So that's the real driver behind this. Stephen and I were hoping that we could actually announce some of the customers that have actively started using it. You know, many notable customers have been behind this innovation. And Stephen, I don't know if you wanted to add more to that. >> I would just add, Dave, you know, look, if I look back before I joined AWS seven years ago, I was the CIO at Dow Jones, and I was leading a fairly big cloud migration there over a number of years. One of the impetuses for us moving to the cloud in the first place was when Hurricane Sandy hit; we had a real disaster recovery scenario in one of our New Jersey data centers, and we had to act pretty quickly. Commvault was part of that solution.
And I remember very clearly, even back then, back in 2013, there being options available to help us accelerate our move to the cloud. And just to reiterate some of the stuff that Ranga was talking about, Commvault has done a great job over the last more than a decade taking features from things like EBS, and S3, and EC2, and some of our networking capabilities, and embedding them directly into their services so that customers are able to more quickly move their backup and recovery workloads to the cloud. So each and every one of those features was a result of, I'm sure, Commvault working backwards from their customer needs, just as we do at AWS. And we're super excited to take that to the next level, to give customers the option to then also buy that right on their AWS invoice on AWS Marketplace. >> Yeah, I mean, we're gonna have to leave it there. Stephen, you've mentioned several times the sort of early days of AWS. Back then we were talking about gigabytes and terabytes, and now we're talking about petabytes and beyond. Guys, thanks so much. I really appreciate your time and sharing the news with us. >> Dave, thanks for having us. >> All right, keep it right there, more from Commvault Connections 21. You're watching theCUBE.
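The Marketplace deployment Ranga describes above, a listing launched through CloudFormation templates, can also be scripted rather than clicked through. The sketch below is only an illustration: the template URL, stack name and parameter names are hypothetical placeholders, not the actual Commvault listing artifacts, and in practice the one-click flow in the AWS Marketplace console wires these up for you.

```python
# Hypothetical sketch: launching a Marketplace-delivered CloudFormation template with boto3.
# The TemplateURL and parameters below are placeholders, not real Commvault listing values.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

response = cfn.create_stack(
    StackName="commvault-backup-trial",  # hypothetical stack name
    TemplateURL="https://example-bucket.s3.amazonaws.com/commvault-template.yaml",
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "m5.xlarge"},   # assumed parameter
        {"ParameterKey": "VpcId", "ParameterValue": "vpc-0123456789abcdef0"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # templates that create IAM roles require this acknowledgment
)
print("Stack creation started:", response["StackId"])

# Block until the stack finishes so the product is actually usable before the script returns.
waiter = cfn.get_waiter("stack_create_complete")
waiter.wait(StackName="commvault-backup-trial")
```

Scripting it this way is mainly useful if the same deployment needs to be repeatable from a pipeline; for a single trial, the console's one-click launch described in the conversation is the simpler path.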
SUMMARY :
Ranga Rajagopalan of Commvault and Stephen Orban of AWS join Dave Vellante for Commvault Connections 2021 to discuss how cloud computing has changed data protection. They cover cloud-native backup and recovery, air-gapped retention and DR in the cloud, ransomware recovery at cloud scale, the more than 12-year AWS-Commvault partnership with customers such as Coca-Cola and Parsons, and the news that Commvault Backup and Recovery is now available in AWS Marketplace, where customers can find, try, buy and deploy it in less than 20 minutes.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stephen | PERSON | 0.99+ |
Ranga Rajagopalan | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
David | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
Dave | PERSON | 0.99+ |
Stephen Rachael | PERSON | 0.99+ |
Stephen Orban | PERSON | 0.99+ |
New Jersey | LOCATION | 0.99+ |
Con vault | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Rhonda | PERSON | 0.99+ |
stevens | PERSON | 0.99+ |
aws | ORGANIZATION | 0.99+ |
steven | PERSON | 0.99+ |
60 year | QUANTITY | 0.99+ |
less than 20 minutes | QUANTITY | 0.99+ |
more than 12 years | QUANTITY | 0.99+ |
six regions | QUANTITY | 0.99+ |
Commonwealth | ORGANIZATION | 0.99+ |
two instances | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
15 years ago | DATE | 0.99+ |
more than 14 petabytes | QUANTITY | 0.99+ |
amazon | ORGANIZATION | 0.99+ |
one resource | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
15 years ago | DATE | 0.98+ |
first strategy | QUANTITY | 0.98+ |
this year | DATE | 0.97+ |
today | DATE | 0.97+ |
steve | PERSON | 0.97+ |
Hurricane Sandy | EVENT | 0.96+ |
EC two | TITLE | 0.96+ |
March | DATE | 0.96+ |
10 year anniversary | QUANTITY | 0.95+ |
almost 70% | QUANTITY | 0.95+ |
seven years ago | DATE | 0.95+ |
around 1 50,000 snapshots | QUANTITY | 0.95+ |
coca cola | ORGANIZATION | 0.95+ |
yesterday | DATE | 0.94+ |
2021 | DATE | 0.94+ |
first mandate | QUANTITY | 0.94+ |
four steps | QUANTITY | 0.94+ |
each | QUANTITY | 0.93+ |
1200 ec | QUANTITY | 0.93+ |
first place | QUANTITY | 0.92+ |
S three | TITLE | 0.92+ |
calm vault | ORGANIZATION | 0.9+ |
Commvault | ORGANIZATION | 0.89+ |
single click | QUANTITY | 0.87+ |
first substantiation | QUANTITY | 0.86+ |
EBS | ORGANIZATION | 0.85+ |
10 years ago | DATE | 0.84+ |
last 15 years | DATE | 0.84+ |
Ranga Rajagopalan & Stephen Orban
(Techno music plays in intro) >> We're here with theCUBE covering Commvault Connections 21. And we're going to look at the data protection space and how cloud computing has advanced the way we think about backup, recovery and protecting our most critical data. Ranga Rajagopalan, who is the Vice President of products at Commvault, and Stephen Orban, who's the General Manager of AWS Marketplace & Control Services. Gents! Welcome to theCUBE. Good to see you. >> Thank you, always a pleasure to see you, Dave. >> Dave, thanks for having us. Great to be here. >> You're very welcome. Stephen, let's start with you. Look, the cloud has become a staple of digital infrastructure. I don't know where we'd be right now without being able to access enterprise services, IT services, remotely. Um, but specifically, how are customers looking at backup and recovery in the cloud? Is it a kind of a replacement for existing strategies? Is it another layer of protection? How are they thinking about that? >> Yeah. Great question, Dave. And again, thanks for having me. And I think, you know, look. If you look back to 15 years ago, when the founders of AWS had the hypothesis that many enterprises, governments, and developers were going to want access to on-demand, pay-as-you-go IT resources in the cloud, none of us would have been able to predict that it would have matured and, um, you know, become the staple that it has today over the last 15 years. But the reality is that a lot of these enterprise customers, many of whom have been doing their own IT infrastructure for the last 10, 20 or multiple decades, do have to kind of figure out how they deal with the change management of moving to the cloud. And while a lot of our customers will initially come to us because they're looking to save money or costs, almost all of them decide to stay and go big because of the speed at which they're able to innovate on behalf of their customers. And when it comes to storage and backup, that just plays right into where they're headed, and there's a variety of different techniques that customers use. Whether it be, you know, a lift and shift for a particular set of applications, or a data center, where they do very much look at how they can replace the backup and recovery that they have on premises in the cloud, using solutions like what we're partnering with Commvault to do. Or completely re-imagining their architecture for net new developments so that they can really move quickly for their customers, and completely developing something brand new, where it is really a, um, you know, a brand new replacement and innovation for what they've done in the past. >> Great. Thank you, Stephen. Ranga, I want to ask you about the D word, digital. Look, if you're not a digital business today, you're basically out of business. So my question to you, Ranga, is how have you seen customers change the way they think about data protection during what I call the forced march to digital over the last 18, 19 months? Are customers thinking about data protection differently today? >> Definitely Dave, and thank you for having me, and Stephen, pleasure to join you on this CUBE interview. First, going back to Stephen's comments, can't agree more. Almost every business that we talk with today has a cloud-first strategy, a cloud transformation mandate. And, you know, the reality is, back to your digital comment, there are many different paths to the hybrid multi-cloud, and different customers,
you know, are at different parts of the journey. So as Stephen was saying, most often customers, at least from a data protection perspective, start the conversation by thinking, hey, I have all these tapes, can I start using cloud as my air gap, long-term retention target? And before they realize it, they start moving their workloads into the cloud, and none of the backup and recovery needs are going to change. So you need to continue protecting the cloud, which is where cloud-native data protection comes in. And then they start innovating around DR. Can I use cloud as my DR site so that, you know, I don't need to maintain another site? So this is all around us, cloud transformation is all around us. And, and the real essence of this partnership between AWS and Commvault is essentially to drive and simplify all the paths to the cloud, regardless of whether you're going to use it as a storage target or, you know, your production data center or your DR, Disaster Recovery, site. >> Yeah. So really, it's about providing that optionality for customers. I talked to a lot of customers who said, hey, our business resilience strategy was really too focused on DR. I've talked to customers at the other end of the spectrum who said, we didn't even have a DR strategy. Now we're using the cloud for that. So it's really all over the map and you want that optionality. So Stephen, >> (Ranga cuts in) >> Go ahead, please. >> And sorry. Ransomware plays a big role in many of these considerations as well, right? Like, it's unfortunately not a question of whether you're going to be hit by ransomware. It's almost become like, what do you do when you're hit by ransomware? And the ability to use the cloud scale to immediately bring up the resources and use the cloud backups has become a very popular choice, simply because of the speed with which you can bring the business back to normal operations. The agility and the power that cloud brings to the table. >> Yeah. Ransomware is scary. You don't even need a high school diploma to be a ransomware-ist. You could just go on the dark web and buy ransomware as a service and do bad things. And hopefully you'll end up in jail. Stephen, we know about the success of the AWS Marketplace. You guys are partnering here. I'm interested in how that partnership, you know, kind of where it started and how it's evolving. >> Yeah. And happy to highlight on that. So look, when we started AWS, or when the founders of AWS started AWS, as I said, 15 years ago, we realized very early on that while we were going to be able to provide a number of tools for customers to have on-demand access to compute, storage, networking, databases, that many, particularly enterprise and government customers, still use a wide range of tools and solutions from hundreds, if not in some cases thousands, of different partners. I mean, I talked to enterprises who literally used thousands of different vendors to help them deliver those solutions for their customers. So almost 10 years ago, we're almost at our 10-year anniversary for AWS Marketplace, we launched the first instantiation of AWS Marketplace, which allowed builders and customers to find, try, buy, and then deploy third-party software solutions running on Amazon Machine Instances, also known as AMIs, natively, right in their AWS cloud accounts, to complement what they were doing in the cloud. And over the last nearly 10 years, we've evolved quite a bit.
To the point where we support software in multiple different packaging types. Whether it be Amazon Machine Instances, containers, machine learning models, and of course, SaaS and the rise of software as a service, so customers don't have to manage the software themselves. But we also support data products through the AWS Data Exchange, and professional services for customers who want to get services to help them integrate the software into their environments. And we now do that across a wide range of procurement options. So what used to be pay-as-you-go Amazon Machine Instances now includes multiple different ways to contract directly. The customer can do that directly with the vendor, with their channel partner, or using kind of our public e-commerce capabilities. And we're super excited, um, over the last couple of months, we've been partnering with Commvault to get their industry-leading backup and recovery solutions listed on AWS Marketplace, which is available for our collective customers now. So not only do they have access to Commvault's awesome solutions to help them protect against ransomware, as we talked about, and to manage their backup and recovery environments, but they can find and deploy that directly in one click, right into their AWS accounts, and consolidate their billing relationship right on the AWS invoice. And it's been awesome to work with Ranga and the product teams at Commvault to really expose those capabilities, where Commvault's using a lot of different AWS services to provide a really great native experience for our collective customers as they migrate to the cloud. >> Yeah. The Marketplace has been amazing. We've watched it evolve over the past decade, and it's just, it's a key characteristic of cloud. Everybody has a cloud today, right? Ah, we're a cloud too, but Marketplace is unique in that it's the power of the ecosystem versus the resources of one. And Ranga, I wonder, from your perspective, if you could talk about the partnership with AWS from your view, and specifically, you've got some hard news. I wonder if you could talk about that as well. >> Absolutely. So the partnership has been extending for more than 12 years, right? So AWS and Commvault have been bringing together solutions that help customers solve their data management challenges, and everything that we've been doing has been driven by the customer demand that we see, right. Customers are moving their workloads to the cloud. They are finding new ways of deploying the workloads and protecting them. You know, earlier we introduced cloud-native integration with the EBS APIs, which has driven almost 70% performance improvements in backup and restore. When you look at huge customers like Coca-Cola, who have standardized on AWS and Commvault, that is the scale that they want to operate at. They manage around 150,000 snapshots and 1,200 EC2 instances across six regions, but with just one resource dedicated for the data management strategy, right? So that's where the real built-in integration comes into play. And we've been extending it to make use of the cloud efficiencies like power management and auto-scale, and so on. Another aspect is our commitment to a radically simple customer experience. And that's, you know, I'm sure Stephen would agree, a big mantra at AWS as well. That's really, together with the customer demand, what brought us together to introduce Commvault into the AWS Marketplace, exactly the way Stephen described it.
Now the hot announcement is Commvault Backup and Recovery is available in AWS Marketplace. So the exact four steps that Stephen mentioned: find, try, buy, and deploy, everything simplified through the Marketplace so that our AWS customers can start using Commvault backup software in less than 20 minutes. A 60-day trial version is included in the product through Marketplace. And, you know, it's a single-click buy. We use CloudFormation templates to deploy. So it becomes a super simple approach to protect the AWS workloads. And we protect a lot of them, starting from EC2, RDS, DynamoDB, DocumentDB, you know, the containers, the list just keeps going on. So it becomes a very natural extension for our customers, to make it super simple to start using Commvault data protection for the AWS workloads. >> Well, the Commvault stack is very robust. You have an extremely mature stack. I'm curious as to how this sort of came about. I mean, it had to be customer driven, I'm sure, with your customers saying, hey, we're moving to the cloud, we have a lot of workloads in the cloud, we're a Commvault customer. That intersection between Commvault and AWS customers. So again, I presume this was customer driven, but maybe you can give us a little insight and add some color to that, Ranga. >> Everything, you know, in this collaboration has been customer driven. We were earlier talking about the multiple paths to cloud, and a very good example, and Stephen might probably add more color from his own experience at Dow Jones, but I'll bring in a reference, Parsons, who's, you know, a civil engineering leader. They started with the cloud-first mandate saying, we need to start moving all our backups to the cloud, but we were worried that bad actors might find it easy to go and access the backups. AWS and Commvault came together with AWS security features, and Commvault brought in its own authorization controls. And now we have moved more than 14 petabytes of backup data into the cloud, and it's so robust that not even the backup administrators can go and touch the backups without multiple levels of authorization, right? So the customer needs, whether it is from a security perspective, a performance perspective, or in this case a simplicity perspective, are really what is driving us. And, and the need came exactly like that. There are many customers who have now standardized on AWS, and they want to find everything through the AWS Marketplace. They want to use their existing, you know, AWS contracts and also bring data strategy as part of that. So that, that's the real driver behind this. Stephen and I were hoping that we could actually announce some of the customers that have actively started using it. You know, many notable customers have been behind this innovation. And Stephen, I don't know if you wanted to add more to that. >> I would just add, Dave, you know, like if I look back before I joined AWS seven years ago, I was the CIO at Dow Jones. And I was leading a fairly big cloud migration there over a number of years. And one of the impetuses for us moving to the cloud in the first place was when Hurricane Sandy hit, we had a real disaster recovery scenario in one of our New Jersey data centers. And we had to act pretty quickly. Commvault was part of that solution. And I remember very clearly, even back then, back in 2013, there being options available to help us accelerate our move to the cloud.
And, and just to reiterate some of the stuff that Ranga was talking about, you know, Commvault's done a great job over the last more than a decade taking features from things like EBS, and S3, and EC2, and some of our networking capabilities, and embedding them directly into their services so that customers are able to, you know, more quickly move their backup and recovery workloads to the cloud. So each and every one of those features is a result of, I'm sure, Commvault working backwards from their customer needs, just as we do at AWS. And we're super excited to take that to the next level, to give customers the option to then also buy that right on their AWS invoice on AWS Marketplace. >> Yeah. I mean, we're going to have to leave it there. Stephen, you've mentioned several times the sort of early days of AWS. Back then we were talking about gigabytes and terabytes, and now we're talking about petabytes and beyond. Guys, thanks so much. We really appreciate your time and sharing the news with us. >> Dave, thanks for having us. >> All right, keep it right there, more from Commvault Connections 21, you're watching theCUBE.
SUMMARY :
Ranga Rajagopalan of Commvault and Stephen Orban of AWS discuss data protection in the cloud with Dave Vellante: how customers use the cloud as an air-gapped retention target, a DR site and a ransomware recovery platform; the long-standing AWS-Commvault engineering partnership, with customer examples such as Coca-Cola, Parsons and Dow Jones; and the availability of Commvault Backup and Recovery in AWS Marketplace, with one-click, CloudFormation-based deployment and consolidated billing on the AWS invoice.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Stephen | PERSON | 0.99+ |
Ranga Rajagopalan | PERSON | 0.99+ |
Stephen Orban | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Ranga | PERSON | 0.99+ |
2013 | DATE | 0.99+ |
Commvault | ORGANIZATION | 0.99+ |
Dow Jones | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
New Jersey | LOCATION | 0.99+ |
3,000 snapshots | QUANTITY | 0.99+ |
60 day | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
more than 14 petabytes | QUANTITY | 0.99+ |
more than 12 years | QUANTITY | 0.99+ |
less than 20 minutes | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Coca-Cola | ORGANIZATION | 0.99+ |
seven years ago | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
six regions | QUANTITY | 0.98+ |
1200 easy | QUANTITY | 0.98+ |
Hurricane Sandy | EVENT | 0.98+ |
EBS | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
15 years ago | DATE | 0.97+ |
EC2 | TITLE | 0.97+ |
two instances | QUANTITY | 0.97+ |
AWS Marketplace & Control Services | ORGANIZATION | 0.96+ |
March | DATE | 0.96+ |
one resource | QUANTITY | 0.96+ |
first mandate | QUANTITY | 0.96+ |
Breaking Analysis: Tech Spending Powers the Roaring 2020s as Cloud Remains a Staple of Growth
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> Last year, in 2020, it was good to be in tech and even better to be in the cloud, as organizations had to rely on remote cloud services to keep things running. We believe that tech spending will increase seven to 8% in 2021. But we don't expect investments in cloud computing to sharply attenuate when workers head back to the office. It's not a zero-sum game, and we believe that pent-up demand in on-prem data centers will complement those areas of high growth that we saw last year, namely cloud, AI, security, data and automation. Hello everyone, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis we'll provide our take on the latest ETR COVID survey, and share why we think the tech boom will continue well into the future. So let's take a look at the state of tech spending. Fitch Ratings has upped its outlook for global GDP to 6.1%, from January's 5.3% projection. We've always expected tech spending to outperform GDP by at least 100 to 200 basis points, so we think 2021 could see 8% growth for the tech sector. That's a massive swing from last year's 5% contraction, and it's being powered by spending in North America, a return of small businesses, and the massive fiscal stimulus injection from the U.S., along with central bank actions. As we'll show you, the ETR survey data suggests that cloud spending is here to stay, and a dollar spent back in the data center doesn't necessarily mean less spending on digital initiatives generally and cloud specifically. Moreover, we see pent-up demand for core on-prem data center infrastructure, especially networking. Now, one caveat is we continue to have concerns for the macro on-prem data storage sector. There are pockets of positivity; for example, Pure Storage seems to have accelerating momentum. But generally the data suggests the cloud and flash headroom continue to pressure spending on storage. Now, we don't expect the stock market's current rotation out of tech to change the fundamental spending dynamic. We see cloud, AI and ML, RPA, cybersecurity and collaboration investments still hovering above that 40% net score line. Actually, cybersecurity is not quite there, but it is a priority area for CIOs; we'll talk about that more later. And we expect that those high-growth sectors will stay steady in ETR's April survey, along with continued spending on application modernization in the form of containers. Now let me take a moment to comment on the recent action in tech stocks. If you've been following the market, you know that the rate on the 10-year Treasury note has been rising. This is important because the 10-year is the benchmark, and it affects other interest rates. As interest rates rise, high-growth tech stocks become less attractive, and that's why there's been a rotation out of the big tech high-flyer names of 2020. So why do high-growth stocks become less attractive to investors when interest rates rise? Well, it's because investors are betting on the future value of cash flows for these companies, and when interest rates go up, the present values of those cash flows shrink, making the valuations less attractive. Let's take an example. Snowflake is a company with a higher revenue multiple than pretty much any other stock out there in the tech industry.
Revenues at the company grew more than 100% last quarter, and they're projected to reach a billion dollars next year. Now, on March 8th, Snowflake was valued at around $80 billion and was trading at roughly 75x forward revenue. Today, toward the middle to the end of March, Snowflake is valued at about $50 billion, or roughly 45x forward revenue. So lower-growth companies that throw off more cash today become more attractive in a rising rate climate, because the cash they throw off today doesn't have to be discounted the way far-off cash flows do. The cash is there today, versus a high-flying tech company where the cash is coming down the road and has to be discounted on a net present value basis. So the point is, this is really about math, not about fundamental changes in spending. Now, the ETR spending data has shown consistent upward momentum, and that cycle is continuing, leading to our sanguine outlook for the sector. This chart here shows the progression of CIO expectations on spending over time, relative to previous years. And you can see the steady growth in expectations each quarter, hitting 6% growth in 2021 versus 2020 for the full year. ETR estimates show, and they do this with a 95% confidence level, that spending is going to be up between 5.1 and 6.8% this year. We are even more optimistic, accounting for recent upward revisions in GDP and spending outside the purview of traditional IT, which we think will be a tailwind due to digital initiatives and shadow tech spending. ETR covers some of that, but it is really a CIO-heavy survey, so there are some parts that we think can grow even faster than the ETR survey suggests. Now, the positive spending outlook is broad based across virtually all industries that ETR tracks. Government spending leads the pack by a wide margin, which probably gives you a little bit of heartburn. I know it does for me, yikes. Healthcare is interesting, perhaps due to pent-up demand; healthcare has been so busy saving lives that it has some holes to fill. But look at the sectors at 5% or above. Only education really lags notably. Even energy, which got crushed last year, is showing a nice rebound. Now let's take a look at some of the strategies that organizations have employed during COVID, and see how they've changed. Look, the picture is actually quite positive in our view. This data shows the responses over five survey snapshots, starting in March of 2020. Most people are still working from home; that really hasn't changed much. But we're finally seeing some loosening of the travel restrictions imposed last year; there's a notable drop in canceled business trips. It's still high, but it's a very promising trend. Quick aside: looks like Mobile World Congress is happening in late June in Barcelona. The host of the conference just held a show in Shanghai and 20,000 attendees showed up. theCUBE is planning to be there in Barcelona along with TelcoDR, who took over Ericsson's 65,000 square foot space when Ericsson tapped out of the conference. Together we're going to lay out the future of the digital telco in a hybrid, physical slash virtual event, with the ecosystem of telcos, cloud, 5G and software communities. We're very excited to be at the heart of reinventing the event experience for the coming decade. Okay, back to the data. Hiring freezes, way down. Look at new IT deployments, near flat from last quarter, with a big uptick from a year ago. Layoffs, trending downward, that's really a positive. Hiring momentum is there.
So really positive signs for tech in this data. Now let's take a look at the work-from-home survey data. We've been sharing this for several quarters now. Remember, the data showed that pre-pandemic, around 15 to 16% of employees worked remotely. And we had been sharing that CIOs expected that figure to slowly decline from the 70% pandemic levels, coming into the spring and the summer hovering in the 50% range, but then eventually landing in the mid 30s. Now the current survey shows 31%, so essentially it's exactly double the pre-COVID levels. It's going to be really interesting to see, because across the board organizations are reporting big increases in productivity as a result of how they've responded to COVID, with the remote work practices and the infrastructure that's been put in place. And look, a lot of workers are expecting to stay remote, so we'll see where this actually lands. My personal feeling is the number is going to be higher than the low 30s, perhaps well into the mid to upper 30s. Now let's take a look at the cloud and on-prem mix. So we're a little bit out on a limb here with a have-your-cake-and-eat-it-too scenario, meaning pent-up demand for on-prem data center infrastructure is going to combine with the productivity benefits of cloud and the digital imperative. So that means that technology budgets are going to get a bigger piece of the overall spending pie relative to other initiatives, at least for the near term. ETR asked respondents about how the return to physical is going to impact on-prem architectures and applications. You can see 63% of the respondents had a cloud-friendly answer, as shown in the first two bars, whereas 30% had an on-prem friendly answer, as shown in the next three bars. Now, what stands out is that only 5% of respondents plan to increase their on-prem spend to above pre-COVID levels. Sarbjeet Johal pinged me last night and asked me to jump into a Clubhouse session with Martin Casado and the other guys from Andreessen Horowitz. They were having this conversation about the coming cloud backlash, and how cloud-native companies are spending so much, too much in their opinion, on AWS and other clouds, and at some point, as they scale, they're going to have to claw back technology infrastructure on-prem, due to their AWS bills. I don't know. This data certainly does not suggest that that is happening today. So the cloud vendors keep getting more volume; you would think they're going to have better prices and better economies of scale than we'll see on-prem. And as we pointed out, the repatriation narrative that you hear from many on-prem vendors is kind of dubious. Look, if AWS, Azure, and Google can't provide IT infrastructure and better security than I can on-prem, then something is amiss. Now, however, they are creating an oligopoly, and if they get too greedy and get hooked on the margin crack of cloud, they'd better be careful, or they're going to become the next regulated utility. So it's going to be interesting to see if the Andreessen scenario has (laughs) legs. Maybe they have another agenda; maybe a lot of their portfolio companies have ideas around doing things to help on-prem. Why are we so optimistic that we'll see stronger 2021 on-prem spend if the cloud continues to command so much attention? Well, first, because nearly 20% of customers say there will be an uptick in on-prem spending.
Second, we saw in 2020 that the big on-prem players, Dell, VMware, Oracle, and SAP in particular, and even IBM, made it through okay, and they've managed to figure out how to work through the crisis. And finally, we think that the lines between on-prem and cloud, and hybrid and cross-cloud and edge, will blur over the next five years. We've talked about this a lot, that abstraction layer that we see coming, and there are some real value opportunities there. It'll take some time, but we do see that the traditional vendors are going to attack those new opportunities and create value across clouds and hybrid systems and out to the edge. Now, as those demarcation lines become more gray, a hybrid world is emerging that is going to require hardware and software investments that reduce latency and are proximate to users, buildings and distributed infrastructure. So we see spending in certain key areas continuing to be strong; across the board it will require connecting on-prem to cloud and edge workloads. Here's where CIOs see the action. Asked to cite the technologies that will get the most attention in the next 12 months, these seven stood out among the rest. No surprise that cyber comes out as top priority, with cloud pretty high as well. But it's interesting to see the uptick in collaboration and networking. Execs are seeing the importance of collaboration technologies for remote workers; no doubt there's lots of Microsoft Teams in that bar. But there's some pent-up demand, it seems, for networking; we find that very interesting. Now, just to put this in a spending context, we'll share a graphic from a previous Breaking Analysis episode. This chart shows the net score, or spending momentum, on the vertical axis, and the market share, or pervasiveness in the ETR data set, on the horizontal axis. The big four areas of spend momentum are cloud, ML and AI, containers and RPA. This is from the January survey; we don't expect a big change in the upcoming April data, we'll see. But these four stand out above the 40% line that we've highlighted, which to us is an indicator of elevated momentum. Now, note on the horizontal axis: cloud is the only sector that enjoys both greater than 60% market share on the x axis and sits above the 40% net score line on the y axis. So even though security is a top priority, as we were talking about earlier, and it's still right there on the horizontal axis, it competes with other budget items and other initiatives for that spend momentum. Okay, so key takeaways. Seven to 8% tech spending growth expected for 2021. Cloud is leading the charge; it's big and it has spending momentum, so we don't expect a big rotation out of cloud back to on-prem. Now, having said that, we think on-prem will benefit from a return to a post-isolation economy, because of that pent-up demand. But we caution, we think there are some headwinds, particularly in the storage sector. Rotation away from tech in the stock market is not based on a fundamental change in spending or demand, in our view; rather, it's stock market valuation math. So there should be some good buying opportunities for you in the coming months, as money moves out of tech into those value stocks. But the market is very hard to predict. 2020 was easy to make money; all you had to do was buy high-growth and momentum tech stocks on dips. 2021, it's not that simple, so you've got to do your homework. And as we always like to stress, formulate a thesis and give it time to work for you.
Iterate and improve when you feel like it's not working for you. But stay current, and be true to your strategy. Okay, that's it for today. Remember, these episodes are all available as podcasts wherever you listen. So please subscribe. I publish weekly in siliconangle.com and wikibond.com and always appreciate the comments on LinkedIn. You can DM me @dvellante or email me at david.vellante@siliconangle.com. Don't forget to check out etr.plus where all the survey data science actually resides. Some really interesting things that they're about to launch. So do follow that. This is Dave vellante. Thanks for watching theCube Insights powered by ETR. Good health to you, be safe and we'll see you next time.
SUMMARY :
in Palo Alto in Boston, how the return to physical,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Justin Warren | PERSON | 0.99+ |
Sanjay Poonen | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Clarke | PERSON | 0.99+ |
David Floyer | PERSON | 0.99+ |
Jeff Frick | PERSON | 0.99+ |
Dave Volante | PERSON | 0.99+ |
George | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Diane Greene | PERSON | 0.99+ |
Michele Paluso | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Sam Lightstone | PERSON | 0.99+ |
Dan Hushon | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Teresa Carlson | PERSON | 0.99+ |
Kevin | PERSON | 0.99+ |
Andy Armstrong | PERSON | 0.99+ |
Michael Dell | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
John | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Lisa Martin | PERSON | 0.99+ |
Kevin Sheehan | PERSON | 0.99+ |
Leandro Nunez | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Alibaba | ORGANIZATION | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
GE | ORGANIZATION | 0.99+ |
NetApp | ORGANIZATION | 0.99+ |
Keith | PERSON | 0.99+ |
Bob Metcalfe | PERSON | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
90% | QUANTITY | 0.99+ |
Sam | PERSON | 0.99+ |
Larry Biagini | PERSON | 0.99+ |
Rebecca Knight | PERSON | 0.99+ |
Brendan | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Peter | PERSON | 0.99+ |
Clarke Patterson | PERSON | 0.99+ |
Breaking Analysis: CIOs Expect 2% Increase in 2021 Spending
from the cube studios in palo alto in boston bringing you data-driven insights from the cube and etr this is breaking analysis with dave vellante cios in the most recent september etr spending survey tell us that they expect a slight sequential improvement in q4 spending relative to q3 but still down four percent from q4 2019 so this picture is still not pretty but it's not bleak either to whit firms are adjusting to the new abnormal and are taking positive actions that can be described as a slow thawing of the deep freeze hello everyone this is dave vellante and welcome to this week's wikibon cube insights powered by etr in this breaking analysis we're going to review fresh survey data from etr and provide our outlook for both q4 of 2020 and into 2021. now we're still holding at our four to five percent decline in tech spending for 2020 but we do see light at the end of the tunnel with some cautions specifically more than a thousand cios and it buyers have we've surveyed expect tech spending to show a slight upward trend of roughly two percent in 2021. this is off of a q4 decline of 4 relative to q4 2019 but i would put it this way a slightly less worse decline sequentially from q3 last quarter we saw a 5 decline in spending okay so generally more of the same but things seem to be improving again with caveats now in particular we'll show data that suggests technology project freezes are slowly coming back and we see remote workers returning at a fairly significant rate however executives expect nearly double the percentage of employees working remotely in the midterm and even long term than they did pre-covert that suggests that the work from home trend is not cyclical but showing signs of permanence and why not cios report that on balance productivity has been maintained or even improved during covit now of course this all has to be framed in the context of the unknowns like the fall and even winter surge what about fiscal policy there's uncertainty in the election social unrest all right so let's dig into some of the specifics of the etr data now i mentioned uh the number of respondents at over a thousand i have to say this was predominantly a us-based survey so it's it's 80 sort of bias to the u.s and but it's also weighted to the big spenders in larger organizations with a nice representation across industries so it's good data here now you can see here the slow progression of improvement relative to q3 which as i said was down five percent year-on-year with the four percent decline expected in q4 now etr is calling for a roughly four percent decline for the year you know i've been consistently in the four to five percent decline range and agree with that outlook and you can see cios are planning for a two percent uptick in 2021 as we said at the open now in our view this represents some prudent caution and i think there's probably some upside but it's a good planning assumption for the market overall in my view now let's look at some of the actions that organizations are taking and how that's changed over time you can see here that organizations they're slowly releasing that grip on tech spending overall you know still no material change in employees working from home or traveling we can see that hiring freezes are down that's that's positive in the green as our new i.t deployment freezes and a slight uptick in acceleration of new deployments now as well you see fewer companies are planning layoffs and while small the percent of companies adding head count has doubled from last 
quarter's you know minimal number all right so this is based on survey data at the end of the summer so it reflects that end of summer sentiment so we got to be a little bit cautious here and i think cios are you know by nature cautious on their projections of two percent up in 2021. now importantly remember this does not get us back to 20 20 19 spending levels so we may be seeing a kind of a long slow climb out of this you know tepid market maybe 2022 gets back over 2019 before we start to see sustained growth again and remember these recoveries are rarely smooth they're not straight lines so you got to expect some choppiness with you know some pockets of opportunity which we'll discuss here in this slide we're showing the top areas that respondents cited as spending priorities for q4 and into 2021 so the chart shows the ratings based on a seven-point scale and these are the top spending initiatives heading into the year end now as we've been saying for the better part of a decade cyber security is a do-over and i've joked you know if it ain't broke don't fix it well coven broke everything and cyber is an area that's seeing long-term change in my opinion endpoint security identity access management cloud security security as a service these are all trends that we're seeing as really major waves as a result of covid now it's coming at the expense of large install bases of things like traditional hardware-based firewalls and we've talked about this a lot in previous segments cloud migration is interesting and i really think it needs some interpretation i mean nobody likes to do migrations so i would suggest this includes things like i have a bunch of people answering phones and offices or i had and then overnight boom the offices are closed so i needed a cloud-based solution i didn't just lift and ship my shift my entire phone routing system you know from the office into the cloud but i probably pivoted to a cloud solution to support those work from home employees now my guess is i think that would be included in these responses i mean i do know an example of an insurance company that did migrate its claims application to the cloud during coven but this was something that they were you know planning to do pre-covered and i guess the point here is twofold again like i said migrations are hairy nobody wants to do them and i think this category really means i'm increasing my use of the cloud so i'm kind of migrating my my operations over time to the cloud all right look at collaboration no shocker here we've pounded you know zoom and webex to death analytics is really interesting we have talked extensively uh and have been covering snowflake and we pointed out that there's a new workload that has emerged in the cloud it's not just snowflake you know there are others aws redshift google with bigquery and and others but snowflake is the off the charts you know hot ipo and so we we talk a lot about it but it relates to this easy setup and access to a data layer with having you know requisite security and governance and this market is exploding adding ai on top and really doing this in the cloud so you can scale it up or down and really only pay for what you need that's a real benefit to people compare that to the traditional edw snake swallowing a basketball i got to get every new intel chip you're not dialing up down down you're over provisioning and half the time you're not using you know half most of the time you're not utilizing what you've paid for all right look at networking you know 
traffic patterns changed overnight with covet ddos attacks are up 25 to 40 percent uh since coven cyber attacks overall are up 400 percent this year so these all have impacts on the network machine learning and ai i talked about a little bit earlier about that but organizations are realizing that infusing ai into the application portfolio it's becoming really an imperative much more important as the automation mandate that we've talked about becomes more acute people you can't scale humans at this at the pace of technology so automation becomes much more important that of course leads us to rpa now you might think rpa should be a higher priority but i think what's happening here is i t organizations they were scrambling to plug holes in the dike rpa is somewhat more strategic and planful our data suggests that rpa remains one of the most elevated spending categories in terms of net score etr's measure of spending momentum so this means way more people are spending more than spending less in the rpa category so it really has a lot of legs in fact with the exception of container orchestration i think rpa is a sector that has the highest net score i think you'll see that in the upcoming surveys it's as high or even higher than ai i think it's higher than cloud it's just that it remember this is an it survey and a lot of the rpa stuff is going on at the business level but it had to keep the ship afloat when coveted hit which somewhat shifted priorities but but rpa remains strong now let's go back uh to the work from home trend for a moment i know it's been been played out and kind of beat on really heavily covered but i got to tell you etr was the very first on this trend it was way back in march and the data here is instructive it shows that the percentage of employees working from home prior to cor covid currently working from home the percent expected in six months and then those expected essentially permanently and this is primarily work from home versus yeah i don't work a day or two per week it's really the the five day a week i i work remotely as you can see only 16 percent of employees were working from home pre pandemic whereas more than 70 percent are at home today and cios they actually see a meaningful decline in that number over the next six months you know we'll see based on how covid comes back and you know this fall and winter surge and how will that will affect these plans but look what it does long term it settles in at like 34 percent that's double pre-covet so really a meaningful and permanent impact is expected from the isolation economy that we're in today and again why not look at this data it shows the distribution of productivity improvements so that while 23 of respondents said work from home productivity impacts were neutral nearly half i think it was 48 if you add up those bars on the right nearly half are seeing productivity improvements well less than 30 percent see a decline in productivity and you can see the etr quants they peg the average gain at between three and five percent that's pretty significant now of course not everyone can work from home if you're working at a restaurant you really you know unless you're in finance you really can't work from home but we're seeing in this digital economy with cloud and other technologies that we actually can work from pretty much anywhere in the world and many employees are going to look at work from home options as a benefit you know it was just a couple years ago remember that we were talking about companies like 
ibm and yahoo who mandated coming into the office i mean that was like 2017 2018 time frame well that trend is over now let me give you a quick preview of some of the other things that we're seeing and what the etr data shows now let me also say i'm just scratching the surface here etr has deep deep data cuts they have the sas platform allows you to look at the data all different ways and if you're not working with them you should be because the data gets updated so frequently every quarter there's new data there's drill down surveys and it's forward-looking so you know a lot of the survey data or a lot of the data that we use market share data and other data are sort of looking back you know you use your sales data your sales forecast that's obviously forward-looking but but the etr survey data can actually give an observation space outside of your sales force and no i'm not getting paid by etr but but it's been such a valuable resource i want to make it available and make the community aware of it all right so let's do a little speed round on on some of the the vendors of interest that we've talked about in the last several segments last couple years actually many years decade anyway start with aws aws continues to be strong but they they have less momentum than microsoft this is sort of a recurring pattern here but aws churn is low low low not a lot of people leaving the aws platform despite what we hear about this repatriation trend data warehousing is a little bit soft whereas we see snowflake very very strong but aws share is really strong inside of large companies so cloud and teams and security are strong from microsoft whereas data warehouse and ai aren't as robust as we've seen before but but microsoft azure cloud continues to see a little bit more momentum than aws so we'll watch that next quarter for aws earnings call now google has good momentum and they're steady especially in cloud database ai and analytics we've talked a lot about how google's behind the big two but nonetheless they're showing good good momentum servicenow very low churn but they're kind of hitting the law of large numbers still super strong in large accounts but not the same red hot hat red hot momentum as we've seen in the past octa is showing continued momentum they're holding you know close to number one or that top spot in security that we talked about last time no surprise given the increased importance of identity access management that we've been talking about so much crowdstrike last survey in july they showed some softness despite a good quarter and and we we're seeing continued to sell it to deceleration in the survey now that's from extremely elevated levels but it's significantly down from where crowdstrike was at the height of the lockdown i mean we like the sector of endpoint security and crowdstrike is definitely a leader there and you know well-managed company company but you know maybe they got hit with uh with you know a quick covet injection with with a step up function that's maybe moderating somewhat you know maybe there's some competition you know vmware freezing the market with carbon black i i really don't see that i think it's it's it's you know maybe there's some survey data isn't reflective of of what what crowdstrike is seeing we're going to see in the upcoming earnings release but it's something that we're watching very closely you know two survey snapshots with crowdstrike being a little bit softer it doesn't make a sustained trend but we would have liked to seen you know a 
little bit stronger this this quarter the data's still coming in so we'll see sale point is one we focused on recently and we see very little negative in their numbers so they're holding solid z scalar showing pretty strong momentum and while there was some concern last survey within large organizations it seemed that might have been a survey anomaly because z scalar they had a strong quarter a good outlook and we're seeing a strong recovery in the most recent data so it also looks like z z scaler is pressuring some of palo alto network's dominance and momentum heading into the quarter so we'll pay close attention to that we've said we like palo alto networks but they're so big uh they've got some exposures but they can offset those you know and they're doing a better job in cloud with their pricing models and sort of leaning into some of the the market waves uh sale point appears to be holding serve you know heading into the fourth quarter snowflake i mean what can we say it continues to show some of the strongest spending momentum going into q4 and into 2021 no signs of slowing down they're going to have their first earnings reports coming up you know in a few months so i i got to believe they got it together and and they're going to be strong reports uipath and momentum is is slowing down a bit but existing customers keep spending with ui path and there's very few defections so it looks like their land and expand is working pretty well automation anywhere continues to be strong despite comments about the sector earlier which showed you know maybe it wasn't as high a priority some other sectors but as i said you know it's still really really strong strong in terms of momentum and automation anywhere in uipath they continue to battle it out for the the top spot within the data set within the automation data set well i should say within rpa i mean companies like pega systems have a broader automation agenda and we really like their strategy and their execution databricks you know hot company once a hot company and still hot but we're seeing a little bit of a deceleration in the survey even though new customer acquisition is quite strong put it this way databricks is strong but not the off the chart outperformer that it used to be this is how etr frame that their analysis so i want to obviously credit that to them datadog showing the most strength in the application performance management or monitoring sector whichever you prefer but generally the the net scores in that sector as we talked about last week they're not great as a sector when you compare it to other leading sectors like cloud or automation rpa as an example container orchestration you know apm is kind of you know significantly lower it's not it's not as low as some of the on-prem on-prem infrastructure or some of the on-prem software but you know given datadog's high valuation it's somewhat of a concern so keep an eye on that mongodb you know they got virtually no customer churn but they're losing some momentum in terms of net score in the survey which is something we're keeping an eye on and a big downtick in in large organization acquisitions within the data so in other words they had a lot of new acquisitions within large companies but that's down now again that could be anomalies in the data i don't want to you know go to the bank on that necessarily but that's something to watch zoom they keep growing but etr data cites a churn of actually up to seven percent due to some security concerns so that was widely reported in 
the press and somewhere slower velocity for zoom overall due to possible competition from microsoft teams but i tell you it has an amazing stat that etr threw out pre-cove at zoom penetration in the education vertical was 15 today it's over 80 percent wowza cisco cisco's core is weak as we've said you've seen that in their earnings numbers it's it's there's softness there but security meraki those are two areas that remain strong same kind of similar story to last quarter survey pure storage you know they're the the high flyer they're like the one-eyed man in the land of the the storage blind so storage you know not a great market we've talked about that we've seen some softness in the the data set from uh in pure storage and really often sympathy with the generally back burner storage market you know again they they still outperforming their peers but we've seen slower growth rates there in the in in the survey and that's been reflected in their earnings uh so we've been talking about that for a while really keeping an eye on on on pure they made some acquisitions trying to expand their market enough said about that rubric rubric's interesting they kind of were off the charts in a couple surveys ago and they really come off of those highs you know anecdotally we're hearing some concerns in in the market it's hard to tell the private company cohesity has overtaken rubric and spending momentum now for the second quarter in a row you know they're still not as prevalent in the data set we'd like to see more ends from cohesity remember this is sort of a random sample across multiple industries we let the or etr lets the the respondents tell them what they're buying and what they're spending on you know but because cohesity has the highest net score relative to to compares like rubric like veeam you know i even threw in when i looked at nutanix pure dell emcs vxrail those are not direct competitors but they're you know kind of quasi compares if you will new relic they're showing some concerning trends on churn and the company is way off its 2018 momentum highs in the survey and we talked about this last week some of the challenges new relic is facing but we like their tech the nrdb is purpose-built for monitoring and performance management and we feel like you know they can retain their leadership if they can can pull it together we talked about elliott management being in there so that's something that we're watching red hat is showing strength in open shift really really strong ibm you know services exposure uh it's it's not the greatest business in the world right now at the same time there's there's crosswinds there at the same time people you know need some services and they need some help there but the certainly the outsourcing business so there's you know countervailing you know crosswinds uh within ibm but openshift bright spot i i think you know when i look at at the the red hat acquisition yeah 34 billion but but it's it's pretty obvious why ibm made that move um but anyway ibm's core business continues to be under under pressure that's why red hat is such an important component which brings me to vmware vmware has been an execution machine they had vmworld this past week uh we talked last month about the strength of vmware cloud on aws and it's still strong and and vmware cloud portfolio with vmware cloud foundation and other offerings but other than tanzu vmware is in this october survey of the first first look shows some deceleration really across the board you know one potential 
saving grace etr shared with me is that the fortune 500 spending for vmware is stronger so maybe on a spend basis when i say stronger stronger stronger than the mean so maybe on a spend basis vmware is okay but there seems to be some potential exposure there you know we won't know for sure until late next year uh how the dell reshuffle is going to affect them but it's going to be interesting to see how dell restructures vmware's balance sheet to get its own house in order and remember dell wants to get to investment grade for its own balance sheet yet at the same time it wants to keep vmware at investment grade but the interesting thing to watch is what impact that's going to have on vmware's ability to fund its future and we're not going to know that for a long long time but you know we'll keep an eye on on those developments now dell for its part showing strength and work from home and also strengthen giant public and privates which is a bellwether in the etr data set uh you know these are huge private companies for example uh koch industries would be one you know massive private companies mars would be another example not necessarily that they're the ones responding although my guess is they are it's it's anonymous but actually etr actually knows and they can identify who those bell weathers are and it's been a it's been a predictor of performance for the last you know better part of a decade so we'll see vxrail is strong um you know servers and storage they're they're still muted relative to last year but not really down from july so you know holding on dell holding on to it to to a tepid spending outlook they got such huge exposure on-prem you know so on balance i wouldn't expect you know a barn burner out of dell you know but they got a big portfolio and they've got a lot of a lot of options there and remember they still have the the still have they have a pc uh business unlike hpe which i'll talk about in in in a moment talk about now aruba is the bright spot for hpe but servers and storage those seem to be off you know similar to dell uh but but but maybe even down further i think you know dell is kind of holding relative to last quarter survey you know down from earlier this year and certainly down from from last year uh but hpe seems to be on a steeper downward trajectory uh in storage and service from the survey you know we'll see again you know one one snapshot quarter this is not a trend to make uh but again storage looks particularly soft which is a bit of a concern and we saw that you know in hpe's numbers you know last quarter um customer acquisition is strong for nutanix but overall spending is decelerating versus a year ago levels uh we know about the 750 million dollar injection uh from from bain capital basically you know in talking to bain what essentially they're doing is they they're betting on upside in the hyper-converged marketplace it's true that from a penetration standpoint there's a long long way to go and it's also true that nutanix is shifting from a you know perpetual model you know boom by the the capex to a in an annual occurring revenue model and they kind of need a bridge of cash to sort of soften that blow we've seen companies like tableau make that transition adobe successfully made that transition splunk is in that transition now and it's you know kind of funky for them but at any rate you know within that infrastructure software and virtualization sectors you know nutanix is showing some softness but in things like storage actually nutanix looking 
pretty strong very strong actually so again this theme of of these crosswinds uh supporting some companies whereas they're exposed in other areas you certainly see that with large companies and and nutanix looks like it's got some momentum in some areas and you know challenges in in others okay so that's just a quick speed dating round with some of the vendor previews for the upcoming survey so i just want to summarize now and we'll wrap so we see overall tech spending off four to five percent in 2020 with a slightly less bad slightly less bad q4 sequentially relative to q3 all this is relative to last year so we see continued headwinds coming into 2021 expect low single-digit spending growth next year let's call it two percent and there are some clear pockets of growth taking advantage of what we see is a more secular work from home trend particularly in security although we're watching some of the leaders shift positions cloud despite the commentary earlier remains very very strong aws azure google red hat open shift serverless kubernetes analytic cloud databases all very very strong automation also stands out as as a a priority in what we think is the coming decade with an automation mandate and some of the themes we've talked about for a long time particularly the impact of cloud relative to on-prem you know we don't see this so-called repatriation as much of a trend as it is a bunch of fun from on-prem vendors that don't own a public cloud so just you just don't see it i mean i'm sure there are examples of oh we did something in the cloud we lifted and shifted it didn't work out we didn't change our operating model okay but the the number of successes in cloud is like many orders of magnitude you know greater than the numbers of failures on the plus side however the for the on-prem guys the hybrid and multi-cloud spaces are increasingly becoming strategic for customers so that's something that i've said for a long time particularly with multi-cloud we've kind of been waiting it's been a lot of vendor power points but that really we talked to customers now they're hedging their bets in cloud they're they're putting horses for courses in terms of workloads they're they're they're not betting their business necessarily on a single cloud and as a result they need security and governance and performance and management across clouds that's consistent so that's actually a a really reasonable and significant opportunity for a lot of the on-prem vendors and as we've said before they're probably not necessarily going to trust the cloud players the public cloud players to deliver that they're going to want somebody that's cloud agnostic okay that's it for this week remember all these episodes are available as podcasts wherever you listen so please subscribe i publish weekly on wikibon.com and siliconangle.com and don't forget to check out etr.plus for all the survey action and the analytics these guys are amazing i always appreciate the comments on my linkedin posts thank you very much you can dm me at d vallante or email me at david.volante at siliconangle.com and this is dave vellante thanks for watching this episode of cube insights powered by etr be well and we'll see you next time you
SUMMARY :
percent decline for the year you know
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
2021 | DATE | 0.99+ |
2020 | DATE | 0.99+ |
four | QUANTITY | 0.99+ |
two percent | QUANTITY | 0.99+ |
five percent | QUANTITY | 0.99+ |
2018 | DATE | 0.99+ |
microsoft | ORGANIZATION | 0.99+ |
yahoo | ORGANIZATION | 0.99+ |
2022 | DATE | 0.99+ |
four percent | QUANTITY | 0.99+ |
dave vellante | PERSON | 0.99+ |
a day | QUANTITY | 0.99+ |
48 | QUANTITY | 0.99+ |
seven-point | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
four percent | QUANTITY | 0.99+ |
34 percent | QUANTITY | 0.99+ |
less than 30 percent | QUANTITY | 0.99+ |
ibm | ORGANIZATION | 0.99+ |
july | DATE | 0.99+ |
2017 | DATE | 0.99+ |
aws | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
2% | QUANTITY | 0.99+ |
more than 70 percent | QUANTITY | 0.99+ |
last week | DATE | 0.99+ |
34 billion | QUANTITY | 0.99+ |
last month | DATE | 0.99+ |
next year | DATE | 0.99+ |
vmware | ORGANIZATION | 0.99+ |
boston | LOCATION | 0.99+ |
last quarter | DATE | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
last quarter | DATE | 0.99+ |
ORGANIZATION | 0.98+ | |
late next year | DATE | 0.98+ |
palo alto | ORGANIZATION | 0.98+ |
2019 | DATE | 0.98+ |
q4 | DATE | 0.98+ |
david.volante | OTHER | 0.98+ |
earlier this year | DATE | 0.98+ |
q4 2019 | DATE | 0.98+ |
a year ago | DATE | 0.98+ |
dell | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
more than a thousand cios | QUANTITY | 0.98+ |
five day a week | QUANTITY | 0.98+ |
nutanix | ORGANIZATION | 0.98+ |
uipath | ORGANIZATION | 0.97+ |
october | DATE | 0.97+ |
q3 | DATE | 0.97+ |
three | QUANTITY | 0.97+ |
up to seven percent | QUANTITY | 0.97+ |
intel | ORGANIZATION | 0.96+ |
15 | QUANTITY | 0.96+ |
next quarter | DATE | 0.96+ |
this year | DATE | 0.96+ |
two per week | QUANTITY | 0.95+ |
two areas | QUANTITY | 0.95+ |
first | QUANTITY | 0.94+ |
both | QUANTITY | 0.94+ |
over a thousand | QUANTITY | 0.94+ |
datadog | ORGANIZATION | 0.93+ |
Ashok Ramu, Actifio | CUBEConversation January 2020
>> From the SiliconAngle media office in Boston, Massachusetts, it's theCUBE! Now, here's your host, Stu Miniman. >> Hi, I'm Stu Miniman, and welcome to theCUBE's Boston-area studio. Welcome back to the program, CUBE alum, Ashok Ramu, Vice President and General Manager of Cloud at Actifio, great to see you. >> Happy New Year, Stu, happy to be here. >> 2020, hard to believe it said, it feels like we're in the future here. And talking about future, we've watched Actifio for many years, we remember when copy data management, the category, was created, and really, Actifio, we were talking a lot before Cloud was the topic that we spent so much talking about, but Actifio has been on this journey with its customers in Cloud for many years, and of course, that is your role is working, building the product, the team working all over it, so give us a little bit of a history, if you would, and give us the path that led to 10C announcement. >> Sure thing. We started the Cloud journey early on, in 2014 or 2013-ish, when Amazon was the only Cloud that really worked. We built our architecture, in fact, we took our enterprise architecture and put it on the Cloud and realized, "Oh my god," you know, it's a world of difference. The economics don't work, the security model is different, the scale is different. So, I think with the 8.0 version that came out in 2017, we really kind of figured out the architecture that worked for large enterprises, particularly enterprises that have diverse data sets and have requirements around, you know, marrying different applications to data sets anywhere they want, so we came up with efficient use of object, we came up with the capability of migrating workloads, taking VMware VMs, bringing up on Azure, bringing up on DCP, et cetera. So that was the first foray into Actifio's Cloud, and since then, we've been just building strength after strength, you know. It's been a building block, understanding our customers, and thank you to the customers and the hyperscalers that actually led us to the 10C release. So this, I believe, we've taken it up a notch wherein, we understand the Cloud, we understand the infrastructure, the software auto-tunes itself to know where it's running on, taking the guessing game out of the equation. So 10C really represents what we see as a launchpad for the rest of the Cloud journey that Actifio's going to embark upon. We have enabled a number of new use cases like AI and ML, data transformation is key, we tackled really complicated workloads like HANA and Sybase and MySQL, et cetera, and in addition to that, we also adopt different native Cloud technologies, like Cloud snapshots, like recovery orchestration of the Cloud, et cetera. >> Yeah, I think it's worth reminding our audience there that Actifio's always been software. And when you talk about, you know, I think back to 2013, 2014, it was the public Cloud versus the data center, and we have seen the public Cloud in many ways looks more and more like what the enterprise has been used to. >> Absolutely. >> And the data centers have been trying to Cloud-ify for a number of years, and things like containerization and Kubernetes is blurring the line, and of course, every hyperscaler out there now has something that reaches their public Cloud into the data center and of course, technologies like VMware are also extending into the public Cloud, or, SAP now, of course is all of the Cloud environment. 
So with hybrid Cloud and multi-Cloud as kind of the waves of driving, help us understand that Actifio lives in all of these environments, and they're all a little bit different, so how does Actifio make sure that it can provide the functionality and experience that users want, regardless of where it is? >> Absolutely, you said it right. Actifio has always been a software company. And it is our customers that showed us, by Cloudifying their data centers, that we had to operate in the Cloud. So we had on premises VMware Clouds, not before we had Amazon and Azure and Google. So that evolution started much early on. And so, from what, you know, Actifio's a very customer-driven company, be it, you know, all segments of the company are driven by the customers, and in 2019, and even before, when you see a strong trend to migrate workloads, to move workloads, we realized, there is a significant opportunity, because the hardest thing to migrate is the volume of data because it's ever-changing, and it is ever-growing. So, the key element of neutrality was the application itself. Microsoft SQL's a SQL no matter how you run it. It could be on a big Windows machine in your data center or a NGCP, it makes no difference. So Actifio's approach to start application down basically gave us the freedom to say, we're going to create SQL to SQL. I don't know if you're running in Azure, Google, DOP data center, or AliCloud, it makes no difference to me. I understand SQL, I understand SQL's availability groups, I understand logs, I can capture it and give it back to you, so when we took that approach, it kind of automatically gave us infrastructure neutrality, really didn't care. So when we have a conversation with a customer, it basically goes around lines of, "Okay, Mr. Customer, how much data do you have? And what are your key applications? Can you categorize them in terms of priority?" It usually comes out to be databases are the crown jewels, so they're the number one priority in terms of data management, migration, test Av, et cetera. And then, we basically drill down into the ecosystem the databases live into. So, because we walk application down, the conversation is the same whether the customer is in the data center, or in the Cloud. So that is how we've evolved, and that's how we're thinking from a product standpoint, from a support standpoint, and then the overall company is built that way. So it makes it easy for us to adapt a new platform that comes in. So, when you talked about, you know, how does, each Cloud is different, you're absolutely right, the security concepts are different, right? Microsoft is built on active directory, Google is built on something very different. So how do you utilize and how do you make this work? We do have an infrastructure layer that basically provides Cloud-specific capabilities for various Cloud platforms. And that has gotten to a point where it understands and tunes itself from a security standpoint and a performance standpoint. Once that's taken care of, the rest of the application stack, which is over 90% of our software, stays the same, there's no change. And so that is how we kind of tackle this. Because the ecosystem we live in, we have to keep up with two people. We have to keep up with the infrastructure people who are making it bigger, faster, and we also have to keep up with the application people who are making it fancier and more complicated. 
So that's unfortunately the ecosystem we live in, and taking this approach has given us a mechanism to insulate us from a lot of the complexities of these two environments. >> Yeah, that's great, 'cause when you talk to customers and you say, "What's going on in your environment," change is difficult. So, how many different pieces of what I'm doing do I need to move to be able to take advantage of the modern economics. On the one hand, you know, if I have an application and I like it, well, maybe I could just lift and shift it, but if I'm just lifting, shifting, I'm not necessarily taking advantage of the full Cloud native environments, but I need to make sure that my data is protected, backup, you mentioned security, are of course the top concerns, so. It sounds like, in many ways, you're talking, helping customers work through some of those initiatives, being able to take advantage of new environments, but not need to completely change everything. Maybe, I'd love to hear a little bit, when you talk about the developers and DevOps initiatives that are happening inside customers, where does that impact, where does that connect with what Actifio's doing? >> Well, that's a great question. So, let me start with a real customer example. We have this customer, SEI Investments, who basically, their business model is to grow by acquisition, so they're adding on tens, hundreds of developers every quarter. So it's impossible to keep up with infrastructure needs when you grow at that pace. They decided to adopt a Cloud platform. And with each Cloud platform comes some platform-specific piece that all these developers now have to re-tool themselves. So, I'm a developer, I used to come in the morning, open up my machine and start working away on the application, now I have to do something different, and if there is 300 of me, and the cost of moving to the Cloud was a lot less than training the developers. It was much harder to train the developers because it has been ongoing process. So we were presented the challenge of how do you avoid it? So, when we are able to separate the application layer from the data layer, because of the way we operate, what we present as a solution was to say, just move your, what is the heaviest layer you have? That's the database, okay. And what are the copies you're creating? I'm creating hundreds of copies of my Oracle database, okay. Let's just move that to the Cloud. All of the front-end application doesn't see a change, thanks to the great infrastructure work the Cloud providers do, you add 10 Gigabyte to everywhere. So network is not a problem, computer's not a problem, it's just available on an API call, so you provision that. All they did was a data movement, moved it from Point A to Point B, gives you the flexibility to spend up any number of copies you want in the Cloud, now, your developer tool sets haven't changed, so there's no training required for developers, but from an operations standpoint, you've completely eased the burden of creating a hundred more copies every month, because Cloud is built for that. So you take the elasticity of the Cloud, advantage of that, and provide the data in the last mile to the Cloud, thereby, developers, they will access the application with the same level of ease. So, that is the paradigm we're seeing, we're seeing, you know, in some of our customers, there is faster and better storage provision for Actifio because there are 190 developers working off Actifio, where there's only about a handful of people running production. 
So, it's a paradigm shift is where we see it. And the pace at which we bring up the application wherein we're able to bring up 150 terabyte article database in three hours. Before Actifio, it used to be, maybe, 30 days, if you were lucky. So it's not just an order of magnitude, it's what you can do with that data, is where we're seeing the shift going to. >> Yeah, it's interesting, when you go back and look at some of the changes that have happened in the Cloud, Cloud storage was one of the earliest discussed use cases there, and backup to the Cloud was one of the earlier pieces of the Cloud storage discussion. Yet, we've seen changes and maturation into what can actually be done, explain a little bit how Actifio enables even greater functionality when you're talking about backup to the Cloud. >> Absolutely. You know, the object storage technology, it's probably the most scalable and stable piece of storage known to mankind, because nobody can build that level of scale that Amazon, Azure, and Google have put into it. From a security standpoint, performance standpoint, and scale standpoint. So I'm able to drop my data in Boston and pick it up in Tokyo seamlessly, right? That's unheard of before. And the biggest impediment to that, was a lot of legacy application data didn't know how to consume this object storage. So what Actifio came up with on onboard technology was to light up the object storage for everybody, and basically make it a performance neutral platform, wherein you take the guessing game out of the customer. The customer doesn't need to go research S3 or Google Nearline or Google Persistent Disk and say I want ten copies there versus five copies there, Actifio figures it out for you. You give us your SLA, you give us your RTOs and RPOs, and we tell you, okay, this is the most cost effective way to store your data. You get the multi-year retention for free, you get the GDPR, appchafe and protection for free, you get the geo-redundancy for free. All this is built into the platform. In addition, you also can run DevOps off the object store. You can run DR off the object store. So we enabled a lot of the legacy use cases using this new technology, so that is kind of where we see the cusp, wherein, in the Cloud, there's always a question and a debate, does D-doop make sense? D-doop consumes a lot of compute, takes a lot of memory, you need to have that memory and compute whether you want it or not. We're seeing a lot more adoption of encryption, where the data is encrypted at source. When you encrypt data, D-doop is just a big compute-churning platform, it doesn't do much for you. So we went through this debate actively, I think four or five years ago, and we figured out, object store's the way to go. You cannot get storage, I mean, it's a buck a terabyte in Google, and dropping. How can you get storage that's reliable, scalable, at a lower cost? All we had to do was actuate the use of that storage, which is what we did. >> Yeah. I'm just laughing a little bit because, you know, gosh, I think back a dozen years ago, the industry knew that the future of storage would be object, yet it's taken a long time to really be able to leverage it and use it, and the Cloud, the hyperscalers of course, have been a huge enabler on that, but we don't want customers to have to think about that it's object underneath, and that's the bridging the gap that I think we've been looking for. There, what else. 
We talk about really being able to extract the value out of Cloud, you know, data protection, disaster recovery, migrations are all things that are top of mind. >> Yeah, absolutely. All those use cases, and we're seeing some of the top rating CIOs talk about AI and ML. We've had a couple of customers who want to basically take their manufacturing data from remote sites and pump it into Google bit query. Now we all know manufacturing happens in Taiwan and Singapore and all those locations, now how do you take data from all those applications, normalize it, and pump it into Google bit query and get your predictable results on a quarterly basis, it's a challenge. Because the data volumes are large. So with our Cloud technology and our onboard capability, we're able to funnel data directly into Google Nearline, and on a quarterly basis, on a scheduled basis, transform it, push it into bit query, and bring out the results for the end user. So that journey is pretty transformated, from a customer standpoint. What they used to have five people do maybe once a year, now with a push of a button happens every quarter. So it's a change in how the AI and ML analytics evolve. The other element is also you know, our partnership with IBM, we're working very closely with their Cloud bag for data. Cloud bag for data is an awesome platform built to analyze any kind of data that you might have. With Actifio's normalization platform, you basically can feed any data into Actifio and it presents a unified interface into the slow pack, so you can build your analytics workloads very quickly and easily. >> So we've talked a lot about Cloud, one of the other C's of course in 10C is containers, if we look at containerization, when it first started, it was stateless applications, most applications that are running in containers are running for very short period of time, so help us understand where Actifio fits there, what's the problem statement that you are solving? >> Oh, absolutely. So containers are coming up, up and coming and out of reality, and as we see more applications flow into containers, you see the data lives outside the container. Because containers are short-lived, they're microservices, they come up and they go down, and the state is maintained in a storage platform outside the container, so Actifio tackles containers by taking the data protection strategy we have for the storage platform already, Bell defined, but enhancing the data presentation into the container as it comes up. So a container can be brought up in seconds, maybe less. But the container is only brought to life when it can lead to data and start working again, so that's the bridge Actifio actuates. So we understand, you know, the architecture of how a container is put together, how the container system is put together, and basically, we marry the storage and the application consistent in the storage into the container so that the container's databases, or applications, come to life. >> And that could be in a customer's data center, in a public Cloud, Kubernetes enabled, all of that? >> Absolutely, it can be anywhere, and with 10C, what we have done is we've also integrated with Cloud Native Snapshot, so if you talk about net neutrality for the container platform, if it's on premises, we have all kinds of access to the storage, the infrastructure, and the platforms so our processing is very different. If you take it to the Cloud, let's say Google, Google Kubernetes platform is fairly, it's a black box. 
You get some storage, and you get containers. And you have an API access to the storage. So in Google, we automatically autotune and start taking the Google snapshots to take the storage perfection, so that's the other way we've kind of neutralized the platform. >> Yeah, you've got a, thinking about it just from a customer's standpoint, one of the big challenges there is they've got everything from their big monoliths, they're big databases, through these microservice Cloud native architectures there, and it sounds like you know, is that just one of the fundamental architectural designs to make sure that you can span across those environments and give customers a common look and feel between those environments? >> Absolutely. The single pane of glass is a big askt and a big focus for us, not just across infrastructure, it's across geos and across all platforms. So you could have workloads running AIX6, VMware, in the Cloud, all the way through containers, and manage it all to a single console, to know when was the last good backup, how many copies of the database am I running, and each of these databases could have their own security constructs. So we normalize all of those elements and put them in a single console. >> Okay, 10C, shipping today? >> 10C shipping today, we have early access to a few customers, the general availability releases possibly in the February timeframe. >> Okay, and if I'm an existing Actifio customer, what's the path for me to get to 10C? >> Our support will reach out and do a simple software upgrade, it's available on all Cloud platforms, it's available everywhere, so you will see that on all the marketplaces and the regular upgrade process will get you that. >> Okay, and if I'm not an Actifio customer today, how easy is it for me to try this out? >> Oh, it is very easy, with our Actifio go SAS platform, it's a one-click download, you can download and try it out, try all the capabilities of the platform, it's also available on all the Cloud marketplaces for you to go and access that. >> All right, well, Ashok, a whole lot of pieces inside of 10C, congratulations to you and the team for building that, and definitely look forward to hearing more about the customer deployments. >> Thank you, we have exciting times ahead. >> All right. Lots more coverage from theCUBE throughout 2020, be sure to check out theCUBE.net, I'm Stu Miniman, thanks for watching theCUBE. (techno music)
SUMMARY :
From the SiliconAngle media office of Cloud at Actifio, great to see you. the path that led to 10C announcement. and in addition to that, we also adopt And when you talk about, you know, I think that it can provide the functionality because the hardest thing to migrate On the one hand, you know, if I have an application and the cost of moving to the Cloud was a lot and look at some of the changes that And the biggest impediment to that, the value out of Cloud, you know, into the slow pack, so you can build your and the application consistent in the storage and the platforms so our processing is very different. VMware, in the Cloud, all the way through containers, releases possibly in the February timeframe. and the regular upgrade process will get you that. it's also available on all the Cloud marketplaces to you and the team for building that, be sure to check out theCUBE.net,
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Ashok Ramu | PERSON | 0.99+ |
Taiwan | LOCATION | 0.99+ |
2017 | DATE | 0.99+ |
Singapore | LOCATION | 0.99+ |
Boston | LOCATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
Tokyo | LOCATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
2013 | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Actifio | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
ORGANIZATION | 0.99+ | |
300 | QUANTITY | 0.99+ |
five people | QUANTITY | 0.99+ |
30 days | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
February | DATE | 0.99+ |
SQL | TITLE | 0.99+ |
2020 | DATE | 0.99+ |
two people | QUANTITY | 0.99+ |
five copies | QUANTITY | 0.99+ |
Ashok | PERSON | 0.99+ |
190 developers | QUANTITY | 0.99+ |
one-click | QUANTITY | 0.99+ |
MySQL | TITLE | 0.99+ |
January 2020 | DATE | 0.99+ |
four | DATE | 0.99+ |
ten copies | QUANTITY | 0.99+ |
Cloud | ORGANIZATION | 0.99+ |
three hours | QUANTITY | 0.99+ |
hundreds of copies | QUANTITY | 0.99+ |
two environments | QUANTITY | 0.98+ |
HANA | TITLE | 0.98+ |
over 90% | QUANTITY | 0.98+ |
Boston, Massachusetts | LOCATION | 0.98+ |
theCUBE | ORGANIZATION | 0.98+ |
150 terabyte | QUANTITY | 0.98+ |
SEI Investments | ORGANIZATION | 0.98+ |
CUBE | ORGANIZATION | 0.98+ |
each | QUANTITY | 0.98+ |
single console | QUANTITY | 0.98+ |
Cloud | TITLE | 0.98+ |
GDPR | TITLE | 0.97+ |
AliCloud | ORGANIZATION | 0.97+ |
today | DATE | 0.97+ |
five years ago | DATE | 0.97+ |
Sybase | TITLE | 0.97+ |
a dozen years ago | DATE | 0.96+ |
Oracle | ORGANIZATION | 0.96+ |
Kubernetes | TITLE | 0.96+ |
10C | TITLE | 0.96+ |
once a year | QUANTITY | 0.96+ |
one | QUANTITY | 0.95+ |
first | QUANTITY | 0.95+ |
Actifio | TITLE | 0.95+ |
Azure | ORGANIZATION | 0.94+ |
10 Gigabyte | QUANTITY | 0.94+ |
Anthony Lai-Ferrario & Shilpi Srivastava, Pure Storage | KubeCon + CloudNativeCon NA 2019
>>Live from San Diego, California at the cue covering to clock in cloud native con brought to you by red hat, the cloud native computing foundation and its ecosystem Marsh. >>Welcome back to the cube here in San Diego for cube con cloud native con 2019. It's our fourth year of doing the cube here. I'm Stu Miniman. It's my fourth time I've done this show. Joining me is Justin Warren. He's actually been to more of the coupons than the cube has, I think at least in North America. And welcome into the program to two veterans of these events from pure storage. Uh, sitting to my right is she'll be uh, Shrivastava who's a director of product marketing and sitting to her right is Anthony lay Ferrario who's a senior product manager, uh, both of you with pure storage. Thank you so much for joining us. Thanks for having us. All right, so, so we, we were kind of joking about veterans here because we know that things are moving faster and faster. You both work for storage companies. Storage is not known to be the fastest moving industry. Um, it's been fascinating for me to watch kind of things picking up the pace of change, especially when you talk about, uh, you know, how developers and you know, software and a multicloud environment, a fit-out. So she'll be maybe, you know, give us a frame for, you know, you, you know, you're in a Cooper ladies tee shirt here pures at the show. How should we be thinking about pure in this ecosystem? >>Sure. Yeah. So, uh, you're, as, you know, we, we side off as all flash on brand storage company, uh, 10 years ago and, uh, we've kept pace with constantly innovating and making sure we're meeting our customer's needs. One of the areas of course that we see a lot of enterprises moving today is two words, microservices, two words, containerized applications. And our goal that you're really is to help customers modernize, modernize their applications while still keeping that store it's seamless and keeping that, uh, invisible to the application developers. >> I think it actually lines up really well if you're do just a pure sort of steam across time has been performance with simplicity. Right? And I think the simplicity argument starts to mean something different over time, but it's a place that we still want to really focus as our customers started to use, uh, try to containerize our applications. >>There are couple of challenges. We saw continued environments, of course, they're known for their, uh, agility, uh, how portable they are. They're lightweight and they're fast. And when they're fast, storage can sometimes be a bottleneck because your storage might not necessarily scale as fast. It might not be able to provision storage volumes as fast, your container environment. And that's the challenge that we at pure why to solve with our Cuban eighties integrations. Anthony, you mentioned simplicity there. So I'm going to challenge you a bit on that because Kubernetes is generally not perceived as being particularly simple and the storage interfaces as well, like stateful sets is kind of only really stabilized over the last 18 months. So how >>is pure actually helping to make the Cuban Eddie's experience simpler for developers? Yeah, and you know, you're totally right. I don't think I was necessarily saying that someone looking for the simplest thing that could ever find would adopt Kubernetes and expect to find that. 
But what I really meant was, on one hand you have your more traditional enterprise infrastructure folks who are trying to build out the underlying private cloud that you're going to deploy your infrastructure on. And on the other hand, you have your developers, you have your Kubernetes, you have your cloud native applications, right? And really the interface between those is where I'm looking at that simplicity argument, because traditionally Pure has focused on that simple interface to the end user. But the end user, as we were talking about before the show, has shifted from being a person to being a machine, right? And the objective for Pure, and what we're building on the Kubernetes side, is how do we take that simple, as-a-service consumption experience and present it on top of what looks like a traditional infrastructure platform. I can get more into the details of that if you'd like, but really that layer is where we're focused on the simplicity, and really just asking the end user as few questions as we can. I just want to ask you, what do you need? I don't want to ask you, well, tell me about the IQN and blah, blah. They don't want that, right? That's the simplicity I'm talking about. >> Yeah. With developers generally, I mean, the idea of dev ops, and I challenge people whenever they mention dev ops, I'm hearing a pretty consistent message that developers really don't care about infrastructure and don't want to have anything to do with it at all. So if you can just bake it into the system and somehow make it easier to operate at that SRE level, that infrastructure level, that Kubernetes-as-a-platform level, then once that's solved, as a developer I can just get on with writing some code. >> We definitely want storage to be invisible. >> But if they want storage to be invisible, that's not so great for your brand, because you actually want them to know and care about having a particular storage platform. So how do you balance that idea that you want to show you have innovative products people should care about, while also telling them they don't need to care about the storage at all because you'll make it invisible? How does that work? >> So Kubernetes storage for container environments has been a challenge, and what we are trying to educate platform-level users on is that with the right kind of storage, it can actually be easy. Storage for Kubernetes can be easy. And the way we make it simple, or invisible, is through the automation that we provide. Pure Service Orchestrator is our automation for storage delivery into containerized environments, and it's delivered as a CSI plugin, but we try to do a little more than just deliver a plugin into your Kubernetes environment. We try to make your scalability seamless, so it's super easy to add new storage. And I think because container environments were initially developed for stateless applications, when it came to stateful applications people still thought, oh, why should I care about storage? But people are slowly realizing that we need to care about it precisely because we don't want to ultimately be bothered by it. Right.
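To make that concrete, here is a minimal sketch of what the "invisible storage" experience looks like from the developer's side once a CSI driver such as Pure Service Orchestrator is in place: the application simply asks Kubernetes for a volume by class and size, and the driver handles provisioning behind the scenes. The storage class name below is an assumption for illustration, not PSO's actual default.

```python
# Minimal sketch: a developer requests a volume from a CSI-backed StorageClass.
# Assumed names: namespace "default", class "pure-block", claim "orders-db-data".
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="pure-block",  # assumed CSI-backed class for illustration
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```

The developer never sees an IQN, a LUN, or an array login, which is the kind of simplicity being described here.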
>> And if I can make a point to just tag on to that, the conversations I've had at the show this week have helped me crystallize the way I like to explain this to people, which is: at first, a lot of people will say, oh, I don't do stateful applications, I'm doing stateless applications in containers. And my response is, okay, I understand that you've decided to externalize the state of your system from your Kubernetes deployment. But at some point you have to deal with state. Whether that's an Oracle database you happen to be calling out to outside of your Kubernetes cluster, whether that's a service from a public cloud like S3, or whether that's deciding to internalize that state into Kubernetes and manage it through the same management plane, you have to have state. Now, when we talk about what we're doing in PSO and why that's valuable, and, to your point about the brand, why I don't necessarily worry, it's because we can give a seamless experience at the developer layer, and we can give the SRE or cluster-manager layer a trusted, high-performance, high-availability storage platform that their developers consume without knowing or worrying about it. And then as we look into the future at how we handle cross-cluster and multi-cloud stateful workloads, we can really add value there. >> Well, yeah, and I'm glad you brought up the multi-cloud piece of it, because one of the more interesting things I saw from Pure this year is how Pure is putting its software into the cloud, cloud native. So when I saw that, one of my questions was, okay, when I come to a show like this, how do Kubernetes and containers fit into that discussion? So help us connect the dots as to what was announced and everything else that's happening. >> You've heard about Cloud Block Store, which is our software running on the AWS cloud today. Basically, people have loved FlashArray all these years for the simplicity it provides, for the automation and performance, and we want to give you something similar, something enterprise grade, in the public cloud. Cloud Block Store is basically, you can think of it as a virtual FlashArray on the AWS cloud. So with that, you now have deduplication and thin provisioning capabilities in the cloud. You also get ActiveCluster, which is active-active synchronous replication between availability zones, really making your AWS environments ready for mission-critical applications. Plus, PSO just works the same way on prem as in the cloud, so it's just great for hybrid application mobility. You have the same APIs. >> Yeah, it's actually very cool. One of the fun things for me as a software-side guy at Pure is that the APIs our arrays have are the same APIs. It's actually the same underlying software version, even though it's a totally different hardware back-end implementation when we run in a cloud native form factor versus a physical appliance form factor. The replication engines work between the two; snapshots, clones, our ability to do instant restores, everything that has brought value from our storage software stack, we still get access to in a cloud native environment, and the transports as well.
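A hedged sketch of that "same APIs" point: because the management interface is meant to be identical whether the target is a physical FlashArray in the data center or a Cloud Block Store instance in AWS, tooling written once can drive both. The endpoint path, payload, hostnames, and tokens below are illustrative placeholders, not the actual FlashArray REST schema.

```python
# Sketch only: one snapshot function, two very different backends.
# The URL path and JSON body are assumptions for illustration.
import requests

def snapshot_volume(array_endpoint: str, api_token: str, volume: str, suffix: str) -> dict:
    """Ask an array (on-prem or cloud) to take a snapshot of a volume."""
    resp = requests.post(
        f"https://{array_endpoint}/api/volumes/{volume}/snapshots",  # placeholder path
        headers={"Authorization": f"Bearer {api_token}"},
        json={"suffix": suffix},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# Same call against an on-prem array and a Cloud Block Store instance (hypothetical hosts):
snapshot_volume("flasharray01.corp.example", "ON_PREM_TOKEN", "orders-db", "pre-upgrade")
snapshot_volume("cbs-demo.aws.example", "CLOUD_TOKEN", "orders-db", "pre-upgrade")
```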
>> I guess I'm trying to understand, is there Kubernetes involved here, or is this just natively in AWS and then on premises itself? >> Kubernetes is a compute orchestration layer component. So when I look at Kubernetes, I'd say Kubernetes sits above both sides, right? Or potentially above and across both sides, depending on how you decide to structure your environment. But the nice part is, if you've developed a cloud native application that's running on Kubernetes, you get the ability to support it with the same storage interfaces and the same SLAs, and to move it efficiently and copy it efficiently on whatever cloud you care to use. That's where it gets really cool. >> So we developed this really cool demo where you have a container application running on PSO, on FlashArray, on prem. We migrated that to Cloud Block Store on AWS and it just runs; you use the same YAML scripts in both places. There is no need to do a massive rearchitecture of anything. Your application just runs when you move it. And we take care of all the data mobility with our asynchronous replication: you can take a snapshot on prem, snap it out into AWS, and restore it back into Cloud Block Store. So it really opens up a lot of new use cases and makes them simple for customers. >> That's that idea of write once, run anywhere. I'm old enough to remember when Java was a brand new thing and that was the promise. It never quite got there, because it turns out it's really, really hard to do. But we are seeing from Pure, and from a lot of vendors here at the show, that a lot of work and effort is being put into that difficult problem so that other people don't have to care about it. So you're building that abstraction in and working on the details of how this works. I was fortunate enough to get a deep dive into the architecture of Cloud Block Store at the recent Accelerate conference, and the way you've used cloud resources as if they were infrastructure components and then built the abstraction on top of them, in the same way it runs on site, that's what gives you the ability to keep everything the same and make it simple. It's a lot of hard work and hard engineering underneath so that no one has to care anymore. >> Yeah. And the way we've architected Cloud Block Store is that we use the highest-performance and highest-durability AWS infrastructure, so you're now able to buy performance and durability in one, through a single virtual appliance, as you would on prem. >> Yeah. How's the adoption of the product going? I know it was very early when it was announced just a few months ago. What's the feedback from customers been so far? >> It's been really positive, and actually the one use case that I want to highlight most is dev ops use cases. The value of being able to have the same deployment of an application for test or dev infrastructure in one cloud versus a production deployment in another cloud has been very exciting for folks. So when you think about that use case in particular, the ability to say, okay, I'm coming up to a major quarterly release, or whatever I have for my product, I need to establish a bunch more test environments.
I don't necessarily want to have bought all of that, and we're not necessarily talking about bursting over the wire anymore, right? We're talking about local storage, under the same interfaces, in whichever cloud you choose, to spin up all of those test environments. So cases like that are pretty interesting for folks. >> Yeah, I think that's where people have started to realize it's the operations side of things. It's not just day one; it's day 90 and day 147 where I want to be operating this in the same place, in the same way, no matter where it is, because it saves me so much heartache and time of not having to re-implement things differently, and I don't have to retrain my people because it all looks the same. >> So yeah, dev and test is definitely a big use case. Migration is another use case that we are seeing a lot of customers interested in, and disaster recovery, using it as a disaster recovery target. You can efficiently store backups on Amazon S3, but how do you do an easy, fast restore to actually run your applications there? With Cloud Block Store, it is now possible to do that fast, easy restore. Also, a couple of weeks ago we started taking registrations for a beta program for Cloud Block Store for Azure as well. Customers are going multi-cloud, and we are going multi-cloud with them. >> Great. I want to give you both a final word: takeaways for Pure Storage's participation here at the show. >> I think the biggest thing that I want people to understand, and I actually gave this talk at the cloud native storage day on day zero, is that cloud native storage is an approach to storage, not a location for storage. And I think that really defines the way we're going about this: we're trying to be cloud native storage wherever you need it. That's really the takeaway I'd like people to have about Pure. >> And storage for Kubernetes doesn't have to be hard. We are here all day today as well. This is a challenge the industry is seeing today, and we have a solution to solve it for you. >> All right, well, that's a bold statement to end on. Shilpi, Anthony, thank you so much for joining us. For Justin Warren, I'm Stu Miniman, back with more coverage here from KubeCon + CloudNativeCon 2019. Stay classy, San Diego, and thanks for watching theCube.
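For readers who want to picture the migration demo described above, here is a minimal sketch, under assumed names, of "the same scripts in both places": the application manifest is unchanged, and only the kubeconfig context differs between the on-prem cluster and the cloud cluster. The context names, manifest file, and backing storage are illustrative assumptions.

```python
# Sketch only: apply one unchanged manifest to two clusters by switching contexts.
from kubernetes import config, utils

def deploy(context: str) -> None:
    """Apply the same manifest to whichever cluster the kubeconfig context points at."""
    api_client = config.new_client_from_config(context=context)
    utils.create_from_yaml(api_client, "app-with-pvc.yaml")  # assumed manifest file

deploy("onprem-vmware-cluster")  # assumed cluster backed by FlashArray via a CSI driver
deploy("aws-eks-cluster")        # assumed cluster backed by Cloud Block Store via a CSI driver
```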
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Justin Warren | PERSON | 0.99+ |
Anthony | PERSON | 0.99+ |
Shrivastava | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Anthony lay Ferrario | PERSON | 0.99+ |
North America | LOCATION | 0.99+ |
fourth year | QUANTITY | 0.99+ |
San Diego | LOCATION | 0.99+ |
Shilpi Srivastava | PERSON | 0.99+ |
San Diego, California | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
fourth time | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
two words | QUANTITY | 0.99+ |
today | DATE | 0.99+ |
CloudLock | TITLE | 0.99+ |
Shilpi | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Java | TITLE | 0.99+ |
both | QUANTITY | 0.99+ |
two snapshots | QUANTITY | 0.99+ |
Anthony Lai-Ferrario | PERSON | 0.98+ |
10 provisioning capabilities | QUANTITY | 0.98+ |
Kubernetes | TITLE | 0.98+ |
Oracle | ORGANIZATION | 0.98+ |
KubeCon | EVENT | 0.98+ |
both places | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
One | QUANTITY | 0.97+ |
this year | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
10 years ago | DATE | 0.97+ |
Cooper | ORGANIZATION | 0.95+ |
Eddie | PERSON | 0.95+ |
two veterans | QUANTITY | 0.94+ |
Azure | TITLE | 0.94+ |
Brock | ORGANIZATION | 0.94+ |
red hat | ORGANIZATION | 0.93+ |
S3 | TITLE | 0.93+ |
Cuban | OTHER | 0.93+ |
day | OTHER | 0.89+ |
few months ago | DATE | 0.89+ |
one single virtual appliance | QUANTITY | 0.88+ |
cube con cloud native con 2019 | EVENT | 0.86+ |
cloud block store | TITLE | 0.84+ |
cloud Glocks | TITLE | 0.84+ |
CloudLock store | TITLE | 0.82+ |
last 18 months | DATE | 0.81+ |
CloudNativeCon NA 2019 | EVENT | 0.78+ |
a couple of weeks ago | DATE | 0.78+ |
three | TITLE | 0.77+ |
SRE | TITLE | 0.68+ |
Marsh | LOCATION | 0.68+ |
Luxor | ORGANIZATION | 0.66+ |
cloud native con | EVENT | 0.66+ |
day | QUANTITY | 0.6+ |
block | TITLE | 0.6+ |
foundation | ORGANIZATION | 0.6+ |
2019 | DATE | 0.6+ |
couple | QUANTITY | 0.57+ |
cloud native | ORGANIZATION | 0.57+ |
90 | QUANTITY | 0.53+ |
zero | DATE | 0.51+ |
questions | QUANTITY | 0.5+ |
147 | QUANTITY | 0.5+ |
native con | EVENT | 0.5+ |
A New Service & Ops Experience
and II just think about how data could be customer experience value propositions operations that improve profitability and strategic options for the business as it moves forward but that means openly either we're thinking about how we embed data more deeply into our operations that means we must also think about how we're going to protect that data so the business does not suffer because someone got a hold of our data or corrupted our data or that a system just failed and we needed to restore that data very quickly now what we want to be able to do is we want to do that in a way that's natural and looks a lot like a cloud because we want that cloud experience in our data protection as well so that's we're going to talk about with Klum you know today a lot of folks think in terms of moving all the data into the cloud we think increasingly we have to recognize a cloud is not a strategy for centralizing data but rather distributing data and being able to protect that data where it is utilizing a simple common cloud like experience it's becoming an increasingly central competitive need for a lot of digital enterprises the first conversation we had was with pooja Kumar who John is a CEO and co-founder of Kaleo let's hear a pooja I had to say about data value data services and Kumi Oh poo John welcome to the show thank you Peter nice to be here so give us the update in clue so comeö is a two year old company right we just recently launched out of stealth so so far you know we we came out with innovative offering which is a SAS solution to go and protect on premises you know VMware and BMC environments that's what we launched out of style two months ago we won our best of show when we came out of stealth in in VMware 2019 but ultimately we started with a vision about you know protecting data irrespective of where it resides so it was all about you know you know on-premises on cloud and other SAS services so one single service that protects data irrespective of where it resides so far we executed on on-premises VMware and BMC today what we are announcing for the first time is our protection to go and protect applications natively built on AWS so these are applications that are natively built on AWS that loomio as a service will protect irrespective of you know them running you know in one region or cross region cross accounts and a single service that will allow our customers to protect native AWS applications the other big announcement we are making is a new round of financing and that is testament to the interest in the space and the innovative nature of the platform that we have built so when we came out of stealth we announced we had raised two rounds of financing 51 million dollars in series a and Series B rounds of financing today what we are announcing is a Series C round of financing of 135 million dollars the largest I would say Series C financing for a SAS enterprise company especially a company that's a little over two years old Oh congratulations that's gonna buy a lot of new technology and a lot of customer engagement but what customers as I said up from what customers are really looking for is they're looking for tooling and methods and capabilities that allow them to treat their data differently talk a bit about the central importance of data and how it's driving decisions of Cluny oh yes so fundamentally you know when we built out the the data platform it was about going after the data protection as the first use case on the platform longer term the journey really is to go from a 
data protection company to a data management company and this is possible for the first time because you have the public cloud on your side if you truly built a platform for the cloud on the public cloud you have this distinct advantage of now taking the data that you're protecting and really leveraging it for other services that you can enable the enterprise for and this is exactly what enterprises are asking for especially as they you know you know make a transition from on-premises to the public cloud where they're powering on more and more applications in the public cloud and they really you know sometimes have no idea in terms of where the data is sitting and how they can take advantage of all these data sources that ultimately protecting well no idea where the data is sitting take advantage of these data sources presumably facilitate new classes of integration because that's how you generate value out of data that suggests that we're not just looking at protection as crucial and important as it is we're looking at new classes of services they're going to make it possible to alter the way you think about data management if I got that right and what are those new services yes it's it's a journey as I said right so starting with you know again data protection it's also about doing data protection across multiple clouds right so ultimately we are a platform even though we are announcing you know AWS you know application support today we've already done VMware and BMC as we go along you'll see us kind of doing this across multiple clouds so an application that's built on the cloud running across multiple clouds AWS ashore and GCP or whatever it might be you see as kind of doing data protection across in applications in multiple clouds and then it's about going and saying you know can we take advantage of the data that we are protecting and really power on adjacent use cases you know there could be security use cases because we know exactly what's changing when it's changing there could be infrastructure analytics use cases because people are running tens of thousands of instances and containers and VMs in the public cloud and if a problem happens nobody really knows what caused it and we have all the data and we can kind of you know index it in the backend analyze in the backend without the customer needing to lift a finger and really show them what happened in their environment that they didn't know about right so there's a lot of interesting use cases that get powered on because you have the ability to index all the data here you have the ability to essentially look at all the changes that are happening and really give that visibility to the end customer and all of this one-click and automating it without the customer needing to do much I will tell you this that we've talked to a number of customers of Cuneo and the fundamental choice the clue Meo choice was simplicity how are you going to sustain that even as you add these new classes of services that is the key right and that is about the foundation we have built at the end of the day right so if you look at all of our customers that have you know on boarded today it's really the experience we're in less than you know 15 minutes they can we start enjoying the power of the platform and the backend that we have built and the focus on design that we have is ultimately why we are able to do this with simplicity so so when we when we think about you know all the things we do in the back end there's obviously a lot of complexity in the back 
end because it is a complex platform but every time we ask ourselves the question that okay from a customer perspective how do we make sure that it is one click and easy for them so that focus and that attention to detail that we have behind the scenes to make sure that the customer ultimately should just consume the service and should not need to do anything more than what they absolutely need to do so that they can essentially focus on what adds value to their business takes a lot of technology a lot of dedication to make complex things really simple absolutely whoo John Kumar CEO and co-founder of coolio thanks very much for being on the cube Thank You bigger great conversation with poo John data value leading to data services now let's think a little bit more about how enterprises ultimately need to start thinking about how to manifest that in a cloud rich world Chad Kenney is the vice president and chief acknowledges of Cuneo and Chad and I had an opportunity to sit down and talk about some of the interesting approaches that are possible because of cloud and very importantly to talk about a new announcement that clue miios making as they expand their support of different cloud types let's see what Chad had to say the notion of data services has been around for a long time but it's being upended recast reformed as a consequence of what cloud can do but that also means that cloud is creating new ways of thinking about data services new opportunities to introduce and drive this powerful approach of thinking about digital businesses centralized assets and to have that conversation about what that means we've got Chad Kenny who's a VP and chief technologists of comeö with us today Chad welcome to the cube thanks so much for having me okay so let's start with that notion of data services and the role the clouds going to play Kumi always looked at this problem this challenge from the ground up what does that mean so if you look at the the cloud as a whole customers have gone through a significant journey we've seen you know that the first shadow IT kind of play out where people decided to go to the cloud IT was too slow it moved into kind of a cloud first movement where people realize the power of cloud services that then got them to understand a little bit of interesting things that played out one moving applications as they exist were not very efficient and so they needed to react attack certain applications second SAS was a core way of getting to the cloud in a very simplistic fashion without having to do much of whatsoever and so for applications that were not core competencies they realized they should go SACEM for anything that was a core competency they needed to really reaaargh attack to be able to take advantage of those you know very powerful cloud services and so when you look at it if people were to develop applications today cloud is the default that you'd go towards and so for us we had the luxury of building from the cloud up on these very powerful cloud services to enable a much more simple model for our customers to consume but even more so to be able to actually leverage the agility and elasticity of the cloud think about this for a quick second we can take facilities break them up expand them across many different compute resources within the cloud versus having to take kind of what you did on prim in a single server or multitudes of servers and try to plant that in the cloud from a customer's experience perspective it's vastly different you get a world where you don't 
think about how you manage the infrastructure how you manage the service you just consume it and the value that customers get out of that is not only getting their data there which is the on-ramp around our data protection mechanisms but also being able to leverage cloud native services on top of that data in the longer term as we have this one common global index and platform what we're super excited today to announce is that we're adding in AWS native capabilities to be able to date and protect that data in the public cloud and this is kind of the default place where most people go to from a cloud perspective to really get their applications up and running and take advantage a lot of those cloud native services well if you're gonna be cloud native and promise to customers as you're going to support their workloads you got to be obviously on AWS so congratulations on that but let's go back to this notion of user word powerful mm-hmm AWS is a mature platform GCPs coming along very rapidly asher is you know also very very good and there are others as well but sometimes enterprises discover that they have to make some trade-offs to get the simplicity they have to get less function to get the reliability they have to get rid of simplicity how does qu mio think through those trade-offs to deliver that simple that powerful that reliable platform for something as important as data protection and data services in general so we wanted to create an experience that was single click discover everything and be able to help people consume that service quickly and if you look at the problem that people are dealing with a customer's talked to us about this all the time is the power of the cloud resulted in hundreds if not thousands of accounts within AWS and now you get into a world where you're having to try to figure out how do I manage all of these for one discover all of it and consistently make sure that my data which as you've mentioned is incredibly important to businesses today as protect it and so having that one common view is incredibly important to start with and the simplicity of that is immensely powerful when you look at what we do as a business to make sure that that continues to occur is first we leverage cloud native services on the back which are complex and and and you know getting those things to run and orchestrate are things that we build on the back end on the front end we take the customer's view and looking at what is the most simple way of getting this experience to occur for both discovery as well as you know backup for recovery and even being able to search in a global fashion and so really taking their seats to figure out what would be the easiest way to both consume the service and then also be able to get value from it by running that service AWS has been around well AWS in many respects founded the cloud industry it's it's you know certainly Salesforce and the south side but AWS is that first company to make the promise that it was going to provide this very flexible very powerful very a a July infrastructure as a service and they've done an absolutely marvelous job about it and they've also advanced the state of your technology dramatically and in many respects are in the driver's seat what trade offs what limits does your new platform face as it goes to AWS or is it the same Coolio experience adding now all of the capabilities of AWS it's a great question because I think a lot of solutions out there today are different parts and pieces kind of clump together well we 
built is a platform that these new services just get instantly added next time you log into that service you'll see that that available to you and you can just go ahead and log in to your accounts and be able to discover directly and I think that the vow the power of SAS is really that not only have we made it immensely secure which is something that people think about quite a bit with having you know not only data in flight but data at rest encryption and and leveraging really the cloud capabilities of security but we've made it incredibly simple for them to be able to consume that easily literally not lift a finger to get anything done it's available for you when you log into that system and so having more and more data sources in one single pane of glass and being able to see all the accounts especially in AWS where you have quite a few of those accounts and to be able to apply policies in a consistent fashion to ensure that you're you know compliant within the environment for whatever business requirements that you have around data protection is immensely powerful to our customers Chad Denny Chief Technologist plumie oh thanks very much for being on the tube thank you great conversation Chad especially interested in hearing about how klum EO is being extended to include AWS services within its overall data protection approach and obviously into Data Services but let's take a little bit more into that Columbia was actually generated and prepared a short video that we could take a look at that goes a little bit more deeply into how this is all going to work enterprises are moving rapidly to the cloud embracing sass for simplified delivery of key services in this cloud centric world IT teams can focus on more strategic work accelerating digital transformation initiatives for when it comes to backup IT is stuck designing patching and capacity planning for on-premise systems snapshots alone for data protection in the public cloud is risky and there are hundreds of unprotected SAS applications in the typical enterprise the move to cloud should make backup simpler but it can quickly become exponentially worse it's time to rethink the backup experience what if there were no hardware software or virtual appliances to size configure manage or even buy it all and by adding Enterprise backup public cloud workloads are no longer exposed to accidental data deletion and ransomware and Clube o we deliver secure data backup and recovery without any of that complexity or risk we provide all of the critical functions of enterprise backup d dupe and scheduling user and key management and cataloging because we're built in the public cloud we can rapidly deliver new innovations and take advantage of inherent data security controls our mission is to protect your data wherever it's stored the clew mio authentic SAS backup experience scales on demand to manage and protect your data more easily and efficiently and without things like cloud bills or egress charges luenell gives you predictable costs monitoring global backup compliance is far simpler and the built-in always-on security of Clue mio means that your data is safe take advantage of the cloud for backup with no constraints clew mio authentic SAS for the enterprise great video as we think about moving forward in the future and what customers are trying to do we have to think more in terms of the native services that cloud can provide and how to fully exploit them to increase the aggregate flexible both within our enterprises but also based on what our 
supplies have to offer we had a great conversation with wounds Young who is the CTO and co-founder of Clue mio about just that let's hear it wound had to say everybody's talking about the cloud and what the cloud might be able to do for their business the challenges there are a limited number of people in the world who really understands what it means to build for the cloud utilizing the cloud it's a lot of approximations out there but not a lot of folks are deeply involved in actually doing it right we've got one here with us today woo Jung is the CTO and co-founder of Cluny Oh woo and welcome to the cube how they theny here so let's start with this issue of what it means to build for the cloud now loomio has made the decision to have everything fit into that as a service model what is that practically mean so from the engineering point of view building our SAS application is fundamentally different so the way that I'll go and say is that at Combe you know we actually don't build software and ship software what we actually do it will service and service is what we actually ship to our customers let me give you an example in the case of Kumu they say backups fail like software sometimes fails and we get that failures >> the difference in between chromeo and traditional solutions is that if something were to fail we are the one detecting that failure before our customers - not only that when something fails we actually know exactly why you fail therefore we can actually troubleshoot it and we can actually fix it and operate the service without the customer intervention so it's not about the bugs also or about the troubleshooting aspect but it's also about new features if you were to introduce our new features we can actually do this without having customers upgraded code we will actually do it ourselves so essentially it frees the customers from actually doing all these actions because we will do them on behalf of them at scale and I think that's the second thing I want to talk about quickly is that the ability to use the cloud to do many of the things that you're talking about at scale creates incredible ranges of options that customers have at their disposal so for example AWS customers have historically used things like snapshots to provide it a modicum of data protection to their AWS workloads but there are other new options that could be applied if the systems are built to supply them give us a sense of how kkumeul is looking at this question of no snapshots versus something else yeah so basically traditionally even on the on print side of the things you have something called the snapshots and you had your backups right and there they're fundamentally different but if you actually shift your gears and you look at what they Wis offers today they actually offers the ability for you to take snapshots but actually that's not a backup right and they're fundamentally different so let's talk about it a little bit more what it means to be snapshots and a backup right so let's say there's a bad actor and your account gets compromised like your AWS account gets compromised so then the bad actor has access not only to the EBS volumes but also to the EBS snapshots what that means is that that person can actually go ahead and delete the EBS volume as well as the EBS snapshots now if you had a backup let's say you actually take a backup of that EBS volume to Kumu that bad actor will have access to the EBS volumes however they won't be able to delete the backup that we actually have in Kumu so in the 
whole thing the idea of Kumi on is that you should be able to protect all of your assets that being either a non-prime or AWS by setting up a single policies and these are true backups and not just snapshots and that leads to the last question I have which is ultimately the ability to introduce these capabilities at scale creates a lot of new opportunities that customers can utilize to do a better job of building applications but also I presume managing how they use AWS because snapshots and other types of servers can expand dramatically which can increase your cost how is doing it better with things like native backup services improve a customer's ability to administer their AWS spend and accounts so great question so essentially if you look at the enterprise's today obviously they have multiple you know on-premise data centers and also a different card provide that they use like AWS and Azure and also a few SAS applications right so then the idea is for cumin is to create this single platform where all of these things can actually be backed up in a uniform way where you can actually manage all of them and then the other thing is all doing it in the cloud so if you think about it if you don't solve the poem fundamentally in the cloud there's things that you end up paying later on so let's take an example right moving bytes moving bytes in between one server to the other traditionally basically moving bytes from one rack to the other it was always free you never had to pay anything for that certainly in the data center all right but if you actually go to the public cloud you cannot say the same thing right basically moving by it across aw s recent regions is not free anymore moving data from AWS to the on premises that's not fair either so these are all the things that any you know car provider service provider like ours has to consider and actually solve so that the customers can only back it up into Kumu but then they actually can leverage different cloud providers you know in a seamless way without having to worry all of this costs associated with it so kkumeul we should be able to back it up but we should be able to also offer mobility in between either AWS back at VMware or VNC so if I can kind of summarize what you just said that you want to be able to provide to an account to an enterprise the ability to not have to worry about the back-end infrastructure from a technical and process standpoint but not also have to worry so much about the back-end infrastructure from a cost and financial standpoint that by providing a service and then administering how that service is optimally handled the customer doesn't have to think about some of those financial considerations of moving data around in the same way that they used to I got that right I absolutely yes basically multiple accounts multiple regions multiple providers it is extremely hard to manage what Cuneo does it will actually provide you a single pane of glass where you can actually manage them all but then if you actually think about just and manageability it's actually you can actually do that by just building a management layer on top of it but more importantly you and we need to have a single data you know repository for you for us to be able to provide a true mobility between them one is about managing but the other thing is about if you're done if you're done it the real the right way it provides you the ability to move them and it leverages the cloud power so that you don't have to worry about the cloud expenses but kkumeul 
internally is the one actually optimizing all of this for our customers. Woon Jung, CTO and co-founder of Clumio, thanks very much for being on the cube. Thank you. Thanks very much, Woon. I want to thank Clumio for providing this important content about the increasingly important evolution of data protection and cloud. Now here's your opportunity to weigh in on this crucially important arena. What do you think about this evolving relationship? How do you foresee it operating in your enterprise? What comments do you have? What questions do you have for the thought leaders from Clumio and elsewhere? That's what we're going to do now: we're going to go into the CrowdChat, and we're going to hear from each other about this really important topic and what you foresee in your enterprise as your digital business transforms. Let's CrowdChat. [Music]
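To make the snapshot-versus-backup distinction above concrete, here is a hedged boto3 sketch: an EBS snapshot lives in the same AWS account as the volume it protects, so the same credentials that created it can also destroy it. The volume ID and region are placeholders.

```python
# Sketch only: why an in-account snapshot is not an independent backup.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Take an in-account snapshot -- useful for recovery, but not an independent backup.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",          # placeholder volume ID
    Description="nightly crash-consistent point",
)

# 2. The same credentials that created the snapshot can destroy it again;
#    ec2.delete_volume() is equally available to whoever holds this account.
ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])

# A backup in the sense described above is one these credentials cannot reach:
# copied out to a separately authenticated service or isolated account, so a
# compromised account cannot erase its own recovery points.
```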
**Summary and Sentiment Analysis are not shown because of an improper transcript**
ENTITIES
Entity | Category | Confidence |
---|---|---|
pooja Kumar | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Chad | PERSON | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Chad Kenney | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Chad Kenny | PERSON | 0.99+ |
BMC | ORGANIZATION | 0.99+ |
135 million dollars | QUANTITY | 0.99+ |
51 million dollars | QUANTITY | 0.99+ |
Peter | PERSON | 0.99+ |
Kumi | PERSON | 0.99+ |
coolio | ORGANIZATION | 0.99+ |
first time | QUANTITY | 0.99+ |
second thing | QUANTITY | 0.99+ |
chromeo | TITLE | 0.99+ |
John Kumar | PERSON | 0.99+ |
first time | QUANTITY | 0.99+ |
two year old | QUANTITY | 0.99+ |
chromeo | PERSON | 0.99+ |
today | DATE | 0.99+ |
15 minutes | QUANTITY | 0.99+ |
two months ago | DATE | 0.99+ |
two rounds | QUANTITY | 0.98+ |
VMware | ORGANIZATION | 0.98+ |
Chad Denny | PERSON | 0.98+ |
over two years old | QUANTITY | 0.97+ |
one-click | QUANTITY | 0.97+ |
SAS | ORGANIZATION | 0.97+ |
single service | QUANTITY | 0.97+ |
Cuneo | ORGANIZATION | 0.96+ |
Kaleo | ORGANIZATION | 0.96+ |
one server | QUANTITY | 0.96+ |
one click | QUANTITY | 0.96+ |
tens of thousands of instances | QUANTITY | 0.96+ |
VNC | ORGANIZATION | 0.96+ |
both | QUANTITY | 0.95+ |
one region | QUANTITY | 0.94+ |
series a | OTHER | 0.94+ |
Combe | ORGANIZATION | 0.94+ |
single policies | QUANTITY | 0.94+ |
first | QUANTITY | 0.94+ |
thousands of accounts | QUANTITY | 0.93+ |
columbia | LOCATION | 0.93+ |
first company | QUANTITY | 0.93+ |
first movement | QUANTITY | 0.92+ |
single platform | QUANTITY | 0.91+ |
first conversation | QUANTITY | 0.91+ |
pooja | PERSON | 0.91+ |
Chad | ORGANIZATION | 0.91+ |
Series B | OTHER | 0.9+ |
Clue mio | ORGANIZATION | 0.9+ |
one single pane of glass | QUANTITY | 0.9+ |
woo Jung | PERSON | 0.9+ |
Cluny | PERSON | 0.89+ |
Trevor Starnes, Pure Storage | VeeamON 2019
Live from Miami Beach, Florida, it's theCube, covering VeeamON 2019, brought to you by Veeam. Hello everyone, welcome back to Miami. This is theCube, the leader in live tech coverage. It's day two of our coverage of VeeamON 2019 at the Fontainebleau Hotel in sunny Miami. I'm Dave Vellante with Peter Burris. Trevor Starnes is here, the director of systems engineering for Pure Storage. Trevor, great to see you again. Yeah, thanks for having me. Well, we've been following Pure since the early days. I remember interviewing Scott Dietzen at SNW way back when, and seeing the ascendancy and the rise: Pure hits escape velocity, goes public, it's just been an awesome ride. You guys have really transformed the industry. You started out as, you know, the flash play, but now you're getting much deeper into data and data strategies, and data protection is one of those. So we're here at VeeamON; what are your impressions so far this week? The conference has been great, a lot of great interactions. Veeam has been an incredibly strong alliance partner for us. The synergies are just incredible, because as we've evolved, as you mentioned, from a singular product, an all-flash array, and disrupting the market there back in the early 2010s, into more of a data platform company, data protection has actually turned out to be a great business for us. It's growing incredibly fast, and like I said, there are a lot of great synergies with Veeam. So the systems engineering role has always been a critical part of the sales process, right? The SEs, it's like, I need an SE, and then you guys go in and help the sales team really understand what the customer needs, and you help solve problems. But how is that role defined at Pure, and how is it evolving in the industry? Yeah, absolutely. I think, similar to our products, in the early days we hired a lot of folks who were storage specialists, and we've evolved into having to go far beyond that, into different realms around things like AI, machine learning, data protection, virtualization, containers. So it's definitely evolving. It's a challenge to us as a company, and we're certainly trying not to maintain the status quo; we want to continue to disrupt, and do that in adjacent markets. So how do you work with Veeam, just in terms of taking your platform and their software and making a solution that's simple for customers, that's not stovepiped, you know, single throat to choke? Describe that whole process. Yeah, so recently, earlier this year or maybe late last year, we developed some integration with Veeam where we actually integrate with their Universal Storage API, so Veeam can control Pure Storage snapshots. As you're probably familiar, Pure snapshots on flash are incredibly powerful; it's a very powerful metadata engine in Purity, which means we can take thousands of snapshots with no performance impact, and they're near-instantaneous. With Veeam, we can integrate that directly into Veeam backup and data protection workflows, and Veeam can completely control Pure Storage snapshots, both on-array and off-array, which we'll talk about, without a storage administrator having to log into Pure at all. Okay, so talk more about how this plays with your customers. I mean, when you're in with the customer and you're scoping it out, how is that conversation changing, just in terms of, as you say, you went from okay, here's an array and flash, to now there's the whole spectrum of other things that
you're doing that's what's the data protection conversation like how does it relate to their digital transformation their digital business where do you guys fit there well operationally we've seen a huge trend from customers that a decade or so ago you saw the trend of going from disk to disk to tape tape for long term archive what we're actually here at the conference really promoting is this idea of the next big wave of evolution there which is we see customers going from flash to flash for the first step in backup and then instead of off-site tape going to the cloud so that's been an incredibly successful message for us early on and so that actually started with you know I mentioned the pure flash or a snapshot integration but actually moving those snapshots off of flash array to our second product which is flash blade flash blades a really unique product it was originally designed with the next big wave of innovation in mind around things like containers and deep learning where high amounts of bandwidth and parallelism are just absolutely critical billions of small files it just so happens that actually caters really well to backup performant and restore performance so backing up to disk was a big success for a lot of customers but what they're seeing now and what we're seeing as workloads continue to get more diverse is that there's a restore challenge so we have customers that are backing up to disk but they're seeing massive challenges around getting their data back and getting back online the recovery time objective pressures from the business are becoming more and more important this actually started for us in the SAS industry where one of the world's largest SAS providers out in Silicon Valley had to do an increasing amount of restore and they've they actually started using flash blade as what we call a rapid restore platform so they're able to nearly instantaneously restore these databases and what we found is nearly across the board and in all other industries that there's a large number of customers that have that challenge more so then we find you know going to market with flash played for like AI for example there's not as many people doing that quite yet we've been successful in the ones that are but across the board healthcare legal high-tech you name it it there's a restore problem and with flash played we've seen people go you know for example one of our really other customers outside of the SAS world is in the healthcare space the industry's number one cancer center in the in the world is actually leveraging it for rapid restore for databases but they're also doing some other neat things because flash blades not a purpose-built backup appliance it can be used for other things anything file an object works great so what you can do is you can combine the use cases and that's been really powerful from a TCO perspective you might say customers might say well you know flash is too expensive but if there's a restore problem that may not matter and then if you combine it with other use cases we call that our data hub story it's even more powerful in the TCO becomes you know really attractive so the healthcare using it for PACs and rapid restore you know there's other industries like you know my gaming industry like I mentioned high tech so that data hub message has been really powerful that's return on asset and asset leverage oh absolutely and and and one of the things I'd like to talk about Trevor is relating to that is there are a lot of ways of describing some of those 
fundamentals, some of those really contingent and essential changes that are taking place in the industry today. But one of them clearly is, flash allows us to move from a data storage orientation of record, you know, save the data, to one of deliver the data to new applications. Pure has been at the vanguard of that and has seen a lot of these new use cases. As we think about, you know, return on data assets and whatnot, how is your visibility into those new use cases changing Pure's perspective, and Pure customers' perspective, on data protection? Because it seems like the notion of data protection, which has been around for a long time, is starting to fray as these new use cases say it's not just about protecting what's happened, it's setting me up for doing new types of work in the future. So how is Pure seeing that, and how is that conversation about data protection changing because of some of the drive that Pure's got in the marketplace? >> Yeah, and I think the first step is, hey, I can back up my data, but if I can't use it, it's kind of worthless, right? So being able to use that data much, much more rapidly, but also repurposing it. This idea of data silos has been around IT for years, and with FlashBlade and that data hub story we're really breaking down those silos, to be able to say, hey, the same platform that you're storing your data protection data on, as well as other data, is the same platform that I might be able to spin that data up on. So Veeam's got a great story with DataLabs, where you can actually spin up these virtual environments and run them, and on a purpose-built backup device, you know, it's questionable if that actually works, right, having to pull that back over the network to another silo. With FlashBlade and the data hub it makes that realistic, and you're getting so much more out of the data, delivering that back to the business, and actually being able to deliver these key insights into what my data is actually doing and make better business decisions as the output. >> Do you see kind of an analogy, a relationship, between the previous era, where storage was about persisting data and therefore was about protecting what has happened, and flash being about delivering data to new applications, and therefore there's some new concept? Are customers pushing you guys towards something that's bigger than data protection? I mean, it's something that we're struggling with, and one of the customers we have is struggling with, how do I talk about what these services are when I'm spinning up Kubernetes clusters, like that. >> That's right. >> So is there some new conversation that you're starting to see? You guys were one of the first to have a conversation about, you know, flash data for AI. Are you starting to have conversations about, you know, delivering data, something more than protection? >> Yeah, near real-time ability to spin up development environments, CI/CD pipelines, all of those things. We actually have a product that, as a Pure customer, you get included with your maintenance contract, called Pure Service Orchestrator, which can actually help provision end-to-end container environments, and being able to repurpose that data for, like I said, test-dev, development pipelines, and those kinds of things. And we're also, as you've probably heard, tying that into a cloud strategy as well. So there's products we've announced on the Cloud Block Store side, as well as Object Engine, which is a product we haven't talked about yet, which enables customers to truly see the benefit of a hybrid cloud scenario. So they may be developing an application on-prem and pushing it to the cloud, or vice versa, and we're actually going to give them the capability to do that. >> Talk more about Object Engine, specifically what it is. The name has us inferring object store, and you hear the name, it could mean a number of things, but clearly it has to do with object storage. >> So Object Engine was actually a technology that was born in the cloud. It was a cloud-native application that was really designed around data reduction for cloud workloads. What we've done as part of that product, as it folded into Pure, ironically it is not what we used it for first: we developed an appliance, and we call that our Object Engine appliance. That's just phase one. So what Object Engine delivers is a highly scalable, highly performant data reduction platform, and we're starting with backup and data protection workloads. So Veeam obviously does their inline data reduction technology; if the customer finds that they need something more scalable, they can actually leverage Object Engine to do that, with FlashBlade on the back end as the initial tier. And then the future vision for Object Engine is that it's going to give you the cloud connectivity to be able to say, okay, I want to automatically push my backup workloads, from an archive perspective, out to the cloud. We're starting in AWS, we're going to do Azure and others as well. So the next big wave of that that you'll see is actually running Object Engine in the cloud in a hybrid scenario, and being able to move those workloads back and forth. So kind of envision, you know, the near-term backup and restore, where most of the restores happen within a week or two, on-prem, and then 100% also stored in the cloud for more long-term archives. So it really completes the flash-to-flash-to-cloud story. But we're not going to stop with backup workloads either. >> And where's your sort of value-add in that equation, and where's Veeam's, and what are the connection points there? >> Good question, good question. So, you know, I think again, I mentioned Veeam obviously has their inline data reduction technology. Where we insert Object Engine is really for one of two reasons. One, if our data reduction offsets the cost of the whole solution versus just using Veeam's data reduction, because it's a hardware offload, essentially. And then the second one is if you have, you know, a large amount of data that you want to push out to the cloud, as our kind of phase two of that product. >> Right, okay. I want to ask you about the partnership from the standpoint of values. The values of Pure: you guys are a fun company, I love orange, you go to Pure Accelerate, everybody's wearing orange; you come here, everybody's wearing green. So these seem to be kind of birds of a feather. But we just talked about value-add, what about the values of your company, and sort of how are you guys getting along? >> Yeah, we're getting along great. Like I said, there's a lot of synergies from a solution standpoint, but also from a go-to-market standpoint: trying to be, you know, a disruptive company, disruptive technology, disruptive solutions, what is that next thing, right, not being a me-too player in the market. And so I think we share a lot of those same values, but also customer success. We really focus on the outcomes and the happiness of the customer, and that's down in the core of our engineering, same thing with Veeam. Where I think we can really help each other is, Veeam has a big push right now to move up-market into the enterprise, and we feel like we can help Veeam in that respect; we've been very successful in the enterprise. And likewise, Veeam obviously has a major presence in EMEA, and that's a market that is growing for us substantially, but we've got a lot of upside, so we really think we can help each other there. And I actually failed to mention, the very first Object Engine-FlashBlade sale we did was with Veeam, so, you know, it was just natural in that perspective. And I think pre-Object Engine, and before this whole idea of Rapid Restore really took off with FlashBlade, it was just the FlashArray protection, and even that's still pretty new, but now it's much more comprehensive. So we've got common competitors as well. >> And Pure Accelerate is coming up in September, it's in Austin, your hometown. >> I'm in Austin, Texas, so yeah, we'll be there, September 15th to 18th, and we're going to be talking about a ton of stuff, obviously flash-to-flash-to-cloud, but well beyond storage as well. So, you know, don't think of it as just a storage conference. >> It's always a fun event, we've covered it now I think twice, this will be our third year, and Austin is a great, great town, so looking forward to that. Trevor, thanks so much for coming on theCUBE. >> Loved it, thanks for having me, I felt very welcome. >> All right, keep it right there everybody, we'll be back with our next guest right after this short break. I'm Dave Vellante with Peter Burris, VeeamON 2019 from Miami, right back.
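The insertion criterion Trevor describes above, add the data-reduction offload only when its extra reduction pays for itself, or when a large archive has to move to the cloud, boils down to simple arithmetic. Here is a minimal sketch of that comparison; all prices and reduction ratios below are illustrative assumptions, not Pure or Veeam figures.

```python
# Back-of-the-envelope check of the "data reduction offsets the cost" criterion
# described above. Every price and ratio here is a made-up placeholder, not a
# Pure Storage or Veeam figure -- substitute your own quotes.

def effective_cost_per_tb(raw_cost_per_tb: float, reduction_ratio: float) -> float:
    """Cost per TB of logical (pre-reduction) backup data."""
    return raw_cost_per_tb / reduction_ratio

# Scenario A: rely on the backup software's inline reduction alone (assume 2:1).
software_only = effective_cost_per_tb(raw_cost_per_tb=300.0, reduction_ratio=2.0)

# Scenario B: add a dedicated data-reduction tier in front of the same storage
# (assume 5:1) -- the "hardware offload" idea from the conversation. The extra
# 50.0 stands in for the cost of that tier, amortized per raw TB.
with_offload = effective_cost_per_tb(raw_cost_per_tb=300.0 + 50.0, reduction_ratio=5.0)

print(f"software-only reduction: ${software_only:.0f} per logical TB")
print(f"with offload tier:       ${with_offload:.0f} per logical TB")
# The offload tier "offsets the cost of the whole solution" whenever the second
# figure comes out lower than the first for your real ratios and prices.
```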
ENTITIES
Entity | Category | Confidence |
---|---|---|
Austin | LOCATION | 0.99+ |
Miami | LOCATION | 0.99+ |
Dave Volante | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
100% | QUANTITY | 0.99+ |
Scott Dietzen | PERSON | 0.99+ |
third year | QUANTITY | 0.99+ |
Trevor | PERSON | 0.99+ |
September | DATE | 0.99+ |
September 15th | DATE | 0.99+ |
twice | QUANTITY | 0.99+ |
Trevor Starnes | PERSON | 0.99+ |
first step | QUANTITY | 0.99+ |
first step | QUANTITY | 0.99+ |
2019 | DATE | 0.99+ |
Austin Texas | LOCATION | 0.99+ |
second product | QUANTITY | 0.98+ |
Veeam | ORGANIZATION | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
Peter Burroughs | PERSON | 0.98+ |
one | QUANTITY | 0.98+ |
18th | DATE | 0.98+ |
Miami Beach Florida | LOCATION | 0.98+ |
early 2010 | DATE | 0.98+ |
thousands of snapshots | QUANTITY | 0.98+ |
a week | QUANTITY | 0.98+ |
earlier this year | DATE | 0.97+ |
late last year | DATE | 0.97+ |
first | QUANTITY | 0.97+ |
this week | DATE | 0.96+ |
two | QUANTITY | 0.96+ |
billions of small files | QUANTITY | 0.96+ |
second one | QUANTITY | 0.96+ |
two reasons | QUANTITY | 0.95+ |
today | DATE | 0.94+ |
Fontainebleau Hotel | LOCATION | 0.93+ |
Veen | ORGANIZATION | 0.91+ |
lot of customers | QUANTITY | 0.9+ |
v-mon 2019 | EVENT | 0.9+ |
phase one | QUANTITY | 0.9+ |
phase two | QUANTITY | 0.89+ |
veem | ORGANIZATION | 0.86+ |
SAS | ORGANIZATION | 0.85+ |
Dave Volante | PERSON | 0.84+ |
Peterborough | ORGANIZATION | 0.82+ |
both | QUANTITY | 0.82+ |
big wave of evolution | EVENT | 0.81+ |
years | QUANTITY | 0.8+ |
big wave of innovation | EVENT | 0.77+ |
a decade or so ago | DATE | 0.75+ |
v-mon | EVENT | 0.75+ |
lot | QUANTITY | 0.74+ |
a ton of stuff | QUANTITY | 0.73+ |
Azure | TITLE | 0.71+ |
pure | ORGANIZATION | 0.7+ |
VeeamON | ORGANIZATION | 0.7+ |
days | DATE | 0.64+ |
28 | DATE | 0.62+ |
purus | ORGANIZATION | 0.61+ |
beam | ORGANIZATION | 0.61+ |
the world's | QUANTITY | 0.57+ |
TCO | ORGANIZATION | 0.54+ |
day two | QUANTITY | 0.52+ |
beam | TITLE | 0.5+ |
number of | QUANTITY | 0.5+ |
things | QUANTITY | 0.5+ |
folks | QUANTITY | 0.5+ |
mon | TITLE | 0.35+ |
Nathan Hall, Pure Storage | Veritas Vision Solution Day
>> From Tavern on the Green in Central Park, New York it's theCUBE. Covering Veritas Vision Solution Day, brought to you by Veritas. >> Welcome back to New York City everybody. We're here in the heart of Central Park at Tavern On the Green, a beautiful facility. I'm surrounded by Yankee fans so I'm like a fish out of water. But that's okay, it's a great time of the year. We love it, we're still in it up in Boston so we're happy. Dave Vellante here, you're watching theCUBE, the leader in live tech coverage. Nathan Hall is here, he's the field CTO at Pure Storage. Nathan, good to see you. >> Good to see you too. >> Thanks for coming on. >> Thanks. >> So you guys made some announcements today with Veritas, what's that all about? >> It's pretty exciting, and Veritas being the market leader in data protection software, now our customers are able to take Veritas's NetBackup software and use it to drive the policy engine of snapshots for our FlashArrays. They're also able to take Veritas and back up our data hub, which is our new strategy with FlashBlade to really unify all of data analytics onto a single platform. So Veritas really is the solution, NetBackup, that's able to back up all the workloads, and Pure is the solution that's able to run all the workloads. >> So what if I could follow-up on that, maybe push you a little bit? A lot of these announcements that you see, we call them Barney deals, I love you, you love me, we go to market together and everything's wonderful. Are we talking about deeper integration than that, or is it just kind of a press release? >> Absolutely deeper integration. So you'll see not just how-to guides, white papers, et cetera, but there's actual engineering-level integration that's happening here. We're available as an advanced disk target within NetBackup, we've integrated into CloudPoint as well. We certify all of our hardware platforms with Veritas. So this is deep, deep engineering-level integration. >> Yeah, we're excited about Pure, we followed you guys since the early days. You know we saw Scott Dietzen, what he built, very impressive modern architecture, you won't be a legacy for 20, 25 years so you've got a lot going for you. Presumably it's easier to integrate with such a modern architecture, but now at the same time you got to integrate with Veritas, it's been around for about 25 years. We heard a lot about how they're investing in API-based architectures, and microservices, and containers and the like, so what is that like in terms of integrating with a 25-year-old company? >> Well I think, from Pure's perspective we are API first, we're RESTful APIs first. We've done a ton of integrations across multiple platforms whether it's Kubernetes, Docker, VMware, et cetera, so we have a lot of experience in terms of how to integrate with various flavors of other infrastructure. I think Veritas has done a lot of work as well in terms of maturing their API to really be this kind of cloud-first type of API, this RESTful API, that made our cross-integration much easier. >> You guys like being first, there were a number of firsts, you guys were kind of the first, or one of the first with flash for block. You were kind of the first for file. You guys have hit AI pretty hard, everybody's now doing that. You guys announced the first partnership with NVIDIA, everybody's now doing that. (laughs) You guys announced giving away NVMe as part of the stack for no upcharge, everybody's now doing that. So, you like to be first.
Culturally, you've worked at some other companies, what's behind that? >> Well culturally, this is the best company I've worked at in terms of culture, period, and really it all starts with the culture of the company. I think that's why we're first in so many places, and it's not just first in terms of first to market. It's really about first in terms of customer feedback. If you look at the Gartner Magic Quadrant, we're up there, we've been in the leaders quadrant for five years in a row. But this year, we're indisputably the leader. Furthest to the right on the X-axis, furthest north on the Y-axis, and that's all driven by just a customer-obsessed culture. We've got a Net Promoter Score of 86.6 which is stratospheric. It's something that puts us in the top 1% of all business-to-business companies, not just tech companies. So, it's really that culture of customer obsession that drives us to be first. Both to market, in a lot of cases, but also just first in terms of customer perception of our technology. >> You guys were kind of the first to really hit escape velocity, the billion dollar unicorn status, and now you're kind of having that fly-wheel effect where you're able to throw off different innovations in different areas. Can you talk more about the data hub and the relevance to what you're doing with Veritas and data protection? Let's unpack that a little bit. >> Sure, sure, the data hub. We had a great keynote this morning with Jyothi, the VP of Marketing for Veritas, and he had an interesting customer tidbit. He had some sort of unnamed government agency customer that actually gets penalized when they're unable to retrieve data fast enough. That's not something that many of our customers have, but they do get penalized in terms of opportunity costs. The reason why is 'cause customers just have their data siloed into all these different split-up locations, and that prevents them from being able to get insight out of that data. If you look at AI luminaries like Andrew Ng, or even people like Dominique Brezinski at Apple, they all agree that, in order to be successful with your data strategy, you have to unify these data silos. And that's what the data hub does. For the first time we're able to unify everything from data warehousing, to data lakes, to streaming analytics, to AI, and now even backup, all onto a single platform with multidimensional performance. That's FlashBlade and that is our data hub, we think it's revolutionary and we're challenging the rest of the storage industry to follow suit. Let's make fewer silos, let's unify the data into a data hub so that our customers can get real actionable information out of their data. >> I was on a crowd chat the other day, you guys put out an open letter to the storage community, an open challenge, so that was kind of both a little controversial but also some fun. That's a very important point you're making about sort of putting data at the core. I'll make an observation, it's not so much true about Facebook anymore 'cause after the whole fake news thing their market value dropped. But if you look at the top five companies in terms of market value, include Facebook in there, they and Berkshire keep doing this, but let's assume for a second that Facebook's up there. Apple, Google, Facebook, Microsoft, and Amazon, top five in terms of US market value. Of course markets ebb and they flow, but it's no coincidence that those are data companies. They all have a lot of hard assets at those companies.
They've got data at their core, so it's interesting to hear you talk about the data hub, because one of the challenges that we see for traditional companies, call them incumbents, is they have data in stovepipes. For them to compete they've got to put it in the digital world, they've got to put data at their core. It's not just for start-ups and people doing Greenfield, it's for folks that are established and don't want to get disrupted. Long-winded question: let's think of a traditional company, an incumbent company, how do they get from point A to point B with the data hub? >> I think Andrew Ng has a great talking point on this. He basically talks about your data strategy, and you need to think about, as a company, how do you acquire data and then how do you unify it into a single data hub? It's not just around putting it on a single platform, such as FlashBlade. A valuable byproduct of that is, if you have all this stove-piped data, then your data scientists trying to get access to it have a problem: with 10 different stovepipes you've got 10 different VPs that you have to go talk to in order to get access to that data. So it really starts with stopping the bleeding and starting to have a data strategy around how do we acquire data, and how do we make certain we're storing data in the same place and have a single unified data hub, in order to maximize the value we are able to get out of that data. >> You know, I'll throw my two cents in, I talk to a lot of chief data officers. To me, the ones that are most insightful talk about their five imperatives. First of all, they've got to understand how data contributes to monetization. Whether it's saving money or making money, it's not necessarily selling your data. I think a lot of people make that mistake, oh I'm going to monetize my data, I mean I'm going to sell my data, no, it's all about how it contributes to value. The second is, what about data sources? And then how do I get access to data sources? There's a lot implied there in terms of governance and security and who has access to that. And at the same time, how do I scale up my business so that I get the right people who can act on that data? Then how do I form relationships with a line of business so that I can maximize that monetization? Those are, I think, sensible steps that aren't trivial. They require a lot of thought and a lot of cultural change, and I would imagine that's what a lot of your customers are going through right now. >> I think they are, and I think as IT practitioners out there, we have a duty to get closer to our business and be able to kind of educate them around these data strategies. To give them the same level of insight that you're talking about, that you see in some chief data officers. There's a recent study on the Fortune 50 CXOs, and these aren't even CIOs: we think as IT practitioners that the cloud is the most disruptive thing that we see, but the CEOs and the CFOs are actually five times more likely to talk about AI and data as being more disruptive to their business. But most of them have no data strategy, most of them don't know how AI works. It's up to us as IT practitioners to educate the business. To say here's what's possible, here's what we have to do in order to maximize the value out of data, so that you can get a business advantage out of this. It's incumbent on us as IT leaders.
>> So Nathan, I think again, that's really insightful, because let's face it, if you're moving at the speed of the CIO, which is what many companies want to do, because that's the so-called fat middle and that's where the money is, you're behind. I mean, we're moving into a new era. The cloud era, no pun intended, is here, it's solid, but we're entering that era of machine intelligence, and we built the foundation with Hadoop even, there's a lot of data, now what do we do with it? We see, and I wonder if you could comment on this, the innovation engine of the future changing. It used to be Moore's Law, we marched to the cadence of Moore's Law for years. Now it's data, applying machine intelligence, and then, of course, using the cloud for scale and attracting start-ups and innovation. That's fine because we want to program infrastructure, we don't want to deploy infrastructure. If you think about Pure, you got data for sure. You're going hard after machine intelligence. And cloud, if I understand your cloud play, you sell to cloud providers whether they're on-prem or in the public cloud, but what do you think about those? That innovation sandwich that I just described, and how do you guys play? >> Well, cloud is where we get over 30% of our revenue, so we're actually selling to the cloud, cloud service providers, et cetera. For example, one of the biggest cloud service providers out there, one that I think today's announcement helps out a lot from a policy perspective, actually used FlashBlade to reduce their SLAs, to reduce their restore time from, I think, it was 30 hours down to 38 minutes. They were paying money before to their customers. What we see in our cloud strategy is one of empowering cloud providers, but also we think that cloud is increasingly, at the infrastructure layer, going to be commoditized and it's going to be about how do we enable multicloud? So how do we enable customers to get around data gravity problems? I've got this big, weighty database that I want to see if I can move it up to the cloud, but that takes me forever. So how do we help customers be able to move to one cloud, or even exit a cloud to another, or back to on-prem? We think there's a lot of value in applying our, for example, deduplication technology, et cetera, to helping customers with those data gravity problems, to making a more open world in terms of sharing data to and from the cloud. >> Great, well we looked at Pure and Veritas getting together, doing some hard core engineering, going to market, solving some real problems. Thanks Nathan for hanging out at this iconic, beautiful Tavern on the Green in the heart of New York City. Appreciate you coming on theCUBE. >> Thanks Dave. >> All right, keep it right there everybody, this is Dave Vellante. We'll be right back right after this short break. You're watching theCUBE from Veritas Solutions Day, #VeritasVision, be right back. (digital music)
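Nathan's data-gravity point, the big, weighty database that "takes forever" to move, is easy to make concrete with a rough transfer-time estimate. The numbers below are assumptions for illustration only, not measurements from any Pure or Veritas deployment.

```python
# Rough data-gravity estimate for the "big, weighty database" problem Nathan
# mentions: how long does a one-time move to (or from) a cloud take?
# Figures below are illustrative assumptions, not vendor numbers.

def transfer_hours(dataset_tb: float, link_gbps: float, reduction_ratio: float = 1.0,
                   efficiency: float = 0.7) -> float:
    """Hours to push a dataset over a link, after optional data reduction.

    efficiency accounts for protocol overhead and the link not being dedicated.
    """
    bits = dataset_tb * 1e12 * 8 / reduction_ratio
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 3600

db_tb = 100  # hypothetical 100 TB database
print(f"raw copy over 1 Gb/s:  {transfer_hours(db_tb, 1):7.1f} h")
print(f"raw copy over 10 Gb/s: {transfer_hours(db_tb, 10):7.1f} h")
print(f"4:1 reduced, 10 Gb/s:  {transfer_hours(db_tb, 10, reduction_ratio=4):7.1f} h")
# Shrinking what crosses the wire (dedupe/compression) attacks data gravity
# directly, which is the point of the closing comments above.
```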
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Andrew Ng | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Dominique Brezinski | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Amazon | ORGANIZATION | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Nathan | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Boston | LOCATION | 0.99+ |
Dave Vallante | PERSON | 0.99+ |
Nathan Hall | PERSON | 0.99+ |
Jyothi | PERSON | 0.99+ |
five years | QUANTITY | 0.99+ |
20 | QUANTITY | 0.99+ |
38 minutes | QUANTITY | 0.99+ |
New York City | LOCATION | 0.99+ |
30 hours | QUANTITY | 0.99+ |
Veritas | ORGANIZATION | 0.99+ |
second | QUANTITY | 0.99+ |
Pure | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
Scott Dietzen | PERSON | 0.99+ |
first | QUANTITY | 0.99+ |
Both | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
two cents | QUANTITY | 0.99+ |
billion dollar | QUANTITY | 0.99+ |
Central Park | LOCATION | 0.98+ |
five times | QUANTITY | 0.98+ |
about 25 years | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
single | QUANTITY | 0.98+ |
first partnership | QUANTITY | 0.98+ |
US | LOCATION | 0.98+ |
Gartner | ORGANIZATION | 0.98+ |
10 different stovepipes | QUANTITY | 0.97+ |
1% | QUANTITY | 0.97+ |
over 30% | QUANTITY | 0.97+ |
First | QUANTITY | 0.96+ |
10 different VPs | QUANTITY | 0.96+ |
Veritas Solutions Day | EVENT | 0.96+ |
RESTful | TITLE | 0.96+ |
86.6 | QUANTITY | 0.96+ |
first time | QUANTITY | 0.96+ |
one cloud | QUANTITY | 0.95+ |
firsts | QUANTITY | 0.95+ |
five imperatives | QUANTITY | 0.95+ |
single platform | QUANTITY | 0.95+ |
25-year-old | QUANTITY | 0.94+ |
Central Park, New York | LOCATION | 0.92+ |
CloudPoint | TITLE | 0.92+ |
Tavern on the Green | LOCATION | 0.92+ |
Barney | ORGANIZATION | 0.92+ |
Tavern On the Green | LOCATION | 0.9+ |
#VeritasVision | ORGANIZATION | 0.88+ |
Moore's Law | TITLE | 0.87+ |
FlashBlade | ORGANIZATION | 0.87+ |
Veritas Vision Solution Day | EVENT | 0.86+ |
VP | PERSON | 0.86+ |
25 years | QUANTITY | 0.85+ |
Jacob Broido & Neville Yates, INFINIDAT | VMworld 2018
>> Live from Las Vegas. It's theCUBE. Covering VMworld 2018. Brought to you by VMware and its ecosystem partners. >> Welcome back to the Mandalay Bay everybody in Las Vegas. My name is Dave Vellante, I'm here with David Floyer. This is day three of our wall to wall coverage of VMworld 2018. We've got two sets here in the VM Village. 94 guests this week. It's a record for theCUBE. Thanks so much for watching. I've been in this business as long as Pat Gelsinger, and ever since I've been in this business people have said, "oh infrastructure's dying", and you know what, storage is the gift that keeps on giving. And we love the conversations. Guys from Infinidat are here. Jacob Broido is the Chief Product Officer and Neville Yates is the Senior Director of Data Protection Solutions at Infinidat. Gentlemen, welcome to theCUBE. Happy VMworld 2018. >> Thank you. >> Thank you. >> All right Jacob, I'm going to start with you. >> Okay. >> So we have seen Infinidat come in. You're basically competing with all flash arrays, you're faster than Flash, and that's your sort of tag line. So you have this system designed for primary storage, and then all of a sudden, you know last summer, around last summer, maybe it was the fall, we see you guys entering the data protection market with essentially the same architecture. How is it that you can take a system that's designed for primary storage, faster than Flash, and then point it at data protection? Help us understand. >> That's a great question. So, it all starts with the fact that we designed our system to work with mixed workloads. Primary storage was our first key point, but the design and architecture are supposed to work with any type of workload. And what we started seeing in the field is that our customers first displaced a lot of incumbent primary storage on us. And then we started seeing them putting backup workloads and data protection workloads on our systems as well, and coming back and saying that this works amazingly, which led to more of that. This basically led us to a point of expanding on that strategy and introducing additional products and services. The key point for us in this was that it was remarkably easy for us to introduce additional capabilities because of the solid technical and architectural foundation. We're very fast. Our financial model enables us to go after the data protection market efficiently, and we're seeing this in the field. >> So Neville, help us, paint a picture for us. You've got a long history in the data protection market. You were involved in disrupting tape, you've been a consultant in this space working with customers. What's the market sort of look like, the sort of available market for you guys? >> So when Jacob refers to the expansion into data protection, we took this technology, as Jacob describes the InfiniBox, and we didn't just expand in one direction. We expanded in two directions, multi-directionally, with the introduction of InfiniSync, which is a means by which critical applications can enable a recovery point of zero, and Jacob will go into more details on that. And then at the other end of the spectrum, we delivered InfiniGuard. Based on the same technology that Jacob described as the core, we're now able to be the target of that backup rotation, the typical grandfather/father/son: every 24 hours you do a backup, you do an incremental. And with deduplication as a front end to the core storage, now we've got coverage across a data protection spectrum that nobody else can match.
>> Recovery point of zero, leveraging replication technologies that Jacob will expand upon in a minute, Snap technology internal to InfiniBox, integrated with backup applications such that the dashboard management is all consistent, and then further down the spectrum, the InfiniGuard itself, dealing with the traditional kind of data protection schemes. A complete spectrum coverage. Nobody else can deliver it. Built on that technology core to the InfiniBox storage itself. >> So you got the full pyramid covered with the same fundamental architecture. But Jacob, you can't just throw the box at data protection, you have to bring in other features, you got to be best of breed. So maybe you can talk a little bit about, double-click on some of those. >> Sure. So it all starts with the kind of base foundation for our data protection, which is InfiniSnap. It's our core snapshot engine which, from day one, we designed to work at multi-petabyte scale, and for us what that means is that you need to support hundreds of thousands of snapshots, and up to multiple millions. That's by design how we designed the system. But not only that, you have to have zero impact on performance. If you look at our systems in the field, our customers are doing thousands of snapshots per day. Some are doing tens of thousands or more per day with no performance impact, that's not even measurable on any of their performance graphs. This is the foundational technology on which we have built our forward-looking additional data protection technologies. So, if we look higher up in the pyramid of overall solutions for data protection, after that we introduce our asynchronous replication, which is based on that snapshot technology. Having such an efficient and groundbreaking snapshot technology enables us to do the lowest-RPO protection for async replication compared to any storage product on the market. We're talking about four seconds RPO, and this is something that no other vendor was able to do, because snapshots break at that pace. It's very hard to create and delete snapshots at scale at such a short interval. >> Without performance degradation. >> Exactly, exactly. We were able to do this. And this is kind of one example of how our early-days architectural planning and investment in our product architecture pays off year after year with every new feature. That's why it seems easy now when we release features quickly, because we have such a solid technical foundation. >> One of the things that I was really fascinated by was your purchase of Axxana. How have you been able to use that to get this RTO zero that you're claiming? I mean, if you look at the marketplace at the moment, it seems to be that the storage vendors in general are owning this whole space of RTO, lower RTOs, et cetera. >> That's a great question, but before we get into details about that I want to cover a kind of foundational technology that enabled us to do this. And that is our synchronous replication within InfiniBox, which is also built on top of our async, which in turn is built on top of our snapshots. With our synchronous replication within InfiniBox, we're delivering the lowest possible latency for sync replication today. Just to give you an example of how low and how efficient that is, systems that are running synchronous replication on top of InfiniBox have lower latency than a single all-flash array writing locally. Just imagine what it means.
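The four-second RPO claim above comes down to how quickly snapshots can be created, their deltas shipped, and old snapshots deleted. The toy loop below is a minimal sketch of that general snapshot-shipping pattern, with made-up function names; it is not Infinidat's implementation.

```python
# Minimal sketch of why snapshot-driven async replication ties RPO to the
# snapshot cadence. This is a toy model of the idea discussed above, not
# Infinidat's implementation.
import time

SNAP_INTERVAL_S = 4.0  # the cadence being claimed; worst-case data-loss window

def take_snapshot(volume: str) -> str:
    """Pretend to take a crash-consistent, point-in-time snapshot."""
    return f"{volume}@{time.time():.3f}"

def ship_delta(prev_snap: str, curr_snap: str) -> None:
    """Pretend to send only the blocks that changed between two snapshots."""
    pass  # placeholder for the actual transfer

def replicate_forever(volume: str) -> None:
    prev = take_snapshot(volume)
    while True:
        time.sleep(SNAP_INTERVAL_S)
        curr = take_snapshot(volume)
        ship_delta(prev, curr)   # RPO ~= SNAP_INTERVAL_S + time spent here
        prev = curr              # older snapshots can now be deleted

# The hard part, per the conversation, is creating and deleting snapshots at
# this pace for thousands of volumes without a measurable performance hit.
```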
We're able to do the round-trip write to another array and complete the whole operation faster than a typical all-flash array would. Now that foundational technology is also a key part of our InfiniSync implementation. Because what we did, we took a great product which comes from Axxana, which is the hardened black box, capable of withstanding any type of disaster: fire, floods, earthquake, whatever. And we essentially integrated it very closely with InfiniBox sync replication, where we're writing these very efficient, low-latency sync operations to our InfiniSync appliance, and essentially enabling RPO zero over distance. So if you look at it from the perspective of the hardest part, which is the data path, we had existing capability, which is our sync replication within the array. We just had to integrate it with another great product, Axxana, and that essentially was more an integration effort than from-scratch development. Because again, this is part of our philosophy, we plan ahead as far as our product, roadmap, and strategy, and when you lay out the foundation early on, you get to the point where some things look easy, because they were pre-made and prepared early on. >> So that's the tip of the pyramid. For those mission critical applications where you need RPO zero, you've now enabled customers to do that for much lower cost than, let's say for instance, the three-site data center. >> Yep. >> What about the sort of fat middle, Neville, of data protection, I think you guys call it InfiniGuard. Right? That's kind of your solution there. >> So InfiniGuard simply is InfiniBox storage, with all of its resiliency and performance, and algorithms that outperform typical arrays, and in front of that we've integrated deduplication engines. These deduplication engines present themselves as targets to the traditional backup ecosystem, receive data, de-duplicate it, and use the resources of the InfiniBox storage integrated into the InfiniGuard. And it's been received well, because of its ability to deliver aggressive recovery time objectives, because of its performance in terms of restore speeds. The traditional systems that were designed ten or fifteen years ago were okay at doing backups, they were purpose-built for backup processes. They suffer greatly as a byproduct of the process of deduplication, and the IO profile that that generates. InfiniGuard breaks through that, because of its performance in the underlying storage, in order to drive RTOs for the recovery of those files that are under the 24-hour sort of data protection cycle. And the customers are receiving it well. They are amazed at the performance, the reliability, and the simplicity with which that fits into the existing ecosystem. So it completes: InfiniSync, InfiniGuard, with InfiniBox at the core in the middle. >> And so you partner with the backup software vendors. >> Of course. >> You're not writing your own backup software, right? >> No no no. So integration: Veeam, the Commvaults, the Veritas OSTs, et cetera. A little further integration when it comes to InfiniBox Snap technology. That is integrated into backup applications such as Commvault or Veeam. Specifically, you can use their dashboard and their scheduling scheme to trigger the snap, which then is taken care of in InfiniBox. So, it's quite a comprehensive deliverable against the whole data protection paradigm. >> And have you made a cloud version of that now? With your new service?
>> Not yet, but as Jacob said, there's the vision, we are always building strategically, slightly ahead of the curve. So you can imagine that that's not lost on the radar screen. >> Right. >> I see this as a return on asset play. In other words, I've got the architecture, I've got my processes and procedures in place, I don't have to go out and buy a purpose-built appliance for data protection now, I can use the asset that's on my floor, that people are trained on. What are your thoughts? >> Absolutely, it seems to me that you have simplified tremendously all of those previous steps that took one to another to another, and put them all in the same box, and used the same technologies to achieve much better end-to-end results. I think it's excellent. >> You're absolutely correct, and it's deliverable in a timely fashion, because the foundation is so strong. The investment that we made from day one, to make sure that that storage architecture was able to deliver the storage services at the right cost point, at the right resiliency, at the right performance levels, is the means by which we're able to accomplish that. No one else can do it. >> And there's another arc to this story, which is that we're continually investing in that foundation. The one unique thing that our customers experience with us is that their systems get better every time, with every release that we have; every month they get better. Not only on performance, which is obvious, in that our systems are improving all the time. >> As opposed to the normal expectation, which is that... >> Yes. >> ...as you fill it up it gets worse. >> Yeah. We are actually delivering the opposite. Our customers that are buying the system today, the ones that have experienced InfiniBox, know that it will become better over time. And that spans the whole spectrum. It's performance, it's reliability, but it's also features. All of the things that we discussed here were delivered free of charge through our software upgrade to our existing InfiniBox customers. And, without disclosing something specific looking forward, there are many more things in that area coming up pretty soon from us. >> Very innovative. You guys always solve problems differently, cutting against the conventional wisdom. You see, VMworld, a lot of glam. A lot of big marketing. And you guys, I was at your customer dinner the other night. A lot of happy customers. A very intimate event. And a lot of good belly to belly conversations. So congratulations. Final thoughts from each of you on VMworld 2018, the future of Infinidat, anything you want to share with us? Go ahead, Neville. >> Good show. The clients, the prospects that I've spoken to here, they get to open their minds in terms of our solution offering, and it's generated a lot of interest, and it's going to be a good remainder of the year and a good 2019. >> Great, Jacob, final words from you. >> I agree as well. And I'm seeing customers that are actually reaching out to new prospects for us, and telling the story of Infinidat, and that's catching on. And it's great to see that. >> Jacob, Neville, thanks very much for coming to theCUBE. Bringing you all the action from VMworld 2018, I'm Dave Vellante, for David Floyer. You're watching theCUBE, and we'll be right back after this short break. (light electronic music)
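To ground the InfiniSync discussion above, here is a toy sketch of the write path it implies: acknowledge the host only after the data is durable in two independently survivable local places, while the long-distance copy trails behind asynchronously. Class names and behavior are illustrative assumptions, not Infinidat's actual protocol.

```python
# Toy sketch of an RPO-zero write path like the one described for InfiniSync:
# a write is acknowledged only once both the local array and a disaster-hardened
# local appliance have it, while the remote copy catches up asynchronously.
# This illustrates the concept only; it is not Infinidat's implementation.
from concurrent.futures import ThreadPoolExecutor

class Target:
    def __init__(self, name: str):
        self.name = name
        self.journal: list[bytes] = []

    def persist(self, data: bytes) -> bool:
        self.journal.append(data)   # stand-in for a durable, power-safe write
        return True

local_array = Target("local array")
bunker = Target("hardened appliance")   # survives site-level disasters
remote_array = Target("remote array")

def write(data: bytes) -> None:
    # Synchronous leg: ack to the host only after both local copies land.
    with ThreadPoolExecutor() as pool:
        acks = list(pool.map(lambda t: t.persist(data), (local_array, bunker)))
    assert all(acks), "fail the host write rather than risk data loss"
    # Asynchronous leg: the remote site lags, but the bunker journal closes
    # the gap after a disaster, which is what makes RPO zero over distance.
    remote_array.persist(data)

write(b"payment record #42")
```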
ENTITIES
Entity | Category | Confidence |
---|---|---|
David Floyer | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Jacob | PERSON | 0.99+ |
Jacob Broido | PERSON | 0.99+ |
Infinidat | ORGANIZATION | 0.99+ |
Adziko | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Neville Yates | PERSON | 0.99+ |
94 guests | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
thousands | QUANTITY | 0.99+ |
Neville | PERSON | 0.99+ |
last summer | DATE | 0.99+ |
Mandalay Bay | LOCATION | 0.99+ |
24-hour | QUANTITY | 0.99+ |
two directions | QUANTITY | 0.99+ |
Axxana | ORGANIZATION | 0.99+ |
first keypoint | QUANTITY | 0.99+ |
VMworld 2018 | EVENT | 0.99+ |
hundred-thousands | QUANTITY | 0.98+ |
two sets | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
2019 | DATE | 0.98+ |
ten | DATE | 0.98+ |
VM World 2018 | EVENT | 0.98+ |
this week | DATE | 0.98+ |
one direction | QUANTITY | 0.97+ |
InfiniSync | TITLE | 0.97+ |
zero | QUANTITY | 0.96+ |
one | QUANTITY | 0.96+ |
InfiniBox | ORGANIZATION | 0.96+ |
ConVal | TITLE | 0.95+ |
one example | QUANTITY | 0.94+ |
fifteen years ago | DATE | 0.92+ |
three site | QUANTITY | 0.92+ |
CUBE | ORGANIZATION | 0.91+ |
One | QUANTITY | 0.91+ |
each | QUANTITY | 0.91+ |
today | DATE | 0.91+ |
Flash | TITLE | 0.91+ |
InfiniGuard | TITLE | 0.9+ |
day one | QUANTITY | 0.87+ |
Veeam | ORGANIZATION | 0.87+ |
tens of thousands or more | QUANTITY | 0.87+ |
millions | QUANTITY | 0.86+ |
single | QUANTITY | 0.85+ |
Infinidat | TITLE | 0.84+ |
Veeam | TITLE | 0.84+ |
INFINIDAT | ORGANIZATION | 0.83+ |
about four seconds | QUANTITY | 0.82+ |
every 24-hours | QUANTITY | 0.79+ |
InfiniBox | COMMERCIAL_ITEM | 0.78+ |
zero impact | QUANTITY | 0.76+ |
InfiniBox | TITLE | 0.75+ |
petabyte | QUANTITY | 0.75+ |
snapshots | QUANTITY | 0.74+ |
day | QUANTITY | 0.74+ |
InfiniGuard | COMMERCIAL_ITEM | 0.73+ |
InfiniSnaps | ORGANIZATION | 0.66+ |
InfiniGuard | ORGANIZATION | 0.64+ |
thing | QUANTITY | 0.64+ |
Veritas | ORGANIZATION | 0.63+ |
three | QUANTITY | 0.61+ |
Data Protection | ORGANIZATION | 0.61+ |
Nutanix .NEXT Morning Keynote Day1
Section 1 of 13 [00:00:00 - 00:10:04] (NOTE: speaker names may be different in each section) Speaker 1: Ladies and gentlemen our program will begin momentarily. Thank you. (singing) This presentation and the accompanying oral commentary may include forward looking statements that are subject to risks uncertainties and other factors beyond our control. Our actual results, performance or achievements may differ materially and adversely from those anticipated or implied by such statements because of various risk factors. Including those detailed in our annual report on form 10-K for the fiscal year ended July 31, 2017 filed with the SEC. Any future product or roadmap information presented is intended to outline general product direction and is not a commitment to deliver any functionality and should not be used when making any purchasing decision. (singing) Ladies and gentlemen please welcome Vice President Corporate Marketing Nutanix, Julie O'Brien. Julie O'Brien: All right. How about those Nutanix .NEXT dancers, were they amazing or what? Did you see how I blended right in, you didn't even notice I was there. [French 00:07:23] to .NEXT 2017 Europe. We're so glad that you could make it today. We have such a great agenda for you. First off do not miss tomorrow morning. We're going to share the outtakes video of the handclap video you just saw. Where are the customers, the partners, the Nutanix employee who starred in our handclap video? Please stand up take a bow. You are not going to want to miss tomorrow morning, let me tell you. That is going to be truly entertaining just like the next two days we have in store for you. A content rich highly interactive, number of sessions throughout our agenda. Wow! Look around, it is amazing to see how many cloud builders we have with us today. Side by side you're either more than 2,200 people who have traveled from all corners of the globe to be here. That's double the attendance from last year at our first .NEXT Conference in Europe. Now perhaps some of you are here to learn the basics of hyperconverged infrastructure. Others of you might be here to build your enterprise cloud strategy. And maybe some of you are here to just network with the best and brightest in the industry, in this beautiful French Riviera setting. Well wherever you are in your journey, you'll find customers just like you throughout all our sessions here with the next two days. From Sligro to Schroders to Societe Generale. You'll hear from cloud builders sharing their best practices and their lessons learned and how they're going all in with Nutanix, for all of their workloads and applications. Whether it's SAP or Splunk, Microsoft Exchange, unified communications, Cloud Foundry or Oracle. You'll also hear how customers just like you are saving millions of Euros by moving from legacy hypervisors to Nutanix AHV. And you'll have a chance to post some of your most challenging technical questions to the Nutanix experts that we have on hand. Our Nutanix technology champions, our MPXs, our MPSs. Where are all the people out there with an N in front of their certification and an X an R an S an E or a C at the end. Can you wave hello? You might be surprised to know that in Europe and the Middle East alone, we have more than 2,600 >> Julie: In Europe and the Middle East alone, we have more than 2,600 certified Nutanix experts. Those are customers, partners, and also employees. 
I'd also like to say thank you to our growing ecosystem of partners and sponsors who are here with us over the next two days. The companies that you meet here are the ones who are committed to driving innovation in the enterprise cloud. Over the next few days you can look forward to hearing from them and seeing some fantastic technology integration that you can take home to your data center come Monday morning. Together, with our partners, and you our customers, Nutanix has had such an exciting year since we were gathered this time last year. We were named a leader in the Gartner Magic Quadrant for integrated systems two years in a row. Just recently Gartner named us the revenue market share leader in their recent market analysis report on hyper-converged systems. We know enjoy more than 35% revenue share. Thanks to you, our customers, we received a net promoter score of more than 90 points. Not one, not two, not three, but four years in a row. A feat, I'm sure you'll agree, is not so easy to accomplish, so thank you for your trust and your partnership in us. We went public on NASDAQ last September. We've grown to more than 2,800 employees, more than 7,000 customers and 125 countries and in Europe and the Middle East alone, in our Q4 results, we added more than 250 customers just in [Amea 00:11:38] alone. That's about a third of all of our new customer additions. Today, we're at a pivotal point in our journey. We're just barely scratching the surface of something big and Goldman Sachs thinks so too. What you'll hear from us over the next two days is this: Nutanix is on it's way to building and becoming an iconic enterprise software company. By helping you transform your data center and your business with Enterprise Cloud Software that gives you the power of freedom of choice and flexibility in the hardware, the hypervisor and the cloud. The power of one click, one OS, any cloud. And now, to tell you more about the digital transformation that's possible in your business and your industry and share a little bit around the disruption that Nutanix has undergone and how we've continued to reinvent ourselves and maybe, if we're lucky, share a few hand clap dance moves, please welcome to stage Nutanix Founder, CEO and Chairman, Dheeraj Pandey. Ready? Alright, take it away [inaudible 00:13:06]. >> Dheeraj P: Thank you. Thank you, Julie and thank you every one. It looks like people are still trickling. Welcome to Acropolis. I just hope that we can move your applications to Acropolis faster than we've been able to move people into this room, actually. (laughs) But thank you, ladies and gentlemen. Thank you to our customers, to our partners, to our employees, to our sponsors, to our board members, to our performers, to everybody for their precious time. 'Cause that's the most precious thing you actually have, is time. I want to spend a little bit of time today, not a whole lot of time, but a little bit of time talking about the why of Nutanix. Like why do we exist? Why have we survived? Why will we continue to survive and thrive? And it's simpler than an NQ or category name, the word hyper-convergence, I think we are all complicated. Just thinking about what is it that we need to talk about today that really makes it relevant, that makes you take back something from this conference. That Nutanix is an obvious innovation, it's very obvious what we do is not very complicated. 
Because the more things change, the more they remain the same, so can we draw some parallels from life, from what's going on around us in our own personal lives that makes this whole thing very natural as opposed to "Oh, it's hyper-converged, it's a category, it's analysts and pundits and media." I actually think it's something new. It's not that different, so I want to start with some of that today. And if you look at our personal lives, everything that we had, has been digitized. If anything, a lot of these gadgets became apps, they got digitized into a phone itself, you know. What's Nutanix? What have we done in the last seven, eight years, is we digitized a lot of hardware. We made everything that used to be single purpose hardware look like pure software. We digitized storage, we digitized the systems manager role, an operations manager role. We are digitizing scriptures, people don't need to write scripts anymore when they automate because we can visually design automation with [com 00:15:36]. And we're also trying to make a case that the cloud itself is not just a physical destination. That it can be digitized and must be digitized as well. So we learn that from our personal lives too, but it goes on. Look at music. Used to be tons of things, if you used to go to [inaudible 00:15:55] Records, I'm sure there were European versions of [inaudible 00:15:57] Records as well, the physical things around us that then got digitized as well. And it goes on and on. We look at entertainment, it's very similar. The idea that if you go to a movie hall, the idea that you buy these tickets, the idea that we'd have these DVD players and DVDs, they all got digitized. Or as [inaudible 00:16:20] want to call it, virtualized, actually. That is basically happening in pretty much new things that we never thought would look this different. One of the most exciting things happening around us is the car industry. It's getting digitized faster than we know. And in many ways that we'd not even imagined 10 years ago. The driver will get digitized. Autonomous cars. The engine is definitely gone, it's a different kind of an engine. In fact, we'll re-skill a lot of automotive engineers who actually used to work in mechanical things to look at real chemical things like battery technologies and so on. A lot of those things that used to be physical are now in software in the car itself. Media itself got digitized. Think about a physical newspaper, or physical ads in newspapers. Now we talk about virtual ads, the digital ads, they're all over on websites and so on is our digital experience now. Education is no different, you know, we look back at the kind of things we used to do physically with physical things. Their now all digital. The experience has become that digital. And I can go on and on. You look at retail, you look at healthcare, look at a lot of these industries, they all are at the cusp of a digital disruption. And in fact, if you look at the data, everybody wants it. We all want a digital transformation for industries, for companies around us. In fact, the whole idea of a cloud is a highly digitized data center, basically. It's not just about digitizing servers and storage and networks and security, it's about virtualizing, digitizing the entire data center itself. That's what cloud is all about. So we all know that it's a very natural phenomenon, because it's happening around us and that's the obviousness of Nutanix, actually. Why is it actually a good thing? 
Because obviously it makes anything that we digitize and we work in the digital world, bring 10X more productivity and decision making efficiencies as well. And there are challenges, obviously there are challenges, but before I talk about the challenges of digitization, think about why are things moving this fast? Why are things becoming digitally disrupted quicker than we ever imagined? There are some reasons for it. One of the big reasons is obviously we all know about Moore's Law. The fact that a lot of hardware's been commoditized, and we have really miniaturized hardware. Nutanix today runs on a palm-sized server. Obviously it runs on the other end of the spectrum with high-end IBM power systems, but it also runs on palm-sized servers. Moore's Law has made a tremendous difference in the way we actually think about consuming software itself. Of course, the internet is also a big part of this. The fact that there's a bandwidth glut, there's Trans-Pacific cables and Trans-Atlantic cables and so on, has really connected us a lot faster than we ever imagined, actually, and a lot of this was also the telecom revolution of the '90s where we really produced a ton of glut for the internet itself. There's obviously a more subtle reason as well, because software development is democratizing. There's consumer-grade programming languages that we never imagined 10, 15, 20 years ago, that's making it so much faster to write- >> Speaker 1: 15-20 years ago that's making it so much faster to write code, with this crowdsourcing that never existed before with Githubs and things like that, open source. There's a lot more stuff that's happening that's outside the boundary of a corporation itself, which is making things so much faster in terms of going getting disrupted and writing things at 10x the speed it used to be 20 years ago. There is obviously this technology at the tip of our fingers, and we all want it in our mobile experience while we're driving, while we're in a coffee shop, and so on; and there's a tremendous focus on design on consumer-grade simplicity, that's making digital disruption that much more compressed in some of sense of this whole cycle of creative disruption that we talk about, is compressed because of mobility, because of design, because of API, the fact that machines are talking to machines, developers are talking to developers. We are going and miniaturizing the experience of organizations because we talk about micro-services and small two-pizza teams, and they all want to talk about each other using APIs and so on. Massive influence on this digital disruption itself. Of course, one of the reasons why this is also happening is because we want it faster, we want to consume it faster than ever before. And our attention spans are reducing. I like the fact that not many people are watching their cell phones right now, but you can imagine the multi-tasking mode that we are all in today in our lives, makes us want to consume things at a faster pace, which is one of the big drivers of digital disruption. But most importantly, and this is a very dear slide to me, a lot of this is happening because of infrastructure. And I can't overemphasize the importance of infrastructure. If you look at why did Google succeed, it was the ninth search engine, after eight of them before, and if you take a step back at why Facebook succeeded over MySpace and so on, a big reason was infrastructure. 
They believed in scale, they believed in low latency, they believed in being able to crunch information, at 10x, 100x, bigger scale than anyone else before. Even in our geopolitical lives, look at why is China succeeding? Because they've made infrastructure seamless. They've basically said look, governance is about making infrastructure seamless and invisible, and then let the businesses flourish. So for all you CIOs out there who actually believe in governance, you have to think about what's my first role? What's my primary responsibility? It's to provide such a seamless infrastructure, that lines of business can flourish with their applications, with their developers that can write code 10x faster than ever before. And a lot of these tenets of infrastructure, the fact of the matter is you need to have this always-on philosophy. The fact that it's breach-safe culture. Or the fact that operating systems are hardware agnostic. A lot of these tenets basically embody what Nutanix really stands for. And that's the core of what we really have achieved in the last eight years and want to achieve in the coming five to ten years as well. There's a nuance, and obviously we talk about digital, we talk about cloud, we talk about everything actually going to the cloud and so on. What are the things that could slow us down? What are the things that challenge us today? Which is the reason for Nutanix? Again, I go back to this very important point that the reason why we think enterprise cloud is a nuanced term, because the word "cloud" itself doesn't solve for a lot of the problems. The public cloud itself doesn't solve for a lot of the problems. One of the big ones, and obviously we face it here in Europe as well, is laws of the land. We have bureaucracy, which we need to deal with and respect; we have data sovereignty and computing sovereignty needs that we need to actually fulfill as well, while we think about going at breakneck speed in terms of disrupting our competitors and so on. So there's laws of the land, there's laws of physics. This is probably one of the big ones for what the architecture of cloud will look like itself, over the coming five to ten years. Our take is that cloud will need to be more dispersed than they have ever imagined, because computing has to be local to business operations. Computing has to be in hospitals and factories and shop floors and power plants and on and on and on... That's where you really can have operations and computing really co-exist together, cause speed is important there as well. Data locality is one of our favorite things; the fact that computing and data have to be local, at least the most relevant data has to be local as well. And the fact that electrons travel way faster when it's actually local, versus when you have to have them go over a Wide Area Network itself; it's one of the big reasons why we think that the cloud will actually be more nuanced than just some large data centers. You need to disperse them, you need to actually think about software (cloud is about software). Whether data plane itself could be dispersed and even miniaturized in small factories and shop floors and hospitals. But the control plane of the cloud is centralized. And that's the way you can have the best of both worlds; the control plane is centralized. You think as if you're managing one massive data center, but it's not because you're really managing hundreds or thousands of these sites. 
Especially if you think about edge-based computing and IoT where you really have your tentacles in tens of thousands of smaller devices and so on. We've talked about laws of the land, which is going to really make this digital transformation nuanced; laws of physics; and the third one, which is really laws of entropy. These are hackers that do this for adrenaline. These are parochial rogue states. These are parochial geo-politicians, you know, good thing I actually left the torture sign there, because apparently for our creative designer, geo-politics is equal to torture as well. So imagine one bad tweet can actually result in big changes to the way we actually live in this world today. And it's important. Geo-politics itself is digitized to a point where you don't need a ton of media people to go and talk about your principles and what you stand for and what you strategy for, for running a country itself is, and so on. And these are all human reasons, political reasons, bureaucratic reasons, compliance and regulations reasons, that, and of course, laws of physics is yet another one. So laws of physics, laws of the land, and laws of entropy really make us take a step back and say, "What does cloud really mean, then?" Cause obviously we want to digitize everything, and it all should appear like it's invisible, but then you have to nuance it for the Global 5000, the Global 10000. There's lots of companies out there that need to really think about GDPR and Brexit and a lot of the things that you all deal with on an everyday basis, actually. And that's what Nutanix is all about. Balancing what we think is all about technology and balancing that with things that are more real and practical. To deal with, grapple with these laws of the land and laws of physics and laws of entropy. And that's where we believe we need to go and balance the private and the public. That's the architecture, that's the why of Nutanix. To be able to really think about frictionless control. You want things to be frictionless, but you also realize that you are a responsible citizen of this continent, of your countries, and you need to actually do governance of things around you, which is computing governance, and data governance, and so on. So this idea of melding the public and the private is really about melding control and frictionless together. I know these are paradoxical things to talk about like how do you really have frictionless control, but that's the life you all lead, and as leaders we have to think about this series of paradoxes itself. And that's what Nutanix strategy, the roadmap, the definition of enterprise cloud is really thinking about frictionless control. And in fact, if anything, it's one of the things is also very interesting; think about what's disrupting Nutanix as a company? We will be getting disrupted along the way as well. It's this idea of true invisibility, the public cloud itself. I'd like to actually bring on board somebody who I have a ton of respect for, this leader of a massive company; which itself is undergoing disruption. Which is helping a lot of its customers undergo disruption as well, and which is thinking about how the life of a business analyst is getting digitized. And what about the laws of the land, the laws of physics, and laws of entropy, and so on. And we're learning a lot from this partner, massively giant company, called IBM. So without further ado, Bob Picciano. >> Bob Picciano: Thanks, >> Speaker 1: Thank you so much, Bob, for being here. 
I really appreciate your presence here- >> Bob Picciano: My pleasure! >> Speaker 1: And for those of you who actually don't know Bob, Bob is a Senior VP and General Manager at IBM, and is all things cognitive and obviously- >> Speaker 1: IBM is all things cognitive. Obviously, I learn a lot from a lot of leaders that have spent decades really looking at digital disruption. >> Bob: Did you just call me old? >> Speaker 1: No. (laughing) I want to talk about experience and talking about the meaning of history, because I love history, actually, you know, and I don't want to make you look old actually, you're too young right now. When you talk about digital disruption, we look at ourselves and say, "Look we are not extremely invisible, we are invisible, but we have not made something as invisible as the public clouds itself." And hence as I. But what's digital disruption mean for IBM itself? Now, obviously a lot of hardware is being digitized into software and cloud services. >> Bob: Yep. >> Speaker 1: What does it mean for IBM itself? >> Bob: Yeah, if you allow me to take a step back for a moment, I think there is some good foundational understanding that'll come from a particular point of view. And, you talked about it with the number of these dimensions that are affecting the way businesses need to consider their competitiveness. How they offer their capabilities into the market place. And as you reflected upon IBM, you know, we've had decades of involvement in information technology. And there's a big disruption going on in the information technology space. But it's what I call an accretive disruption. It's a disruption that can add value. If you were to take a step back and look at that digital trajectory at IBM you'd see our involvement with information technology in a space where it was all oriented around adding value and capability to how organizations managed inscale processes. Thinking about the way they were going to represent their businesses in a digital form. We came to call them applications. But it was how do you open an account, how do you process a claim, how do you transfer money, how do you hire an employee? All the policies of a company, the way the people used to do it mechanically, became digital representations. And that foundation of the digital business process is something that IBM helped define. We invented the role of the CIO to help really sponsor and enter in this notion that businesses could re represent themselves in a digital way and that allowed them to scale predictably with the qualities of their brand, from local operations, to regional operations, to international operations, and show up the same way. And, that added a lot of value to business for many decades. And we thrived. Many companies, SAP all thrived during that span. But now we're in a new space where the value of information technology is hitting a new inflection point. Which is not about how you scale process, but how you scale insight, and how you scale wisdom, and how you scale knowledge and learning from those operational systems and the data that's in those operational systems. >> Speaker 1: How's it different from 1993? We're talking about disruption. There was a time when IBM reinvented itself, 20-25 years ago. >> Bob: Right. >> Speaker 1: And you said it's bigger than 25 years ago. Tell us more. >> Bob: You know, it gets down. 
Everything we know about that process space right down to the very foundation, the very architecture of the CPU itself and the computer architecture, the von Neumann architecture, was all optimized on those relatively static scaled business processes. When you move into the notion where you're going to scale insight, scale knowledge, you enter the era that we call the cognitive era, or the era of intelligence. The algorithms are very different. You know the data semantically doesn't integrate well across those traditional process based pools and reformation. So, new capabilities like deep learning, machine learning, the whole field of artificial intelligence, allows us to reach into that data. Much of it unstructured, much of it dark, because it hasn't been indexed and brought into the space where it is directly affecting decision making processes in a business. And you have to be able to apply that capability to those business processes. You have to rethink the computer, the circuitry itself. You have to think about how the infrastructure is designed and organized, the network that is required to do that, the experience of the applications as you talked about have to be very natural, very engaging. So IBM does all of those things. So as a function of our transformation that we're on now, is that we've had to reach back, all the way back from rethinking the CPU, and what we dedicate our time and attention to. To our services organization, which is over 130,000 people on the consulting side helping organizations add digital intelligence to this notion of a digital business. Because, the two things are really a confluence of what will make this vision successful. >> Speaker 1: It looks like massive amounts of change for half a million people who work with the company. >> Bob: That's right. >> Speaker 1: I'm sure there are a lot of large customers out here, who will also read into this and say, "If IBM feels disrupted ... >> Bob: Uh hm >> Speaker 1: How can we actually stay not vulnerable? Actually there is massive amounts of change around their own competitive landscape as well. >> Bob: Look, I think every company should feel vulnerable right. If you're at this age, this cognitive era, the age of digital intelligence, and you're not making a move into being able to exploit the capabilities of cognition into the business process. You are vulnerable. If you're at that intersection, and your competitor is passing through it, and you're not taking action to be able to deploy cognitive infrastructure in conjunction with the business processes. You're going to have a hard time keeping up, because it's about using the machines to do the training to augment the intelligence of our employees of our professionals. Whether that's a lawyer, or a doctor, an educator or whether that's somebody in a business function, who's trying to make a critical business decision about risk or about opportunity. >> Speaker 1: Interesting, very interesting. You used the word cognitive infrastructure. >> Bob: Uh hm >> Speaker 1: There's obviously computer infrastructure, data infrastructure, storage infrastructure, network infrastructure, security infrastructure, and the core of cognition has to be infrastructure as well. >> Bob: Right >> Speaker 1: Which is one of the two things that the two companies are working together on. Tell us more about the collaboration that we are actually doing. 
>> Bob: We are so excited about our opportunity to add value in this space, so we do think very differently about the cognitive infrastructure that's required for this next generation of computing. You know I mentioned the original CPU was built for very deterministic, very finite operations; large precision floating point capabilities to be able to accurately calculate the exact balance, the exact amount of transfer. When you're working in the field of AI and cognition, you actually want variable precision. Right. The data is very sparse, as opposed to the way that deterministic or stochastic operations work, which is very dense or very structured. So the algorithms are redefining the processes that the circuitry actually has to run. About five years ago, we dedicated a huge effort to rethink everything about the chip and what we made to facilitate an orchestra of participation to solve that problem. We all know the GPU has a great benefit for deep learning. But the GPU in many cases, in many architectures, specifically Intel architectures, is dramatically confined by a very small amount of IO bandwidth that Intel allows to go on and off the chip. At IBM, we looked at all 686 or so square millimeters of our chip and said how do we reuse that square area to open up that IO bandwidth? So the innovation of a GPU or an FPGA could really be utilized to its maximum extent. And we could be an orchestrator of all of the diverse compute that's going to be necessary for AI to really compel these new capabilities. >> Speaker 1: It's interesting that you mentioned the fact that, you know, Power chips have been redefined for the cognitive era. >> Bob: Right, for Linux, for the cognitive era. >> Speaker 1: Exactly, and now the question is how do you make it simple to use as well? How do you bring simplicity which is where ... >> Bob: That's why we're so thrilled with our partnership. Because you talked about the why of Nutanix. And it really is about that empowerment. Doing what's natural. You talked about the benefits of Calm and being able to really create that liberation of an information technology professional, whether it's in operations or in development. Having the freedom of action to make good decisions about defining the infrastructure and deploying that infrastructure and not having to second guess the physical limitations of what they're going to have to be dealing with. >> Speaker 1: That's why I feel really excited about the fact that you have the power of software, to really meld the two farms together. The Intel farm and the Power farm come together. And we have some interesting use cases that our CIO Randy Phiffer is also really exploring, like how can a Power farm serve as a storage farm for our Intel farm. >> Bob: Sure. >> Speaker 1: It can serve files and blocks and things like that. >> Bob: Any data intensive application. We have seen massive growth in our Linux business; now for our business, Linux is 20% of the revenue of our Power systems. You know, we started enabling native Linux distributions, little-endian ones, on top of the Power capabilities just a few years ago, and it's rocketed. And the reason for that is, for any data intensive application like a database, a NoSQL database or a structured database, Hadoop in the unstructured space, they typically run about three to four times better price performance on top of Linux on Power than they will on top of an Intel alternative. >> Speaker 1: Fascinating. 
>> Bob: So all of these applications that we're talking about either create or consume a lot of data, have to manage a lot of flexibility in that space, and power is a tremendous architecture for that. And you mentioned also the cohabitation, if you will, between intel and power. What we want is that optionality, for you to utilize those benefits of the 3X better price performance where they apply and utilize the commodity base where it applies. So you get the cost benefits in that space and the depth and capability in the space for power. >> Speaker 1: Your tongue in cheek remark about commodity intel is not lost on people actually. But tell us about... >> Speaker 1: Intel is not lost on people actually. Tell us about ... Obviously we digitized Linux 10, 15 years ago with [inaudible 00:40:07]. Have you tried to talk about digitizing AIX? That is the core of IBM's business for the last 20, 25, 30 years. >> Bob: Again, it's about this ability to compliment and extend the investments that businesses have made during their previous generations of decision making. This industry loves to talk about shifts. We talked about this earlier. That was old, this is new. That was hard, this is easy. It's not about shift, it's about using the inflection point, the new capability to extend what you already have to make it better. And that's one thing that I must compliment you, and the entire Nutanix organization. It's really empowering those applications as a catalog to be deployed, managed, and integrated in a new way, and to have seamless interoperability into the cloud. We see the AIX workload just having that same benefit for those businesses. And there are many, many 10's of thousands around the world that are critically dependent on every element of their daily operations and productivity of that operating platform. But to introduce that into that network effect as well. >> Speaker 1: Yeah. I think we're looking forward to how we bring the same cloud experience on AIX as well because as a company it keeps us honest when we don't scoff at legacy. We look at these applications the last 10, 15, 20 years and say, "Can we bring them into the new world as well?" >> Bob: Right. >> Speaker 1: That's what design is all about. >> Bob: Right. >> Speaker 1: That's what Apple did with musics. We'll take an old world thing and make it really new world. >> Bob: Right. >> Speaker 1: The way we consume things. >> Bob: That governance. The capability to help protect against the bad actors, the nefarious entropy players, as you will. That's what it's all about. That's really what it takes to do this for the enterprise. It's okay, and possibly easier to do it in smaller islands of containment, but when you think about bringing these class of capabilities into an enterprise, and really helping an organization drive both the flexibility and empowerment benefits of that, but really be able to depend upon it for international operations. You need that level of support. You need that level of capability. >> Speaker 1: Awesome. Thank you so much Bob. Really appreciate you coming. [crosstalk 00:42:14] Look forward to your [crosstalk 00:42:14]. >> Bob: Cheers. Thank you. >> Speaker 1: Thanks again for all of you. I know that people are sitting all the way up there as well, which is remarkable. I hope you can actually see some of the things that Sunil and the team will actually bring about, talk about live demos. We do real stuff here, which is truly live. 
I think one of the requests that I have is help us help you navigate the digital disruption that's upon you and your competitive landscape that's around you that's really creating that disruption. Thank you again for being here, and welcome again to Acropolis. >> Speaker 3: Ladies and gentlemen, please welcome Chief Product and Development Officer, Nutanix Sunil Potti. >> Sunil Potti: Okay, so I'm going to just jump right in because I know a bunch of you guys are here to see the product as well. We are a lot of demos lined up for you guys, and we'll try to mix in the slides, and the demos as well. Here's just an example of the things I always bring up in these conferences to look around, and say in the last few months, are we making progress in simplifying infrastructure? You guys have heard this again and again, this has been our mantra from the beginning, that the hotter things get, the more differentiated a company like Nutanix can be if we can make things simple, or keep things simple. Even though I like this a lot, we found something a little bit more interesting, I thought, by our European marketing team. If you guys need these tea bags, which you will need pretty soon. It's a new tagline for the company, not really. I thought it was apropos. But before I get into the product and the demos, to give you an idea. Every time I go to an event you find ways to memorialize the event. You meet people, you build relationships, you see something new. Last night, nothing to do with the product, I sat beside someone. It was a customer event. I had no idea who I was sitting beside. He was a speaker. How many of you guys know him, by the way? Sir Ranulph Fiennes. Few hands. Good for you. I had no idea who I was sitting beside. I said, "Oh, somebody called Sir. I should be respectful." It's kind of hard for me to be respectful, but I tried. He says, "No, I didn't do anything in the sense. My grandfather was knighted about 100 years ago because he was the governor of Antigua. And when he dies, his son becomes." And apparently Sir Ranulph's dad also died in the war, and so that's how he is a sir. But then I started looking it up because he's obviously getting ready to present. And the background for him is, in my opinion, even though the term goes he's the World's Greatest Living Explorer. I would have actually called it the World's Number One Stag, and I'll tell you why. Really, you should go look it up. So this guy, at the age of 21, gets admitted to Special Forces. If you're from the UK, this is as good as it gets, SAS. Six, seven years into it, he rebels, helps out his local partner because he doesn't like a movie who's building a dam inside this pretty village. And he goes and blows up a dam, and he's thrown out of that Special Forces. Obviously he's in demolitions. Goes all the way. This is the '60's, by the way. Remember he's 74 right now. The '60's he goes to Oman, all by himself, as the only guy, only white guy there. And then around the '70's, he starts truly exploring, truly exploring. And this is where he becomes really, really famous. You have to go see this in real life, when he sees these videos to really appreciate the impact of this guy. All by himself, he's gone across the world. He's actually gone across Antarctica. Now he tells me that Antarctica is the size of China and India put together, and he was prepared for -50 to 60 degrees, and obviously he got -130 degrees. Again, you have to see the videos, see his frostbite. Two of his fingers are cut off, by the way. 
He hacksawed them himself. True story. And then as he, obviously, aged, his body couldn't keep up with him, but his will kept up with him. So after a recent heart attack, he actually ran seven marathons. But most importantly, he was telling me this story, at 65 he wanted to do something different because his body was letting him down. He said, "Let me do something easy." So he climbed Mount Everest. My point being, how is this related to Nutanix? It's that if Nutanix, as a company, through its technology, allows you to spend more time on life, then we've accomplished a piece of our vision. So keep that in mind. Keep that in mind. Now comes the boring part, which is the product. The why, what, how of Nutanix. Dheeraj talked about this. We have two acts in this company. Invisible Infrastructure was what we started off with. You heard us talk about it. How did we do it? Using one-click technologies, by converging infrastructure, compute, storage, virtualization, et cetera, et cetera. What we are now about is about changing the game. Saying that just like we'd replicated what powers Google and Amazon inside the data center, could we now make them all invisible? Whether it be inside or outside, could we now make clouds invisible? Clouds could be made invisible by a new level of convergence, not about compute and storage, but converging public and private, converging CAPEX and OPEX, converging consumption models. And there, beyond our core products, Acropolis and Prism, are these new products. As you know, we have this core thesis, right? The core thesis says what? Predictable workloads will stay inside the data center, elastic workloads will go outside, as long as the experience on both sides is the same. So if you can genuinely have a cloud-like experience delivered inside a data center, then that's the right answer for predictable workloads. And the public cloud is absolutely the answer for elastic workloads, no matter the security or compliance concerns, because eventually a public cloud will have a data center right beside your region, whether through a local partner or a top three cloud partner. And you should use it as your public cloud of choice. And so, our goal is to ensure that those two worlds are converged. And that's what Calm does, and we'll talk about that. But at the same time, what we found in late 2015 was that we had a bunch of customers come to us and say, "Look, I love this, I love the fact that you're going to converge public and private and all that good stuff. But I have these environments and these apps that I want to be delivered as a service, but I want the same operational tooling. I don't want to have two different environments, but I don't want to manage my data centers. Especially my secondary data centers, DR data centers." And that's why we created Xi, right? And you'll hear a lot more about this; obviously it's going to start off in the U.S. but very rapidly launch in Europe and APJ globally in the next 9-12 months. And so we'll spend some quality time on those products as well today. So, from the journey that we're at, we're starting with this core cloud that essentially says, "Look, your public and private need to be the same." We call that the first instantiation of your cloud architectures, and we're essentially, as a company, wanting to build this enterprise cloud operating system as a fabric across public and private. But that's just the starting point. 
The starting point evolves to this core architecture where we believe that the cloud is being dispersed. Just like you have a public and a private cloud in the core data centers and so forth, you'll need a similar experience inside your remote office branch office, inside your DR data centers, inside your branches, and it won't stop there. It'll go all the way to the edge. And we're already seeing this, right? Not just in the army, where you have forward operating bases in Afghanistan having a three node cluster sitting inside a tent. But we're seeing this in a variety of enterprise scenarios. And here's an example. So, here's a customer, a global oil and gas company, has a couple of primary data centers running Nutanix, uses GCP as a core public cloud platform, has a whole bunch of remote offices, but it also has these interesting new edge locations in the form of these small, medium, large size rigs. And today, they're in the process of building a next generation cloud architecture that's completely dispersed. They're using one node, coming out in version 5.5 with Nutanix. They're going to use two nodes, they're going to use three nodes, multi-cluster architectures. Day one, they're going to centrally manage it using Prism, with one click upgrades, right? And then on top of that, they're also now provisioning, using Calm, purpose built apps for the various locations. So, for example, there will be a rig control app at the edge, there's an exploration data lake in Google and so forth. My point being that increasingly this architecture that we're talking about is happening in real time. It's no longer just an existing server virtualization data center that's being replatformed to look like a private cloud and so forth, or a hybrid cloud. The fact is that you're going into this multi cloud era and it's getting accelerated; the more someone consumes AWS, GCP or any public cloud, the more they're accelerating their internal transformation to this multi cloud architecture. And so that's what we're going to talk about today, is this construct of ONE OS and ONE Click, and when you think about it, every company has a standard stack. So, this is the only slide you're going to see from me today that's a stack, okay? And if you look at the new release coming out, version 5.5, it's coming out imminently, the easiest way to say it is that it's got a ton of functionality. We've jammed as much as we can onto one slide and then built a product basically, okay? But I would encourage you guys to check out the release, it's coming out shortly. And we could go into each and every feature here, we'd be spending a lot of time, but the way that we look at building Nutanix products, as many of you know, is not feature at a time. It's experience at a time. And so, when you really look at Nutanix using a lateral view, and that's how we approach problems with our customers and partners, we think about it as a life cycle, all the way from learning to using, operating, and then getting support and experiences. And today, we're going to go through each of these stages with you. And who better to talk about it than our local version of an architect, Steven Poitras, please come up on stage. I don't know where you are, Steven, come on up. You tucked your shirt in? >> Speaker 2: Just for you guys today. >> Speaker 1: Okay. Alright. He's sort of putting on some weight. I know you used a couple of tight buckles there. But, okay, so Steven, I know we're looking for the demo here. 
So, what we're going to do is, the first step most of you guys know this, is we've been quite successful with CE, it's been a great product. How many of you guys like CE? Come on. Alright. I know you had a hard time downloading it yesterday apparently, there's a bunch of guys had a hard time downloading it. But it's been a great way for us not just to get you guys to experience it, there's more than 25,000 downloads and so forth. But it's also a great way for us to see new features like IEME and so forth. So, keep an eye on CE because we're going to if anything, explode the way that we actually use as a way to get new features out in the next 12 months. Now, one thing beyond CE that we did, and this was something that we did about ... It took us about 12 months to get it out. While people were using CE to learn a lot, a lot of customers were actually getting into full blown competitive evals, right? Especially with hit CI being so popular and so forth. So, we came up with our own version called X-Ray. >> Speaker 2: Yup. >> Speaker 1: What does X-Ray do before we show it? >> Speaker 2: Yeah. Absolutely. So, if we think about back in the day we were really the only ACI platform out there on the market. Now there are a few others. So, to basically enable the customer to objectively test these, we came out with X-Ray. And rather than talking about the slide let's go ahead and take a look. Okay, I think it's ready. Perfect. So, here's our X-Ray user interface. And essentially what you do is you specify your targets. So, in this case we have a Nutanix 80150 as well as some of our competitors products which we've actually tested. Now we can see on the left hand side here we see a series of tests. So, what we do is we go through and specify certain workloads like OLTP workloads, database colocation, and while we do that we actually inject certain test cases or scenarios. So, this can be snapshot or component failures. Now one of the key things is having the ability to test these against each other. So, what we see here is we're actually taking a OLTP workload where we're running two virtual machines, and then we can see the IOPS OLTP VM's are actually performing here on the left hand side. Now as we're actually go through this test we perform a series of snapshots, which are identified by these red lines here. Now as you can see, the Nutanix platform, which is shown by this blue line, is purely consistent as we go through this test. However, our competitor's product actually degrades performance overtime as these snapshots are taken. >> Speaker 1: Gotcha. And some of these tests by the way are just not about failure or benchmarking, right? It's a variety of tests that we have that makes real life production workloads. So, every couple of months we actually look at our production workloads out there, subset those two cases and put it into X-Ray. So, X-Ray's one of those that has been more recently announced into the public. But it's already gotten a lot of update. I would strongly encourage you, even if you an existing Nutanix customer. It's a great way to keep us honest, it's a great way for you to actually expand your usage of Nutanix by putting a lot of these real life tests into production, and as and when you look at new alternatives as well, there'll be certain situations that we don't do as well and that's a great way to give us feedback on it. 
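For a rough feel of what a tool like X-Ray automates, here is a small, purely illustrative sketch (not X-Ray's actual test format, API or numbers): drive a steady OLTP-style load, trigger snapshots at intervals, and check whether IOPS stay flat or degrade as snapshots accumulate.

import random
import statistics

def run_oltp_interval(baseline_iops, degradation):
    # One measurement interval of a simulated OLTP-style workload;
    # `degradation` models performance lost to accumulated snapshots.
    jitter = random.uniform(-0.02, 0.02)
    return baseline_iops * (1.0 - degradation) * (1.0 + jitter)

def benchmark(platform, baseline_iops, per_snapshot_penalty, intervals=30, snapshot_every=10):
    degradation = 0.0
    samples = []
    for i in range(1, intervals + 1):
        if i % snapshot_every == 0:
            degradation += per_snapshot_penalty   # e.g. cost of chained snapshots
        samples.append(run_oltp_interval(baseline_iops, degradation))
    print(f"{platform}: mean IOPS {statistics.mean(samples):,.0f}, last interval {samples[-1]:,.0f}")

# Hypothetical platforms and penalties, for illustration only.
benchmark("Platform A (snapshot-friendly metadata)", 50_000, per_snapshot_penalty=0.00)
benchmark("Platform B (chained snapshots)",          50_000, per_snapshot_penalty=0.05)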
And so, X-Ray is there, the other one, which is more recent by the way is a fact that most of you has spent many days if not weeks, after you've chosen Nutanix, moving non-Nutanix workloads. I.e. VMware, on three tier architectures to Atrio Nutanix. And to do that, we took a hard look and came out with a new product called Xtract. >> Speaker 2: Yeah. So essentially if we think about what Nutanix has done for the data center really enables that iPhone like experience, really bringing it simplicity and intuitiveness to the data center. Now what we wanted to do is to provide that same experience for migrating existing workloads to us. So, with Xtract essentially what we've done is we've scanned your existing environment, we've created design spec, we handled the migration process ... >> Steven: ... environment, we create a design spec. We handle for the migration process as well as the cut over. Now, let's go ahead and take a look in our extract user interface here. What we can see is we have a source environment. In this case, this is a VC environment. This can be any VC, whether it's traditional three tier or hypherconverged. We also see our Nutanix target environments. Essentially, these are our AHV target clusters where we're going to be migrating the data and performing the cut over to you. >> Speaker 2: Gotcha. Steven: The first thing that we do here is we go ahead and create a new migration plan. Here, I'm just going to specify this as DB Wave 2. I'll click okay. What I'm doing here is I'm selecting my target Nutanix cluster, as well as my target Nutanix container. Once I'll do that, I'll click next. Now in this case, we actually like to do it big. We're actually going to migrate some production virtual machines over to this target environment. Here, I'm going to select a few windows instances, which are in our database cluster. I'll click next. At this point, essentially what's occurring is it's going through taking a look at these virtual machines as well as taking a look at the target environment. It takes a look at the resources to ensure that we actually have enough, an ample capacity to facilitate the workload. The next thing we'll do is we'll go ahead and type in our credentials here. This is actually going to be used for logging into the virtual machine. We can do a new device driver installation, as well as get any static IP configuration. Well specify our network mapping. Then from there, we'll click next. What we'll do is we'll actually save and start. This will go through create the migration plan. It'll do some analysis on these virtual machines to ensure that we can actually log in before we actually start migrating data. Here we have a migration, which has been in progress. We can see we have a few virtual machines, obviously some Linux, some Windows here. We've cut over a few. What we do to actually cut over these VMS, is go ahead select the VMS- Speaker 2: This is the actual task of actually doing the final stage of cut over. Steven: Yeah, exactly. That's one of the nice things. Essentially, we can migrate the data whenever we want. We actually hook into the VADP API's to do this. Then every 10 minutes, we send over a delta to sync the data. Speaker 2: Gotcha, gotcha. That's how one click migration can now be possible. This is something that if you guys haven't used this, this has been out in the wild, just for a month or so. Its been probably one of our bestselling, because it's free, bestselling features of the recent product release. 
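The demo walks through a plan-then-seed-then-delta-then-cutover flow. The sketch below is a generic illustration of that pattern, not the Xtract implementation; the class, method names and timings are made up for the example.

import time
from dataclasses import dataclass, field

@dataclass
class MigrationPlan:
    name: str
    vms: list
    target_cluster: str
    seeded: set = field(default_factory=set)

    def initial_seed(self):
        # Full copy of each source VM's disks to the target cluster.
        for vm in self.vms:
            print(f"[{self.name}] seeding {vm} -> {self.target_cluster}")
            self.seeded.add(vm)

    def delta_sync(self):
        # A real tool would track changed blocks on the source; here we just
        # pretend every seeded VM has a small delta to ship.
        for vm in sorted(self.seeded):
            print(f"[{self.name}] shipping delta for {vm}")

    def cutover(self, vm):
        print(f"[{self.name}] final delta, power off source, power on {vm} on {self.target_cluster}")

plan = MigrationPlan("DB Wave 2", ["db-01", "db-02"], target_cluster="ahv-cluster-1")
plan.initial_seed()
for _ in range(3):          # e.g. one delta every 10 minutes; shortened here
    plan.delta_sync()
    time.sleep(0.1)
plan.cutover("db-01")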
I've had customers come to me and say, "Look, there are situations where it's taken us weeks to move data." That is now minutes from the operator perspective. Forget the director or the VP; it's the line architect and operator that really loves these tools, which is essentially the core of Nutanix. That's one of our core things, is to make sure that if we can keep the engineer and the architect truly happy, then everything else will be fine for us, right? That's Xtract. Then we have a lot of things, right? We've done the usual things, there's a ton of functionality on day zero, day one, day two kind of capabilities. Why don't we start with something around Prism Central, now that we can do one click PC installs? We can do PC scale outs, we can go from managing thousands of VMs to tens of thousands of VMs, while doing all the one click operations, right? Steven: Yep. Speaker 2: Why don't we take a quick look at what's new in Prism Central? Steven: Yep. Absolutely. Here, we can see our Prism Element interface. As you mentioned, one of the key things we added here was the ability to deploy Prism Central very simply, just with a few clicks. We'll actually go through a distributed PC scale-out deployment here. Here, we're actually going to deploy, as this is a new instance. We're going to select our 5.5 version. In this case, we're going to deploy a scale out Prism Central cluster. Obviously, availability and up-time's very critical for us, as we're mainly distributed systems. In this case we're going to deploy a scale-out PC cluster. Here we'll select our number of PC virtual machines. Based upon the number of VMs, we can actually select our size of VM that we'd deploy. If we want to deploy with 25K VMs supported, we can do that as well. Speaker 2: Basically a thousand to tens of thousands of VMs are possible now. Steven: Yep. That's a nice thing is you can start small, and then scale out as necessary. We'll select our PC network. Go ahead and input our IP address. Now, we'll go to deploy. Now, here we can see it's actually kicked off the deployment, so it'll go provision these virtual machines and apply the configuration. In a few minutes, we'll be up and running. Speaker 2: Right. While Steven's doing that, one of the things that we've obviously invested in a ton is making VM operations invisible. Now with Calm, what we've done is to up-level that abstraction to applications. At the end of the day, more and more ... when you go to AWS, when you go to GCP, you go to [inaudible 01:04:56], right? The level of abstraction is now at an app level, it's CloudFormation and so forth. Essentially, what Calm is able to do is to give you this marketplace that you can go in and self-service [inaudible 01:05:05], create this internal cloud-like environment for your end users, whether it be business owners or technology users, to self-serve themselves. The process is pretty straightforward. You, as an operator, or an architect, or [inaudible 01:05:16] create these blueprints. Consumers within the enterprise, whether they be self-service users or end business users, are able to consume them from a simple marketplace, and deploy them whether it be on a private cloud using Nutanix, or on public clouds using any of the public choices. Then, as a single pane of glass, as operators you're doing converged operations at an application centric level between [inaudible 01:05:41] across any of these clouds. It's this combination of producer, consumer, operator in a curated sense. 
Much like an iPhone with an app store. It's the core construct that we're trying to get with Calm to up level the abstraction interface across multiple clouds. Maybe we'll do a quick demo of this, and then get into the rest of the stuff, right? Steven: Sure. Let's check it out. Here we have our Prism Central user interface. We can see we have two Nutanix clusters, our cloudy04 as well as our Power8 cluster. One of the key things here that we've added is this apps tab. I'm clicking on this apps tab, we can see that we have a few [inaudible 01:06:19] solutions, we have a TensorFlow solution, a [inaudible 01:06:22] et cetera. The nice thing about this is, this is essentially a marketplace where vendors as well as developers could produce these blueprints for consumption by the public. Now, let's actually go ahead and deploy one of these blueprints. Here we have a HR employment engagement app. We can see we have three different tiers of services part of this. Speaker 2: You need a lot of engagement at HR, you know that. Okay, keep going. Steven: Then the next thing we'll do here is we'll go and click on. Based upon this, we'll specify our blueprint name, HR app. The nice thing when I'm deploying is I can actually put in back doors. We'll click clone. Now what we can see here is our blueprint editor. As a developer, I could actually go make modifications, or even as an in-user given the simple intuitive user interface. Speaker 2: This is the consumers side right here, but it's also the [inaudible 01:07:11]. Steven: Yep, absolutely. Yeah, if I wanted to make any modifications, I could select the tier, I could scale out the number of instances, I could modify the packages. Then to actually deploy, all I do is click launch, specify HR app, and click create. Speaker 2: Awesome. Again, this is coming in 5.5. There's one other feature, by the way, that is coming in 5.5 that's surrounding Calm, and Prism Pro, and everything else. That seems to be a much awaited feature for us. What was that? Steven: Yeah. Obviously when we think about multi-tenant, multi-cloud role based access control is a very critical piece of that. Obviously within the organization, we're going to have multiple business groups, multiple units. Our back's a very critical piece. Now, if we go over here to our projects, we can see in this scenario we just have a single project. What we've added is if you want to specify certain roles, in this case we're going to add our good friend John Doe. We can add them, it could be a user or group, but then we specify their role. We can give a developer the ability to edit and create these blueprints, or consumer the ability to actually provision based upon. Speaker 2: Gotcha. Basically in 5.5, you'll have role based access control now in Prism and Calm burned into that, that I believe it'll support custom role shortly after. Steven: Yep, okay. Speaker 2: Good stuff, good stuff. I think this is where the Nutanix guys are supposed to clap, by the way, so that the rest of the guys can clap. Steven: Thank you, thank you. Okay. What do we have? Speaker 2: We have day one stuff, obviously there's a ton of stuff that's coming in core data path capabilities that most of you guys use. One of the most popular things is synchronous replication, especially in Europe. Everybody wants to do [Metro 01:08:49] for whatever reason. But we've got something new, something even more enhanced than Metro, right? Steven: Yep. Speaker 2: Do you want to talk a little bit about it? Steven: Yeah, let's talk about it. 
If we think about what we had previously, we started out with asynchronous replication. This is essentially going to be your higher RPO. Then we moved into Metro cluster, which was RPO zero. Those are the two ends of the gamut. What we did is we introduced near synchronous replication, which really gives you the best of both worlds, where you have very, very low RPOs, but zero impact on mainstream performance. Sunil: That's it. Let's show something. Steven: Yeah, yeah. Let's do it. Here, we're back at our Prism Element interface. We'll go over here. At this point, we provisioned our HR app, the next thing we need to do is to protect that data. Let's go here to protection domain. We'll create a new PD for our HR app. Sunil: You clearly love HR. Steven: Spent a lot of time there. Sunil: Yeah, yeah, yeah. Steven: Here, you can see we have our production LAMP DB VM. We'll go ahead and protect that entity. We can see that's protected. The next thing we'll do is create a schedule. Now, what would you say would be a good schedule we should actually shoot for? Sunil: I don't know, 15 minutes? Steven: 15 minutes is not bad, but I think the people here deserve much better than that, so I say let's shoot for ... what about 15 seconds? Sunil: Yeah. They definitely need a bathroom break, so let's do 15 seconds. Steven: Alright, let's do 15 seconds. Sunil: Okay, sounds good. Steven: K. Then we'll select our retention policy and the remote cluster to replicate to, which in this case is wedge. And we'll go ahead and create the schedule here. Now at this point we can see our protection domain. Let's go ahead and look at our entities. We can see our database virtual machine. We can see our 15 second schedule, our local snapshots, as well as we'll start seeing our remote snapshots. Now essentially what occurs is we take two very quick snapshots to essentially seed the initial data, and then based upon that we'll start taking our continuous 15 second snaps. Sunil: 15 second snaps, and obviously near sync has less of an impact than synchronous, right? From an architectural perspective. Steven: Yeah, and the nice thing is essentially within the cluster it's truly pure synchronous, but externally it's just a lagged async. Sunil: Gotcha. So there you see some 15 second snapshots. So near sync is also built into five-five, it's a long-awaited feature. So then, we expand into the rest of the capabilities, I would say, operations. A lot of you guys obviously have started using Prism Pro. Okay, okay, you can clap. You can clap. It's okay. It was a lot of work, by the way, by the core data path team, it was a lot of time. So Prism Pro ... I don't know if you guys know this, Prism Central has now gone from zero percent to more than 50 percent attach on the install base, within 18 months. And normally that's a sign of true usage, and true value being supported. And so, many things are new in five-five on Prism Pro, starting with the fact that you can do data [inaudible 01:11:49] baselining, alerting, so that you're not capturing a ton of false positives and tons of alerts. We go beyond that, because we have this core machine-learning technology powering it, which we call X-Fit. 
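To make the baselining idea concrete, here is a tiny, generic sketch of band-based anomaly detection (a rolling mean plus or minus a few deviations); it is only an illustration of the concept, not the actual X-Fit algorithm.

import statistics
from collections import deque

def expected_band(history, k=3.0):
    # Expected range learned from recent samples: mean +/- k standard deviations.
    mean = statistics.mean(history)
    spread = statistics.pstdev(history) or 1e-9
    return mean - k * spread, mean + k * spread

window = deque(maxlen=60)                         # e.g. the last 60 CPU-usage samples
for pct in [88, 90, 87, 91, 89, 92, 90, 88, 3]:   # the final sample collapses toward zero
    if len(window) >= 5:
        low, high = expected_band(window)
        if not (low <= pct <= high):
            print(f"anomaly: CPU usage {pct}% is outside the expected band {low:.0f}%-{high:.0f}%")
    window.append(pct)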
And, what we've done is we've used that as a foundation now for pretty much all kinds of operations benefits such as auto RCA, where you're able to actually map to particular [inaudible 01:12:12] crosses back to who's actually causing it whether it's the network, a computer, and so forth. But then the last thing that we've also done in five-five now that's quite different shading, is the fact that you can now have a lot of these one-click recommendations and remediations, such as right-sizing, the fact that you can actually move around [inaudible 01:12:28] VMs, constrained VMs, and so forth. So, I now we've packed a lot of functionality in Prism Pro, so why don't we spend a couple of minutes quickly giving a sneak peak into a few of those things. Speaker 2: Yep, definitely. So here we're back at our Prism Central interface and one of the things we've added here, if we take a look at one of our clusters, we can see we have this new anomalies portion here. So, let's go ahead and select that and hop into this. Now let's click on one of these anomaly events. Now, essentially what the system does is we monitor all the entities and everything running within the system, and then based upon that, we can actually determine what we expect the band of values for these metrics to be. So in this scenario, we can see we have a CPU usage anomaly event. So, normal time, we expect this to be right around 86 to 100 percent utilization, but at this point we can see this is drastically dropped from 99 percent to near zero. So, this might be a point as an administrator that I want to go check out this virtual machine, ensure that certain services and applications are still up and running. Speaker 1: Gotcha, and then also it changes the baseline based on- Speaker 2: Yep. Yeah, so essentially we apply machine-learning techniques to this, so the system will dynamically adjust based upon the value adjustment. Speaker 1: Gotcha. What else? Speaker 2: Yep. So the other thing here that we mentioned was capacity planning. So if we go over here, we can take a look at our runway. So in this scenario we have about 30 days worth of runway, which is most constrained by memory. Now, obviously, more nodes is all good for everyone, but we also want to ensure that you get the maximum value on your investment. So here we can actually see a few recommendations. We have 11 overprovision virtual machines. These are essentially VMs which have more resources than are necessary. As well as 19 inactives, so these are dead VMs essentially that haven't been powered on and not utilized. We can also see we have six constrained, as well as one bully. So, constrained VMs are essentially VMs which are requesting more resources than they actually have access to. This could be running at 100 percent CPU utilization, or 100 percent memory, or storage utilization. So we could actually go in and modify these. Speaker 1: Gotcha. So these are all part of the auto remediation capabilities that are now possible? Speaker 2: Yeah. Speaker 1: What else, do you want to take reporting? Speaker 2: Yeah. Yeah, so I know reporting is a very big thing, so if we think about it, we can't rely on an administrator to constantly go into Prism. We need to provide some mechanism to allow them to get emailed reports. So what we've done is we actually autogenerate reports which can be sent via email. So we'll go ahead and add one of these sample reports which was created today. 
And here we can actually get specific detailed information about our cluster without actually having to go into Prism to get this. Speaker 1: And you can customize these reports and all? Speaker 2: Yep. Yeah, if we hop over here and click on our new report, we can actually see a list of views we could add to these reports, and we can mix and match and customize as needed. Speaker 1: Yeah, so that's the operational side. Now we also have new services like AFS which has been quite popular with many of you folks. We've had hundreds of customers already on it live with SMB functionality. You want to show a couple of things that is new in five-five? Speaker 2: Yeah. Yep, definitely. So ... let's wait for my screen here. So one of the key things is if we looked at that runway tab, what we saw is we had over a year's worth of storage capacity. So, what we saw is customers had the requirement for filers, they had some excess storage, so why not actually build a software featured natively into the cluster. And that's essentially what we've done with AFS. So here we can see we have our AFS cluster, and one of the key things is the ability to scale. So, this particular cluster has around 3.1 or 3.16 billion files, which are running on this AFS cluster, as well as around 3,000 active concurrent sessions. Speaker 1: So basically thousands of concurrent sessions with billions of files? Speaker 2: Yeah, and the nice thing with this is this is actually only a four node Nutanix cluster, so as the cluster actually scales, these numbers will actually scale linearly as a function of those nodes. Speaker 1: Gotcha, gotcha. There's got to be one more bullet here on this slide so what's it about? Speaker 2: Yeah so, obviously the initial use case was realistically for home folders as well as user profiles. That was a good start, but it wasn't the only thing. So what we've done is we've actually also introduced important and upcoming release of NFS. So now you can now use NFS to also interface with our [crosstalk 01:16:44]. Speaker 1: NFS coming soon with AFS by the way, it's a big deal. Big deal. So one last thing obviously, as you go operationalize it, we've talked a lot of things on features and functions but one of the cool things that's always been seminal to this company is the fact that we all for really good customer service and support experience. Right now a lot of it is around the product, the people, the support guys, and so forth. So fundamentally to the product we have found ways using Pulse to instrument everything. With Pulse HD that has been allowed for a little bit longer now. We have fine grain [inaudible 01:17:20] around everything that's being done, so if you turn on this functionality you get a lot of information now that we built, we've used when you make a phone call, or an email, and so forth. There's a ton of context now available to support you guys. What we've now done is taken that and are now externalizing it for your own consumption, so that you don't have to necessarily call support. You can log in, look at your entire profile across your own alerts, your own advisories, your own recommendations. You can look at collective intelligence now that's coming soon which is the fact that look, here are 50 other customers just like you. These are the kinds of customers that are using workloads like you, what are their configuration profiles? 
Through this centralized customer insights portal you're going to get a lot more insight, not just about your own operations, but also how everybody else is also using it. So let's take a quick look at that upcoming functionality. Steven: Yep. Absolutely. So this is our customer 360 portal, so as [inaudible 01:18:18] mentioned, as a customer I can actually log in here, I can get a high-level overview of my existing environment, my cases, the status of those cases, as well as any relevant announcements. So, here, based upon my cluster version, if there are any updates which are available, I can then see that here immediately. And then one of the other things that we've added here is this insights page. So essentially this is information that previously support would leverage to essentially proactively look at the cluster, but now we've exposed this to you as the customer. So, clicking on this insights tab we can see an overview of our environment, in this case we have three Nutanix clusters, right around 550 virtual machines, and over here what's critical is we can actually see our cases. And one of the nice things about this is these are all autogenerated by the cluster itself, so no human interaction, no manual intervention was required to actually create these alerts. The cluster itself will actually facilitate that, send it over to support, and then support can get back out to you automatically. Sunil: K, so look for customer insights coming soon. And obviously that's the full life cycle. One cool thing though that's always been unique to Nutanix was the fact that we had [inaudible 01:19:28] security from day one built-in. And there's a [inaudible 01:19:31] chunk of functionality coming in five-five just around this, because every release we try to insert more and more security capabilities, and the first one is around data. What are we doing? Steven: Yeah, absolutely. So previously we had support for data at rest encryption, but this did have the requirement to leverage self-encrypting drives. These can be very expensive, so what we've done, in typical fashion, is we've actually built this in natively via software. So, here within Prism Element, I can go to data at rest encryption, and then I can go and edit this configuration here. From here I can add my CSRs, I can specify a KMS server, and leverage native software based encryption without the requirement of SEDs. Sunil: Awesome. So data at rest encryption [inaudible 01:20:15] coming soon, five five. Now data security is only one element, the other element was around network security obviously. We've always had this request about what are we doing about networking, what are we doing about the network, and our philosophy has always been simple and clear, right. It is that the problem in networking is not the data plane. The problem in networking is the control plane. As in, if packet loss happens at the top of rack switch, what do we do? If there's a misconfigured port, what do we do? So we've invested a lot in a full blown new network visualization that we'll show you a preview of, that's all new in five five, but then once you can visualize you can take action, so you can actually, using our network APIs now in five five, provision VLANs on the switch, update VIPs on your load balancing pools, and update rules on your firewall. 
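Purely as an illustration of that kind of one-call network automation, here is a hedged sketch against a hypothetical REST endpoint; the URL, paths and payloads are invented for the example and are not the actual Nutanix (or any switch vendor's) API.

import json
import urllib.request

API = "https://mgmt.example.local/api"        # hypothetical management endpoint

def post(path, payload, token="***"):
    # Minimal JSON POST helper; in a real environment this would need a
    # reachable endpoint, real credentials and certificate handling.
    req = urllib.request.Request(
        f"{API}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Provision a VLAN for a new application subnet (illustrative payload).
post("/networks", {"name": "hr-app-net", "vlan_id": 120, "subnet": "10.20.30.0/24"})
# Update a load balancer pool and a firewall rule in the same one-call style.
post("/loadbalancers/hr-lb/members", {"add": ["10.20.30.11", "10.20.30.12"]})
post("/firewall/rules", {"allow": {"src": "10.20.30.0/24", "dst": "db-tier", "port": 3306}})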
And then we've taken that to the next level, which is beyond all that. Just look, you go to AWS right now, what do you do? You take 100 VMs, you put them in an AWS security group, boom. That's how you get micro segmentation. You don't need to buy expensive products, you don't need to virtualize your network to get micro segmentation. That's what we're doing with five five, which is built-in, one click micro segmentation. That's part of the core product, so why don't we just quickly show that. Okay? Steve: Yeah, let's take a look. So if we think about where we've been so far, we've done the comparison test, we've done a migration over to Nutanix. We've deployed our new HR app. We've protected its data, now we need to protect the network. So one of the things you'll see that's new here is this security policies section. What we'll do is we'll actually go ahead and create a new security policy and we'll just say this is the HR security policy. We'll specify the application type, which in this case is HR. Sunil: HR of course. Steve: Yep, and we can see our app instance is automatically populated, so based upon the number of running instances of that blueprint, that would populate that drop-down. Now we'll go ahead and click next here, and what we can see in the middle is essentially those three tiers that compose that app blueprint. Now one of the important things is actually figuring out what's trying to communicate with this within my existing environment. So if I take a look over here on my left hand side, I can essentially see a few things. I can see an HAProxy load balancer is trying to communicate with my app here, that's all good. I want to allow that. I can see some sort of monitoring service is trying to communicate with all three of the tiers. That's good as well. Now the last thing I can see here is this IP address which is trying to access my database. Now, that's not by design and that's not supposed to happen, so what we'll do is we'll actually take a look and see what it's doing. Now hopping over to this database virtual machine, or the hack VM, what we can see is it's trying to perform a brute force login attempt against my MySQL database. This is not good. We can see obviously it can connect on the socket, however, it hasn't guessed the right password. In order to lock that down, we'll go back to our policies here and we're going to click deny. Once we've done that, we'll click next and now we'll go to Apply Now. Now we can see our newly created security policy, and if we hop back over to this VM, we can now see it's actually timing out, and what this means is that it's not able to communicate with that database virtual machine due to micro segmentation actively blocking that request. Sunil: Gotcha, and when you go back to the Prism side, essentially what we're saying now is, it's as simple as that to set up micro segmentation inside your existing clusters. So that's one click micro segmentation, right. Good stuff. 
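As a toy model of the policy shown in the demo (explicit allowed flows, everything else denied by default), here is a short sketch; the tier names, ports and flows are made up for the example and this is not how the product implements it internally.

# Application-centric allow list: (source, destination tier, port).
ALLOWED_FLOWS = {
    ("haproxy-lb", "web-tier", 80),
    ("monitoring", "web-tier", 9100),
    ("monitoring", "app-tier", 9100),
    ("monitoring", "db-tier", 9100),
    ("app-tier",   "db-tier", 3306),
}

def is_allowed(source, dest_tier, port):
    # Default deny: any flow not explicitly allowed is blocked.
    return (source, dest_tier, port) in ALLOWED_FLOWS

print(is_allowed("app-tier", "db-tier", 3306))    # True: the app tier may reach the database
print(is_allowed("10.0.0.66", "db-tier", 3306))   # False: the brute-force host simply times out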
One other thing before we let Steve walk off the stage and go to the bathroom: as you guys know Steve, you know he spends a lot of time in the gym, you do. Right. He and I share cubes right beside each other, by the way, so if you ever come to the San Jose Nutanix corporate headquarters, you're always welcome. Come to the fourth floor and you'll see Steve and Sunil beside each other; most of the time I'm not in the cube, most of the time he's in the gym. If you go to his cube, you'll see all kinds of stuff. Okay. It's true, it's true, but the reason why I brought this up was Steve recently became a father, his first kid. Oh by the way this is, clicker, this is what his cube looks like, by the way. But he left his wife and his newborn kid to come over here to show us a demo, so give him a round of applause. Thank you, sir. Steve: Cool, thanks, Sunil. That was fun. Sunil: Thank you. Okay, so lots of good stuff. Please try out five five, give us feedback as you always do. A lot of sessions, a lot of details, have fun hopefully for the rest of the day. To talk about how they're using Nutanix, you know, here's one of our favorite customers and partners. He normally comes with sunglasses; I've told him that I have to be the best looking guy on stage in my keynotes, so he's going to try to reduce his charm a little bit. Please come on up, Alessandro. Thank you. Alessandro R.: I'm delighted to be here, thank you so much. Sunil: Maybe we can stand here, tell us a little bit about Leonardo. Alessandro R.: About Leonardo, Leonardo is a key actor in aerospace, defense and security systems. Helicopters, aircraft, the fancy systems, the fancy electronics, weapons unfortunately, but it's also a global actor in the high technology field. The security information systems division, that is the division I belong to, 3,000 people located in Italy and in the UK and in several other countries in Europe and the U.S., $1 billion of revenue. It has a long and deep experience in information technology, communications, automation, logical and physical security, so we have quite a long experience to expand on. I'm in charge of the security infrastructure business side. That is devoted to designing, delivering and managing secure infrastructure services and secure-by-design solutions and platforms. Sunil: Gotcha. Alessandro R.: That is. Sunil: Gotcha. Some of your focus obviously in recent times has been delivering secure cloud services obviously. Alessandro R.: Yeah, obviously. Sunil: Versus traditional infrastructure, right. How did Nutanix help you in some of that? Alessandro R.: I can tell something about our recent experience about that. At the end of two thousand ... well, not so recent. Sunil: Yeah, yeah. Alessandro R.: At the end of 2014, we realized and understood that we had to move a step forward, a big step and a fast step, otherwise we would drown. At that time, our newly appointed CEO confirmed that IT would be a core business for Leonardo and had to be developed and grown. So we decided to start our digital transformation journey and decided to do it in a structured and organized way, having clear in mind our targets. We launched two programs. One analysis program and one deployment program that were essentially transformation programs. We had to renew ourselves in terms of service models, in terms of organization, in terms of skills to invest upon and in terms of technologies to adopt. We were stacking up technologies adopted by the companies merged in the years before, and we had to move forward and rationalize all these things. So we spent a lot of time analyzing, comparing technologies, and evaluating what would fit us. We had two main targets. The first one, to consolidate and centralize the huge amount of services and infrastructure that were spread over 52 data centers in Italy, for Leonardo itself. The second one, to update our service catalog with a bunch of cloud services, so we decided to update our data centers. One of the building blocks of our new data center architecture was Nutanix. We evaluated a lot, we had spent a lot of time in analysis, so that wasn't a bet, but you were quite pioneers at that time. Sunil: Yeah, you took a lot of risk, right, as an Italian company- Alessandro R.: At that time, my colleagues used to say, "Hey, Alessandro, think it over, remember that no CEO has ever been fired for having chosen IBM." I apologize, Bob, but at that time, Nutanix didn't run on [inaudible 01:29:27]. We still have a good bunch of [inaudible 01:29:31] in our data center, so that will be the chance to ... Audience Member: [inaudible 01:29:37] Alessandro R.: So much you must [inaudible 01:29:37] what you announced it. Sunil: So you took a risk and you got into it. Alessandro R.: Yes, we got into it, and we are very satisfied with the results we have reached. Sunil: Gotcha. Alessandro R.: Most of the targets we expected to fulfill have been met and so we are satisfied, but that doesn't mean that we won't go on asking you a big discount ... Sunil: Sure, sure, sure, sure. Alessandro R.: On the price list. Sunil: Sure, sure. So what's next? I know there's some interesting stuff that you're thinking of. 
One of our building block of our new data center architecture was Nutanix. We evaluated a lot, we had spent a lot of time in analysis, so that wasn't a bet, but you are quite pioneers at those times. Sunil: Yeah, you took a lot of risk right as an Italian company- Alessandro R.: At this time, my colleague used to say, "Hey, Alessandro, think it over, remember that not a CEO has ever been fired for having chose IBM." I apologize, Bob, but at that time, when Nutanix didn't run on [inaudible 01:29:27]. We have still a good bunch of [inaudible 01:29:31] in our data center, so that will be the chance to ... Audience Member: [inaudible 01:29:37] Alessandro R.: So much you must [inaudible 01:29:37] what you announced it. Sunil: So you took a risk and you got into it. Alessandro R.: Yes, we got into, we are very satisfied with the results we have reached. Sunil: Gotcha. Alessandro R.: Most of the targets we expected to fulfill have come and so we are satisfied, but that doesn't mean that we won't go on asking you a big discount ... Sunil: Sure, sure, sure, sure. Alessandro R.: On price list. Sunil: Sure, sure, so what's next in terms of I know there are some interesting stuff that you're thinking. Alessandro R.: The next- Section 9 of 13 [01:20:00 - 01:30:04] Section 10 of 13 [01:30:00 - 01:40:04] (NOTE: speaker names may be different in each section) Speaker 1: So what's next, in terms of I know you have some interesting stuff that you're thinking of. Speaker 2: The next, we have to move forward obviously. The name Leonardo is inspired to Leonardo da Vinci, it was a guy that in terms of innovation and technology innovation had some good ideas. And so, I think, that Leonardo with Nutanix could go on in following an innovation target and following really mutual ... Speaker 1: Partnership. Speaker 2: Useful partnership, yes. We surely want to investigate the micro segmentation technologies you showed a minute ago because we have some looking, particularly by the economical point of view ... Speaker 1: Yeah, the costs and expenses. Speaker 2: And we have to give an alternative to the technology we are using. We want to use more intensively AHV, again as an alternative solution we are using. We are selecting a couple of services, a couple of quite big projects to build using AHV talking of Calm we are very eager to understand the announcement that they are going to show to all of us because the solution we are currently using is quite[crosstalk 01:31:30] Speaker 1: Complicated. Speaker 2: Complicated, yeah. To move a step of automation to elaborate and implement[inaudible 01:31:36] you spend 500 hours of manual activities that's nonsense so ... Speaker 1: Manual automation. Speaker 2: (laughs) Yes, and in the end we are very interested also in the prism features, mostly the new features that you ... Speaker 1: Talked about. Speaker 2: You showed yesterday in the preview because one bit of benefit that we received from the solution in the operations field means a bit plus, plus to our customer and a distinctive plus to our customs so we are very interested in that ... Speaker 1: Gotcha, gotcha. Thanks for taking the risk, thanks for being a customer and partner. Speaker 2: It has been a pleasure. Speaker 1: Appreciate it. Speaker 2: Bless you, bless you. Speaker 1: Thank you. So, you know obviously one OS, one click was one of our core things, as you can see the tagline doesn't stop there, it also says "any cloud". 
So, that's the rest of the presentation right now it's about; what are we doing, to now fulfill on that mission of one OS, one cloud, one click with one support experience across any cloud right? And there you know, we talked about Calm. Calm is not only just an operational experience for your private cloud but as you can see it's a one-click experience where you can actually up level your apps, set up blueprints, put SLA's and policies, push them down to either your AWS, GCP all your [inaudible 01:33:00] environments and then on day one while you can do one click provisioning, day two and so forth you will see new and new capabilities such as, one-click migration and mobility seeping into the product. Because, that's the end game for Calm, is to actually be your cloud autonomy platform right? So, you can choose the right cloud for the right workload. And talk about how they're building a multi cloud architecture using Nutanix and partnership a great pleasure to introduce my other good Italian friend Daniele, come up on stage please. From Telecom Italia Sparkle. How are you sir? Daniele: Not too bad thank you. Speaker 1: You want an espresso, cappuccino? Daniele: No, no later. Speaker 1: You all good? Okay, tell us a little about Sparkle. Daniele: Yeah, Sparkle is a fully owned subsidy of Telecom Italia group. Speaker 1: Mm-hmm (affirmative) Daniele: Spinned off in 2003 with the mission to develop the wholesale and multinational corporate and enterprise business abroad. Huge network, as you can see, hundreds of thousands of kilometers of fiber optics spread between; south east Asia to Europe to the U.S. Most of it proprietary part of it realized on some running cables. Part of them proprietary part of them bilateral part of them[inaudible 01:34:21] with other operators. 37 countries in which we have offices in the world, 700 employees, lean and clean company ... Speaker 1: Wow, just 700 employees for all of this. Daniele: Yep, 1.4 billion revenues per year more or less. Speaker 1: Wow, are you a public company? Daniele: No, fully owned by TIM so far. Speaker 1: So, what is your experience with Nutanix so far? Daniele: Well, in a way similar to what Alessandro was describing. To operate such a huge network as you can see before, and to keep on bringing revenues for the wholesale market, while trying to turn the bar toward the enterprise in a serious way. Couple of years ago the management team realized that we had to go through a serious transformation, not just technological but in terms of the way we build the services to our customers. In terms of how we let our customer feel the Sparkle experience. So, we are moving towards cloud but we are moving towards cloud with connectivity attached to it because it's in our cord as a provider of Telecom services. The paradigm that is driving today is the on-demand, is the dynamic and in order to get these things we need to move to software. Most of the network must become invisible as the Nutanix way. So, we decided instead of creating patchworks onto our existing systems, infrastructure, OSS, BSS and network systems, to build a new data center from scratch. And the paradigm being this new data center, the mantra was; everything is software designed, everything must be easy to manage, performance capacity planning, everything must be predictable and everything to be managed by few people. 
Nutanix is at the moment the baseline of this data center for what concern, let's say all the new networking tools, meaning as the end controllers that are taking care of automation and programmability of the network. Lifecycle service orchestrator, network orchestrator, cloud automation and brokerage platform and everything at the moment runs on AHV because we are forcing our vendors to certify their application on AHV. The only stack that is not at the moment AHV based is on a specific cloud platform because there we were really looking for the multi[inaudible 01:37:05]things that you are announcing today. So, we hope to do the migration as soon as possible. Speaker 1: Gotcha, gotcha. And then looking forward you're going to build out some more data center space, expose these services Daniele: Yeah. Speaker 1: For the customers as well as your internal[crosstalk 01:37:21] Daniele: Yeah, basically yes for sure we are going to consolidate, to invest more in the data centers in the markets on where we are leader. Italy, Turkey and Greece we are big data centers for [inaudible 01:37:33] and cloud, but we believe that the cloud with all the issues discussed this morning by Diraj, that our locality, customer proximity ... we think as a global player having more than 120 pops all over the world, which becomes more than 1000 in partnerships, that the pop can easily be transformed in a data center, so that we want to push the customer experience of what we develop in our main data centers closer to them. So, that we can combine traditional infrastructure as a service with the new connectivity services every single[inaudible 01:38:18] possibly everything running. Speaker 1: I mean, it makes sense, I mean I think essentially in some ways to summarize it's the example of an edge cloud where you're pushing a micro-cloud closer to the customers edge. Daniele: Absolutely. Speaker 1: Great stuff man, thank you so much, thank you so much. Daniele: Pleasure, pleasure. Thank you. Speaker 1: So, you know a couple of other things before we get in the next demo is the fact that in addition to Calm from multi-cloud management we have Zai, we talked about for extended enterprise capabilities and something for you guys to quickly understand why we have done this. In a very simple way is if you think about your enterprise data center, clearly you have a bunch of apps there, a bunch of public clouds and when you look at the paradigm you currently deploy traditional apps, we call them mode one apps, SAP, Exchange and so forth on your enterprise. Then you have next generation apps whether it be [inaudible 01:39:11] space, whether it be Doob or whatever you want to call it, lets call them mode two apps right? And when you look at these two types of apps, which are the predominant set, most enterprises have a combination of mode one and mode two apps, most public clouds primarily are focused, initially these days on mode two apps right? And when people talk about app mobility, when people talk about cloud migration, they talk about lift and shift, forklift [inaudible 01:39:41]. And that's a hard problem I mean, it's happening but it's a hard problem and ends up that its just not a one time thing. Once you've forklift, once you move you have different tooling, different operation support experience, different stacks. What if for some of your applications that mattered ... 
Section 10 of 13 [01:30:00 - 01:40:04] Section 11 of 13 [01:40:00 - 01:50:04] (NOTE: speaker names may be different in each section) Speaker 1: What if, for some of your applications that matter to you, that are your core enterprise apps, you could retain the same tooling, the same operational experience and so forth? That is what we aim to do with Xi. It is truly making hybrid invisible, which is the next act for this company. It'll take us a few years to really fulfill the vision here, but the idea here is that you shouldn't think about public cloud as a different silo. You should think of it as an extension of your enterprise data centers. And for any services such as DR, whether it be dev test, whether it be back-up, and so forth, you can use the same tooling, same experience, and get a public cloud-like capability without lift and shift, right? So making this lift and shift invisible by, sort of, homogenizing the data plane, the network plane, and the control plane is what we really want to do with Xi. Okay? And we'll show you some more details here. But the simplest way to understand this is to think of it as the iPhone, right? D has mentioned this a little bit. This is how we built this experience. It uses iOS as the core IP, and we wrap it up with a great package called the iPhone. But then, a few years into the iPhone era, came iTunes and iCloud. Those aren't separate apps, per se; they're fused into iOS. And similarly, think about Xi that way. The more you move VMs into a Nutanix environment, stuff like DR comes burnt into the fabric. And to give us a sneak peek into a bunch of the Calm and Xi capabilities, let me bring back Binny, who's always a popular guy on stage. Come on up, Binny. I'd be surprised if Binny untucked his shirt. He's always tucking in his shirt. Binny Gill: Okay, yeah. Let's go. Speaker 1: So the first thing is Calm. And to show how we can actually deploy apps, not just across private and public clouds, but across multiple public clouds as well. Right? Binny Gill: Yeah, basically, you know Calm is about simplifying the disparity between the various public clouds out there. So it's very important for us to be able to take one application blueprint and then quickly deploy it in whatever cloud you choose, without understanding how one cloud is different. Speaker 1: Yeah, that's the goal. Binny Gill: So here, as you can see, I have the marketplace. And by the way, this marketplace has a lot of great partner community interest. All sorts of apps come up here. Let me take a sample app here, Hadoop. And click launch. And now where do you want me to deploy? Speaker 1: Let's start with GCP. Binny Gill: GCP, okay. So I click on GCP, and let me give it a name. Hadoop. GCP. Say 30, right. Clear. So this is one-click deployment of anything from our marketplace onto a cloud of your choice. Right now, what the system is doing is taking the intent-filled description of what the application should look like, not just at the infrastructure level but also within the virtual machines, and it's creating a set of workflows that it needs to go deploy. So as you can see, while we were talking, it's loading the application and making sure that the provisioning workflows are all set up. Speaker 1: And so this is actually, in real time, extracting out some of the GCP requirements. It's actually talking to GCP, setting up the constructs so that we can actually push it up onto GCP. Binny Gill: Right. So it takes a couple of minutes. It'll provision. Let me go back and show you.
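What makes the "one blueprint, any cloud" flow shown here plausible is an abstraction layer: the blueprint captures intent once (services, sizes, bootstrap steps) and a per-cloud driver translates that intent into the target provider's constructs. The sketch below is a deliberately simplified, hypothetical rendering of that idea; the blueprint fields and driver interface are invented for illustration and are not Calm's actual schema or API.

```python
# Simplified sketch of a cloud-agnostic blueprint plus per-cloud drivers.
# Schema and driver interface are illustrative assumptions, not Calm's API.

hadoop_blueprint = {
    "name": "Hadoop",
    "services": [
        {"name": "hadoop-master", "cpus": 4, "memory_gb": 16, "count": 1},
        {"name": "hadoop-slave",  "cpus": 4, "memory_gb": 16, "count": 3},
    ],
}

class CloudDriver:
    """Each driver hides one provider's 'isms' behind the same two calls."""
    def create_vm(self, spec: dict) -> str: ...
    def run_bootstrap(self, vm_id: str, service: str) -> None: ...

class PrintDriver(CloudDriver):
    """Stand-in driver that just prints what a real provider driver would do."""
    def __init__(self, cloud: str):
        self.cloud = cloud
    def create_vm(self, spec: dict) -> str:
        print(f"[{self.cloud}] provision {spec['count']} x {spec['name']} "
              f"({spec['cpus']} vCPU / {spec['memory_gb']} GB)")
        return f"{self.cloud}-{spec['name']}"
    def run_bootstrap(self, vm_id: str, service: str) -> None:
        print(f"[{self.cloud}] bootstrap {service} on {vm_id}")

def deploy(blueprint: dict, driver: CloudDriver) -> None:
    # One workflow; changing the target cloud only changes which driver is passed in.
    for svc in blueprint["services"]:
        vm_id = driver.create_vm(svc)
        driver.run_bootstrap(vm_id, svc["name"])

deploy(hadoop_blueprint, PrintDriver("gcp"))
deploy(hadoop_blueprint, PrintDriver("aws"))
```

The design choice to note is that the deploy workflow never mentions a specific cloud; only the driver does, which is what lets the same marketplace item land on GCP or AWS in the next step of the demo.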
Say you worked with deploying AWS. So you Hadoop. Hit address. And that's it. So again, the same work flow. Speaker 1: Same process, I see. Binny Gill: It's going to now deploy in AWS. Speaker 1: See one of the keys things is that we actually extracted out all the isms of each of these clouds into this logical substrate. Binny Gill: Yep. Speaker 1: That you can now piggy-back off of. Binny Gill: Absolutely. And it makes it extremely simple for the average consumer. And you know we like more cloud support here over time. Speaker 1: Sounds good. Binny Gill: Now let me go back and show you an app that I had already deployed. Now 13 days ago. It's on GCP. And essentially what I want to show you is what is the view of the application. Firstly, it shows you the cost summary. Hourly, daily, and how the cost is going to look like. The other is how you manage it. So you know one click ways of upgrading, scaling out, starting, deleting, and so on. Speaker 1: So common actions, but independent of the type of clouds. Binny Gill: Independent. And also you can act with these actions over time. Right? Then services. It's learning two services, Hadoop slave and Hadoop master. Hadoop slave runs fast right now. And auditing. It shows you what are the important actions you've taken on this app. Not just, for example, on the IS front. This is, you know how the VMs were created. But also if you scroll down, you know how the application was deployed and brought up. You know the slaves have to discover each other, and so on. Speaker 1: Yeah got you. So find game invisibility into whatever you were doing with clouds because that's been one of the complaints in general. Is that the cloud abstractions have been pretty high level. Binny Gill: Yeah. Speaker 1: Yeah. Binny Gill: Yeah. So that's how we make the differences between the public clouds. All go away for the Indias of ... Speaker 1: Got you. So why don't we now give folks ... Now a lot of this stuff is coming in five, five so you'll see that pretty soon. You'll get your hands around it with AWS and tree support and so forth. What we wanted to show you was emerging alpha version that is being baked. So is a real production code for Xi. And why don't we just jump right in to it. Because we're running short of time. Binny Gill: Yep. Speaker 1: Give folks a flavor for what the production level code is already being baked around. Binny Gill: Right. So the idea of the design is make sure it's not ... the public cloud is no longer any different from your private cloud. It's a true seamless extension of your private cloud. Here I have my test environment. As you can see I'm running the HR app. It has the DB tier and the Web tier. Yeah. Alright? And the DB tier is running Oracle DB. Employee payroll is the Web tier. And if you look at the availability zones that I have, this is my data center. Now I want to protect this application, right? From disaster. What do I do? I need another data center. Speaker 1: Sure. Binny Gill: Right? With Xi, what we are doing is ... You go here and click on Xi Cloud Services. Speaker 1: And essentially as the slide says, you are adding AZs with one click. Binny Gill: Yeps so this is what I'm going to do. Essentially, you log in using your existing my.nutanix.com credentials. So here I'm going to use my guest credentials and log in. Now while I'm logging in what's happening is we are creating a seamless network between the two sides. And then making the Xi cloud availability zone appear. As if it was my own. Right? Speaker 1: Gotcha. 
Binny Gill: So in a couple of seconds what you'll notice this list is here now I don't have just one availability zone, but another one appears. Speaker 1: So you have essentially, real time now, paid a one data center doing an availability zone. Binny Gill: Yep. Speaker 1: Cool. Okay. Let's see what else we can do. Binny Gill: So now you think about VR setup. Now I'm armed with another data center, let's do DR Center. Now DR set-up is going to be extremely simple. Speaker 1: Okay but it's also based because on the fact that it is the same stack on both sides. Right? Binny Gill: It's the same stack on both sides. We have a secure network lane connecting the two sides, on top of the secure network plane. Now data can flow back and forth. So now applications can go back and forth, securely. Speaker 1: Gotcha, okay. Let's look at one-click DR. Binny Gill: So for one-click DR set-up. A couple of things we need to know. One is a protection rule. This is the RPO, where does it apply to? Right? And the connection of the replication. The other one is recovery plans, in case disaster happens. You know, how do I bring up my machines and application work-order and so on. So let me first show you, Protection Rule. Right? So here's the protection rule. I'll create one right now. Let me call it Platinum. Alright, and source is my own data center. Destination, you know Xi appears now. Recovery point objective, so maybe in a one hour these snapshots going to the public cloud. I want to retain three in the public side, three locally. And now I select what are the entities that I want to protect. Now instead of giving VMs my name, what I can do is app type employee payroll, app type article database. It covers both the categories of the application tiers that I have. And save. Speaker 1: So one of the things here, by the way I don't know if you guys have noticed this, more and more of Nutanix's constructs are being eliminated to become app-centric. Of course is VM centric. And essentially what that allows one to do is to create that as the new service-level API/abstraction. So that under the cover over a period of time, you may be VMs today, maybe containers tomorrow. Or functions, the day after. Binny Gill: Yep. What I just did was all that needs to be done to set up replication from your own data center to Xi. So we started off with no data center to actually replication happening. Speaker 1: Gotcha. Binny Gill: Okay? Speaker 1: No, no. You want to set up some recovery plans? Binny Gill: Yeah so now set up recovery plan. Recovery plans are going to be extremely simple. You select a bunch of VMs or apps, and then there you can say what are the scripts you want to run. What order in which you want to boot things. And you know, you can set up access these things with one click monthly or weekly and so on. Speaker 1: Gotcha. And that sets up the IPs as well as subnets and everything. Binny Gill: So you have the option. You can maintain the same IPs on frame as the move to Xi. Or you can make them- Speaker 1: Remember, you can maintain your own IPs when you actually use the Xi service. There was a lot of things getting done to actually accommodate that capability. Binny Gill: Yeah. Speaker 1: So let's take a look at some of- Binny Gill: You know, the same thing as VPC, for example. Speaker 1: Yeah. Binny Gill: You need to possess on Xi. So, let's create a recovery plan. A recovery plan you select the destination. Where does the recovery happen. 
Now, after that Section 11 of 13 [01:40:00 - 01:50:04] Section 12 of 13 [01:50:00 - 02:00:04] (NOTE: speaker names may be different in each section) Speaker 1: ... does the recovery happen. Now, after that you have to think of what is the runbook that you want to run when disaster happens, right? So you're preparing for that, so let me call "HR App Recovery." The next thing is the first stage. We're doing the first stage, let me add some entities by categories. I want to bring up my database first, right? Let's click on the database and that's it. Speaker 2: So essentially, you're building the script now. Speaker 1: Building the script- Speaker 2: ... on the [inaudible 01:50:30] Speaker 1: ... but in a visual way. It's simple for folks to understand. You can add custom script, add delay and so on. Let me add another stage and this stage is about bringing up the web tier after the database is up. Speaker 2: So basically, bring up the database first, then bring up the web tier, et cetera, et cetera, right? Speaker 1: That's it. I've created a recovery plan. I mean usually it's complicated stuff, but we made it extremely simple. Now if you click on "Recovery Points," these are snapshots. Snapshots of your applications. As you can see, already the system has taken three snapshots in response to the protection rule that we had created just a couple minutes ago. And these are now being seeded to Xi data centers. Of course this takes time for seeding, so what I have is a setup already and that's the production environment. I'll cut over to that. This is my production environment. Click "Explore," now you see the same application running in production and I have a few other VMs that are not protected. Let's go to "Recovery Points." It has been running for sometime, these recover points are there and they have been replicated to Xi. Speaker 2: So let's do the failover then. Speaker 1: Yeah, so to failover, you'll have to go to Xi so let me login to Xi. This time I'll use my production account for logging into Xi. I'm logging in. The first thing that you'll see in Xi is a dashboard that gives you a quick summary of what your DR testing has been so far, if there are any issues with the replication that you have and most importantly the monthly charges. So right now I've spent with my own credit card about close to 1,000 bucks. You'll have to refund it quickly. Speaker 2: It depends. If the- Speaker 1: If this works- Speaker 2: IF the demo works. Speaker 1: Yeah, if it works, okay. As you see, there are no VMs right now here. If I go to the recovery points, they are there. I can click on the recovery plan that I had created and let's see how hard it's going to be. I click "Failover." It says three entities that, based on the snapshots, it knows that it can recovery from source to destination, which is Xi. And one click for the failover. Now we'll see what happens. Speaker 2: So this is essentially failing over my production now. Speaker 1: Failing over your production now. [crosstalk 01:52:53] If you click on the "HR App Recovery," here you see now it started the recovery plan. The simple recovery plan that we had created, it actually gets converted to a series of tasks that the system has to do. Each VM has to be hydrated, powered on in the right order and so on and so forth. You don't have to worry about any of that. You can keep an eye on it. But in the meantime, let's talk about something else. 
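For readers following the DR demo, the two objects Binny creates map naturally onto small data structures: a protection rule (RPO, retention, and the app categories it applies to) and a recovery plan (an ordered runbook of stages). The sketch below is a schematic rendering of those objects and of how a failover would walk the stages in order; the field names and the tiny runner are assumptions for illustration, not the actual Prism or Xi data model.

```python
# Schematic model of the "Platinum" protection rule and "HR App Recovery"
# plan from the demo. Field names and the runner are illustrative assumptions.

# The protection rule governs snapshot shipping (RPO and retention);
# the recovery plan is the runbook executed at failover time.
protection_rule = {
    "name": "Platinum",
    "source_az": "on-prem",
    "target_az": "Xi",
    "rpo_hours": 1,                                   # a snapshot ships to the cloud every hour
    "retention": {"local": 3, "remote": 3},
    "categories": ["employee-payroll", "oracle-database"],  # app-centric, not per-VM
}

recovery_plan = {
    "name": "HR App Recovery",
    "target_az": "Xi",
    "stages": [
        {"categories": ["oracle-database"],  "delay_s": 0},   # bring the DB tier up first
        {"categories": ["employee-payroll"], "delay_s": 60},  # then the web tier
    ],
}

def run_failover(plan: dict, power_on) -> None:
    """Walk the runbook stages in order, powering on each stage's entities."""
    for stage in plan["stages"]:
        if stage["delay_s"]:
            print(f"(a real orchestrator would wait {stage['delay_s']}s here)")
        for category in stage["categories"]:
            power_on(category, plan["target_az"])

# Example: print what the orchestrator would do instead of calling a real API.
run_failover(recovery_plan,
             lambda cat, az: print(f"power on '{cat}' VMs in {az}"))
```

Because both objects are expressed in terms of app categories rather than individual VM names, a VM added later with the right category (like the HR_Web_3 machine created further on) is picked up automatically.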
We are doing failover, but after you failover, you run in Xi as if it was your own setup and environment. Maybe I want to create a new VM. I create a VM and I want to maybe extend my HR app's web tier. Let me name it as "HR_Web_3." It's going to boot from that disk. Production network, I want to run it on production network. We have production and test categories. This one, I want to give it employee payroll category. Now it applies the same policies as it's peers will. Here, I'm going to create the VM. As you can see, I can already see some VMs coming up. There you go. So three VMs from on-prem are now being filled over here while the fourth VM that I created is already being powered. Speaker 2: So this is basically realtime, one-click failover, while you're using Xi for your [inaudible 01:54:13] operations as well. Speaker 1: Exactly. Speaker 2: Wow. Okay. Good stuff. What about- Speaker 1: Let me add here. As the other cloud vendors, they'll ask you to make your apps ready for their clouds. Well we tell our engineers is make our cloud ready for your apps. So as you can see, this failover is working. Speaker 2: So what about failback? Speaker 1: All of them are up and you can see the protection rule "platinum" has been applied to all four. Now let's look at this recovery plan points "HR_Web_3" right here, it's already there. Now assume the on-prem was already up. Let's go back to on-prem- Speaker 2: So now the scenario is, while Binny's coming up, is that the on-prem has come back up and we're going to do live migration back as in a failback scenario between the data centers. Speaker 1: And how hard is it going to be. "HR App Recovery" the same "HR App Recovery", I click failover and the system is smart enough to understand the direction is reversed. It's also smart enough to figure out "Hey, there are now the four VMs are there instead of three." Xi to on-prem, one-click failover again. Speaker 2: And it's rerunning obviously the same runbook but in- Speaker 1: Same runbook but the details are different. But it's hidden from the customer. Let me go to the VMs view and do something interesting here. I'll group them by availability zone. Here you go. As you can see, this is a hybrid cloud view. Same management plane for both sides public and private. There are two availability zones, the Xi availability zone is in the cloud- Speaker 2: So essentially you're moving from the top- Speaker 1: Yeah, top- Speaker 2: ... to the bottom. Speaker 1: ... to the bottom. Speaker 2: That's happening in the background. While this is happening, let me take the time to go and look at billing in Xi. Speaker 1: Sure, some of the common operations that you can now see in a hybrid view. Speaker 2: So you go to "Billing" here and first let me look at my account. And account is a simple page, I have set up active directory and you can add your own XML file, upload it. You can also add multi-factor authentication, all those things are simple. On the billing side, you can see more details about how did I rack up $966. Here's my credit card. Detailed description of where the cost is coming from. I can also download previous versions, builds. Speaker 1: It's actually Nutanix as a service essentially, right? Speaker 2: Yep. Speaker 1: As a subscription service. Speaker 2: Not only do we go to on-prem as you can see, while we were talking, two VMs have already come back on-prem. They are powered off right now. The other two are on the wire. Oh, there they are. Speaker 1: Wow. Speaker 2: So now four VMs are there. 
Speaker 1: Okay. Perfect. Sometimes it works, sometimes it doesn't work, but it's good. Speaker 2: It always works. Speaker 1: Always works. All right. Speaker 2: As you can see, the Platinum protection rule is now already applied to them, and now it has reversed the direction of [inaudible 01:57:12]- Speaker 1: Remember, we showed one-click DR, failover, failback, built into the product when Xi ships, to any Nutanix fabric. You can start with ESX on-premises, obviously, when you fail over to Xi. You can start with AHV. Things that are going to take the same paradigm of one-click operations into this hybrid view. Speaker 2: Let's stop doing lift and shift. The era has come for click and shift. Speaker 1: Binny's now been promoted to Chief Marketing Officer too, by the way. Right? So, one more thing. Speaker 2: Okay. Speaker 1: You know we don't stop any conferences without a couple of things that are new. The first one is something that we should have done, I guess, a couple of years ago. Speaker 2: It depends how you look at it. Essentially, if you look at the cloud vendors, one of the key things they have done is they've built services as building blocks for the apps that run on top of them. What we have done at Nutanix is build core services like block services, file services, and now, with Calm, a marketplace. Now if you look at [inaudible 01:58:14] applications, one of the core building pieces is the object store. I'm happy to announce that we have the object store service coming up. Again, in true Nutanix fashion, it's going to be elastic. Speaker 1: Let's- Speaker 2: Let me show you. Speaker 1: Yeah, let's show it. It's an object store service, by the way, that's not just for your primary but for your secondary. It's obviously not just for on-prem, it's hybrid. So this is being built as a next-gen object service, as an extension of the core fabric, but accommodating a bunch of these new paradigms. Speaker 2: Here is the object browser. I've created a bunch of buckets here. Again, object stores can be used in various ways: as a primary object store, or for secondary use cases. I'll show you both. I'll show you a Hadoop use case where Hadoop is using this as a primary store, and a backup use case. Let's just jump right in. This is a Hadoop bucket. As you can see, there's a temp directory; there's nothing interesting there. Let me go to my Hadoop VM. There it is. And let me run a Hadoop job. So this Hadoop job essentially is going to create a bunch of files, write them out, and after that do MapReduce on top. Let's wait for the job to start. It's running now. If we go back to the object store and refresh the page, now you see it's writing a benchmarks directory; there's a bunch of files that it will write here over time. This is going to take time, so let's not wait for it, but essentially it is showing that Hadoop, which uses the AWS S3-compatible API, can run with our object store, because our object store exposes AWS S3-compatible APIs. The other use case is the HYCU backup. As you can see, that's a- Section 12 of 13 [01:50:00 - 02:00:04] Section 13 of 13 [02:00:00 - 02:13:42] (NOTE: speaker names may be different in each section) Vineet: This is the HYCU backup ... As you can see, that's a backup software that can back up to AWS S3. If you point it to Nutanix objects, it can back up there as well. There are a bunch of backup files in there.
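Because the object store exposes an S3-compatible API, existing S3 tooling should be able to target it simply by overriding the endpoint. The snippet below uses boto3's standard endpoint_url override to write and list objects; the endpoint address, bucket name and credentials are placeholders, and compatibility of any particular SDK feature with Nutanix objects would need to be checked against the product documentation.

```python
# Writing to an S3-compatible object store with standard AWS tooling.
# Endpoint, bucket and credentials below are placeholders, not real values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal:9440",  # S3-compatible endpoint (assumed)
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# The same calls a backup tool or an S3-aware analytics job would issue.
s3.put_object(Bucket="hadoop-bucket", Key="benchmarks/part-0000", Body=b"sample data")

for obj in s3.list_objects_v2(Bucket="hadoop-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

On the Hadoop side, the usual equivalent is pointing the S3A connector's fs.s3a.endpoint setting at the same address, which is presumably how the benchmark job in the demo writes its files.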
Now, object stores, it's very important for us to be able to view what's going on there and make sure there's no objects sprawled because once it's easy to write objects, you just accumulate a lot of them. So what we wanted to do, in true Nutanix style, is give you a quick overview of what's happening with your object store. So here, as you can see, you can look at the buckets, where the load is, you can look at the bucket sizes, where the data is, and also what kind of data is there. Now this is a dashboard that you can optimize, and customize, for yourself as well, right? So that's the object store. Then we go back here, and I have one more thing for you as well. Speaker 2: Okay. Sounds good. I already clicked through a slide, by the way, by mistake, but keep going. Vineet: That's okay. That's okay. It is actually a quiz, so it's good for people- Speaker 2: Okay. Sounds good. Vineet: It's good for people to have some clues. So the quiz is, how big is my SAP HANA VM, right? I have to show it to you before you can answer so you don't leak the question. Okay. So here it is. So the SAP HANA VM here vCPU is 96. Pretty beefy. Memory is 1.5 terabytes. The question to all of you is, what's different in this screen? Speaker 2: Who's a real Prism user here, by the way? Come on, it's got to be at least a few. Those guys. Let's see if they'll notice something. Vineet: What's different here? Speaker 3: There's zero CVM. Vineet: Zero CVM. Speaker 2: That's right. Yeah. Yeah, go ahead. Vineet: So, essentially, in the Nutanix fabric, every server has to run a [inaudible 02:01:48] machine, right? That's where the storage comes from. I am happy to announce the Acropolis Compute Cloud, where you will be able to run the HV on servers that are storage-less, and add it to your existing cluster. So it's a compute cloud that now can be managed from Prism Central, and that way you can preserve your investments on your existing server farms, and add them to the Nutanix fabric. Speaker 2: Gotcha. So, essentially ... I mean, essentially, imagine, now that you have the equivalent of S3 and EC2 for the enterprise now on Premisis, like you have the equivalent compute and storage services on JCP and AWS, and so forth, right? So the full flexibility for any kind of workload is now surely being available on the same Nutanix fabric. Thanks a lot, Vineet. Before we wrap up, I'd sort of like to bring this home. We've announced a pretty strategic partnership with someone that has always inspired us for many years. In fact, one would argue that the genesis of Nutanix actually was inspired by Google and to talk more about what we're actually doing here because we've spent a lot of time now in the last few months to really get into the product capabilities. You're going to see some upcoming capabilities and 55X release time frame. To talk more about that stuff as well as some of the long-term synergies, let me invite Bill onstage. C'mon up Bill. Tell us a little bit about Google's view in the cloud. Bill: First of all, I want to compliment the demo people and what you did. Phenomenal work that you're doing to make very complex things look really simple. I actually started several years ago as a product manager in high availability and disaster recovery and I remember, as a product manager, my engineers coming to me and saying "we have a shortage of our engineers and we want you to write the fail-over routines for the SAP instance that we're supporting." 
And so here's the Perl handbook, you know, I haven't written in Perl yet, go and do all that work, including all the network setup and all that work. That's amazing, what you are doing right there, and I think that's the spirit of the partnership that we have. From a Google perspective, obviously what we believe is that it's time now to harness the power of scale, security and these innovations that are coming out. At Google we've spent a lot of time trying to solve these really large problems at scale, and a lot of that technology has been inserted into the industry right now. Things like MapReduce, things like TensorFlow algorithms for AI, and things like Kubernetes and Docker were first invented at Google to solve problems, because we had to do it to be able to support the business we have. You think about search, alright? When you type in search terms within the search box, you see a white screen; what I see is all the data-center work that's happening behind that and the MapReduce work to be able to give you a search result back in seconds. Think about that work, think about that process. Taking and parsing those search terms, dividing that over thousands of [inaudible 02:05:01], being able to then search segments of the index of the internet, and being able to intelligently reduce that to get you an answer within seconds that is prioritized, that is sorted. How many of you out there have to go to page two and page three to get the results you want, today? You don't, because of the power of that technology. We think it's time to bring that to the consumers of the data center and enterprise space, and that's what we're doing at Google. Speaker 2: Gotcha, man. So I know we've done a lot of things now over the last year's worth of collaboration. Why don't we spend a few minutes talking through a couple of things that we've started on, starting with [inaudible 02:05:36], going into Calm, and then we'll talk a little bit about Xi. Bill: I think one of the advantages here, as we start to move up the stack and virtualize things, to your point, right, is that virtual machines and the work required for them still take a fair amount of effort, which you're doing a lot to reduce, right, you're making that a lot simpler and seamless across both on-prem and the cloud. The next step in the journey is to really leverage the power of containers. Lightweight objects that allow you to surface functionality without being dependent upon the operating system or the VM to be able to do that work. And then having the orchestration layer to be able to run that in the context of cloud and on-prem. We've been very successful in building out the Kubernetes and Docker infrastructure for everyone to use. The challenge that you're solving is how do we actually bridge the gap. How do we actually make that work seamlessly between the on-premises world and the cloud, and that's where our partnership, I think, is so valuable. It's because you're bringing the secret sauce to be able to make that happen. Speaker 2: Gotcha, gotcha. One last thing. We talked about Xi, and the two companies are working really closely where, essentially, the Nutanix fabric can seamlessly seep into every Google platform as infrastructure worldwide. Xi, as a service, could be delivered natively with GCP, leading to some additional benefits, right? Bill: Absolutely. I think, first and foremost, the infrastructure we're building at scale opens up all sorts of possibilities. I'll just use, maybe, two examples. The first one is network.
If you think about building out a global network, there's a lot of effort to do that. Google is doing that as a byproduct of serving our consumers. So, think about YouTube: there's approximately a billion hours of YouTube watched every single day. If you think about search, we have approximately two trillion searches done in a year, and if you think about the number of containers that we run in a given week, we run about two billion containers per week. So the advantage of being able to move these workloads through Xi, in a disaster recovery scenario first, is that you get to take advantage of that scale. Secondly, because of the network that we've built out, we had to push the network out to the edge. So with every single one of our consumers using YouTube and search and Google Play and all those services, and by the way we have over eight services today that have more than a billion simultaneous users, you get to take advantage of that network capacity and capability just by moving to the cloud. And then the last piece, which is a real advantage, we believe, is that it's not just about the workloads you're moving but about getting access to new services that cloud providers, like Google, provide. For example, are you taking advantage of the next-generation Hadoop, which is our BigQuery capability? Are you taking advantage of the artificial intelligence derivative APIs that we have around, the video API, the image API, the speech-to-text API, mapping technology? All those additional capabilities are now exposed to you in the availability of Google cloud, and you can leverage them directly from systems that are failing over and systems that are running in our combined environment. Speaker 2: A true converged fabric across public and private. Bill: Absolutely. Speaker 2: Great stuff Bill. Thank you, sir. Bill: Thank you, appreciate it. Speaker 2: Good to have you. So, the last few slides. You know we've talked about, obviously, One OS, One Click and any cloud. At the end of the day, it's pretty obvious that we're evolving from a form factor perspective, where it's not just an OS across multiple platforms but it's also genuinely moving from being consumed as an appliance to a software form factor, to a subscription form factor. What you saw today, obviously, is the fact that, look, you know, we're still continuing; the velocity has not slowed down. In fact, in some cases it's accelerated. If you ask my quality guys, if you ask some of our customers, we're coming out fast and furious with a lot of these capabilities. And some of this directly reflects, not just in features, but also in performance, just like a public cloud, where our performance curve is going up while our price-performance curve is becoming more attractive over a period of time. And balancing this with quality is what differentiates great companies from good companies, right? So when you look at the number of nodes that have been shipping, it's around ten times more nodes than where we were a few years ago. But if you look at the number of customer-found defects as a percentage of the number of nodes shipped, it has not only stabilized, it has actually been coming down. And that's directly reflected in the NPS score that most of you guys love. How many of you guys love your Customer Support engineers? Give them a round of applause. Great support. So this balance of velocity plus quality is what differentiates a company.
And, before we call it a wrap, I just want to leave you with one thing. You know, obviously, we've talked a lot about technology, innovation, inspiration, and so forth. But, as I mentioned, from last night's discussion with Sir Ranulph, let's think about a few things tonight. Don't take technology too seriously. I'll give you a simple story that he shared with me, that puts things into perspective. The year was 1971. He had come back from Aman, from his service. He was figuring out what to do. This was before he became a world-class explorer. 1971, he had a job interview, came down from Scotland and applied for a role in a movie. And he failed that job interview. But he was selected from thousands of applicants, came down to a short list, he was a ... that's a hint ... he was a good looking guy and he lost out that role. And the reason why I say this is, if he had gotten that job, first of all I wouldn't have met him, but most importantly the world wouldn't have had an explorer like him. The guy that he lost out to was Roger Moore and the role was for James Bond. And so, when you go out tonight, enjoy with your friends [inaudible 02:12:06] or otherwise, try to take life a little bit once upon a time or more than once upon a time. Have fun guys, thank you. Speaker 5: Ladies and gentlemen please make your way to the coffee break, your breakout sessions will begin shortly. Don't forget about the women's lunch today, everyone is welcome. Please join us. You can find the details in the mobile app. Please share your feedback on all sessions in the mobile app. There will be prizes. We will see you back here and 5:30, doors will open at 5, after your last breakout session. Breakout sessions will start sharply at 11:10. Thank you and have a great day. Section 13 of 13 [02:00:00 - 02:13:42]
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Steve | PERSON | 0.99+ |
Binny Gill | PERSON | 0.99+ |
Daniele | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Binny | PERSON | 0.99+ |
Steven | PERSON | 0.99+ |
Julie | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Italy | LOCATION | 0.99+ |
UK | LOCATION | 0.99+ |
Telecom Italia | ORGANIZATION | 0.99+ |
Acropolis | ORGANIZATION | 0.99+ |
100 percent | QUANTITY | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Alessandro | PERSON | 0.99+ |
2003 | DATE | 0.99+ |
Sunil | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
20% | QUANTITY | 0.99+ |
Steven Poitras | PERSON | 0.99+ |
15 seconds | QUANTITY | 0.99+ |
1993 | DATE | 0.99+ |
Leonardo | PERSON | 0.99+ |
Lennox | ORGANIZATION | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Six | QUANTITY | 0.99+ |
two companies | QUANTITY | 0.99+ |
John Doe | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Arindam Paul, Dell EMC - Dell EMC World 2017
>> Announcer: Live from Las Vegas, it's theCUBE, covering Dell EMC World 2017, brought to you by Dell EMC. >> Welcome back here to Las Vegas, live at the Venetian, theCUBE continuing our coverage of Dell EMC World 2017, where we're extracting a signal from the noise here on theCUBE. Of course, the flagship broadcast outlet for SiliconANGLE TV. I'm John Walls. Good to have you with us here along with Keith Townsend who's the principal of CTO Advisors, and joining us now is Arindam Paul who's the senior consultant of product marketing at Dell EMC. Arindam, thanks for being with us today. >> Thank you, John. >> It's kind of like the XtremIO X2 hour right now on theCUBE. (Everyone chuckling) We just said it's great talking about the launch today. You're heavily involved with X2. Just had the first break-out session and you said you packed the house. >> Yes. >> Standing room only. So I assume it was a big hit. What were the customers, if you will, most interested in and what was your sense of where they were coming from? >> That's right, thank you. Yes, we just had our first break-out session and there was a lot of customer interest. It was primarily the customers wanted to know, obviously, what was great about X2, how would it differentiated versus X1, in terms of speed... Not only what speeds and feats, but also all the features, the software enhancements, everything that we're going to be announcing this week. >> John: So a hungry market? >> Definitely, definitely. We were, actually, to be quite honest, it was on top of the lunch hour, so we were not expecting a very full audience because obviously, we are keeping people from their lunches, but the interest belied our expectation. We were very happy and surprised. >> John: So literally a hungry market then? >> Definitely. >> Over lunch time. >> You're right (laughing). >> So, I'm going to ask a lazy question. What was the biggest question coming out of the session as people stood around and asked? >> Yeah, people loved all the hardware enhancements that we're bringing to markets. There was a lot of impromptu unsolicited clapping and cheering when we announced that our latest GUI, graphical user interface, is going to be without Java. Apparently, that was anticipated for a very long time. >> Keith: I almost clapped just now. (John laughing) >> That right, HTML5 was and we have a lot of enhancements that use graphical interface in terms of, like intuitive, very context-sensitive hints as you'd expect on your iPhone, as you're configuring and walking though the menus. We also have a lot of nice reporting, very beautiful search capabilities that's going to be there for the first time and people, apparently, just loved it. Especially from an administrative perspective. >> Any new, exciting data services that weren't available in XIO1 that's available in XIO2? >> In terms of data services, yes, obviously. Like, now we're going to be scaled up as well scaled out, so we're going to be multidimensionality scaling and then we obviously have done a lot of work in terms of tuning performance, tuning data compression, so you're going to get a lot more compression out of our platform, data reduction out of our platform. Overall, it's a lot of interest. >> When's the last time you got spontaneous applause at a presentation? (Arindam laughing) >> I'll tell you, for as skeptical and as discerning customer base as ours, it's hard to get. >> I imagine. >> You have to earn it. >> You had to feel like, "Hey, we've hit the jackpot here." >> We did, exactly. 
>> So to speak in Vegas. >> So, customer base, I've been hearing a lot about cheaper, deeper storage in XIO2. What is the target customer for XIO2? Is this only for larger enterprises or is there a play for the SMB mid-size company as well? >> We wanted to make X2 the platform of choice for our customers who are primarily interested in, say for example, copy data management. We've been an amazing copy data management machine, like if you look at our installer base today, we have about 1.5 million snapshots of XtremIO virtual copies that have been used. The vast majority of them, well 50% of them, are actually writable snapshots, so they're being used very differently than primarily dumb backup copies, or secondary copies. They are active citizens, first-class citizens, they're at par with volumes. So copy data management is obviously a big use case for us. Virtual desktops, VDIs, right? >> Before we get off into VDI, copy data management, that's a term I've heard, but some people might not have heard that term. What's copy data management and what's the impact of copy data management to an IT budget, for example? >> Oh, there's tremendous benefits, right? Copy data management, when done right, like we do on our platform, really lets your IT break the chains and it frees IT, and provides for them a lot of business agility so that they're able to make instant copies of the production database virtually at will, without any cost, even in terms of time because they're instant copies, or in terms of occupying spaces. So you could literally create clones of your data, and these clones are perfectly functional clones so you can write to them, you can read to them as if your production data, and that's an amazing capability of itself. By the way, when you're creating these copies, there's zero to no impact to your production performance. Your production performance keeps on being as it is. Now, when you layer on top of that, because of our metadata architecture, metadata delivery architecture, you can make the copies resemble production or make the production resemble the copies. So you can basically restore-refresh at will. Again, without any impact to production, without any downtime, without literally any cost whatsoever. So when you're able to do this kind of stuff right now, think about the use case in your typical tester and their production environment. Where you have one copy of production and then multiple copies for your test engineers. You'd allot your engineers all the analytics copies and all those copies can be, literally, run very close to production because it doesn't cost you hours to basically create those copies or it doesn't take terabytes of space. So it really, truly lets you add agility to your IT and basically run your business much much efficiently and fast. >> Flash storage in general always helped with VDI, seems like there's a connection between copy data, flash storage, and VDI. Am I making an assumption here? >> Well, VDI, when you think about it, is copies of desktops. It would be perfect copies if you're not trying to basically customize them. So we use a slightly different technology, in namely our inline deduplication and compression and how we integrate our inline dedupe and our in-memory metadata with VDI-specific commands such as VAAI xcopy, how you basically clone virtual desktops. So we don't use snapshots to clone the virtual desktops, instead we use something called VAAI xcopy optimized with inline metadata, but the effect is the same. 
You can literally create roll-out virtual desktops, thousands and thousands of copies of virtual desktops in a really short order and you can manage them and everything compresses and dedupes very efficiently in a very small optimal footprint. >> You've heard from your customers today, at least in a brief amount of time. What do you think is going to be the biggest benefit an X1 user is going to find with X2? At the end of the day, what do you think is going to be the "Aha!" moment for them that's really going to open their eyes as to how you've impacted their businesses. >> Certainly, certainly. So we have a lot of eager customers and I think of the features that were long-sought after by our customer base, I think they're very happy about the economics of the platform. So we have significantly reduced the dollar-per-gigabyte cost to the customer on an effective basis and it's going to be like 1/3rd of what it was in X1. I think people were literally jumping on the seats when they heard that because not only don't you have better performance, better data reduction, new data services, but hey, we just slashed the price >> Save me money. >> 66% >> Right. >> So, outside of cost savings, new data services, one of the things that I heard is data replication natively. >> Right. >> That's a big deal. Walk us through the data replication capability. >> Yes, yes. Again, if you step back, one of the things that our architecture let's us do because of, again, our metadata, our foundation architecture that's based on metadata, is that we're very, very efficient in doing copies. Whether it's VDI copies or database copies, we are a copy machine. When you think of it and step back, replication is a copy problem because you're creating yet another copy, the only difference is that now the copy is happening outside of your box, from one XtremIO to another XtremIO. So what we did was that we leveraged the same foundational architecture, our same architecture, to basically not only replicate changes but actually dedupe changes. Now if you think about a global enterprise that has maybe a multisite replication going on, like four, five, six, seven, eight, up to 16, 32 sites that are replicating to one place, now you can see the power of our architecture. So there are many advantages. One is that you're only replicating deduplicated changes. What I mean by that is if there is a block of data that's already at the target site, you won't need to replicate that again, all you need to do is copy metadata and point it across, and that gives you like 99% savings. That's one. You also change the data transfer problem into a data reduction problem because now the only data you have to put on the wire to replicate is everything after dedupe and compression, and we get about four to one. So you slash your data transfer by 75%. In a global dedupe system, when you're multiple sites replicating to one target site because of the fact that all sites are deduplicating among themselves, we expect savings to be up to 38% on average. So savings at the target site, savings on the WAN bandwidth, and much faster replication. That's our solution. >> That's why they were standing on their seats clapping for you today (Everyone laughing) >> That's true. >> Arindam, thanks for being with us. We appreciate the time. >> Thank you very much. >> Congratulations on a very successful launch and one I'm sure will be many more spontaneous rounds of applause. >> We're just getting started, thank you. >> You bet. >> Thank you, John. 
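Arindam's replication numbers can be sanity-checked with simple arithmetic: only the changed blocks that survive deduplication and compression go on the wire, and blocks already present at the target are not resent at all. The short calculation below restates his roughly 4:1 reduction claim; the 500 GB daily change rate and the 20% target-side hit rate are invented example inputs, not figures from the interview.

```python
# Back-of-the-envelope WAN math for dedupe/compression-aware replication.
# The 500 GB change rate and the 20% target-side dedupe hit are assumed
# example inputs; the ~4:1 reduction ratio restates the interview figure.

changed_gb_per_day = 500.0     # data written/changed at the source per day (assumed)
data_reduction_ratio = 4.0     # ~4:1 inline dedupe + compression (per the interview)
already_at_target = 0.20       # fraction of reduced blocks the target already holds (assumed)

reduced_gb = changed_gb_per_day / data_reduction_ratio   # 125 GB survives reduction
shipped_gb = reduced_gb * (1.0 - already_at_target)      # 100 GB actually crosses the WAN
savings = 1.0 - shipped_gb / changed_gb_per_day

print(f"shipped over WAN: {shipped_gb:.0f} GB of {changed_gb_per_day:.0f} GB changed")
print(f"effective WAN savings: {savings:.0%}")           # 80% with these inputs
```

Even with conservative inputs, the shipped volume drops to a fraction of the raw change rate, which is the basis for the roughly 75% transfer reduction quoted above, before any cross-site dedupe savings are counted.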
We continue here on theCUBE live from Dell EMC World 2017. We're in Las Vegas. Back with more in just a bit. (gentle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Keith Townsend | PERSON | 0.99+ |
Arindam | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
99% | QUANTITY | 0.99+ |
Arindam Paul | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
50% | QUANTITY | 0.99+ |
thousands | QUANTITY | 0.99+ |
75% | QUANTITY | 0.99+ |
four | QUANTITY | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Vegas | LOCATION | 0.99+ |
HTML5 | TITLE | 0.99+ |
Java | TITLE | 0.99+ |
one copy | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
six | QUANTITY | 0.99+ |
66% | QUANTITY | 0.99+ |
XIO1 | TITLE | 0.99+ |
first time | QUANTITY | 0.98+ |
SiliconANGLE TV | ORGANIZATION | 0.98+ |
XIO2 | TITLE | 0.98+ |
eight | QUANTITY | 0.97+ |
zero | QUANTITY | 0.97+ |
one place | QUANTITY | 0.97+ |
up to 38% | QUANTITY | 0.97+ |
One | QUANTITY | 0.97+ |
seven | QUANTITY | 0.97+ |
this week | DATE | 0.97+ |
one | QUANTITY | 0.97+ |
today | DATE | 0.96+ |
about 1.5 million snapshots | QUANTITY | 0.95+ |
first break-out session | QUANTITY | 0.92+ |
XtremIO | TITLE | 0.9+ |
X1 | DATE | 0.89+ |
Venetian | LOCATION | 0.85+ |
one target site | QUANTITY | 0.84+ |
up to 16, 32 sites | QUANTITY | 0.81+ |
CTO | ORGANIZATION | 0.8+ |
Dell EMC World 2017 | EVENT | 0.78+ |
1/3rd | QUANTITY | 0.76+ |
X2 | EVENT | 0.73+ |
VAAI | TITLE | 0.7+ |
terabytes of | QUANTITY | 0.69+ |
EMC World | EVENT | 0.67+ |
Dell | ORGANIZATION | 0.57+ |
2017 | TITLE | 0.56+ |
about | QUANTITY | 0.53+ |
X1 | TITLE | 0.52+ |
first | QUANTITY | 0.51+ |
X2 | TITLE | 0.48+ |