

Kim Leyenaar, Broadcom | SuperComputing 22


 

(Intro music) >> Welcome back. We're live here from SuperComputing 22 in Dallas. I'm Paul Gillin for SiliconANGLE and theCUBE, with my guest host Dave Nicholson. And our guest for this segment is Kim Leyenaar, who is a storage performance architect at Broadcom. And the topic of this conversation is networking, it's connectivity. How does that relate to the work of a storage performance architect? >> Well, that's a really good question. So yeah, I have been focused on storage performance for about 22 years. But even if we're talking about just storage, all the components have a really big impact on how quickly you can ultimately access your data. So the switches, the memory bandwidth, the expanders, just the different protocols that you're using. And a big part of it is actually Ethernet, because as you know, data's not siloed anymore. You have to be able to access it from anywhere in the world. >> Dave: So wait, you're telling me that we're just not living in a CPU-centric world now? >> (laughs) >> Because it is sort of interesting. When we talk about supercomputing and high performance computing, we're always talking about clustering systems. So how do you connect those systems? Isn't that kind of your wheelhouse? >> Kim: It really is. >> Dave: At Broadcom. >> It is Broadcom's wheelhouse. We are all about interconnectivity, and we own the interconnectivity. You know, years ago it was, 'Hey, buy this new server because we've added more cores or we've got better memory.' But now you've got all this siloed data, and we've got this software-defined kind of environment now, these composable environments where, hey, if you need more networking, just plug this in, or just go here and allocate yourself more. So what we're seeing is these silos of, 'Here's our compute, here's your networking, here's your storage.' And how do you put those all together? The answer is interconnectivity. So that's really what we specialize in, and I'm really happy to be here to talk about some of the things that we do to enable high performance computing. >> Paul: Now we're seeing a new breed of AI computers being built, with multiple GPUs and very large amounts of data being transferred between them. And the interconnect really has become a bottleneck. Is that something that Broadcom is working on alleviating? >> Kim: Absolutely. So there are a lot of different standards that we work with to define, so that we can make sure that we work everywhere. Whether you're a dentist's office that's deploying one server, or we're talking about these hyperscalers that have thousands or tens of thousands of servers, we're working on making sure that the next generation is able to outperform the previous generation. Not only that, but we found that with these siloed things, if you add more storage but that means we're going to eat up six cores using it, it's not really as useful. So Broadcom's really been focused on trying to offload the CPU. We're offloading data security, data protection; we do packet sniffing ourselves, and things like that.
So no longer do we rely on the CPU to do that kind of processing for us; we become very smart devices all on our own, so that we work very well in these kinds of environments. >> Dave: So give us an example. I know a lot of the discussion here has been around using Ethernet as the connectivity layer. >> Yes. >> You know, in the past, people would think about supercomputing as exclusively being InfiniBand based. >> (laughs) >> But give us an idea of what Broadcom is doing in the Ethernet space. What are the advantages of using Ethernet? >> Kim: So we've made two really big announcements. The first one is our Tomahawk 5 Ethernet switch; it's a 400-gig Ethernet switch. And the other thing we announced was our Thor: these are our network controllers that also support up to 400 gig each as well. So those two alone, it's amazing to me how much data we're able to transfer with those. But not only that, they're super, super intelligent controllers too. And then we realized, hey, we're managing all this data, let's go ahead and offload the CPU. So we actually adopted the RoCE standard. And that's one of the things that puts us above InfiniBand: Ethernet is ubiquitous, it's everywhere, while InfiniBand is primarily owned by just one or two companies. And it's also a lot more expensive. So Ethernet is everywhere, and now with the RoCE standard it does what you're talking about much better than its predecessors. >> Tell us about the RoCE standard. I'm not familiar with it, and I'm sure some of our listeners are not. What is the RoCE standard? >> Kim: (laughs) So it's RDMA over Converged Ethernet. I'm not a RoCE expert myself, but I am an expert on how to offload the CPU. And one of the things it does is, instead of using the CPU to transfer the data from user space over to the next server, we actually do it ourselves. We handle it ourselves. We will take it, we will move it across the wire, and we will put it in that remote computer, and we don't have to ask the CPU to do anything or get involved in that. So it's a big savings. >> Yeah, in a nutshell, because there are parts of the InfiniBand protocol that are essentially embedded in RDMA over Converged Ethernet. So... >> Right. >> So if you can leverage kind of the best of both worlds, but have it in an Ethernet environment which is already ubiquitous, it seems like it's kind of democratizing supercomputing and HPC. And I know you guys are big partners with Dell, as an example; you guys work with all sorts of other people. >> Kim: Yeah. >> But let's say somebody is going to be doing Ethernet for connectivity; you also offer switches? >> Kim: We do, actually. >> So that's another piece of the puzzle. >> That's a big piece of the puzzle. So we just released our Atlas 2 switch. It is a PCIe Gen 5 switch. And... >> Dave: What does that mean? What does Gen 5 mean? >> Oh, Gen 5 PCIe, it's kind of the magic connectivity right now. So we talk about the Sapphire Rapids release as well as the Genoa release; I know those have been talked about a lot here, I've been walking around and everybody's talking about them. Well, those enable the Gen 5 PCIe interfaces.
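For readers who want to sanity-check the PCIe generations mentioned here, the sketch below works out the theoretical per-direction bandwidth of an x16 slot for Gen 3 through Gen 5. It assumes only the published per-lane transfer rates and the 128b/130b line encoding those generations use; real controller throughput, like the figures quoted later in the segment, lands somewhat below these ceilings.

```python
# Theoretical per-direction PCIe bandwidth (a back-of-the-envelope sketch, not
# measured controller numbers). Gen 3/4/5 all use 128b/130b line encoding.
GT_PER_LANE = {"Gen3": 8.0, "Gen4": 16.0, "Gen5": 32.0}  # giga-transfers/s per lane
ENCODING_EFFICIENCY = 128 / 130

def pcie_gb_per_s(gen: str, lanes: int) -> float:
    """Approximate one-direction bandwidth in gigabytes per second."""
    gbit_per_s = GT_PER_LANE[gen] * lanes * ENCODING_EFFICIENCY
    return gbit_per_s / 8  # bits -> bytes

for gen in ("Gen3", "Gen4", "Gen5"):
    print(f"{gen} x16: ~{pcie_gb_per_s(gen, 16):.1f} GB/s")
# Gen3 x16: ~15.8 GB/s, Gen4 x16: ~31.5 GB/s, Gen5 x16: ~63.0 GB/s
```

Each generation doubles the per-lane rate, which is why the Gen 3 to Gen 5 move discussed below is a two-generation bump, and why a Gen 4 x16 controller can credibly deliver the high 20s of gigabytes per second quoted later in the conversation.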
So we've been able to double the bandwidth from Gen 4 up to Gen 5. And in order to support that, we now have our Atlas 2 PCIe Gen 5 switch. It allows you to connect, especially around here where we're talking about artificial intelligence and machine learning, a lot of these workloads that are relying on the GPU and the DPU that you see a lot of people talking about enabling. So by putting these switches in the servers, you can connect multitudes of not only NVMe devices but also these GPUs and these CPUs. Besides that, we also have the storage component of it too. So to support that, we just recently released our 9500 series HBAs, which support 24G SAS. And this is kind of a big deal for some of our hyperscalers that say, 'Hey, look, our next generation, we're putting a hundred hard drives in.' A lot of it is maybe for cold storage, but by giving them that 24-gig bandwidth, and by having these massive 24G SAS expanders, that allows these hyperscalers to build up their systems. >> Paul: And how are you supporting the HPC community at large? And what are you doing that's exclusively for supercomputing? >> Kim: Exclusively for? So we're doing the interconnectivity, really, for them. You can have as much compute power as you want, but these are very data-hungry applications, and a lot of that data is not sitting right in the box. A lot of that data is sitting in some other country, or in some other city, or just in the box next door. So you need to be able to move that data around. There's a concept where they say, do the compute where the data is, and then the other way is to move the data around, which is sometimes a lot easier. So we're allowing you to move that data around. For that we have our Tomahawk switches, we've got our Thor NICs, and of course we've got the really wide pipe: our new 9500 series HBAs and RAID controllers are doing 28 gigabytes a second through the one controller, and that's on protected data. So we can actually have the high availability of protected data, RAID 5 or RAID 6 or RAID 10, in the box, giving 27 gigabytes a second. And the latency that we're seeing off of this is unheard of too. We have a write cache latency that is sub 8 microseconds, which is lower than most of the NVMe drives that are available today. So we're able to support these applications that require really low latency as well as data protection. >> Dave: So often when we talk about the underlying hardware, it's a game of whack-a-mole, chase the bottleneck. And you've mentioned PCIe Gen 5; a lot of folks who will be implementing Gen 5 PCIe are coming off of Gen 3, not even Gen 4. >> Kim: I know. >> So they're not just getting a last-generation-to-this-generation bump, they're getting a two-generation bump. >> Kim: They are. >> How does that work? Is it the case that it would never make sense to use a next-gen or a current-gen card in an older-generation bus, because of the mismatch in performance? Are these things all designed to work together? >> Uh... that's a really tough question. I want to say no, it doesn't make sense; it really makes sense just to kind of move things forward and buy a card that's made for the bus it's in.
However, that's not always the case. For instance, our 9500 controller is Gen 4 PCIe, but what we did is double the PCIe width, so it's a x16. Even though it's Gen 4, it's a x16, so we're getting really, really good bandwidth out of it. As I said before, we're getting 27.8, almost 28 gigabytes a second of bandwidth out of that by doubling the PCIe bus. >> Dave: But they work together? It all works together? >> It all works together. You can put our Gen 4 and a Gen 5 together all day long and they work beautifully. Yeah, we do work to validate that. >> We're almost out of time, but I want to ask you a more nuts-and-bolts question about storage. We've heard for years that the areal density limit of the hard disk has been reached and there's really no way to make the disk any denser. What does the future of the hard disk look like as a storage medium? >> Kim: Multi-actuator, actually; we're seeing a lot of multi-actuator. I was surprised to see it come across my desk, because our 9500 actually does support multi-actuator. And it was really neat. I've been working with hard drives for 22 years, and I remember when they could do 30 megabytes a second, and that was amazing. That was like, wow, 30 megabytes a second. Then about 15 years ago they hit around 200 to 250 megabytes a second, and they've stayed there. They haven't gone anywhere. What they have done is increase the density, so that you can have more storage. You can easily go out and buy a 15 to 30 terabyte drive, but you're not going to get any more performance. So what they've done is add multiple actuators. Each one of these can do its own streaming, and each one of these can actually do its own seeking. So you can get two and four, and I've even seen talk about eight actuators per disk. I think that's still theory, but they could implement those. So that's one of the things that we're seeing. >> Paul: Old technology somehow finds a way to remain current. >> It does. >> It does, even in the face of new alternatives. Kim Leyenaar, Storage Performance Architect at Broadcom, thanks so much for being here with us today. >> Thank you so much for having me. >> This is Paul Gillin with Dave Nicholson here at SuperComputing 22. We'll be right back. (Outro music)
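To put the multi-actuator and hundred-drive-chassis comments above together, here is a small sketch of the scaling arithmetic. The per-actuator rate and controller throughput are the ballpark figures quoted in the interview, used purely as assumptions; real drives, expanders, and RAID overheads will change the numbers.

```python
# Sketch: how many streaming hard drives it takes to fill one high-end controller,
# using the ballpark figures from the interview as assumptions (not vendor specs).
PER_ACTUATOR_MB_S = 250      # rough sequential plateau of a single actuator
CONTROLLER_GB_S = 28         # rough throughput of one modern HBA/RAID controller

def drives_to_saturate(actuators_per_drive: int) -> float:
    """Ideal drive count, ignoring host, expander, and RAID overheads."""
    drive_mb_s = actuators_per_drive * PER_ACTUATOR_MB_S
    return (CONTROLLER_GB_S * 1000) / drive_mb_s

for actuators in (1, 2, 4):
    print(f"{actuators}-actuator drives: ~{drives_to_saturate(actuators):.0f} "
          f"drives to reach ~{CONTROLLER_GB_S} GB/s")
# ~112 single-actuator, ~56 dual-actuator, ~28 quad-actuator drives
```

Under these assumptions, a hundred single-actuator drives cannot quite saturate one such controller, while a hundred dual-actuator drives comfortably can, which is the practical reason multi-actuator support matters for the dense chassis described above.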

Published Date : Nov 16 2022


Chance Bingen, NetApp & Jason Massae, VMware | VMware Explore 2022


 

(upbeat music) >> Hey everyone. Welcome back to San Francisco, VMware Explore 2022, Lisa Martin and Dave Nicholson here. We've been having some great conversations today, and lots of news coming out about VMware and its partner ecosystem. We're going to have another conversation about that next. Please welcome two guests to the program: Chance Bingen, technical marketing engineer at NetApp, and Jason Massae, staff technical marketing architect, storage and vVols, at VMware. Guys, welcome to the program. >> Thanks. >> Glad to be here. >> It's nice to be back in person. >> It is. It's very nice. Oh my gosh. >> And we're hearing there are about 7,000 to 10,000 people here. When I was in the keynote this morning, it was definitely standing room only. >> Yeah, yeah. You've definitely seen the numbers tick up at the last minute, and it was good to see that. I think a lot of people have really wanted to get back, get that one-on-one, that face to face. There's nothing like being able to talk to the experts, talk to the vendors, see your comrades. I mean, we've seen people that I haven't seen for years, even on my own team, so it's really good to be back into it. >> It is, and there was lots of news coming out this morning during the keynote. My goodness. But Jason, talk to me: the NetApp and VMware folks have been in tight partnership for a long time. Talk to me about, and get both of your perspectives on, the depth of the partnership from a technical perspective. >> Yeah, so actually NetApp was one of the original design partners for vVols. And now, with some of the more current stuff we're doing with virtual volumes, NetApp is back, and we've got some pretty neat stuff that we've been working on with vVols. And NetApp's got some pretty neat stuff that they've been working on to enable customers with more features, more functionality, with the virtual volume functionality. >> Yeah, absolutely. >> Give us a quick primer. What is a vVol? What is a virtual volume? How does it fit into this stack of stuff that we do in IT? >> Yeah. So the easiest way to think of what a vVol, or virtual volume, is: you can think of it kind of like an RDM, those raw device mappings, which are kind of a four-letter word; we don't really like those. But the idea is that object, that virtual volume, is native on the array and presented directly to the VM. Now what we do is present all of the storage array features up to vSphere, and we manage those storage features via policy-based management. But instead of applying storage capabilities at a data store level, we're now applying them at a VM or an application level. So you can have one data store and multiple VMs, and every VM can have a different storage capability, managed by a policy that the VI admin gets to manage now. So he doesn't have to go to the storage admin to say, I need a new LUN, or I need a new volume. He can just go in and create a policy or change a policy, and now that storage capability is applied to the VM or the application. >> Yeah. One thing I'd like to add to that: you mentioned the word capabilities.
So we look at the actual data protocols, whether they're file based or block based, you know, iSCSI, Fibre Channel, whatever the case might be. Those protocols have defined sets of capabilities and attributes and things they can expose. What vVols, along with the VASA protocol, brings to the table is the ability to expose things that are just impossible to expose via the data protocols themselves. So, the actual nature of the array: what kind of array is it? What's it capable of doing? What is the nature of encryption? Is this going to be a secure, encrypted data store? Is it going to be something else? It just allows you to do so much more with the advanced capabilities that modern storage arrays have than you could ever do if you were just using the data protocols by themselves. >> Right, yeah. Under that same context, if you think about it, before, with traditional storage, vSphere or the array really doesn't understand what's going on in the underlying storage. But with vVols, the array and vSphere completely understand, at a disk level even, how that VM should be treated. So that helps the storage admin. The storage admin can now go in and see a specific disk of a VM and see the performance on the array. They can go in the array and see, oh, this disk on this VM has got performance issues, or needs to be encrypted, or here's the size of that disk. And you couldn't easily see that with your traditional storage. So there are really a lot of benefits, and it frees up a lot of time for the storage administrator, and it enables the VI admin to do a lot of the storage management. >> So there have been a lot of movements over the last decade in the realm of software defined storage, where essentially all of the things that you are talking about are completely abstracted from the underlying hardware. In this case, you're leveraging the horsepower, if you will, and the intelligence of a storage array that has a lot of horsepower and intelligence, and you're accessing those features. You mentioned encryption, whether you're doing a snapshot or something like that. What's interesting here is it kind of maps to what we're looking at now, which is the trend in the direction of things like DPUs.
If you go back in history long enough, we had, you know, the TOE NIC, TCP offload: the idea of, hey, what if we had a smart device with its own brain power and we leveraged it. Well, you guys have been doing that, from a vVols perspective, with NetApp filers, for lack of a better term. For how long now? When were they originally... >> 6.0 it was, so it's been what, 11, 12 years? Something like that. >> It's been a while. So yeah, it's been a decade or so. >> So what's on the frontier? What's the latest there, in terms of cool stuff that's coming out? >> So actually, today, one of the things that we worked on with NetApp, as part of the design partnership: the NVMe over Fabrics protocol has become very popular for extending that all-flash functionality to an external array. And we announced today that, with NVMe over Fabrics, you can now do vVols over NVMe over Fabrics. And again, that was something we worked on with NetApp as a design partner. >> That's right. We're very excited about it. NVMe has been something we've been very proud of for a while, delivering the first end-to-end NVMe stack from inside the host, through the fabric, to the array, with the array's front ports, all the way to the disk on the back end. So we're very excited about that. >> So, target market: joint NetApp, VMware customers, I presume.
>> Really, the key here that I like to make sure customers understand is that vVols are on the leading edge of VMware's storage design. Some tend to think that maybe vVols wasn't the primary focus, but actually now it is the primary focus. Now, I always like to give the caveat that VMFS and NFS are not going away; those are still very much things that we work on. It's just that most of the engineering focus is on virtual volumes, or vVols. >> Yeah. Similarly, you're sort of alluding to vSAN when we start talking about VMFS and things like that. >> Yeah. >> Architecturally, we've been talking to folks about the recent announcements with capabilities within AWS, you know, NetApp in AWS for VMware environments: breaking out of the stranglehold that the 'oh, you want more storage, you must buy more CPU and memory' building-block process entails. The reality is, no matter what you do with vSAN, you're going to have certain constraints that go away when you have the option to leverage storage from the NetApp filers. >> Yeah, absolutely. >> So how do vVols play in the cloud strategy moving forward? >> Well, what we do with vVols currently is mostly on-prem. But when you have the storage architecture that vVols gives you, as far as individual objects, it makes it much easier to migrate up into the cloud, because you're not trying to migrate individual VMs that live on some other type of system; those objects are already their own entity. Right, so cloud, Tanzu, those types of things: those vVol objects are already their own entity, so it makes it very easy to migrate them on and off prem. >> So Chance, talk to us a little bit about this from NetApp's perspective. You're in customer conversations; who are you talking to? Is this primarily an engineering conversation? Has this gone up the stack, in terms of customers finding themselves in this default multi-cloud environment? >> Yeah, so interestingly, when I talk to customers these days, they are almost all either on a journey to a hybrid multi-cloud, or they're in some phase of transforming themselves into their own hyperscaler, right? They're adopting a cloud service provider model, and vVols is a perfect fit for that kind of model, because you have the ability to offer different tiers of service, different qualities of service, with VM-granular controls, or even VMDK-granular controls. And if you look at First Class Disks, which is something that came out largely to support Tanzu, I think, and which is a fantastic use case for vVols as well, that gives you the ability to offer something like Amazon EBS. You can offer an EBS-like service in a native VMware stack using First Class Disks and vVols, and you're able to apply things like quality of service with that granular control, which allows you to guarantee the customer the disk that they bought and paid for. They're going to get the IOPS that they're paying for, because you're applying those QoS policies directly to that object on the array. And instead of having to worry about whether the array is going to be able to handle it, or whether one VM is going to consume all your IO, you don't have to worry about that with vVols, because you've got that integration with the array's native quality controls. >> And Chance, what's in this for me as a customer? I'm hearing productivity, I'm hearing cost savings, control, efficiency. Talk to me about the benefits in it for the folks that you're talking to. >> Yeah, absolutely. A lot of times it comes down to, as I mentioned, the cloud service provider model, right? When you're looking to build a robust service catalog and you want to be able to meet all of these needs, like we mentioned Tanzu, right? Containers as a service: you're able to provide the persistent volumes for your Kubernetes containers that are, again, these native objects on the array, and you have these fine-grained controls, but it's handled at massive scale because it's all handled by storage policies, by Kubernetes storage classes, which are natively mapped to VM storage policies through Tanzu. So it gives you the ability to offer all of these services in, again, a rich and robust content catalog.
>> So what are you doing? You mentioned a couple of things in terms of using array-based quality of service. Give me an example of how you're avoiding issues of contention and oversubscription in an environment where I'm an administrator and I've got this virtual volume that's servicing this VM, or this app on this VM. What kind of visibility do I have down into the actual resources? Because look, at the end of that chain there's a physical resource, and that physical resource represents, what, IOPS and bandwidth and latency and throughput, all of this bundle of things. So how do you avoid colliding with others who are trying to carve vVols out of this world? >> You mean like a noisy neighbor type of thing? >> Yeah. Yeah. >> So that's actually one of the big benefits that you get with vVols: because those vVol objects are native on the array, they're not sharing a LUN or a volume, they're not sharing a resource. The only resource they're actually sharing is the array itself. So you don't get that typical noisy neighbor where this one's using all the resources of that volume, because really you're looking out at the all-encompassing array. And so a storage administrator and the VI admin have a lot more insight. The VI admin can now go to the storage admin if there's, say, a debugging issue, if they want to find a problem. The storage admin can see those individual objects and say, oh, well, on this VM it's not really all the disks, it's just disk number two or disk number three. They can actually see, at a single-disk level on the array, the performance, the latency, the QoS, all that stuff. >> Oh, absolutely. >> And that really frees up the storage admin's time, because the debugging is so much simpler. And it also allows the storage admin a lot more insight, right? If you were typically looking at a LUN or a volume, they don't really know what's going on inside that, and neither does the array. But with vVols, the array knows each disk and how it's supposed to be treated, based on the policies that the customer defines. So if one VM is supposed to have a certain QoS and another VM isn't, the array knows that, and if that VM goes above it, it's going to be like, nope, you can't have those resources; you weren't granted those resources, but this one was. So you have much more control, and again, it's at an application or a VM level. >> And it's still fairly dynamically configurable. I spoke to a customer just the other day; they are a cloud service provider, and what they do is, their customers are able to go in and change their quality of service. So they go into that service portal and they say, okay, I'm paying for gold and I want platinum, and they'll go in. They know that they've got a certain time where they need more IO capacity, so they'll go in, they'll pay the fee, increase that capability, and then when they don't need it anymore, they'll downgrade again. >> Okay, so that assumes some ability at the array level to do some sort of resource sharing and balancing, to be able to go out and get, say, more IO. Because again, fundamentally, if you have a virtual volume that's drawing its resources from five storage devices, whether those are SSD based or NVMe or spinning disk, that represents a finite amount of resource. The assumption is, if you're saying that the array is the pool that you need to worry about, that assumes the array has the ability to go beyond here, based on a policy. >> So that's how it works. It does... >> Well, essentially. I mean, you can't outrun physics. If the array can't go faster, it can't go faster; but the idea is that you understand the performance profile of your array and then you create your service tiers appropriately. >> Okay. >> Yeah. And one of the big benefits is, like Chance was saying, if you want to change a profile, that used to be a Storage vMotion to a different data store. Now it's just a policy change. The storage admin doesn't have to do anything; the VI admin just changes the policy, and then the array understands, oh, I now need to treat that differently. And that's exactly what Chance was talking about in that cloud provider situation, where today I'm using 100,000 IOPS, I need to use 200,000 tomorrow for something special, whatever it is, but I only need it for tomorrow. So they don't have to move anything. They just change the policy for that time, and then they change it back. They don't have to do anything on the array itself, they don't have to change anything physically on the VM; it's just a policy change. And that's really where you get that dynamic control of the storage capability. >> So as business dynamics are changing, and I'm thinking of Black Friday or Prime Day, being able to dial things up and dial them down, they have the ability to do that with a policy. >> Yes. >> Exactly. >> So huge time savings there. >> Oh, it's huge. Yeah. >> Yeah. >> And it simplifies things, because now I don't have to have multiple data stores. You can have one data store with all your VMs in there; you can limit test and dev and you can maximize business-critical applications, again, all via policy. So you've simplified your infrastructure, you've gone to more of a programmatic approach of managing your storage capabilities, but you're now managing at the VM level. >> So we mentioned the cloud chaos that was talked about this morning during the keynote, and a lot of customers are still in this cloud chaos phase; they want to get to Cloud Smart. How is this going to be one of those tools that helps customers pull the levers, dial the knobs, to eventually get to Cloud Smart? >> I could go on about this for hours. (Lisa laughs) (Chance chuckles) This is really what simplifies storage. Because typically, when you use traditional storage, you have to figure out that this data store has this capability. Or another example, as you mentioned, was Tanzu: if you're managing persistent volumes and you're not using something like vVols, if you want to get a certain storage capability, you have to either tag it or you have to create that data store with that capability.
All of that goes away when you use vVols. So now that chaos of multiple data stores, multiple LUNs or multiple volumes, all that stuff goes away. You're simplifying your infrastructure, you have a programmatic approach to managing your storage, and you can use it for all of your different types of workloads: cloud, Kubernetes, persistent volumes, all that type of stuff. And again, it's all being managed via a simple, programmatic approach. So you could automate this. Like you said, Black Friday: okay, Black Friday's coming up, I want to change the policy. You could automate that, so you don't even have to go in and physically make the policy change. You just say, on Fridays change it to this policy, on Sunday night change it back. >> Yep. >> Again, that's not something you can do with traditional storage. >> Okay. >> And I think from a simplification standpoint as well, I was telling you about that other customer a couple of days ago: they were running into the inability to grow beyond the bounds of VMFS file systems for very, very large VMs. And what I talked to them about was, look, if you go to vVols, you're not bound by file systems anymore. You have the capacity of the array, and you can have VM disks up to 62 terabytes, as many as you want, and it doesn't matter what they fit in, because we can fit them all. And for some of our largest customers, the reason they go with vVols is to be able to grow beyond the bounds of traditional storage, anything like path limits, you know; that's something you have to contend with. >> Path limits, LUN limits, all that stuff typically just disappears with vVols. >> All those limits go away. Guys... >> They go away. >> Amazing. Congratulations on the work that you guys have done. Thank you so much for joining us on theCUBE, talking about the value in it for customers and obviously the technical depths of the NetApp-VMware relationship. Guys, we appreciate your time. >> Yeah, thanks for having us on. >> Our pleasure. For my guests and Dave Nicholson, I'm Lisa Martin. You're watching theCUBE live from VMware Explore 2022. Dave and I will be right back with our next guest, so stick around. (upbeat music)
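Since the exchange above keeps contrasting per-policy management with per-datastore management, a minimal sketch may help readers picture how that surfaces to a Kubernetes user on vSphere: a StorageClass that points the vSphere CSI driver at a named VM storage policy, so each persistent volume picks up the policy rather than a hand-picked datastore. The class and policy names below are hypothetical placeholders, and this is an illustration of the model described in the conversation, not a configuration endorsed by either vendor.

```python
# Illustrative only: build a Kubernetes StorageClass that delegates placement to a
# named SPBM storage policy. "gold-vvol-policy" is a hypothetical policy name that a
# VI admin would have created in vCenter.
import yaml  # PyYAML

storage_class = {
    "apiVersion": "storage.k8s.io/v1",
    "kind": "StorageClass",
    "metadata": {"name": "gold-vvols"},           # hypothetical class name
    "provisioner": "csi.vsphere.vmware.com",       # vSphere CSI driver
    "parameters": {
        "storagepolicyname": "gold-vvol-policy",   # SPBM policy applied per volume
    },
}

print(yaml.safe_dump(storage_class, sort_keys=False))
```

A PersistentVolumeClaim that names storageClassName: gold-vvols would then get a volume carrying that policy, and moving a workload to a different service tier becomes a policy edit rather than a Storage vMotion, which is the point Massae makes above.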

Published Date : Aug 31 2022


Vaughn Stewart, Pure Storage | VMware Explore 2022


 

>>Hey everyone. It's the cube live at VMware Explorer, 2022. We're at Mascone center and lovely, beautiful San Francisco. Dave Volante is with me, Lisa Martin. Beautiful weather here today. >>It is beautiful. I couldn't have missed this one because you know, the orange and the pure and VA right. Are history together. I had a, I had a switch sets. You >>Did. You were gonna have FOMO without a guest. Who's back. One of our longtime alumni V Stewart, VP of global technology alliances partners at pure storage one. It's great to have you back on the program, seeing you in 3d >>It's. It's so great to be here and we get a guest interviewer. So this >>Is >>Fantastic. Fly by. Fantastic. >>So talk to us, what's going on at pure. It's been a while since we had a chance to talk, >>Right. Well, well, besides the fact that it's great to see in person and to be back at a conference and see all of our customers, partners and prospects, you know, pure storage has just been on a tear just for your audience. Many, those who don't follow pure, right? We finished our last year with our Q4 being 41% year over year growth. And in the year, just under 2.2 billion, and then we come outta the gates this year, close our Q1 at 50% year over year, quarter quarterly growth. Have you ever seen a storage company or an infrastructure partner at 2 billion grow at that rate? >>Well, the thing was, was striking was that the acceleration of growth, because, you know, I mean, COVID, there were supply chain issues and you know, you saw that. And then, and we've seen this before at cloud companies, we see actually AWS as accelerated growth. So this is my premise here is you guys are actually becoming a cloud-like company building on top of, of infrastructure going from on-prem to cloud. But we're gonna talk about that. >>This is very much that super cloud premise. Well, >>It is. And, and, but I think it's it's one of the characteristics is you can actually, it, you know, we used to see companies, they go, they'd come out of escape velocity, and then they'd they'd growth would slow. I used to be at IDC. We'd see it. We'd see it. Okay. Down then it'd be single digits. You guys are seeing the opposite. >>It's it's not just our bookings. And by the way, I would be remiss if I didn't remind your audience that our second quarter earnings call is tomorrow. So we'll see how this philosophy and momentum keeps going. See, right. But besides the growth, right? All the external metrics around our business are increasing as well. So our net promoter score increased right at 85.2. We are the gold standard, not just in storage in infrastructure period. Like there's no one close to us, >>85. I mean, that's like, that's a, like apple, >>It's higher than apple than apple. It's apple higher than Tesla. It's higher than AWS shopping. And if you look in like our review of our products, flash rate is the leader in the gardener magic quadrant for, for storage array. It's been there for eight years. Port works is the leader in the GIGO OME radar for native Kubernetes storage three years in a row. Like just, it's great to be at a company that's hitting on all cylinders. You know, particularly at a time that's just got so much change going on in our >>Industry. Yeah. Tremendous amount of change. Talk about the, the VMware partnership from a momentum of velocity perspective what's going on there. And some of the things that you're accelerating. >>Absolutely. So VMware is, is the, the oldest or the longest tenured technology partner that we've had. 
I'm about to start my 10th year at pure storage. It feels like it was yesterday. When I joined, they were a, an Alliance partner before I joined. And so not to make that about me, but that's just like we built some of the key aspects around our first product, the flash array with VMware workloads in mind. And so we are a, a co-development partner. We've worked with them on a number of projects over years of, of late things that are top of mind is like the evolution of vials, the NV support for NVMe over fabric storage, more recently SRM support for automating Dr. With Viv a deployments, you know, and, and, and then our work around VMware ex extends to not just with VMware, they're really the catalyst for a lot of three way partnerships. So partnerships into our investments in data protection partners. Well, you gotta support V ADP for backing up the VMware space, our partnership within Nvidia, well, you gotta support NVA. I, so they can accelerate bringing those technologies into the enterprise. And so it's it, it's not just a, a, a, you know, unilateral partnership. It's a bidirectional piece because for a lot of customers, VMware's kind of like a touchpoint for managing the infrastructure. >>So how is that changing? Because you you've mentioned, you know, all the, the, the previous days, it was like, okay, let's get, make storage work. Let's do the integration. Let's do the hard work. It was kind of a race for the engineering teams to get there. All the storage companies would compete. And it was actually really good for the industry. Yeah, yeah. Right. Because it, it went from, you know, really complex, to much, much simpler. And now with the port works acquisition, it brings you closer to the whole DevOps scene. And you're seeing now VMware it's with its multi-cloud initiatives, really focusing on, you know, the applications and that, and that layer. So how does that dynamic evolve in terms of the partnership and, and where the focus is? >>So there's always in the last decade or so, right. There's always been some amount of overlap or competing with your partnerships, right. Something in their portfolios they're expanding maybe, or you expand you encroach on them. I think, I think two parts to how I would want to answer your question. The retrospective look V VMware is our number one ISV from a, a partner that we, we turn transactions with. The booking's growth that I shared with you, you could almost say is a direct reflection of how we're growing within that, that VMware marketplace. We are bringing a platform that I think customers feel services their workloads well today and gives them the flexibility of what might come in their cloud tomorrow. So you look at programs like our evergreen one subscription model, where you can deploy a consumption based subscription model. So very cloud-like only pay for what you use on-prem and turn that dial as you need to dial it into a, a cloud or, or multiple clouds. >>That's just one example. Looking forward, look, port works is probably the platform that VMware should have bought because when you look at today's story, right, when kit Culbert shared a, a cross cloud services, right, it was, it was the modern version of what VMware used to say, which was, here's a software defined data center. We're gonna standardize all your dissimilar hardware, another saying software defined management to standardize all your dissimilar clouds. We do that for Kubernetes. 
We talk about accelerating customers' adoption of Kubernetes by, by allowing developers, just to turn on an enable features, be its security, backup high availability, but we don't do it mono in a, you know, in a, in a homogeneous environment, we allow customers to do it heterogeneously so I can deploy VMware Tansu and connect it to Amazon EKS. I can switch one of those over to red head OpenShift, non disruptively, if I need to. >>Right? So as customers are going on this journey, particularly the enterprise customers, and they're not sure where they're going, we're giving them a platform that standardizes where they want to go. On-prem in the cloud and anywhere in between. And what's really interesting is our latest feature within the port works portfolio is called port works data services, and allows customers to deploy databases on demand. Like, install it, download the binaries. You have a cus there, you got a database, you got a database. You want Cassandra, you want Mongo, right? Yeah. You know, and, and for a lot of enterprise customers, who've kind of not, not know where to don't know where to start with port works. We found that to be a great place where they're like, I have this need side of my infrastructure. You can help me reduce cost time. Right. And deliver databases to teams. And that's how they kick off their Tansu journey. For example. >>It's interesting. So port works was the enabler you mentioned maybe VMware should above. Of course they had to get the value out of, out of pivotal. >>Understood. >>So, okay. Okay. So that, so how subsequent to the port works acquisition, how has it changed the way that you guys think about storage and how your customers are actually deploying and managing storage? >>Sure. So you touched base earlier on what was really great about the cloud and VMware was this evolution of simplifying storage technologies, usually operational functions, right? Making things simpler, more API driven, right. So they could be automated. I think what we're seeing customers do to today is first off, there's a tremendous rise in everyone wanting to do every customer, not every customer, a large portion of the customer bases, wanting to acquire technology on as OPEX. And it, I think it's really driven by like eliminate technical debt. I sign a short term agreement, our short, our shortest commitment's nine months. If we don't deliver around what we say, you walk away from us in nine months. Like you, you couldn't do that historically. Furthermore, I think customers are looking for the flexibility for our subscriptions, you know, more from between on-prem and cloud, as I shared earlier, is, is been a, a, a big driver in that space. >>And, and lastly, I would, would probably touch on our environmental and sustainability efforts. You saw this morning, Ragu in the keynote touch on what was it? Zero carbon consumption initiative, or ZCI my apologies to the veer folks. If I missed VO, you know, we've had, we've had sustainability into our products since day one. I don't know if you saw our inaugural ESG report that came out about 60 days ago, but the bottom line is, is, is our portfolio reduces the, the power directly consumed by storage race by up to 80%. And another aspect to look at is that 97% of all of the products that we sold in the last six years are still in the market today. They're not being put into, you know, into, to recycle bins and whatnot, pure storage's goal by the end of this decade is to further drive the efficiency of our platforms by another 66%. 
And so, you know, it's an ambitious goal, but we believe it's >>Important. Yeah. I was at HQ earlier this month, so I actually did see it. So, >>Yeah. And where is sustainability from a differentiation perspective, but also from a customer requirements perspective, I'm talking to a lot of customers that are putting that requirement when they're doing RFPs and whatnot on the vendors. >>I think we would like to all, and this is a free form VO comment here. So my apologies, but I think we'd all like to, to believe that we can reduce the energy consumption in the planet through these efforts. And in some ways maybe we can, what I fear in the technology space that I think we've all and, and many of your viewers have seen is there's always more tomorrow, right? There's more apps, more vendors, more offerings, more, more, more data to store. And so I think it's really just an imperative is you've gotta continue to be able to provide more services or store more data in this in yesterday's footprint tomorrow. A and part of the way they get to is through a sustainability effort, whether it's in chip design, you know, storage technologies, et cetera. And, and unfortunately it's, it's, it's something that organizations need to adopt today. And, and we've had a number of wins where customers have said, I thought I had to evacuate this data center. Your technology comes in and now it buys me more years of time in this in infrastructure. And so it can be very strategic to a lot of vendors who think their only option is like data center evacuation. >>So I don't want to, I, I don't wanna set you up, but I do want to have the super cloud conversation. And so let's go, and you, can you, you been around a long time, your, your technical, or you're more technical than I am, so we can at least sort of try to figure it out together when I first saw you guys. I think Lisa, so you and I were at, was it, when did you announce a block storage for AWS? The, was that 2019 >>Cloud block store? I believe block four years >>Ago. Okay. So 20 18, 20 18, 20 18. Okay. So we were there at, at accelerate at accelerate and I said, oh, that's interesting. So basically if I, if I go back there, it was, it was a hybrid model. You, you connecting your on-prem, you were, you were using, I think, priority E C two, you know, infrastructure to get high performance and connecting the two. And it was a singular experience yeah. Between on-prem and AWS in a pure customer saw pure. Right. Okay. So that was the first time I started to think about Supercloud. I mean, I think thought about it in different forms years ago, but that was the first actual instantiation. So my, my I'm interested in how that's evolved, how it's evolving, how it's going across clouds. Can you talk just conceptually about how that architecture is, is morphing? >>Sure. I just to set the expectations appropriately, right? We've got, we've got a lot of engineering work that that's going on right now. There's a bunch of stuff that I would love to share with you that I feel is right around the corner. And so hopefully we'll get across the line where we're at today, where we're at today. So the connective DNA of, of flash array, OnPrem cloud block store in the cloud, we can set up for, for, you know, what we call active. Dr. So, so again, customers are looking at these arrays is a, is a, is a pair that allows workloads to be put into the, put into the cloud or, or transferred between the cloud. That's kind of like your basic building, you know, blocking tackling 1 0 1. 
Like, what do I do for DR, for example, right? Or give me an easy button to evacuate a data center. Where we've seen a lot of growth is around Cloud Block Store, and Cloud Block Store really was released as a software version of our hardware array on-prem. It hasn't been making the news, but it's been continually evolving. >> And so today the way you would look at Cloud Block Store is really bringing enterprise data services to the likes of EBS for AWS customers, or Azure premium disk for Azure users. And what do I mean by enterprise data services? It's the way that large-scale applications are managed on-prem, not just their performance and their availability considerations. How do I stage the development team, the sandbox team, before they patch? What's my cyber protection, not just data protection, how am I protected from a cyber hack? We bring all those capabilities to those storage platforms. And the best result is that our data reduction technologies, which were critical in reducing the cost of flash 10 years ago, reduce the cost of the cloud by 50% or more, and that more than pays for the Cloud Block Store software that enables these enterprise data services and gives you all these rapid capabilities like instant database clones and instant recovery from a cyber attack, things of that nature. >> We heard today about cloud chaos. Customers can run on Azure, they can run on AWS, fine. Are customers saying, hey, we want to connect those islands? Are you hearing that from customers, or is it still too early? >> I think it's still too early. It doesn't mean we don't have customers who are very much in the camp of, let me buy some software that will monitor the price of my cloud and I might move stuff around. But there's also a cost to moving, right? The egress charges can add up, particularly if you're at scale. So I don't know how much I've seen, even through the cloud days, the notion of workloads actually moving. Kind of in the early days we thought there might be a follow-the-moon computing surge, you know, have your workload run where power costs are lower. We didn't really see that come to fruition. So I think there is a desire for customers to have standardization, because they gain the benefits of that from an operational perspective, whether or not they put that in motion to move workloads back and forth. >> So let's say that's to be determined. Let's say they don't move them because, to your point, it's too expensive. But you just touched on it, I think: they do want some kind of standard in terms of the workflow. Yep. You're saying you're starting to see demand. >> Standard operating practices. Okay. >> Yeah, SOPs. And if they're big into Pure, why wouldn't they want that, assuming they have multiple clouds, which a lot of customers do. >> I'll share one thing with you, going back to basic primitives, and I touched on it a minute ago with data reduction. You have customers look at their storage bills in the cloud and say, we're going to reduce that by half or more. You have a conversation. >> Because they can bring your stack, yeah, into the cloud.
And it's got more maturity than what you'd find from a cloud company, a cloud vendor. >> Yeah. Data reduction just isn't part of block storage today in the cloud, so we've got an advantage there that we bring to bear. >> So here we are at VMware Explore, the first one of this name, and I love the theme, the center of the multi-cloud universe. Doesn't that sound like a Marvel movie? I feel like there should be superheroes walking around here at some point. >> We've got Mr. Fantastic right here. We do. >> Gone for, I dunno. >> It is. But a lot of news this morning in the keynote. You were in the keynote; what are some of the things that you're hearing from VMware, and what excites you about this continued evolution of the partnership with Pure? >> Yeah, great point. I think I touched on the two things that really caught my attention. Obviously we've got a lot of investment in vRealize, now rebranded as Aria, and I think we're really eager to see if we can help drive that consumption a bit higher, because we believe that plays into our favor as a vendor. We have over a hundred templates for the Aria platform right now, automation templates, whether it's level-set your platform, automatically move workloads, or deploy on demand. So again, I think the focus there is very exciting for us. Obviously, when they've got a new release like vSphere 8, that's going to drive a lot of channel behaviors. We're a hundred percent channel company, so we've got to go get our channel ready, because about half of vSphere updates come with a hardware refresh. So we've got to be prepared for that. So some of the excitement is just about finding more points in the market to do more business together. >> All right, exciting. Covered a lot of ground. Okay, you guys announce earnings tomorrow, so obviously quiet period, and of course you're not going to divulge that anyway, but we'll be looking for that. What other catalysts are out there that we should be paying attention to? We've got re:Invent coming up in November, and you guys are obviously going to be there in a big way. Accelerate was back this year. How was Accelerate? >> Accelerate was in Los Angeles this year. We had great weather, it was a phenomenal venue, a great event, with a great partner event to kick it off. We happened to share the facility with the president and a bunch of international delegates, so that did make for a little bit of logistical security. >> It was like the Summit of the Americas, I believe I'm recalling that correctly, but it was fantastic. You get to bring the customers out, you get to put a bunch of the engineers on display for the products that we're building. Two of the highlights there were, first, we announced our new FlashBlade//S, a higher-performing, more scalable version of our scale-out object and file platform. With that, we also announced the next generation of our AIRI, our AI-ready infrastructure with NVIDIA. So think of it like converged infrastructure for AI workloads. We're seeing tremendous growth in that unstructured space. Pure was founded around block storage, a lot of it around virtual machines, but the data growth is in unstructured, right?
We're seeing just tons of machine learning opportunities, whether we're looking at health and life sciences, genome sequencing, medical imaging. We're seeing a lot of velocity in the federal space, things I can't talk about, and a lot of velocity in the automotive space. And so from a completeness-of-platform perspective, FlashBlade is really addressing a need, really changing the market from NAS as tier-two storage, or object as tier three, to both as tier-one performance candidates. And now you see applications that support running on top of object, right? All your analytics platforms are on object today, absolutely. So it's a whole new world. >> Awesome. And Pure also, from what I see on the website, has a tech fest going on; you guys are going to be in Seoul, Mexico City, and Singapore in the next week alone. So customers get the chance to talk with those execs in person once again. >> Yeah, we've been doing the Accelerate tech fests around the globe, and if one of those aligns with your schedule, or you can free your schedule to join us, I would encourage you. The whole list of event dates is on purestorage.com. >> I'm looking at it right now. Vaughn, thank you so much for joining Dave and me. I got to sit between two dapper dudes, a great conversation about what's going on at Pure, Pure with VMware better together, and the catalysis that's going on on both sides. I think that's an actual word, I should know, I have a degree in biology. For Vaughn Stewart and Dave Vellante, I'm Lisa Martin. You're watching theCUBE live from VMware Explore 2022. We'll be right back with our next guest, so keep it here.
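As a rough, back-of-the-envelope illustration of the data-reduction savings described above, the sketch below models how a logical capacity footprint shrinks to a smaller billable physical footprint in the cloud. The 2:1 reduction ratio and the $100-per-TiB-month price are placeholder assumptions chosen only for the arithmetic; real reduction ratios and cloud prices vary by workload and provider, and nothing here reflects actual Pure or cloud pricing.

```python
def monthly_block_storage_bill(logical_tib: float, price_per_tib_month: float,
                               reduction_ratio: float) -> float:
    """Estimate a monthly bill for provisioned cloud block storage.

    reduction_ratio is logical:physical (e.g. 2.0 means 2:1 data reduction),
    so the billable capacity is the logical footprint divided by that ratio.
    """
    billable_tib = logical_tib / reduction_ratio
    return billable_tib * price_per_tib_month


# Placeholder numbers: 100 TiB of logical data at an assumed $100 per TiB-month.
without_reduction = monthly_block_storage_bill(100, 100.0, 1.0)  # $10,000
with_reduction = monthly_block_storage_bill(100, 100.0, 2.0)     # $5,000
savings = without_reduction - with_reduction                     # $5,000 per month

print(f"Savings at an assumed 2:1 reduction: ${savings:,.0f} per month")
```

In this toy model a 2:1 reduction halves the provisioned capacity and therefore the bill, which is the kind of headroom the interview says can more than pay for the Cloud Block Store software layer.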

Published Date : Aug 31 2022

Andy Brown, Broadcom


 

(upbeat music) >> Hello and welcome to theCUBE. I'm Dave Nicholson, Chief Technology Officer at theCUBE, and we are here for a very special Cube Conversation with Andy Brown from Broadcom. Andy, welcome to theCUBE, tell us a little about yourself. >> Well, a little bit about myself. My name is Andy Brown, I'm currently the Senior Director of Software Architecture and Performance Analysis here within the Data Center Solutions Group at Broadcom. I've been doing that for about seven years; prior to that, I held various positions within the system architecture, systems engineering, and IC development organizations, and I also spent some time in our support organization managing our support team. But ultimately I have landed in the architecture organization as well as performance analysis. >> Great, so a lot of what you do is around improving storage performance, tell us more about that. >> So let me give you a brief history of storage from my perspective. As I mentioned, I go back about 30 years in my career, and that would have started back in the NCR Microelectronics days, originally with parallel SCSI. That would be, if anyone remembers it, the 5380 controller, which was one of the original parallel SCSI controllers that existed, built by NCR Microelectronics at the time. I've seen the advent of parallel SCSI, a stint of Fibre Channel, ultimately leading into the serialization of the SCSI standard into SAS, as well as SATA, and then ultimately leading to NVMe protocols and the advent of flash, moving from hard drives into flash-based media. That's on the storage side; on the host side, we moved from parallel interfaces, ISA if everybody can remember that, to PCI and PCI Express, and that's where we land today. >> So Andy, we are square in the middle of the era of both NVMe and SAS. What kinds of challenges does that overlap represent? >> Well, I think obviously we've seen SAS around for a while. It was the conversion from parallel to serial attached SCSI, and SAS brings with it the ability to connect a really high number of devices, and was kind of the original scaling of devices. It was really also one of the things that enabled flash-based media, given the speed and performance that it brought to the table. Of course NVMe came in as well with the promise of even higher speeds, and as we saw flash media take a strong role in storage, NVMe came around and was really focused on trying to address that. Whereas SAS originated with hard drive technology, NVMe was really born out of how do we most efficiently deal with flash-based media. But SAS still carries a benefit on scalability, and NVMe maybe has, I don't want to say challenges there, but it definitely was not designed as much to be broadly scalable across many, many, say high hundreds or thousands of devices. It definitely addressed some of the performance issues that were coming up as flash media was increasing the overall storage performance that we could experience, if you will. >> Let's talk about host interfaces, PCIe. What's the significance there? >> Really, all the storage in the world, all the performance in the world on the storage side, is not of much use to you unless you can really feed it into the beast, if you will, into the CPU and into the rest of the server subsystem. And that's really where PCI comes into play.
PCI originally was in parallel form and then moved to serial with PCI Express as we know it today, and it has really created a pathway to enable not only storage performance but any other adapter, networking, or other type of technology to just open up that pathway and feed the processor. And as we've moved from PCI to PCI Express 2.0, 3.0, 4.0, just opening up those pipes has enabled a tremendous flow of data into the compute engine, allowing it to be analyzed and sorted, used for big data and AI-type applications. Those pipes are critical in those types of applications. >> We know we've seen dramatic increases in performance going from one generation of PCIe to the next. But how does that translate into the worlds of SAS, SATA, and NVMe? >> So from a performance perspective, when we look at these different types of media, whether it be SATA, SAS, or NVMe, of course there are performance differences inherent in that media, SATA being probably the lowest performing, with NVMe topping out as the highest performing, although SAS can perform quite well as a protocol connected to flash-based media. And from an individual device standpoint, NVMe scales from a by-one to a by-four interface, so that is where NVMe has enabled a bigger pipe directly to the storage media, being able to scale up to by-four, whereas SAS is kind of limited to by-one, maybe by-two in some cases, although most servers only connect a SAS device by one. So from that perspective, you really want to create a solution, or enable the infrastructure, to be able to consume the performance that NVMe is going to give you. And I think that is something where our solutions have really shined in recent generations: their ability to keep up with storage performance in NVMe, as well as provide that connectivity back down into the SAS and SATA world as well. >> Let's talk about your perspective on RAID today. >> So there have been a lot of views and opinions on RAID over the years, and those have been changing over time. RAID has been around for a very, very long time, probably, again going back over my 30-year career, for almost that entire time. Obviously RAID originally was viewed as something that was very, very necessary: devices fail, they don't last forever, but the data that's on them is very, very important and people care about that. So RAID was brought about knowing that the individual devices storing that data are going to fail, and it really took hold as a primary mechanism of protection. But as time went on, and as performance moved up, both in the server and in the media itself once we start talking about flash, people started to look at traditional server storage RAID with maybe more of a negative connotation. I think that, to be quite honest, is because it fell behind a little bit. If you look at things like parity RAID, RAID 5 and RAID 6, they are very effective and efficient means of protecting your data, very storage efficient, but they ultimately had some penalties, primarily around write performance: random writes to RAID 5 volumes were not keeping up with what really needed to be there. And I think that really shifted opinions of RAID to, "Hey, it's just not going to keep up and we need to move on to other avenues."
And we've seen that; we've seen disaggregated storage and other solutions pop up to protect your data, obviously in cloud environments and things like that, and they have been successful. >> So one of the drawbacks with RAID is always the performance tax associated with generating parity for parity RAID. What has Broadcom done to address those potential bottlenecks? >> We've really solved the RAID performance issue, the write performance issue. In our latest generation of controllers we're exceeding a million RAID 5 write IOPS, which is enough to satisfy many, many applications, even in virtual environments and aggregated solutions where we have multiple applications. And then as well, in the rebuild arena, through our architecture and our hardware automation we have been able to move the bar to where not only have rebuild times been brought down dramatically in flash-based solutions, but the performance impact that you observe while those rebuilds are going on is almost immeasurable. So in most applications you would observe almost no performance deficiency during a rebuild operation, which is really night and day compared to where things were just a few short years ago. >> So the fact that you've been able to dramatically decrease the time necessary for a RAID rebuild is obviously extremely important. But give us your overall performance philosophy from Broadcom's point of view. >> Over the years we have recognized that performance is obviously critically important for our products, and the ability to analyze performance from many, many angles is critically important. There are literally infinite ways you can look at performance in a storage subsystem. What we have done in our labs and in our solutions, through not only hardware scaling in our labs but also through automation scripts and things like that, has allowed us to collect a substantial amount of data to look at the performance of our solutions from every angle: IOPS, bandwidth, application-level performance, small topologies, large topologies, many, many aspects. Honestly, it still only scratches the surface of all the possible performance points you could gather, but we have moved the bar dramatically in that regard. And it's something our customers really demanded of us. Storage technology has gotten more complex, and you have to look at it from a lot of different angles, especially on the performance front, to make sure there are no holes there that somebody's going to run into. >> So based on specific customer needs and requests, you look at performance from a variety of different angles. What are some of the trends that you're seeing specifically in storage performance today and moving into the future? >> Yeah, emerging trends within the storage industry. I think that to look at the emerging trends, you really need to go back and look at where we started. We started in compute where you would have basically your server that would be under the desk in a small business operation, and individual businesses would have their own set of servers, and the storage would really be localized to those. Obviously the industry has recognized, to some extent, the disaggregation of that; we see that obviously in what's happening in cloud, in hyper-converged storage, and things like that. Those afford a tremendous amount of flexibility and are obviously great players in the storage world today.
But with that flexibility has come some sacrifice in performance, and actually quite a substantial sacrifice. And what we're observing is that it almost comes back full circle: the need for in-box, high-performing server storage that is well protected, where people have confidence that their data is protected and that they can extract the performance they need for the demanding database applications that still exist today and still operate in offices around the country and around the world, that really need to protect their data on a local basis in the server. I think from a trend perspective that's what we're seeing. Also, NVMe itself really started out with, "Hey, we'll just software-RAID that, we'll just wrap software around it and we can protect the data." We had so many customers come back to us saying, you know what, we really need hardware RAID on NVMe. And when they came to us, we were ready, we had a solution ready to go and were able to provide that, and now we're seeing growing demand. We are complementary to other storage solutions out there; server storage is not going to necessarily rule the world, but it surely has a place in the broader storage spectrum, and we think we have the right solution for that. >> Speaking of servers and server-based storage, why would, for example, a Dell customer care about the Broadcom components in that Dell server? >> So let's say you're configuring a Dell server and you're going, why does hardware RAID matter, what's important about that? Well, I think when you look at today's hardware RAID, first of all you're going to see dramatically better performance, and it's going to enable you to use RAID 5 volumes, a very effective and efficient mechanism for protecting your data, a storage-efficient mechanism. You're going to use RAID 5 volumes where you weren't able to do that before, because when you're in the millions of IOPS range you really can satisfy a lot of application needs out there. And then you're also going to have rebuild times that are lightning fast. Your performance is not going to degrade when you're running those applications, especially database applications, but not only database, streaming applications too; bandwidth to protected RAID volumes is almost imperceptibly different from raw bandwidth to the media. So the RAID configurations in today's Dell servers really afford you the opportunity to make use of that storage where you may have already written it off, thinking RAID is just not going to get me there. Quite frankly, in the storage servers that Dell is providing with RAID technology, there are huge windows open in what you can do today with applications. >> Well, all of this is obviously good news for Dell and Dell customers. Thanks again, Andy, for joining us for this Cube Conversation. I'm Dave Nicholson for theCUBE. (upbeat music)
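To make the parity mechanics behind this discussion concrete, here is a minimal sketch of how RAID 5-style XOR parity protects and rebuilds a strip, and why a small write has traditionally cost extra I/Os. This is an illustrative model only, not Broadcom's controller implementation; the stripe width, block sizes, and values are made up for the example.

```python
from functools import reduce


def xor_blocks(blocks):
    """XOR equal-length byte strings together; this is RAID 5's parity function."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)


# A 4-drive RAID 5 stripe: three data strips plus one parity strip.
data = [b"\x11" * 8, b"\x22" * 8, b"\x44" * 8]
parity = xor_blocks(data)

# Rebuild after a single drive failure: the lost strip is the XOR of the survivors.
rebuilt = xor_blocks([data[1], data[2], parity])
assert rebuilt == data[0]

# Small-write penalty: updating one strip means reading the old data and old
# parity, then writing new data and new parity -- four I/Os per logical write.
old_strip, new_strip = data[0], b"\x55" * 8
new_parity = xor_blocks([parity, old_strip, new_strip])  # parity ^ old ^ new
```

Offloading that read-modify-write sequence into controller hardware, as described in the conversation above, is what takes the penalty out of the host's write path.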

Published Date : Apr 28 2022

Anahad Dhillon, Dell EMC | CUBE Conversation, October 2021


 

(upbeat music) >> Welcome everybody to this CUBE Conversation. My name is Dave Vellante, and we're here to talk about Object storage and the momentum in the space, and what Dell Technologies is doing to compete in this market. I'm joined today by Anahad Dhillon, who's the Product Manager for Dell EMC's ECS and new ObjectScale products. Anahad, welcome to theCUBE, good to see you. >> Thank you so much Dave. We appreciate you having me and Dell (indistinct), thanks. >> It's always a pleasure to have you guys on; we dig into the products, talk about the trends, talk about what customers are doing. Anahad, before the cloud, Object was this kind of niche, as we've seen. You had simple get, put; it was a low-cost bit bucket, essentially. But that's changing. Tell us some of the trends in the Object storage market that you're observing, and how Dell Technologies sees this space evolving in the future, please. >> Absolutely, and you hit it right on. Historically, Object storage was considered this cheap-and-deep place, right? Customers would use it for their backup data, archive data, so cheap and deep. No longer the case, right? As you pointed out, the Object space is now maturing. It's a mature market, and we're seeing customers out there using Object for their primary data, for their business-critical data. We're seeing big data analytics use cases. So it's no longer just cheap and deep; now you've got primary workloads and business-critical workloads being put on Object storage. >> Yeah, I mean. >> And. >> Go ahead please. >> Yeah, I was going to say, it's not only the extent of the workloads being put on, we're also seeing changes in how Object storage is being deployed. So now we're seeing a tighter integration with new deployment models where Object storage, or any storage in general, is being deployed. Our applications are being (indistinct), right? So customers now want Object storage, or storage in general, being orchestrated like they would orchestrate their customer applications. Those are the few key trends that we're seeing out there today. >> So I want to dig into this a little bit with you, 'cause you're right. It used to be cheap and deep, it was slow, and it sometimes required application changes to accommodate. So you mentioned a few of the trends: Devs, everybody's trying to inject AI into their applications, the world has gone software defined. What are you doing to respond to all these changes and these trends? >> Absolutely, yeah. So we've been making tweaks to our Object offering, ECS, Elastic Cloud Storage, for a while. We started off tweaking the software itself, optimizing it for performance use cases. In early 2020, we actually introduced SSDs to our nodes, so customers were able to go in and leverage these SSDs for metadata caching, improving their performance quite a bit. Because we use those SSDs for metadata caching, the impact on the performance improvement was focused on smaller reads and writes. What we did next is a game changer. We actually went ahead later in 2020 and introduced an all-flash appliance. So now the EXF900, an ECS all-flash appliance, is all NVMe based. It's NVMe SSDs, and we leveraged NVMe over Fabrics for the back end. So we did it the right way. We didn't just go in, qualify an SSD-based server, and run Object storage on it; we invested time and effort into supporting NVMe over Fabrics so we could give you that performance at scale, right? Object is known for scale.
We're not talking 10 or 12 nodes here, we're talking hundreds of nodes. And to provide that kind of performance, we went ahead. Now you've got an NVMe-based offering, the EXF900, that you can deploy with confidence and run your primary workloads that require high throughput and low latency. Come November 5th, we're also releasing our next-gen SDS offering. This takes the proven ECS code that our customers are familiar with, that provides the resiliency and the security that you guys expect from Dell, and we're re-platforming it to run on Kubernetes and be orchestrated by Kubernetes. This is what we announced at VMworld 2021. If you haven't seen that, it's available on demand from VMworld 2021; search for ObjectScale and you'll get a quick demo. With ObjectScale, customers can quickly deploy enterprise-grade Object storage on their existing environment, their existing infrastructure, infrastructure like VMware and infrastructure like OpenShift. I'll give you an example. If you're a VMware shop and you've got vSphere clusters in your data center, with ObjectScale you'll be able to quickly deploy your enterprise-grade Object offering from within vSphere. Or if you're an OpenShift customer, if you've got OpenShift deployed in your data center and you're a Red Hat shop, you could easily go in, use that same infrastructure that your applications are running on, deploy ObjectScale on top of your OpenShift infrastructure, and make Object storage available to your customers. So you've got the enterprise-grade ECS appliance for your high-throughput, low-latency use cases at scale, and you've got this software-defined ObjectScale, which you can deploy on your existing infrastructure, whether that's VMware or Red Hat OpenShift. >> Okay, I've got a lot of follow-up questions, but let me just go back to one of the earlier things you said. So Object was kind of cheap, deep, and slow, but scaled. And so your step one was metadata caching. Now of course, my understanding is that with Object, the metadata lives with the data within the object. So maybe you separated that and made it high performance, but now you've taken the next step to bring in NVMe infrastructure to really blow away all the old SCSI latency and all that stuff. Maybe you can just educate us a little bit on that, if you don't mind. >> Yeah, absolutely. That was exactly the stepped approach we took. Even though metadata is tightly integrated in the Object world, in order to read the actual data you've still got to get to the metadata first, right? So we would cache the metadata on SSDs, reducing the lookup that happens for that metadata, and that's why it gave you the performance benefit. But because it was just tied to metadata lookups, the performance for larger objects stayed the same, because the actual data read was still happening from the hard drives. With the new EXF900, which is all NVMe based, we've optimized our ECS Object code to leverage NVMe: the data sits on NVMe drives, and the internode connectivity, the communication, is NVMe over Fabrics, so it's NVMe through and through. Now we're talking milliseconds of latency and thousands and thousands of transactions per second. >> Got it, okay. So this is really an inflection point for Object. These are pretty interesting times at Dell: you've got the cloud expanding on-prem, your company is building cloud-like capabilities to connect on-prem to the cloud and across clouds, you're going out to the edge.
As it pertains to Object storage, though, it sounds like you're taking a two-product approach to your strategy. Why is that, and can you talk about the go-to-market strategy in that regard? >> Absolutely, and yeah, good observation there. So yes and no. We continue to invest in ECS. ECS continues to stay the product of choice when a customer wants that traditional appliance deployment model, a single-hand-to-shake model where everything from your hardware to your software, the Object solution software, is all provided by Dell. ECS continues to be the product when customers are looking for that high-performance, fine-tuned appliance use case. ObjectScale comes into play when the needs are software defined, when you need to deploy the storage solution on top of the same infrastructure that your applications run on. So yes, in the short term, in the interim, it's a two-product approach, with both products addressing very distinct use cases. However, in the long term, we're merging the two code streams. So in the long term, if you're an ECS customer running ECS, you will have an in-place data upgrade to ObjectScale. We're not talking about forklift upgrades, we're not talking about adding additional servers and doing a data migration; it's a code upgrade. I'll give you an example. Today on ECS we're at code version 3.6, and we've got a roadmap where 3.7 is coming out later this year. So from 3.X, customers will upgrade the code in place, call it 4.0, and that brings them up to ObjectScale. So there are no nodes left behind; there's an in-place code upgrade from ECS to ObjectScale, merging the two code streams. In the long term, single code; in the short term, two products, each addressing a very distinct use case. >> Okay, let me follow up, put on my customer hat. I'm hearing that you can tell us with confidence that irrespective of whether a customer invested in ECS or ObjectScale, you're not going to put me into a dead end. Every customer is going to have a path forward as long as their ECS code is up to date, is that correct? >> Absolutely, exactly, and very well put, yes. No nodes left behind, investment protection, whether you've got ECS today or you want to invest in ECS or ObjectScale in the future, correct. >> Talk a little bit more about ObjectScale. I'm interested in what's new there, what's special about this product. Is there unique functionality that you're adding to the product? What differentiates it from other Object stores? >> Absolutely, my pleasure. I'll start by reiterating that ObjectScale is built on that proven ECS code, right? It's the enterprise-grade reliability and security that our customers expect from Dell EMC. Now we're re-platforming ECS to allow ObjectScale to be Kubernetes native. So we're leveraging that microservices-based architecture, leveraging the native orchestration capabilities of Kubernetes, things like resource isolation or seamless (indistinct), I'm sorry, load balancing and things like that, the built-in native capabilities of Kubernetes. ObjectScale is also built with scale in mind, so it delivers limitless scale. You could start with terabytes and then go up to petabytes and beyond.
So unlike other file-system-based Object offerings, ObjectScale software doesn't put a limit on your number of object stores, number of buckets, or number of objects you store; it's limitless. As long as you can provide the hardware resources under the covers, the software itself is limitless. It allows our customers to start small, as small as three nodes, and grow the environment as the business grows, right up to hundreds of nodes. With ObjectScale, you can deploy workloads at public-cloud-like scale, but with the reliability and control of a private cloud, in your own data center. And ObjectScale is S3 compliant, while delivering enterprise features like global replication and native multi-tenancy, fueling everything from dev-test sandboxes to globally distributed data. So you've got built-in ObjectScale replication that allows you to place your data anywhere you've got ObjectScale (indistinct), from edge to core to data center. >> Okay, so it fits into the Kubernetes world. I call it Kubernetes compatible. The key there is automation, because that's the whole point of containers, right? It allows you to deploy as many apps as you need to, wherever you need to, in as many instances, and then do rolling updates, have the same security, same APIs, all that level of consistency. So that's really important; that's how modern apps are being developed. We're in a new age here. It's no longer about the machines, it's about infrastructure as code. So once ObjectScale is generally available, which I think is soon, I think it's this year, what should customers do, what's their next step? >> Absolutely, yeah, it's coming out November 2nd. Reach out to your Dell representatives, get an in-depth demo of ObjectScale, or better yet, get a POC, a proof of concept: have it set up in your data center and play with it. You can also download the free, full-featured community edition. We're going to have a community edition that's free up to 30 terabytes of usage, and it's full featured. Download that, play with it, and if you like it, you can upgrade that free community edition to the licensed, paid version. >> And you said that's full featured. You're not neutering the community edition? >> Exactly, absolutely, it's full featured. >> Nice, that's a great strategy. >> We're confident in what we're delivering, and we want you guys to play with it without having your money tied up. >> Nice, I mean, that's the model today. Gone are the days where you've got to get new customers in a headlock; they want to try before they buy. So that's a great little feature. Anahad, thanks so much for joining us on theCUBE. Sounds like it's been a very busy year, and it's going to continue to be so. Looking forward to seeing what's coming out with ECS and ObjectScale, and seeing those two worlds come together, thank you. >> Yeah, absolutely, it was a pleasure. Thank you so much. >> All right, and thank you for watching this CUBE Conversation. This is Dave Vellante, we'll see you next time. (upbeat music)
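Since ObjectScale is described above as S3 compliant, a standard S3 SDK should be able to talk to it simply by pointing at the cluster's endpoint. The sketch below uses Python's boto3 client; the endpoint URL, bucket name, and credentials are hypothetical placeholders, and any ObjectScale-specific port, TLS, or identity configuration is an assumption rather than documented setup.

```python
import boto3

# Hypothetical ObjectScale S3 endpoint and credentials -- substitute values
# from your own deployment.
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectscale.example.internal",
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

bucket = "analytics-landing"
s3.create_bucket(Bucket=bucket)

# Write and read back an object using ordinary S3 API calls.
s3.put_object(Bucket=bucket, Key="events/2021-10-05.json", Body=b'{"event": "demo"}')
obj = s3.get_object(Bucket=bucket, Key="events/2021-10-05.json")
print(obj["Body"].read())
```

Because the calls are plain S3, the same client code could target AWS S3 or an on-prem ObjectScale bucket just by changing the endpoint and credentials, which is the portability argument being made in the interview.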

Published Date : Oct 5 2021

Anahad Dhillon, Dell EMC | CUBEConversation


 

(upbeat music) >> Welcome everybody to this CUBE Conversation. My name is Dave Vellante, and we're here to talk about Object storage and the momentum in the space. And what Dell Technologies is doing to compete in this market, I'm joined today by Anahad Dhillon, who's the Product Manager for Dell, EMC's ECS, and new ObjectScale products. Anahad, welcome to theCUBE, good to see you. >> Thank you so much Dave. We appreciate you having me and Dell (indistinct), thanks. >> Its always a pleasure to have you guys on, we dig into the products, talk about the trends, talk about what customers are doing. Anahad before the Cloud, Object was this kind of niche we seen. And you had simple get, put, it was a low cost bit bucket essentially, but that's changing. Tell us some of the trends in the Object storage market that you're observing, and how Dell Technology sees this space evolving in the future please. >> Absolutely, and you hit it right on, right? Historically, Object storage was considered this cheap and deep place, right? Customers would use this for their backup data, archive data, so cheap and deep, no longer the case, right? As you pointed out, the ObjectSpace is now maturing. It's a mature market and we're seeing out there customers using Object or their primary data so, for their business critical data. So we're seeing big data analytics that we use cases. So it's no longer just cheap and deep, now your primary workloads and business critical workloads being put on with an object storage now. >> Yeah, I mean. >> And. >> Go ahead please. >> Yeah, I was going to say, there's not only the extend of the workload being put in, we'll also see changes in how Object storage is being deployed. So now we're seeing a tighter integration with new depth models where Object storage or any storage in general is being deployed. Our applications are being (indistinct), right? So customers now want Object storage or storage in general being orchestrated like they would orchestrate their customer applications. Those are the few key trends that we're seeing out there today. >> So I want to dig into this a little bit with you 'cause you're right. It used to be, it was cheap and deep, it was slow and it required sometimes application changes to accommodate. So you mentioned a few of the trends, Devs, everybody's trying to inject AI into their applications, the world has gone software defined. What are you doing to respond to all these changes in these trends? >> Absolutely, yeah. So we've been making tweaks to our object offering, the ECS, Elastic Cloud Storage for a while. We started off tweaking the software itself, optimizing it for performance use cases. In 2020, early 2020, we actually introduced SSDs to our notes. So customers were able to go in, leverage these SSD's for metadata caching improving their performance quite a bit. We use these SSDs for metadata caching. So the impact on the performance improvement was focused on smaller reads and writes. What we did now is a game changer. We actually went ahead later in 2020, introduced an all flash appliance. So now, EXF900 and ECS all flash appliance, it's all NVME based. So it's NVME SSDs and we leveraged NVME over fabric xx for the back end. So we did it the right way did. We didn't just go in and qualified an SSD based server and ran object storage on it, we invested time and effort into supporting NVME fabric. So we could give you that performance at scale, right? Object is known for scale. 
We're not talking 10, 12 nodes here, we're talking hundreds of nodes. And to provide you that kind of performance, we went to ahead. Now you've got an NVME based offering EXF900 that you can deploy with confidence, run your primary workloads that require high throughput and low latency. We also come November 5th, are releasing our next gen SDS offering, right? This takes the Troven ECS code that our customers are familiar with that provides the resiliency and the security that you guys expect from Dell. We're re platforming it to run on Kubernetes and be orchestrated by Kubernetes. This is what we announced that VMware 2021. If you guys haven't seen that, is going to go on-demand for VMware 2021, search for ObjectScale and you get a quick demo on that. With ObjectScale now, customers can quickly deploy enterprise grade Object storage on their existing environment, their existing it infrastructure, things like VMware, infrastructure like VMware and infrastructure like OpenShift. I'll give you an example. So if you were in a VMware shop that you've got vSphere clusters in your data center, with ObjectScale, you'll be able to quickly deploy your Object enterprise grid Object offering from within vSphere. Or if you are an OpenShift customer, right? If you've got OpenShift deployed in your data center and your Red Hat shop, you could easily go in, use that same infrastructure that your applications are running on, deploy ObjectScale on top of your OpenShift infrastructure and make available Object storage to your customers. So you've got the enterprise grade ECS appliance or your high throughput, low latency use cases at scale, and you've got this software defined ObjectScale, which can deploy on your existing infrastructure, whether that's VMware or Red Hat OpenShift. >> Okay, I got a lot of follow up questions, but let me just go back to one of the earlier things you said. So Object was kind of cheap, deep and slow, but scaled. And so, your step one was metadata caching. Now of course, my understanding is with Object, the metadata and the data within the object. So, maybe you separated that and made it high performance, but now you've taken the next step to bring in NVME infrastructure to really blow away all the old sort of scuzzy latency and all that stuff. Maybe you can just educate us a little bit on that if you don't mind. >> Yeah, absolutely. Yeah, that was exactly the stepped approach that we took. Even though metadata is tightly integrated in Object world, in order to read the actual data, you still got to get to the metadata first, right? So we would cache the metadata into SSDs reducing that lookup that happens for that metadata, right? And that's why it gave you the performance benefit. But because it was just tied to metadata look-ups, the performance for larger objects stayed the same because the actual data read was still happening from the hard drives, right? With the new EXF900 which is all NVME based, we've optimized the our ECS Object code leveraging VME, data sitting on NVME drives, the internet connectivity, the communication is NVME over fabric, so it's through and through NVME. Now we're talking milliseconds and latency and thousands and thousands of transactions per second. >> Got it, okay. So this is really an inflection point for Objects. So these are pretty interesting times at Dell, you got the cloud expanding on prem, your company is building cloud-like capabilities to connect on-prem to the cloud across cloud, you're going out to the edge. 
As it pertains to Object storage though, it sounds like you're taking a sort of a two product approach to your strategy. Why is that, and can you talk about the go-to-market strategy in that regard? >> Absolutely, and yeah, good observation there. So yes and no. We continue to invest in ECS. ECS continues to stay the product of choice when the customer wants that traditional appliance deployment model. But this is a single hand to shake model, where everything from your hardware to your software, the Object solution software, is all provided by Dell. ECS continues to be the product where customers are looking for that high performance, fine tuned appliance use case. ObjectScale comes into play when the needs are software defined, when you need to deploy the storage solution on top of the same infrastructure that your applications run on, right? So yes, in the short-term, in the interim, it's a two product approach with each product taking on a very distinct use case. However, in the long-term, we're merging the two code streams. So in the long-term, if you're an ECS customer and you're running ECS, you will have an in-place data upgrade to ObjectScale. So we're not talking about forklift upgrades, we're not talking about adding additional servers and doing a data migration; it's a code upgrade. And then I'll give you an example. Today on ECS, we're at code version 3.6, right? So if you're a customer running ECS 3.X, we've got a roadmap where 3.7 is coming out later on this year. So from 3.X, customers will upgrade the code in place, let's call it 4.0, right? And that brings them up to ObjectScale. So there are no nodes left behind, there's an in-place code upgrade from ECS to ObjectScale, merging the two code streams. In the long-term, a single code base; in the short-term, two products, each solving very distinct use cases. >> Okay, let me follow up, put on my customer hat. And I'm hearing that you can tell us with confidence that irrespective of whether a customer invested in ECS or ObjectScale, you're not going to put me into a dead-end. Every customer is going to have a path forward as long as their ECS code is up-to-date, is that correct? >> Absolutely, exactly, and very well put, yes. No nodes left behind, investment protection, whether you've got ECS today, or you want to invest into ECS or ObjectScale in the future, correct. >> Talk a little bit more about ObjectScale. I'm interested in kind of what's new there, what's special about this product, is there unique functionality that you're adding to the product? What differentiates it from other Object stores? >> Absolutely, my pleasure. Yeah, so I'll start by reiterating that ObjectScale is built on that proven ECS code, right? It's the enterprise grade reliability and security that our customers expect from Dell EMC, right? Now we're re-platforming ECS to allow ObjectScale to be Kubernetes native, right? So we're leveraging that microservices-based architecture, leveraging the native orchestration capabilities of Kubernetes, things like resource isolation or seamless (indistinct), I'm sorry, load balancing and things like that, right? So the in-built native capabilities of Kubernetes. ObjectScale is also built with scale in mind, right? So it delivers limitless scale. So you could start with terabytes and then go up to petabytes and beyond.
So unlike other file system-based Object offerings, the ObjectScale software doesn't put a limit on your number of object stores, number of buckets, or number of objects you store; it's limitless. As long as you can provide the hardware resources under the covers, the software itself is limitless. It allows our customers to start small, so you could start as small as three nodes and grow the environment as your business grows, right? Hundreds of nodes. With ObjectScale, you can deploy workloads at public cloud like scale, but with the reliability and control of a private cloud, right? So it's in your own data center. And ObjectScale is S3 compliant, right? So while delivering enterprise features like global replication and native multi-tenancy, it's fueling everything from Dev Test Sandbox to globally distributed data, right? So you've got in-built ObjectScale replication that allows you to place your data anywhere you've got ObjectScale (indistinct), from edge to core to data center. >> Okay, so it fits into the Kubernetes world. I call it Kubernetes compatible. The key there is automation, because that's the whole point of containers, right? It allows you to deploy as many apps as you need to, wherever you need to, in as many instances, and then do rolling updates, have the same security, same API, all that level of consistency. So that's really important. That's how modern apps are being developed. We're in a new age here. It's no longer about the machines, it's about infrastructure as code. So once ObjectScale is generally available, which I think is soon, I think it's this year, what should customers do, what's their next step? >> Absolutely, yeah, it's coming out November 2nd. Reach out to your Dell representatives, right? Get an in-depth demo on ObjectScale. Better yet, get a POC, right? Get a proof of concept, have it set up in your data center and play with it. You can also download the free, full featured community edition. We're going to have a community edition that's free up to 30 terabytes of usage, and it's full featured. Download that, play with it. If you like it, you can upgrade that free community edition to a licensed, paid version. >> And you said that's full featured. You're not neutering the community edition? >> Exactly, absolutely, it's full featured. >> Nice, that's a great strategy. >> We're confident, we're confident in what we're delivering, and we want you guys to play with it without having your money tied up. >> Nice, I mean, that's the model today. Gone are the days where you've got to get new customers in a headlock; they want to try before they buy. So that's a great little feature. Anahad, thanks so much for joining us on theCUBE. Sounds like it's been a very busy year and it's going to continue to be so. Look forward to seeing what's coming out with ECS and ObjectScale and seeing those two worlds come together, thank you. >> Yeah, absolutely, it was a pleasure. Thank you so much. >> All right, and thank you for watching this CUBE Conversation. This is Dave Vellante, we'll see you next time. (upbeat music)
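To make the S3 compliance point above concrete, here is a minimal sketch of how an application might talk to an ECS or ObjectScale bucket through a standard S3 SDK. The endpoint URL, credentials, and bucket name are hypothetical placeholders rather than values from the interview; any S3-compatible endpoint the deployment exposes would be used the same way.

```python
# Minimal sketch: using a standard S3 SDK (boto3) against an S3-compatible
# Object endpoint such as ECS or ObjectScale. The endpoint, credentials, and
# bucket name below are assumed placeholders, not values from the interview.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectscale.example.internal:9021",  # assumed endpoint
    aws_access_key_id="DEMO_ACCESS_KEY",                        # assumed credentials
    aws_secret_access_key="DEMO_SECRET_KEY",
)

# Create a bucket and write/read an object exactly as you would against AWS S3.
s3.create_bucket(Bucket="analytics-landing")
s3.put_object(
    Bucket="analytics-landing",
    Key="events/sample.json",
    Body=b'{"event": "example"}',
)

obj = s3.get_object(Bucket="analytics-landing", Key="events/sample.json")
print(obj["Body"].read())
```

Because the API surface is plain S3, the same client code works whether the bucket lives on an ECS appliance, an ObjectScale deployment on vSphere or OpenShift, or a public cloud object store.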

Published Date : Sep 14 2021



Jon Siegal, Dell Technologies | CUBE Conversation 2021


 

(bright upbeat music) >> Welcome to theCUBE, our coverage of Dell Technologies World, the Digital Experience continues. I have a long-time guest coming back, joining me in the next segment here. Jon Siegal is back, the Vice President of Product Marketing at Dell Technologies. Jon, it's good to see you, welcome back to the program. >> Thanks Lisa, always great to be on. >> We last spoke about six months ago and here we are still at home. >> I know. >> But there has been no slowdown whatsoever in the last year. We were talking to you a lot about Edge last time but we're going to talk about PowerStore today. It's just coming up on its one year anniversary. You launched it right when the pandemic happened. >> That's right. >> Talk to me about what's happened in the last year with respect to PowerStore. Adoption, momentum, what's going on? >> Yeah, great, listen, what a year it's been, right? But certainly for PowerStore especially, I mean, customers and partners around the world have really embraced PowerStore, specifically really it's modern architecture. What many people may not know is this is actually the fastest ramping new architecture we've had in all of Dell's history, which is quite a history of course. And we saw 4 X quarter over quarter growth in the most recent quarter. And you know, in terms of shipments, we've shipped well over 400 petabytes of PowerStore, you know, so special thanks to lots of our customers around the world and industries like education, gaming, transportation, retail. More than 60 countries, I think 62 countries now. They include customers like Columbia Southern University, Habib Bank, Real Page, the University of Pisa and Ultra Leap, just to name a few. And to give you a sense of how truly game changing it's been in the market is that approximately 20% of the customers with PowerStore are new to Dell, new to Dell Technologies. And we've tripled the number of wins against some of our key competitors in just the last quarter as well. So look, it's been quite a year, like you said and we're not stopping there. >> Yeah, you must have to wear a neck brace from that whiplash of moving so quickly. (both laughing) But that's actually a good problem to have. >> It is. >> And curious about, is it 20% of the PowerStore customers are net new to Dell? >> Yeah. >> Interesting that you've captured that much in a very turbulent year. Any industries in particular that you see as really being transformed by the technology? >> Yeah, it's a great question. I think just like we're bringing a disruptive technology to market, there's a lot of industries out there that are disrupting themselves as well, right, and how they transform, particularly with, you know, in this new era during the pandemic. I think, I can give you a great example. One of the new capabilities of PowerStore is AppsON just for those that aren't familiar. AppsON is the ability for PowerStore to run apps directly on the appliance, good name, right? And it's thanks to a built-in VMware ESXi hypervisor. And where we've seen really good traction with AppsON, is in storage intensive applications at the edge. And that brings me to my example. And this one's in retail. And you know, of course just like every industry I think it's been up-ended in the past year. There's a large supermarket chain in northern China that is new to Dell. During the pandemic they needed to fast-track the development of a smart autonomous retail system in all their stores, so that their customers could make their purchases via smartphone app. 
And again, just limiting the essentially the person to person interaction during the pandemic and this required a significant increase in transaction processing to get to the store locations that they didn't have equipment for before, as well as support for big data analytics applications to understand the customer behavior that's going on in real-time. So the net result is they chose PowerStore. They were new to Dell and they deployed it in their stores and delivered a seamless shopping experience via smartphone apps. The whole shopping experience was completely revolutionized. And I think this is really a great example of again, how the innovations that are in PowerStore are enabling our customers to really rethink how they're transacting business. >> Well, enabling the supermarkets to be the edge but also in China where everything started, so much, the market dynamics are still going on, but how quickly were they able to get PowerStore up and running and facilitate that seamless smartphone shopping experience? >> It was only weeks, only weeks, weeks from beginning to getting them up to speed. I mean, we've had great coverage, great support. And again, they embraced, I mean, they happened to leverage the AppsON capabilities, so they were able to run some of their applications directly on the appliance and they were able to get that up and running very quickly. And they were already a VMware customer as well. So they were already familiar with some of the tools and the integration of the VMware. And again, that's also been a sweetspot for this particular offer. >> Okay, got it. So a lot in it's first year. You said 4 X growth, over 60 countries, 400 petabytes plus shipped, a lot of new net new customers. What is new? What are you announcing that's new and that's going to take that up even a higher level? >> That's right. We're always going to up the ante, right? We're always going to, we can't rest on our laurels for too long. Look, we're very excited to share what's new for PowerStore. And that is one of the reasons we're here of course. I can break it down into two key highlights. First is a major software update that brings more enterprise innovation, more speed, more automation in particular to both new and existing customers. And we're also excited to announce a new lower cost entry model for the PowerStore family called the PowerStore 500. And this offers an incredible amount of enterprise class storage capabilities, much of which I have talked about and will talk more about today, for the price. And the price itself is what's going to surprise some folks. It starts as low as 28,000 US street price which is pretty significant, you know, in terms of a game changer, we think, in this industry. >> So let's talk about the software update first. You've got PowerStore 2.0, happy birthday to your customers who are going to take advantage of this. >> That's right. >> Kind of talk me through what some of the technological advancements are that your customers are going to be able to leverage? >> That's a great point. Yeah, so from a software perspective I like how you said that, happy birthday, yeah so all of our, just to be clear from a software update perspective, all of our existing customers are going to get this as a simple free non-disruptive update. And this is a commitment we've had to our customers for some time. 
And really it's the mantra, if you will, of PowerStore, which is all about ensuring that our customers can count on our very flexible platform that will keep giving them the latest and greatest. So really a couple of things I want to highlight from PowerStore that are brand new. One is we're giving a speed boost to the entire PowerStore lineup. Existing customers now get up to 25% faster mixed workload performance, which is incredible, right off the bat. Secondly, we're enabling our customers to take full advantage of NVMe now across the data center with the option of running NVMe over Fibre Channel. And this again requires just a simple software update and no additional hardware if they already have 32 gig capable switches and HBAs on-prem. We've also made our unique AppsON feature, which I just talked about in the China example, more powerful with scale out. This means more aggregate power, more aggregate capacity, and it makes it even more ideal now for storage intensive apps to run at the edge with PowerStore. Another capability that's been very popular with our customers is our data reduction, specifically our intelligent dedupe, which is always on and automated. And now what it does is it enables customers to boost performance while still guaranteeing the four to one data reduction that we have, at the same time. So just to give a quick example, when the system is under extreme IO duress, if you will, it automatically prioritizes that IO versus the dedupe itself and provides a 20% turbo boost, if you will, of performance for the applications running. All this is done automatically, zero management effort, zero impact to the data reduction guarantee of four to one that we already have in place. And then the last highlight I'd like to bring up, last but not least, is one we're really proud of: the ability for our customers to now take more cost effective advantage, if you will, of SCM or storage class memory. PowerStore now differentiates between SCM drives and NVMe drives within the same chassis. So they can use SCM as a high-performance layer, if you will, with as few as one drive, right? So they don't have to populate the whole chassis, they can use just one SCM drive for cost-effectiveness, for embedded data access. And this actually helps reduce the workload latency by up to 15%. So, another great example on top of NVMe that I already mentioned, of how PowerStore is leading the practical adoption of next generation technologies. >> Are you seeing with the lower cost PowerStore 500, is that an opportunity for Dell to expand into the midsize market and an opportunity for those smaller customers to be able to take advantage of this technology? >> Absolutely, yeah. So the PowerStore 500, which we're really excited about introducing, does exactly what you just said, Lisa. It is going to allow us to bring PowerStore and the experience of PowerStore to a broad range of businesses, and a much broader range of edge use cases as well. And we're really excited about that. It's an incredible amount of enterprise storage class performance, as I mentioned, and functionality for the price, which is, again, 28,000 starting. And this includes all of the enterprise software capabilities I've been talking about. The ability to cluster, the four to one data reduction guarantee, anytime upgrades. And to put this in context, a single 2U appliance, the PowerStore 500, supports up to 2.4 million SQL transactions per minute.
I mean, this thing packs a punch, like no other, right? And it's a great fit for stand-alone or edge deployments in virtually every industry, we've mentioned retail already also healthcare, manufacturing, education and more. It's an offering that's really ideal for any solution that requires an optimization of price/performance, small footprint and effortless automation. And I can tell you, it's not just customers that are excited about this, as you can imagine our channel partners, they can't wait to get their hands on this either. >> Was just going to ask you about the channel. >> It is going to help them reach new sets of customers that they never had before. You mentioned midsize, but also in addition to that, it's just going to open it up to all new sets of use cases as well. So I'm really excited to see the creativity from our channel partners and customers and how they adopt and use the PowerStore 500 going forward. >> Tell me about some of those new use cases that it's going to open up. We've seen so many new things in the last year and such acceleration. What are some of the new use cases that this is going to help unlock value for? >> Yeah, again, I think it's going to come down a lot to the edge in particular, as well as mid-size, it can run, again, this can run storage, intensive applications. So it's really about coming down to a price point that I think the biggest example will be mid-sized businesses that now, it's now affordable to. That they weren't able to get this enterprise class capabilities in the past more than anything else. Cause it's all the same capabilities that I've mentioned but it allows them to run all types of things. It could be, they could run, new next-generation intensive data, intensive databases. They can run VDI, they can run SQL, it does, essentially more than anything else makes existing use cases more accessible to mid-sized businesses. >> Got it, okay. So, so much momentum going on in the first year. A lot of that you're souping it up with this your new software, we talked about the new mid-size enterprise version PowerStore 500. What else can we expect from PowerStore, the rest of calendar 2021? >> Yeah, I think lots of things. So first of all we're so pleased at the amount of commitment to innovation that we've had over the past year. We're going to continue to work very closely with VMware to drive more and more innovation and enhancements with capabilities like AppsON that I talked about, and VM-ware or (indistinct) which is a key enabler for that. We're also committed to continuing to lead the industry in the adoption of modern technologies. I gave some good examples today of NVME and AppsON and SCM, storage class memory, and customers can expect that continued commitment. Look, we've designed PowerStore from the ground up to be very flexible so that it can be enhanced and improved non-disruptively. And I think we did that with this release. We proved that and no one can predict the future, clearly, it's been a crazy year. And so businesses need storage that's going to be flexible with them and grow with them and evolve with them. And customers can expect that from PowerStore. And we plan on doing just that. >> So customers can, that are interested can go direct to Dell. They can also go through your huge channel, you said, in terms of those customers that are thinking about it maybe adding to the percentage of new customers. What's your advice on them in terms of next steps? 
>> Yeah, next steps is, you know, I got to say this, we've done, it's crazy, we've done over 20,000 demos of PowerStore in one year, no joke. And you know, it's a new world. And so the next step is to reach out to Dell. We'd love to showcase this through a demo, give them whether it's a remote experience that way or remote proof of concept but yeah, reach out to Dell, your local rep or local channel partner and we'd love to show you what's possible more than anything else and look, we're really proud of what we've accomplished here. Just as impressive as these updates, I must say, is that in many instances, the team that brought this to market, the engineering team, they did this just like we're doing today, right? Over Zoom, remotely, while balancing life and work. So I just also want to thank the team for their commitment to delivering innovation to our customers. It hasn't wavered at all and I want to thank our top notch team. >> Right, an amazing amount of work done. You've had a very busy year and glad that you're well and healthy and been as successful with PowerStore. We can't wait to see in the next year those numbers that you shared even go up even more. Jon, thank you for joining us >> Looking forward to it. and sharing what's new with PowerStore. We appreciate your time. >> Always a pleasure, Lisa. >> Likewise >> Look forward to talking to you soon. >> Yeah >> Take care. >> For Jon Siegal, I'm Lisa Martin, you're watching theCUBE's coverage of Dell Technologies World, a Digital Experience. (slow upbeat music)
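As a quick way to ground the headline numbers from this conversation, here is a small back-of-the-envelope sketch. The 4:1 data reduction guarantee and the 2.4 million SQL transactions per minute are the figures quoted above; the raw capacity is an invented example.

```python
# Back-of-the-envelope math for the figures quoted in the interview.
# The 20 TB raw figure is an arbitrary example; the 4:1 reduction guarantee
# and 2.4M SQL transactions/minute are the numbers cited above.
raw_capacity_tb = 20                      # assumed example purchase size
data_reduction_ratio = 4                  # guaranteed 4:1 data reduction
effective_capacity_tb = raw_capacity_tb * data_reduction_ratio

sql_tpm = 2_400_000                       # quoted peak for a single 2U PowerStore 500
sql_tps = sql_tpm / 60

print(f"Effective capacity: {effective_capacity_tb} TB")         # 80 TB
print(f"Peak rate: {sql_tps:,.0f} SQL transactions per second")  # 40,000/s
```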

Published Date : Apr 20 2021



Maurizio Davini, University of Pisa and Thierry Pellegrino, Dell Technologies | VMworld 2020


 

>> From around the globe, it's theCUBE, with digital coverage of VMworld 2020, brought to you by the VMworld and its ecosystem partners. >> I'm Stu Miniman, and welcome back to theCUBES coverage of VMworld 2020, our 11th year doing this show, of course, the global virtual event. And what do we love talking about on theCUBE? We love talking to customers. It is a user conference, of course, so really happy to welcome to the program. From the University of Pisa, the Chief Technology Officer Maurizio Davini and joining him is Thierry Pellegrini, one of our theCUBE alumni. He's the vice president of worldwide, I'm sorry, Workload Solutions and HPC with Dell Technologies. Thierry, thank you so much for joining us. >> Thanks too. >> Thanks to you. >> Alright, so let, let's start. The University of Pisa, obviously, you know, everyone knows Pisa, one of the, you know, famous city iconic out there. I know, you know, we all know things in Europe are a little bit longer when you talk about, you know, some of the venerable institutions here in the United States, yeah. It's a, you know, it's a couple of hundred years, you know, how they're using technology and everything. I have to imagine the University of Pisa has a long storied history. So just, if you could start before we dig into all the tech, give us our audience a little bit, you know, if they were looking up on Wikipedia, what's the history of the university? >> So University of Pisa is one of the oldest in the world because there has been founded in 1343 by a pope. We were authorized to do a university teaching by a pope during the latest Middle Ages. So it's really one of the, is not the oldest of course, but the one of the oldest in the world. It has a long history, but as never stopped innovating. So anything in Pisa has always been good for innovating. So either for the teaching or now for the technology applied to a remote teaching or a calculation or scientific computing, So never stop innovating, never try to leverage new technologies and new kind of approach to science and teaching. >> You know, one of your historical teachers Galileo, you know, taught at the university. So, you know, phenomenal history help us understand, you know, you're the CTO there. What does that encompass? How, you know, how many students, you know, are there certain areas of research that are done today before we kind of get into the, you know, the specific use case today? >> So consider that the University of Pisa is a campus in the sense that the university faculties are spread all over the town. Medieval like Pisa poses a lot of problems from the infrastructural point of view. So, we have bought a lot in the past to try to adapt the Medieval town to the latest technologies advancement. Now, we have 50,000 students and consider that Pisa is a general partners university. So, we cover science, like we cover letters in engineering, medicine, and so on. So, during the, the latest 20 years, the university has done a lot of effort to build an infrastructure that was able to develop and deploy the latest technologies for the students. So for example, we have a private fiber network covering all the town, 65 kilometers of a dark fiber that belongs to the university, four data centers, one big and three little center connected today at 200 gigabit ethernet. We have a big data center, big for an Italian University, of course, and not Poland and U.S. university, where is, but also hold infrastructure for the enterprise services and the scientific computing. 
Yep, Maurizio, it's great that you've had that technology foundation. I have to imagine the global pandemic, COVID-19, had an impact. What's it been? You know, how's the university dealing with things like work from home, and then, you know, Thierry, would love your commentary too. >> You know, we, of course, we were not ready. So we were hit by the pandemic and we had to adapt our services and software to transform from in-person to remote services. So we did a lot of work, but we were able, thanks to the technology that we have chosen, to serve almost 100% of our curriculum and study programs. We did a lot of work in the past to move to virtualization, to enable our users to work remotely, either from a workstation or VDI, or remote laboratories or remote calculation. So virtualization has, in the past, shaped our services. And of course, when we were hit by the pandemic, we were almost ready to transform our services from in-person to remote. >> Yeah, I think it's, it's true, like Maurizio said, nobody really was preparing for this pandemic. And even for, for Dell Technologies, it was an interesting transition. And as you can probably realize, a lot of the way that we connect with customers is in person. And we've had to transition over to modes of digitally connecting with customers. We've also spent a lot of our energy trying to help the HPC and AI community fight the COVID pandemic. We've made some of our own clusters that we use in our HPC and AI innovation center here in Austin available to genomic research or other companies that are fighting the virus. And it's been an interesting transition. I can't believe that it's already been over six months now, but we've found a new normal. >> Let's get into specifically how you're partnering with Dell. You've got a strong background in the HPC space, working with supercomputers. What is it that you're turning to Dell and their ecosystem to help the university with? >> So we are, we have a long history in HPC. Of course, as you can imagine, not at the scale of the biggest HPC that is done in the U.S. or in the biggest supercomputer centers in Europe. We have several systems for doing HPC, traditional HPC, that are based on the Dell Technologies offering. We typically host all kinds of the best technology that is now available, of course not at a big scale but at a small to medium scale, that we are offering to our researchers and students. We have a strong relationship with Dell Technologies, developing solutions together to leverage the latest technologies for scientific computing, and this has helped a lot during the research that has been done during this pandemic. >> Yeah, and it's true. I mean, Maurizio is humble, but every time we have new technologies that are to be evaluated, of course we spend time evaluating in our labs, but we make it a point to share that technology with Maurizio and the team at the University of Pisa. That's how we find some of the better usage models for customers, help tuning some configurations, whether it's on the processor side, the GPU side, the storage and the interconnect. And then the topic of today, of course, with our partners at VMware, we've had some really great advancements. Maurizio and the team are what we call a center of excellence. We have a few of them across the world where we have a unique relationship sharing technology and collaborating on advancement. And recently Maurizio and the team have even become one of the VMware certified centers.
So it's a great marriage for this new world where virtual is becoming the norm. >> But well, Thierry, you and I had a conversation to talk earlier in the year when VMware was really geering their full kind of GPU suite and, you know, big topic in the keynote, you know, Jensen, the CEO of Nvidia was up on stage. VMware was talking a lot about AI solutions and how this is going to help. So help us bring us in you work with a lot of the customers theory. What is it that this enables for them and how to, you know, Dell and VMware bring, bring those solutions to bear? >> Yes, absolutely. It's one statistic I'll start with. Can you believe that only on average, 15 to 20% of GPU are fully utilized? So, when you think about the amount of technology that's are at our fingertips and especially in a world today where we need that technology to advance research and scientistic discoveries. Wouldn't it be fantastic to utilize those GPU's to the best of our ability? And it's not just GPU's , I think the industry has in the IT world, leverage virtualization to get to the maximum recycles for CPU's and storage and networking. Now you're bringing the GPU in the fold and you have a perfect utilization and also flexibility across all those resources. So what we've seen is that convergence between the IT world that was highly virtualized, and then this highly optimized world of HPC and AI because of the resources out there and researchers, but also data scientists and company want to be able to run their day to day activities on that infrastructure. But then when they have a big surge need for research or a data science use that same environment and then seamlessly move things around workload wise. >> Yeah, okay I do believe your stat. You know, the joke we always have is, you know, anybody from a networking background, there's no such thing as eliminating a bottleneck, you just move it. And if you talk about utilization, we've been playing the shell game for my entire career of, let's try to optimize one thing and then, oh, there's something else that we're not doing. So,you know, so important. Retail, I want to hear from your standpoint, you know, virtualization and HPC, you know, AI type of uses there. What value does this bring to you and, you know, and key learnings you've had in your organization? >> So, we as a university are a big users of the VMware technologies starting from the traditional enterprise workload and VPI. We started from there in the sense that we have an installation quite significant. But also almost all the services that the university gives to our internal users, either personnel or staff or students. At a certain point that we decided to try to understand the, if a VMware virtualization would be good also for scientific computing. Why? Because at the end of the day, their request that we have from our internal users is flexibility. Flexibility in the sense of be fast in deploying, be fast to reconfiguring, try to have the latest beats on the software side, especially on the AI research. At the end of the day we designed a VMware solution like you, I can say like a whiteboard. We have a whiteboard, and we are able to design a new solution of this whiteboard and to deploy as fast as possible. Okay, what we face as IT is not a request of the maximum performance. Our researchers ask us for flexibility then, and want to be able to have the maximum possible flexibility in configuring the systems. 
How can I say, we can deploy a small test cluster on the virtual infrastructure in minutes, or we can use GPUs inside the infrastructure to test new algorithms for deep learning. And we can use faster storage inside the virtualization to see how a certain algorithm would behave, or our internal developers can leverage the latest, the best in storage, like NVMe and so on. And this is why, at a certain point, we decided to try virtualization as a base for HPC and scientific computing, and we are happy. >> Yeah, I think Maurizio described it: it's flexibility. And of course, if you think optimal performance, you're looking at bare metal, but in this day and age, as I stated at the beginning, there's so much technology, so much infrastructure available, that flexibility at times trumps the raw performance. So, when you have two different research departments, two different portions, two different parts of the company looking for an environment, no two environments are going to be exactly the same. So you have to be flexible in how you aggregate the different components of the infrastructure. And then think about today, it's actually fantastic. Maurizio was sharing with me earlier this year that at some point, as we all know, there was a lockdown. You couldn't really get into a data center and move different cables around or reconfigure servers to have the right ratio of memory to CPU, to storage, to accelerators, and having been at the forefront of this enablement has really benefited the University of Pisa and given them that flexibility that they really need. >> Wonderful, well, Maurizio, my understanding is, I believe you're giving a presentation as part of the activities this week. Give us a final glimpse as to, you know, what you want your peers to be taking away from what you've done? >> What we have done is something that is very simple, in the sense that we adapted some open source software to our infrastructure in order to enable our system managers and users to deploy HPC and AI solutions quickly and in an easy way on our VMware infrastructure. We started doing a sort of POC. We designed the test infrastructure early this year and then we went quickly to production because we were happy with the results. And so this is what we present, in the sense that you can have a lot of ways to deploy virtual HPC, but we went for a simple and open source solution. Also, thanks to our friends at Dell Technologies for some parts that enabled us to do the work and now to go into production. And as Thierry told you before, this helped a lot during the pandemic due to the fact that we had to stay at home. >> Wonderful, Thierry, I'll let you have the final word. What things are you drawing customers to, to really dig in? Obviously there's a cost savings, or are there any other things that this unlocks for them? >> Yeah, I mean, cost savings. We talked about flexibility. We talked about utilization. You don't want to have a lot of infrastructure sitting there and just waiting for a job to come in once every two months. And then there's also the world we live in, and we all live our life here through a video conference, or at times through the interface of our phone, and being able to have this web based interaction with a lot of infrastructure, and at times the best infrastructure in the world, makes things simpler, easier, and hopefully brings science to the fingertips of data scientists without having to worry about knowing every single detail of how to build up that infrastructure.
And with the help of the University of Pisa, one of our centers of excellence in Europe, we've been innovating and everything that's been accomplished for, you know at Pisa can be accomplished by our customers and our partners around the world. >> Thierry, Maurizio, thank you much for so much for sharing and congratulations on all I know you've done building up that COE. >> Thanks to you. >> Thank you. >> Stay with us, lots more covered from VMworld 2020. I'm Stu Miniman as always. Thank you for watching the theCUBE. (soft music)
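The 15 to 20% GPU utilization statistic cited in this segment is easier to appreciate with a rough calculation. The fleet size and target utilization below are assumptions for illustration only; only the baseline range comes from the conversation.

```python
# Rough illustration of the GPU utilization point made in the interview.
# Fleet size and target utilization are assumed; the 15-20% baseline is the
# figure quoted in the conversation.
gpus_in_fleet = 100                 # assumed fleet size
hours_per_month = 730
baseline_util = 0.175               # midpoint of the quoted 15-20% range
pooled_util = 0.60                  # assumed goal after virtualizing/pooling GPUs

baseline_gpu_hours = gpus_in_fleet * hours_per_month * baseline_util
pooled_gpu_hours = gpus_in_fleet * hours_per_month * pooled_util

print(f"Useful GPU-hours per month today:  {baseline_gpu_hours:,.0f}")
print(f"Useful GPU-hours per month pooled: {pooled_gpu_hours:,.0f}")
print(f"Capacity unlocked: {pooled_gpu_hours - baseline_gpu_hours:,.0f} GPU-hours/month")
```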

Published Date : Sep 30 2020



Vaughn Stewart, Pure Storage | VMworld 2020


 

>> Narrator: From around the globe, it's theCUBE. With digital coverage of VMworld 2020 brought to you by VMware and its ecosystem partners. >> Welcome back, I'm Stuart Miniman and this is theCUBES's coverage of VMworld 2020. Our 11th year doing the show and happy to welcome back to the program one of our CUBE's alums. Somebody that's is going to VMworld longer than we have been doing it for theCUBE. So Vaughn Stewart he is the Vice President of Technology Alliances with Pure Storage Vaughn, nice to see you. How you doing? >> Hey, Stu. CUBE thanks for having me back. I miss you guys I wish we were doing this in person. >> Yeah, we all wish we were in person but as we've been saying all this year, we get to be together even while we're apart. So we look to you on little screens and things like that rather than bumping into each other at some of the after parties or the coffee shops all around San Francisco. So Vaughn, obviously you know Pure Storage long, long, long partnership with VMware. I think back the first time that I probably met with the Pure team, in person, it probably was around Moscone, having a breakfast having a lunch, having a briefing or the likes. So just give us the high level. I know we've got a lot of things to dig into. Pure and VMware, how's the partnership going these days? >> Partnership is growing fantastic Pure invests a lot of engineering resources in programs with VMware. Particularly the VMware design partner programs for vVols, Container-Native Storage et cetera. The relationship is healthy the business is growing strong. I'm very excited about the investments that VMware is making around VMware Cloud Foundation as a replatforming of what's going on MPREM to help better enable hybrid cloud and to support Tanzu and Kubernetes platforms. So a lot going on at the infrastructure level that ultimately helps customers of all to adopt cloud native workloads and applications. >> Wonderful. Well a lot of pieces to unpack that. Of course Tanzu big piece of what they're talking about. But let's start. You mentioned VCF. You know what is it on the infrastructure side, that is kind of driving your customer adoption these days, and the some of the latest integrations that you're doing? >> Yeah you know VCF has really caught the attention of our mid to large or mid to enterprise size customers. The focus around, as I use the phrase replatform is planning out with VMworld phrase. But the focus on simplifying the lifecycle management, giving you a greater means to connect to the public cloud. I don't know if you're aware, but all VMware public cloud offerings have the VCF framework in terms of architectural framework. So now bringing that back on-prem, allowing customers on a per workload domain basis to extend to a hybrid cloud capability. It's a really big advancement from kind of the base vSphere infrastructure, which architecturally hasn't had a significant advancement in a number of years. What's really big around VCF besides the hybrid connectivity, is the couple of new tools SDDC Manager and vSphere Lifecycle Manager. These tools can actually manage the infrastructure from bare metal up to workload domains and then from workload domains you're now handing off to considered like delegated vCenter Servers right? So that the owner of a workload if you will and then that person can go ahead and provision virtual machines or containers, based on whatever is required to run their workloads. So for us the big gain of this is the advancement in the VMware management. 
They are bringing their strength in providing simplicity and end-to-end, hardware to application management to disaggregated architectures, where the focus of that capability has been with HCI over, say, the past five or six years. And so this really helps close that last gap, if you will, and completes a 360 degree view of providing simplified management across dissimilar architectures, and it's consistent and it's standardized by VMware. So HCI, disaggregated architecture, public cloud, it all operates the same. >> So Vaughn, you made a comment about not a lot of changes. If I remember, our friends at VMware made a statement that vSphere 7 was the biggest architectural change in over a decade. Of course bringing in Kubernetes, it's a major piece of the Tanzu discussion. Pure. Your team's been pretty busy in the Kubernetes space too. Recent acquisition of Portworx to help accelerate that. Maybe let's talk a little bit about, you know, cloud native. What you're hearing from your customers. (chuckles) And yeah, Dave Vellante had a nice interview with the Pure and Portworx CEOs. Give the VMworld audience a little bit of an update as to, you know, where you all fit in the Kubernetes space. >> Yeah and actually, there was a lot that you shared there, kind of in connecting the VCF piece through to vSphere 7 and a lot of changes there in driving into Tanzu and containers. So maybe we're going to jump around here a bit, but look, we're really excited. We've been working with VMware, but in addition to all of our application partners, you are seeing nearly every traditional enterprise application being replatformed to support containers. I'd love to share with you more details, but there's a lot of NDAs I'd be breaking in that. But the wave of enterprise adoption of containers is right upon us. And so the timing for VMware Tanzu is ideal. Our focus has always been around providing a rich set of data services. One that provides faster provisioning, simplified fleet management, and the ability to move that container and those data services between different clouds and different cloud platforms, be it on-prem or in the public cloud space. We've had a lot of success doing that with Pure Service Orchestrator; Version 6.0 enables CSI compliant persistent storage capabilities. And it does support Tanzu today. The addition, or I should say the acquisition, of Portworx is really interesting. Because now we're bringing on an enhanced set of data services that not only run on Pure Storage products, but run universally regardless of the storage platform or the cloud architecture. The capabilities within Portworx are above and beyond what we had in PSO. So this is a great expansion of our capabilities. And ultimately we want to help customers. Whether they want to do containers solely on Tanzu, or if they're going to mix Tanzu with say Amazon EKS, or they've got some department that does development on OpenShift. Whatever it might be. You know that the focus of storage vendors is obviously to help customers make that data available on these platforms through a consistent control plane.
Obviously, you've got a great partnership with VMware, but as you said, in Amazon and some of the other hyper clouds those clouds, those storage services, no matter where a customer is, so that that core value, of course we know, is this the software underneath it. And that's what Portworx is. So you know not only Pure's, but other hardware, other clouds and the likes. So a really interesting space You know Vaughn, you and I've been covering this, since the early days of VMware. Hey this software is kind of a big deal and you know (chuckles) cloud in many ways is an extension of what we're doing. I know we used to joke how many years was it that VMworld was storage world? You know. >> Ooh yeah. >> There was talk about like big architectural changes, you know vVols When that finally came out, it was years of hard work by many of the big companies, including your previous and current you know employer. What's the latest? My understanding is that there are some updates there when it comes to the underlying vVols. What are the storage people need to know? >> Yeah. So great question and VMware is always been infrastructure world really Right? Like it is a showcase for storage. But it's also been a showcase for the compute vendors and every Intel partner. From a storage perspective, a lot is going on this year that should really excite both VMware admins and those who are storage centric in their day-to-day jobs. Let's start with the recent news. vVols has been promoted within VCF to being principal storage. For those of you who maybe are unfamiliar with this term 'principal storage' VMware Cloud Foundation supports any form of storage that's supported by vSphere. But SDDC manager tool that I was sharing with you earlier that really excites large scale organizations around it's end-to-end simplicity and management. It had a smaller, less robust support list when it comes to provisioning external storage. And so it had two tiers. Principal and secondary. Principal meant SDDC manager could provision and deprovision sub-tenants. So the recent news brings vVols both on Fiber Channel and iSCSI up to that principal tier. Pure Storage is a VMware design partner around vVols. We are one of the most adopted vVols storage platforms, and we are really leaning in on VCF. So we are very happy to see that come to fruition for our customers. Part of why VMware partners with Pure Storage around VCF, is they want VCF enabled on any Fabric. And you know some vendors only offer ethernet only forms of connectivity. But with Pure Storage, we don't care what your Fabric is right. We just want to provide the data services be it ethernet, fiber channel or next generation NVMe over Fabric. That last point segments into another recent announcement from from VMware. Which is the support for NVMe over Fabric within vSphere 7. This is key because NVMe over Fabric allows the IO path to move away from SCSI based form of communication one to a memory based form of communication. And this unleashes a new level of performance, a way to better support those business and mission critical applications. Or a way to drive greater density into a smaller form factor and footprint within your data center. Obviously Fabric upgrades tend to not happen in conjunction with hypervisor upgrades, but the ability to provide customers a roadmap and a means to be able to continually evolve their infrastructure non disruptively, is our key there. 
It would be remiss of me to not point out one kind of orthogonal element, which is the new vMotion capabilities that are in vSphere 7. Customers have been tried for a number of years, probably from vSphere 4 through six to virtualize more performance centric and resource intense applications. And they've had some challenges around scale, particularly with the non-disruptive. The ability to non disruptively move a workload. VMware rewrote vMotion for vSphere 7 so it can tackle these larger more performance centric workloads. And when you combine that along with the addition of like NVMe over Fabric support, I think you're truly at a time where you can say, almost every workload can run on a VMware platform, right? From your traditional two two consolidation where you started to looking at performance centric AI, in machine learning workloads. >> Yeah. A lot of pieces you just walked through Vaughn, I'm glad especially the NVMe over Fabric piece. Just want to drill down one level there. As you said, there's a lot of pieces to make sure that this is fully worked. The standards are done, the software is there, the hardware, the various interconnects there and then okay, when's does the customer actually ready to upgrade that? How much of that is just you know okay hitting the update button. How much of that is do I need to do a refresh? And we understand that the testing and purchasing cycles there. So how many customers are you talking to that are like, "Okay I've got all the pieces, "we're ready to roll, "we're implementing in 2020." And you know, what's that roadmap look like for kind of the typical enterprise, which I know is a bit of an oxymoron? (laughs) >> So we've got a handful. I think that's a fair way to give you a size without giving you an exact number. We had a handful of customers who have NVMe over Fabric deployments today. The deployments tend to be application or workload centric versus ubiquitous across the data center. Which I think does bear an opportunity for VMware adoption to be a little bit earlier than across the entire data center. Because most VMware architectures today are based on top of rack switching. Whether that switching is fiber channel or ethernet base, I think the ability to then upgrade that switch. Either you've got modern hardware and it just needs a firmware update, or you've got to replace that hardware and implement NVMe over Fabric. I think that's very attractive. Particularly that you can do so in a non disruptive manner with a flash array or with flash deck. We expect to see the adoption really start to take take hold in 2021. But you probably won't see large market gains until 2022 or 23. >> Well that's super helpful Vaughn especially Pure Storage you've got customers that have some of the most demanding performance environments out there. So they are some of the early adopters that you would expect go into adopting this new technology. All right. I guess last piece, listening to the keynote looking at all the announcements that they have you know, VMware obviously has a big push into the cloud native space they've made a whole lot of acquisitions. We touched on a little bit before but what's your take as to what you are hearing from your customers, where they are with adoption into really modernizing and accelerating their businesses today? >> I think for the majority of our customers and again I would consider more of a commercial or mid market centric up through enterprise. 
Particularly enterprise, they've adopted cloud native technologies, particularly in developing their own internal or customer facing applications. So I don't think the technology is new. I think where it's newer is this re-platforming of enterprise applications, and I think that's what's driving the timeline for VMware. We have a number of Pivotal deployments that run on Pure, very large scale Pivotal deployments that run on Pure. And hopefully, as your audience knows, Pivotal is what VMware Tanzu has been rebranded as. So we've had success there. We've had success in the test and development and in the web facing application space. But now this is a broader initiative from VMware, supporting enterprise apps along with, you know, the cloud native disaggregated applications that have been built over the last, say, five to 10 years, but providing it through a single management plane. So I'm bullish, I'm really bullish. I think they are in a unique position compared to the rest of our technology partners. You know, they own the enterprise virtualization real estate, and so their ability to successfully add cloud native applications to that, I think it's a powerful mix. For us the opportunity is great. I want to thank you for focusing on the fact that we've been able to deliver performance. But performance is found on any flash product. And it's not to demote our performance by any means, but when you look at our customers and why they purchase from us, in terms of the repeat purchases, it's around simplicity, it's around the native integration with VMware and the extending of that value prop through our capabilities, whether it's through the end-to-end infrastructure management, through data protection extending into the hybrid cloud. That's where Pure Storage customers fall in love with Pure Storage. And so it's a combination of performance, simplicity and ultimately, you know, economics. As we know, economics drive most technical decisions, not the actual technology itself. >> Well, Vaughn Stewart, thank you so much for the update, congratulations on all the new things that are being brought out in the partnership. >> Thank you Stu, appreciate being on theCUBE. Big shout out to VMware, congratulations on VMworld 2020, look forward to seeing everybody soon. >> All right, stay tuned for more coverage of VMworld 2020. I'm Stu Miniman and thank you for watching theCUBE. (bright upbeat music)
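For readers who want to see what the CSI-based persistent storage discussed earlier in this conversation looks like in practice, here is a minimal sketch using the official Kubernetes Python client. The storage class name, namespace, and size are placeholders; in a real cluster you would use whatever class a CSI driver such as Pure Service Orchestrator or Portworx registers.

```python
# Minimal sketch: requesting CSI-provisioned persistent storage in Kubernetes.
# Storage class name, namespace, and size are assumed placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside a pod
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="block-csi",  # assumed class registered by the CSI driver
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
# The CSI driver provisions a volume on the backing array or platform, and any
# pod that references the "demo-data" claim can mount it.
```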

Published Date : Sep 30 2020


Eric Herzog, IBM | VMworld 2020


 

>> Announcer: From around the globe, it's theCUBE. With digital coverage of VMworld 2020, brought to you by VMware and its ecosystem partners. >> Welcome back, I'm Stu Miniman. This is theCUBE's coverage of VMworld 2020 of course, happening virtually. And there are certain people that we talk to every year at theCUBE, and this guest, I believe, has been on theCUBE at VMworld more than any others. It's actually not Pat Gelsinger, Eric Herzog. He is the chief marketing officer and vice president of global storage channels at IBM. Eric, Mr. Zoginstor, welcome back to theCUBE, nice to see you. >> Thank you very much, Stu. IBM always enjoys hanging with you, John, and Dave. And again, glad to be here, although not in person this time at VMworld 2020 virtual. Thanks again for having IBM. >> Alright, so, you know, some things are the same, others, very different. Of course, Eric, IBM, a long, long partner of VMware's. Why don't you set up for us a little bit, you know, 2020, the major engagements, what's new with IBM and VMware? >> So, a couple of things, first of all, we have made our Spectrum Virtualize software, software defined block storage work in virtual machines, both in AWS and IBM Cloud. So we started with IBM Cloud and then earlier this year with AWS. So now we have two different cloud platforms where our Spectrum Virtualize software sits in a VM at the cloud provider. The other thing we've done, of course, is V7 support. In fact, I've done several VMUGs. And in fact, my session at VMworld is going to talk about both our support for V7 but also what we're doing with containers, CSI, Kubernetes overall, and how we can support that in a virtual VMware environment, and also we're doing with traditional ESX and VMware configurations as well. And of course, out to the cloud, as I just talked about. >> Yeah, that discussion of hybrid cloud, Eric, is one that we've been hearing from IBM for a long time. And VMware has had that message, but their cloud solutions have really matured. They've got a whole group going deep on cloud native. The Amazon solutions have been something that they've been partnering, making sure that, you know, data protection, it can span between, you know, the traditional data center environment where VMware is so dominant, and the public clouds. You're giving a session on some of those hybrid cloud solutions, so share with us a little bit, you know, where do the visions completely agree? What's some of the differences between what IBM is doing and maybe what people are hearing from VMware? >> Well, first of all, our solutions don't always require VMware to be installed. So for example, if you're doing it in a container environment, for example, with Red Hat OpenShift, that works slightly different. Not that you can't run Red Hat products inside of a virtual machine, which you can, but in this case, I'm talking Red Hat native. We also of course do VMware native and support what VMware has announced with their Kubernetes based solutions that they've been talking about since VMworld last year, obviously when Pat made some big announcements onstage about what they were doing in the container space. So we've been following that along as well. So from that perspective, we have agreement on a virtual machine perspective and of course, what VMware is doing with the container space. But then also a slightly different one when we're doing Red Hat OpenShift as a native configuration, without having a virtual machine involved in that configuration. 
So those are both the commonalities and the differences in what we're doing with VMware in a hybrid cloud configuration. >> Yeah. Eric, you and I both have some of those scars from making sure that storage works in a virtual environment. It took us about a decade to get things to really work at the VM level. Containers, it's been about five years, and it feels like we've made faster progress to make sure that we can have stateful environments and we can tie in with storage. But give us a little bit of a look back as to what we've learned and how we've made sure that containerized, Kubernetes environments, you know, work well with storage for customers today. >> Well, I think there's a couple of things. First of all, I think all the storage vendors learned from VMware, and then from the expansion of virtual environments beyond VMware to other virtual environments as well. So I think all the storage vendors, including IBM, learned through that process: okay, when the next thing comes, which of course in this case happens to be containers, both in a VMware environment but also in an open environment with the Kubernetes management framework, you need to be able to support it. So for example, we have done several different things. We support persistent volumes in file, block and object store. And we started with that almost three years ago on the block side, then we added the file side and now the object storage side. We also can back up data that's in those containers, which is an important feature, right? I'm sitting there and I've got data now in a persistent volume, but I've got to back it up as well. So we've announced support for container based backup, either with Red Hat OpenShift or in a generic Kubernetes environment, because we're realistic at IBM. We know that you have to exist in the software infrastructure milieu, and that includes VMware and competitors of VMware. It includes Red Hat OpenShift, but also competitors to Red Hat. And we've made sure that we support whatever the end user needs. So if they're going with Red Hat, great. If they're going with a generic container environment, great. If they're going to use VMware's container solutions, great. And on the virtualization engines, the same thing. We started with VMware, but we've also added other virtualization engines. So I think the storage community as a whole, and IBM in particular, has learned we need to be ready day one. And like I said, three years ago we already had persistent volume support for block store. It's still the dominant storage, and we had that three years ago. So for us, that would be really, I guess, two years from what you've talked about, when containers started to take off. And within two years we had something going that was working at the end user level, that our sales team could sell, that our business partners could sell. As you know, many of the business partners are really rallying around containers, whether it be Red Hat or what I'll call a more generic environment as well. They're seeing the forest through the trees. I do think when you look at it from an end user perspective, though, you're going to see all three. So, particularly in the Global Fortune 1000, you're going to see Red Hat environments, generic Kubernetes environments, VMware environments, just like you often see, in some instances, heterogeneous virtualization environments, and you're still going to see bare metal. So I think it's going to vary by application workload and use case.
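As a concrete aside on the persistent-volume support described above: from the application side, asking Kubernetes for block- or file-backed storage is just a PersistentVolumeClaim. The sketch below uses the official Kubernetes Python client; the storage class name and size are illustrative assumptions, not defaults of any particular IBM or VMware product.

```python
# Minimal sketch: request a persistent volume for a container via the
# Kubernetes Python client. The storage class name is a placeholder --
# substitute whatever class your CSI driver (block or file) exposes.
from kubernetes import client, config

def create_pvc(namespace="default", name="demo-data",
               storage_class="block-storage-class", size="100Gi"):
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],        # single-node, block-style access
            storage_class_name=storage_class,      # assumed class name
            resources=client.V1ResourceRequirements(
                requests={"storage": size}),
        ),
    )
    return client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=pvc)

if __name__ == "__main__":
    print(create_pvc().metadata.name)
```

Whichever CSI driver sits underneath, the claim looks the same to the developer, which is part of the portability being described here.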
And I think all, I'd say midsize enterprise and up, let's say a $5 billion company and up, probably will have at least two, if not all three, of those environments: container, virtual machine, and bare metal. So we need to make sure that at IBM we support all those environments to keep those customers happy. >> Yeah, well, Eric, I think everybody in the industry knows IBM can span those environments, you know, with support through generations, and very much knows that everything in IT tends to be additive. You mentioned customers, Eric; you talk to a lot of customers. So bring us inside, give us a couple of examples if you would: how are they dealing with this transition? For years we've been talking about, you know, enabling developers, having them be tied more tightly with what the enterprise is doing. So what are you seeing from some of your customers today? >> Well, I think the key thing is data reuse. So, in this case, think of a backup, a snap or a replica dataset, which is real world data, and being able to use that and reuse that. And now the storage guys want to make sure they know who has, if you will, checked it out. We do that with our Spectrum Copy Data Management. You also have, of course, integration with the Ansible framework, which IBM supports; in fact, we'll be announcing some additional support for more features in Ansible coming at the end of October. We'll be doing a large launch, very heavily on containers: containers and primary storage, containers in hybrid cloud environments, containers in big data and AI environments, and containers in the modern data protection and cyber resiliency space as well. So we'll be talking about some additional support in this case around Ansible as well. So you want to make sure, one of the key things, I think, if you're a storage guy, if I'm the VP of infrastructure, or I'm the CIO, even if I'm not a storage person, in fact, if you think about it, I'm almost 70 now. I have never, ever, ever, ever met a CIO who used to be a storage guy, ever. I've been with big companies, I was at EMC, I was at Seagate Maxtor, I've been at IBM actually twice. I've also done seven startups, as you guys know at theCUBE. I have never, ever met a CIO who used to be a storage person. Ever, in all those years. So, what appeals to them is, how do I let the dev guys and the test guys use that storage? At the same time, they're smart enough to know that the software guys and the test guys could actually screw up the storage, lose the data, or if they don't lose the data, cost them hundreds of thousands to millions of dollars because they did something wrong and they have to reconfigure all the storage solutions. So you want to make sure that the CIO is comfortable that the dev and the test teams can use that storage properly. That's a part of what Ansible's about. You want to make sure that you've got tight integration. So for example, we announced a container native version of our Spectrum Discover software, which gives you comprehensive metadata cataloging and indexing, not only for IBM's scale-out file system, Spectrum Scale, and IBM Cloud Object Storage, but also for Amazon S3 and also for NetApp filers and also for EMC Isilon. And it's container native. So you want to make sure, in that case, that we have an API, so the AI software guys or the big data software guys can interface with that API to Spectrum Discover and let it do all the work.
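To illustrate the pattern Herzog describes, where an AI or big-data pipeline asks the metadata catalog for what it needs instead of touching the storage directly, here is a rough sketch. The endpoint path, field names, and query shape are hypothetical placeholders for illustration only; they are not the documented Spectrum Discover API.

```python
# Hypothetical sketch of querying a metadata catalog over REST before
# kicking off an AI job. Endpoint paths, field names, and the query
# syntax here are illustrative assumptions, not a real product API.
import requests

CATALOG = "https://metadata-catalog.example.com/api"   # placeholder URL
TOKEN = "replace-with-a-real-token"

def find_candidate_files(tag, min_size_bytes):
    """Ask the catalog for files matching a tag, without scanning storage."""
    resp = requests.post(
        f"{CATALOG}/search",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"filter": {"tag": tag, "size_gte": min_size_bytes}},
        timeout=30,
    )
    resp.raise_for_status()
    return [item["path"] for item in resp.json().get("results", [])]

if __name__ == "__main__":
    paths = find_candidate_files(tag="training-images", min_size_bytes=1 << 20)
    print(f"{len(paths)} files selected for the Monday AI run")
```

The design point is that the data scientist works against a small, well-defined API surface while the storage team keeps control of the underlying systems.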
And we're talking about a piece of software that can traverse billions of objects in two seconds, billions of them. And is ideal to use in solutions that are hundreds of petabytes, up into multiple exabytes. So it's a great way that by having that API where the CIO is confident that the software guys can use the API, not mess up the storage because you know, the storage guys and the data scientists can configure Spectrum Discover and then save it as templates and run an AI workload every Monday, and then run a big data workload every Tuesday, and then Wednesday run a different AI workload and Thursday run a different big data. And so once they've set that up, everything is automated. And CIOs love automation, and they really are sensitive. Although they're all software guys, they are sensitive to software guys messing up the storage 'cause it could cost them money, right? So that's their concern. We make it easy. >> Absolutely, Eric, you know, it'd be lovely to say that storage is just invisible, I don't need to think about it, but when something goes wrong, you need those experts to be able to dig in. You spent some time talking about automation, so critically important. How about the management layer? You know, you think back, for years it was, vCenter would be the place that everything can plug in. You could have more generalists using it. The HCI waves were people kind of getting away from being storage specialists. Today VMware has, of course vCenter's their main estate, but they have Tanzu. On the IBM and Red Hat side, you know, this year you announced the Advanced Cluster Management. What's that management landscape look like? How does the storage get away from managing some of the bits and bytes and, you know, just embrace more of that automation that you talked about? >> So in the case of IBM, we make sure we can support both. We need to appeal to the storage nerd, the storage geek if you will. The same time to a more generalist environment, whether it be an infrastructure manager, whether it be some of the software guys. So for example, we support, obviously vCenter. We're going to be supporting all of the elements that are going to happen in a container environment that VMware is doing. We have hot integration and big time integration with Red Hat's management framework, both with Ansible, but also in the container space as well. We're announcing some things that are coming again at the end of October in the container space about how we interface with the Red Hat management schema. And so you don't always have to have the storage expert manage the storage. You can have the Red Hat administrator, or in some cases, the DevOps guys do it. So we're making sure that we can cover both sides of the fence. Some companies, this just my personal belief, that as containers become commonplace while the software guys are going to want to still control it, there eventually will be a Red Hat/container admin, just like all the big companies today have VMware admins. They all do. Or virtualization admins that cover VMware and VMware's competitors such as Hyper-V. They have specialized admins to run that. And you would argue, VMware is very easy to use, why aren't the software guys playing with it? 'Cause guess what? Those VMs are sitting on servers containing both apps and data. And if the software guy comes in to do something, messes it up, so what have of the big entities done? They've created basically a virtualization admin layer. 
I think that over time, either the virtualization admins become virtualization/container admins, or if it's big enough for both estates, there'll be container admins at the Global Fortune 500, and they'll also be virtualization admins. And then the software guys, the DevOps guys, will interface with that. There will always be a level of management framework. Which is why we integrate, for example, with vCenter, with what we're doing with Red Hat, with what we do with generic Kubernetes, to make sure that we can integrate there. So we'll make sure that we cover all areas, because a number of our customers are very large, but some of our customers are very small. In fact, we have a company that's in the software development space for autonomous driving. They have over a hundred petabytes of IBM Spectrum Scale in a container environment. So that's a small company that's gone all containers; at the same time, we have a bunch of, of course, Global Fortune 1000s where IBM plays exceedingly well that have our products. And they've got some stuff sitting in VMware, some stuff sitting in generic Kubernetes, some stuff sitting in Red Hat OpenShift and some stuff still in bare metal. And in some cases they don't want their software people to touch it; in other cases, these big accounts, they want their software people empowered. So we're going to make sure we can support both, and both management frameworks: the traditional storage management framework with each one of our products, and also management frameworks for virtualization, which we've already been doing, and now management frameworks for containers. We'll make sure we can cover all three of those bases 'cause that's what the big entities will want. And then in the smaller names, you'll have to see who wins out. I mean, they may still use all three in a small company, you really don't know, so you want to make sure you've got everything covered. And it's very easy for us to do this integration because of things we've already historically done, particularly with the virtualization environment. So yes, the interstices of the integration are different, but we know the process for doing the interconnectivity between a storage management framework and a generic management framework, originally of course with vCenter, and now we're doing it for the container world as well. So at least we've learned best practices, and now we're just tweaking those best practices for the difference between a container world and a virtualization world. >> Eric, VMworld is one of the biggest times of the year, where we all get together. I know how busy you are going to the show, meeting with customers, meeting with partners, you know, walking the hallways. You're one of the people that traveled more than I did pre-COVID. You know, you're always at the partner shows and meeting with people. Give us a little insight as to how you're making sure that with partners and customers, those conversations are still happening. We understand everything over video can be a little bit challenging, but what are you seeing here in 2020? How's everybody doing? >> Well, so, a couple of things. First of all, I already did two partner meetings today. (laughs) And I have an end user meeting, two end user meetings, tomorrow. So what we've done at IBM is make sure we do a couple of things. One, keep it short and to the point, okay? We have automated tools to actually show drawings, just like the infamous walk up to the whiteboard in a face-to-face meeting; we've got that.
We've also now tried to make sure everybody is not being overly inundated with WebEx. And by the way, there's already a lot of WebEx anyway. I can think of a meeting I had with a telco, one of the Fortune 300, and this was actually right before Thanksgiving. I was in their office in San Jose, but they had guys in Texas and guys on the East Coast all on. So we're still over WebEx, but it also was a two and a half hour meeting, actually almost a three hour meeting. And both myself and our Flash CTO went up to the whiteboard, which you could then see over WebEx 'cause they had a camera showing onto the whiteboard. So now you have to take that and use integrated tools. But since people are now, I would argue, over-WebExed, there is a different feel to doing a WebEx than when you're doing it face to face. We have to fly somewhere, or they have to fly somewhere, or we have to drive somewhere, so in between meetings, if you're going to do four customer calls, Stu, as you know, I travel all over the world. So I was in Sweden actually right before COVID. And in one day, the day after we had a launch, we launched our new FlashSystem products in February on the 11th, and on February 12th I was still in Stockholm and I had two partner meetings and two end user meetings. But the sales guy was driving me around, so in between the meetings you'd be in the car for 20 minutes or half an hour. It feels different when you do WebEx after WebEx after WebEx with basically no break. So you have to be sensitive to that when you're talking to your partners, sensitive to that when you're talking to the customers, sensitive when you're talking to the analysts, such as you guys, sensitive when you're talking to the press and all your various constituents. So what we've been doing at IBM, really, since the COVID thing got started, is coming up with some best practices so we don't overtax the end users and overtax our channel partners. >> Yeah, Eric, the joke I had on that is we're all following the Bill Belichick model now, no days off, just meeting, meeting, meeting every day, you can stack them up, right? You used to enjoy those downtimes in between where you could catch up on a call, do some things. I had to carve out some time to make sure that the stack of books that normally I would read in the airports or on flights got read, you know. I do enjoy reading a book every now and again, so. Final thing, I guess, Eric. Here at VMworld 2020, you know, give us the final takeaways that you want your customers to have when it comes to IBM and VMware. >> So a couple of things. A, we are tightly integrated and have been tightly integrated with what they've been doing in their traditional virtualization environment. As they move to containers we'll be tightly integrated with them as well, as well as with other container platforms, not just from IBM with Red Hat, but again, generic Kubernetes environments with open source container configurations that don't use IBM Red Hat and don't use VMware. So we want to make sure that we span that. In traditional VMware environments, like with Version 7 that came out, we make sure we support it. In fact, VMware just announced support for NVMe over Fibre Channel. Well, we've been shipping NVMe over Fibre Channel for just under two years now. It'll be almost two years, well, it will be two years in October. So we're sitting here in September, and it's almost been two years since we've been shipping that.
But they haven't supported it until now, so of course, as part of our launch the last week of October at IBM's TechU, it'll be on October 27th, you can join for free. You don't need to attend TechU, we'll have a free registration page. So just follow Zoginstor or look at my LinkedIn 'cause I'll be posting shortly when we have the link, but we'll be talking about things that we're doing around V7, with support for VMware's announcement of NVMe over Fibre Channel, even though we've had it for two years coming next month. But they're announcing support, so we're doing that as well. So all of those sort of checkbox items we'll continue to do as they push forward into the container world. IBM will be there right with them as well, because we know it's a very large world and we need to support everybody. We support VMware. We've supported their competitors in the virtualization space 'cause some customers have, in fact, some customers have both. They've got VMware and maybe one other of the virtualization elements. Usually VMware is the dominant one, of course, but if they've got even a little bit of it, we need to make sure our storage works with it. We're going to do the same thing in the container world. So we will continue to push forward with VMware. It's a tight relationship, not just with IBM Storage, but with the server group, and clearly with the cloud team. So we need to make sure that IBM as a company stays very close to VMware, as well as, obviously, what we're doing with Red Hat. And IBM Storage makes sure we will do both. I like to say that IBM Storage is the Switzerland of the storage industry. We work with everyone. We work with all these infrastructure players from the software world. And even with our competitors: our Spectrum Virtualize software that comes on our FlashSystem arrays supports over 550 different storage arrays that are not IBM's. It delivers enterprise-class data services, such as snapshots, replication, data-at-rest encryption, migration, all those features, but you can buy the software and use it with our competitors' storage arrays. So at IBM we've made a practice of making sure that we're very inclusive with our software business across the whole company, and in storage in particular with things like Spectrum Virtualize and with what we've done with our backup products; of course we back up everybody's stuff, not just ours. We're making sure we do the same thing in the virtualization environment, particularly with VMware and where they're going into the container world, and in what we're doing with our own, obviously, sister division, Red Hat, but even in a generic Kubernetes environment. Everyone's not going to buy Red Hat or VMware. There are people going to do industry-standard Kubernetes; they're going to use that, if you will, open source container environment with Kubernetes on top and not use VMware and not use Red Hat. We're going to make sure that if they do it, what I'll call generically, or if they use Red Hat, or if they use VMware, or some combo, we will support all of it, and that's very important for us at VMworld: to make sure everyone is aware that while we may own Red Hat, we have a very strong, powerful connection to VMware and we're going to continue to do that in the future as well.
>> Absolutely. So, definitely, Dave Vellante and John Furrier send their best, I'm Stu Miniman, and thank you as always for watching theCUBE. (relaxed electronic music)

Published Date : Sep 29 2020


Phil Bullinger, Western Digital | CUBE Conversation, August 2020


 

>> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a Cube conversation. >> Hey welcome back everybody, Jeff Frick here with theCUBE. We are in our Palo Alto studios, COVID is still going on, so all of the interviews continue to be remote, but we're excited to have a Cube alumni, he hasn't been on for a long time, and this guy has been in the weeds of the storage industry for a very very long time and we're happy to have him on and get an update because there continues to be a lot of exciting developments. He's Phil Bullinger, he is the SVP and general manager, data center business unit from Western Digital joining us, I think for Colorado, so Phil, great to see you, how's the weather in Colorado today? >> Hi Jeff, it's great to be here. Well, it's a hot, dry summer here, I'm sure like a lot of places. But yeah, enjoying the summer through these unusual times. >> It is unusual times, but fortunately there's great things like the internet and heavy duty compute and store out there so we can get together this way. So let's jump into it. You've been in the he business a long time, you've been at Western Digital, you were at EMC, you worked on Isilon, and you were at storage companies before that. And you've seen kind of this never-ending up and to the right slope that we see kind of ad nauseum in terms of the amount of storage demands. It's not going anywhere but up, and please increase complexity in terms of unstructure data, sources of data, speed of data, you know the kind of classic big V's of big data. So I wonder, before we jump into specifics, if you can kind of share your perspective 'cause you've been kind of sitting in the Catford seat, and Western Digital's a really unique company; you not only have solutions, but you also have media that feeds other people solutions. So you guys are really seeing and ultimately all this compute's got to put this data somewhere, and a whole lot of it's sitting on Western Digital. >> Yeah, it's a great intro there. Yeah, it's been interesting, through my career, I've seen a lot of advances in storage technology. Speeds and feeds like we often say, but the advancement through mechanical innovation, electrical innovation, chemistry, physics, just the relentless growth of data has been driven in many ways by the relentless acceleration and innovation of our ability to store that data, and that's been a very virtuous cycle through what, for me, has been 30 years in enterprise storage. There are some really interesting changes going on though I think. If you think about it, in a relatively short amount of time, data has gone from this artifact of our digital lives to the very engine that's driving the global economy. Our jobs, our relationships, our health, our security, they all kind of depend on data now, and for most companies, kind of irrespective of size, how you use data, how you store it, how you monetize it, how you use it to make better decisions to improve products and services, it becomes not just a matter of whether your company's going to thrive or not, but in many industries, it's almost an existential question; is your company going to be around in the future, and it depends on how well you're using data. So this drive to capitalize on the value of data is pretty significant. 
>> It's a really interesting topic, we've had a number of conversations around trying to get a book value of data, if you will, and I think there's a lot of conversations, whether it's in an accounting kind of way, or finance, or kind of goodwill, of how do you value this data? But I think we see it intrinsically in a lot of the big companies that are really data based, like the Facebooks and the Amazons and the Netflixes and the Googles, and those types of companies where it's really easy to see, and if you look at the valuation that they have compared to their book value of assets, it's really baked in there. So it's fundamental to going forward, and then we had this thing called COVID hit, which, I'm sure you've seen all the memes on social media: what drove your digital transformation, the CEO, the CMO, the board, or COVID-19? And it became this light switch moment where your opportunities to think about it are no more; you've got to jump in with both feet, and it's really interesting, to your point, that it's the ability to store this and think about it now differently, as an asset driving business value versus a cost that IT has to accommodate to put this stuff somewhere. So it's a really different kind of mind shift, and it really changes the investment equation for companies like Western Digital about how people should invest in higher performance and higher capacity and more unified storage, kind of democratizing the accessibility of that data to a much greater set of people with tools that can now start making much more business-line and in-line decisions than just the data scientist kind of on Mahogany Row. >> Yeah, as you mentioned, Jeff, here at Western Digital, we have such a unique kind of perch in the industry to see all the dynamics in the OEM space and the hyperscale space and the channel, really across all the global economies, around this growth of data. I have worked at several companies and have been familiar with what I would have called big data projects and fleets in the past. But at Western Digital, you have to move the decimal point quite a few digits to the right to get the perspective that we have on just the volume of data that the world is just relentlessly, insatiably consuming. Just a couple of examples: for the drive projects we're working on now, our capacity enterprise drive projects, you know, we used to do business case analysis and look at their lifecycle capacities and we measured them in exabytes; not anymore, now we're talking about zettabytes, we're actually measuring capacity enterprise drive families in terms of how many zettabytes they're going to ship in their lifecycle. If we look at just the consumption of this data, the last 12 months of industry TAM for capacity enterprise compared to the 12 months prior to that, that annual growth rate was north of 60%. And it's rare to see industries that are growing at that pace. And so the world is just consuming immense amounts of data, and as you mentioned, the COVID dynamics have been both an accelerant in some areas, as well as a headwind in others, but it's certainly accelerated digital transformation. I think a lot of companies were talking about digital transformation and hybrid models, and COVID has really accelerated that, and it's certainly driving, continues to drive, just this relentless need to store and access and take advantage of data.
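To make the zettabyte talk concrete, a quick back-of-the-envelope calculation. The 20 terabyte drive capacity and the roughly 60% annual growth rate come from the conversation above; the rest is plain arithmetic.

```python
# Back-of-the-envelope: what one zettabyte means in 20 TB hard drives,
# and what ~60% annual growth does to shipped capacity over a few years.
ZETTABYTE = 10**21            # bytes (decimal units, as drive capacities are quoted)
DRIVE_TB = 20                 # leading capacity-enterprise drive, per the interview
drive_bytes = DRIVE_TB * 10**12

drives_per_zb = ZETTABYTE / drive_bytes
print(f"Drives per zettabyte: {drives_per_zb:,.0f}")   # 50,000,000 drives

# Compound a 60% annual growth rate over five years.
capacity = 1.0                # normalize year-0 shipped capacity to 1
for year in range(1, 6):
    capacity *= 1.60
    print(f"Year {year}: {capacity:.1f}x year-0 capacity")
```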
>> Yeah, well Phil, in advance of this interview, I pulled up the old chart with all the different bytes: kilobytes, megabytes, gigabytes, terabytes, petabytes, exabytes, and zettabytes, and just per the Wikipedia page, what is a zettabyte? It's as much information as there are grains of sand on all the world's beaches. For one zettabyte. You're talking about thinking in terms of those units; I mean, that is just mind boggling to think that that is the scale at which we're operating. >> It's really hard to get your head wrapped around a zettabyte of storage, and I think a lot of the industry thinks when we say zettabyte-scale era that it's just a buzzword, but I'm here to say it's a real thing. We're measuring projects in terms of zettabytes now. >> That's amazing. Well, let's jump into some of the technology. So I've been fortunate enough here at theCUBE to be there at a couple of major announcements along the way. We talked before we turned the cameras on about the helium announcement, and having the hard drive sit in the fishbowl to get all types of interesting benefits from that less dense gas that is helium versus air. I was down at the MAMR and HAMR announcement, which was pretty interesting; big, heavy technology moves there to, again, increase the capacity of hard drive-based systems. You guys are doing a lot of stuff on RISC-V, which I know is an open source project, so you guys have a lot of things happening, but now there's this new thing called zoned storage. So first off, before we get into it, why do we need zoned storage, and really, what does it now bring to the table in terms of a capability? >> Yeah, great question, Jeff. So why now, right? Because, as I mentioned, I've been in storage for quite some time. In the last, let's just say in the last decade, we've seen the advent of the hyperscale model and certainly a whole other explosion level of data, and just the voracity with which the hyperscalers can create and consume and process and monetize data. And of course with that has also come a lot of innovation, frankly, in the compute space around how to process that data, moving from what was just a general purpose CPU model to GPUs and DPUs, and so we've seen a lot of innovation on that side. But frankly, on the storage side, we haven't seen much change at all in terms of how operating systems, applications, file systems, how they actually use the storage or communicate with the storage. And sure, we've seen advances in storage capacities; hard drives have gone from two to four, to eight, to 10, to 14, 16, and now our leading 18 and 20 terabyte hard drives. And similarly, on the SSD side, now we're dealing with capacities of seven, and 15, and 30 terabytes. So things have gotten larger, as you'd expect. And some interfaces have improved. I think NVMe, which we'll talk about, has been a nice advance in the industry; it's really now brought a very modern, scalable, low latency, multi-threaded interface to NAND flash, to take advantage of the inherent performance of transistor-based persistent storage. But really, when you think about it, it hasn't changed a lot. But what has changed is workloads. One thing that definitely has evolved over the last decade or so is that the thing driving a lot of this explosion of data in the industry is workloads that I would characterize as sequential in nature; they're serially captured and written.
They also have a very consistent lifecycle, so you would write them in a big chunk, you would read them maybe in smaller pieces, but the lifecycle of that data we can treat more as a chunk of data. But the problem is applications, operating systems, file systems continue to interface with storage using paradigms that are many decades old. The old 512-byte or even 4K sector size constructs were developed in the hard drive industry just as convenient paradigms to structure what is an unstructured sea of magnetic grains into something structured that can be used to store and access data. But the reality is, when we talk about SSDs, structure really matters, and so what has changed in the industry is that the workloads are driving very, very fresh looks at how more intelligence can be applied to that application-OS-storage device interface to drive much greater efficiency. >> Right, so there's two things going on here that I want to drill down on. On one hand, you talked about kind of the introduction of NAND and flash, and treating it generically like you did a regular hard drive. But you could get away with that and you could do some things, because the interface wasn't taking full advantage of the speed the NAND was capable of. But NVMe has changed that, and now forced kind of getting rid of some of those inefficient processes that you could live with, so it's just kind of a classic next-level step up in capabilities. First you get the better media and you just kind of plug it into the old way; now you're actually starting to put in processes that take full advantage of the speed that that flash has. And I think obviously prices have come down dramatically since the first introduction, where before it was always kind of cordoned off for super high end, super low latency, super high value apps, and now it just continues to spread and proliferate throughout the data center. So what did NVMe force you to think about in terms of maximizing the return on the NAND and flash? >> Yeah, NVMe, which we've been involved in standardizing, I think has been a very successful effort, but we have to remember NVMe is about a decade old, or even more if you count when the original work started around defining this interface, but it's been very successful. The NVMe standards body is a very productive, cross-company effort; it's really driven a significant change, and what we see now is the rapid adoption of NVMe in all data center architectures, whether it's very large hyperscale, to classic on-prem enterprise, to even smaller applications; it's just a very efficient interface mechanism for connecting SSDs into a server. So we continue to see evolution of NVMe, which is great, and we'll talk about ZNS today as one of those evolutions. We're also very keenly interested in the NVMe protocol over fabrics, and so one of the things that Western Digital has been talking about a lot lately is incorporating NVMe over fabrics as a mechanism for connecting shared storage into multiple host architectures. We think this is a very attractive way to build shared storage architectures of the future that are scalable, that are composable, that really have a lot more agility with respect to rack-level infrastructure and applying that infrastructure to applications.
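For readers who want to see what "connecting shared storage into multiple host architectures" over NVMe over Fabrics looks like from a Linux host, here is a small sketch that shells out to nvme-cli. The target address and NQN are made-up placeholders, and the flags should be checked against your nvme-cli version; this is an illustration of the attach step, not a recommended deployment.

```python
# Sketch: attach an NVMe over Fabrics (TCP) subsystem from a Linux host
# using nvme-cli. The target address and NQN below are placeholders.
import subprocess

def nvmeof_connect(traddr, nqn, transport="tcp", trsvcid="4420"):
    cmd = [
        "nvme", "connect",
        "-t", transport,     # fabric transport: tcp, rdma, or fc
        "-a", traddr,        # target IP or address
        "-s", trsvcid,       # transport service id (port for TCP/RDMA)
        "-n", nqn,           # NVMe Qualified Name of the subsystem
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Placeholder values -- substitute your fabric target's details.
    nvmeof_connect("192.0.2.10", "nqn.2020-01.example:shared-subsys")
    subprocess.run(["nvme", "list"], check=True)  # show the new /dev/nvmeXnY
```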
>> Right, now one thing that might strike some people as kind of counterintuitive with zoned storage is that in zoning off parts of the media and thinking of the data in these big chunks, it feels contrary to the kind of atomization that we're seeing in the rest of the data center, right? So smaller units of compute, smaller units of storage, so that you can assemble and disassemble them in different quantities as needed. So what was the special attribute that you had to think about that actually comes back and provides a benefit in kind of re-chunking, if you will, into these zones versus trying to get as atomic as possible? >> Yeah, it's a great question, Jeff, and I think it's maybe not intuitive why zoned storage actually creates a more efficient storage paradigm when you're storing stuff essentially in larger blocks of data, but this is really where the intersection of structure and workload and sort of the nature of the data all come together. If you turn back the clock maybe four or five years, when SMR hard drives, host-managed SMR hard drives, first emerged on the scene, this was really taking advantage of the fact that the write head on a hard disk drive is larger than the read head, or rather the read head can be much smaller, and so the notion of overlapping or shingling the data on the drive, giving the read head a smaller target to read but the writer a larger write pad to write the data, could actually, what we found was, increase areal density significantly. And so that was really the emergence of this notion of sequentially written, larger blocks of data actually being much more efficiently stored, when you think about physically how it's being stored. What's very new now and really gaining a lot of traction is the SSD corollary to SMR on the hard drive. On the SSD side we have the ZNS specification, which is very similar: you divide up the namespace of an SSD into fixed-size zones, and those zones are written sequentially, but now those zones are intimately tied to the underlying physical architecture of the NAND itself; the dies, the planes, the read pages, the erase blocks. So that, in treating data as a block, you're actually eliminating a lot of the complexity and the work that an SSD has to do to emulate a legacy hard drive, and in doing so, you're increasing performance and endurance and the predictable performance of the device.
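A toy model helps make the sequential-write rule concrete. The sketch below simulates per-zone write pointers the way a ZNS namespace exposes them: writes must land at the pointer, and a zone has to be reset before it can be rewritten. The zone count and size are arbitrary illustration values, not real device geometry.

```python
# Toy model of ZNS-style zones: each zone only accepts writes at its
# write pointer and must be reset before being rewritten. This is the
# constraint that lets the device skip garbage collection.
class ZonedNamespace:
    def __init__(self, zones=4, zone_size=8):        # illustrative geometry
        self.zone_size = zone_size
        self.write_ptr = [0] * zones                  # next writable offset per zone

    def write(self, zone, offset, length):
        if offset != self.write_ptr[zone]:
            raise ValueError(
                f"zone {zone}: write at {offset} rejected, "
                f"write pointer is {self.write_ptr[zone]} (sequential only)")
        if offset + length > self.zone_size:
            raise ValueError(f"zone {zone}: write past zone capacity")
        self.write_ptr[zone] += length

    def reset(self, zone):
        self.write_ptr[zone] = 0                      # whole-zone erase, no GC needed

ns = ZonedNamespace()
ns.write(0, 0, 4)      # ok: starts at the write pointer
ns.write(0, 4, 4)      # ok: strictly sequential
try:
    ns.write(0, 0, 1)  # rejected: would overwrite without a reset
except ValueError as err:
    print(err)
ns.reset(0)            # host decides when the whole zone is reclaimable
ns.write(0, 0, 2)      # ok again after reset
```

Because the host only ever appends within a zone and resets whole zones, the device never has to relocate valid data behind the host's back, which is where the garbage-collection and over-provisioning savings discussed next come from.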
So effectively in the same drive architecture, with roughly the same bill of material used to build the drive, we can overlap or shingle the data on the drive. And generally for the customer, additional capacity. Today with our 18, 20 terabyte offerings that's on the order of just over 10%, but that delta is going to increase significantly going forward to 20% or more. And when you think about a hyperscale customer that has not hundreds or thousands of racks, but tens of thousands of racks. A 10 or 20% improvement in effective capacity is a tremendous TCO benefit, and the reason we do that is obvious. I mean, the economic paradigm that drives large at-scale data centers is total custom ownership, both acquisition costs and operating costs. And if you can put more storage in a square tile of data center space, you're going to generally use less power, you're going to run it more efficiently, you're actually, from an acquisition cost, you're getting a more efficient purchase of that capacity. And in doing that, our innovation, we benefit from it and our customers benefit from it. So the value proposition for zoned storage in capacity enterprise HDV is very clear, it's additional capacity. The exciting thing is, in the SSD side of things, or ZNS, it actually opens up even more value proposition for the customer. Because SSDs have had to emulate hard drives, there's been a lot of inefficiency and complexity inside an enterprise SSD dealing with things like garbage collection and right amplification reducing the endurance of the device. You have to over-provision, you have to insert as much as 20, 25, even 28% additional man bits inside the device just to allow for that extra space, that working space to deal with delete of data that are smaller than the block erase that the device supports. So you have to do a lot of reading and writing of data and cleaning up. It creates for a very complex environment. ZNS by mapping the zoned size with the physical structure of the SSD essentially eliminates garbage collection, it reduces over-provisioning by as much as 10x. And so if you were over provisioning by 20 or 25% on an enterprise SSD, and a ZNS SSD, that can be one or two percent. The other thing I have to keep in mind is enterprise SSD is typically incorporate D RAM and that D RAM is used to help manage all those dynamics that I just mentioned, but with a much simpler structure where the pointers to the data can be managed without all the D RAM. We can actually reduce the amount of D RAM in an enterprise SSD by as much as eight X. And if you think about the MILA material of an enterprise SSD, D RAM is number two on the list in terms of the most expensive bomb components. So ZNS and SSDs actually have a significant customer total cost of ownership impact. It's an exciting standard, and now that we have the standard ratified through the NVME working group, it can really accelerate the development of the software ecosystem around. >> Right, so let's shift gears and talk a little bit about less about the tech and more about the customers and the implementation of this. So you talked kind of generally, but are there certain types of workloads that you're seeing in the marketplace where this is a better fit or is it just really the big heavy lifts where they just need more and this is better? 
And then secondly, within these hyperscale companies, as well as just regular enterprises that are also seeing their data demands grow dramatically, are you seeing that this is a solution that they want to bring in for kind of the marginal, next data center, the extension of their data center or their next cloud region? Or are they doing lift and shift and ripping stuff out? Or do they have enough data growth organically that there's plenty of new stuff they can put on these new systems? >> Yeah, I love that. The large customers don't rip and shift; they ride their assets for a long lifecycle, 'cause with the relentless growth of data, you're primarily investing to handle what's coming in over the transom. But we're seeing solid adoption. In SMR, you know, we've been working on that for a number of years. We've got significant interest and investment, co-investment, our engineering and our customers' engineering, adapting the application environments to take advantage of SMR. The great thing is, now that we've got the ZNS standard ratified in the NVMe working group, we've got a very similar, and all approved, situation: we've got SMR standards that have been approved for some time in the SATA and SCSI standards, and now we've got the same thing in the NVMe standard. And the great thing is, once a company goes through the lift, so to speak, to adapt an application, file system, operating system, ecosystem to zoned storage, it pretty much works seamlessly between HDD and SSD, and so it's not an incremental investment when you're switching technologies. Obviously the early adopters of these technologies are going to be the large companies who design their own infrastructure, who have mega fleets of racks of infrastructure where these efficiencies really, really make a difference in terms of how they can monetize that data, how they compete against the landscape of competitors they have. For companies that are totally reliant on kind of off-the-shelf standard applications, that adoption curve is going to be longer, of course, because there are some software changes that you need to adopt to enable zoned storage. One of the things Western Digital has done and taken the lead on is creating a landing page for the industry with zonedstorage.io. It's a webpage that's actually an area where many companies can contribute open source tools, code, validation environments, technical documentation. It's not a marketing website; it's really a website built to land actual open source content that companies can use and leverage and contribute to, to accelerate the engineering work to adapt software stacks to zoned storage devices, and to share those things. >> Let me just follow up on that, 'cause, again, you've been around for a while, and I want to get your perspective on the power of open source. It used to be the best secrets, the best IP, were closely guarded and held inside, and now really we're in an age where that's not necessarily the case. The brilliant minds and use cases and people out there, just by definition, there are more groups of engineers, more engineers, outside your building than inside your building, and how has that really changed kind of the strategy in terms of development, when you can leverage open source?
>> Yeah, open source clearly has accelerated innovation across the industry in so many ways, and it's the paradigm around which companies have built business models and innovated on top of. I think it's always important as a company to understand what value add you're bringing and what value add the customers want to pay for. What unmet needs of your customers are you trying to solve for, and what's the best mechanism to do that? And do you want to spend your R&D recreating things, or leveraging what's available and innovating on top of it? It's all about ecosystem. I mean, the days where a single company could vertically integrate, top to bottom, a complete end solution, you know, those are few and far between. I think it's about collaboration and building ecosystems and operating within those. >> Yeah, it's such an interesting change. And one more thing, again, to get your perspective: you run the data center group, but there's this little thing happening out there that we see growing, IoT, the industrial internet of things, and edge computing, as we try to move more compute and storage and power kind of outside the pristine world of the data center and out towards where this data is being collected and processed, when you've got latency issues and all kinds of reasons to start to shift the balance of where the compute is, where the storage is, and how much you rely on the network. So when you look back from the storage perspective, with your history in this industry, and you start to see that basically everything is now going to be connected, generating data, and a lot of it is even open source; I talked to somebody the other day doing kind of open source computer vision on surveillance video. So the amount of stuff coming off of these machines is growing in crazy ways. At the same time, it can't all be processed at the data center, it can't all be shipped back and then have a decision made and then ship that information back out. So when you sit back and look at edge from your kind of historical perspective, what goes through your mind, what gets you excited, what are some opportunities that you see that maybe the layman is not paying close enough attention to? >> Yeah, it's really an exciting time in storage. I get asked that question from time to time, having been in storage for more than 30 years: you know, what was the most interesting time? And there have been a lot of them, but I wouldn't trade today's environment for any other in terms of just the velocity with which data is evolving and how it's being used and where it's being used. A TCO equation may describe what a data center looks like, but data locality will determine where it's located, and we're excited about the edge opportunity. We see that as a pretty significant, meaningful part of the TAM as we look out three to five years. Certainly 5G is driving much of that; I think any time you speed up the connected fabric, you're going to increase the storage and increase the processing of the data. So the edge opportunity is very interesting to us. We think a lot of it is driven by low latency workloads, so the concept of NVMe is very appropriate for that, we think, in general, with SSDs deployed in edge data centers, defined as anywhere from a meter to a few kilometers from the source of the data. We think that's going to be a very strong paradigm.
The workloads you mentioned, especially IoT, just machine-generated data in general, have now, I believe, eclipsed human-generated data in terms of the amount of data stored, and so we think that curve is just going to keep going in terms of machine-generated data. Much of that data is so well suited for zoned storage because it's sequential, it's sequentially written, it's captured, and it has a very consistent and homogeneous lifecycle associated with it. So we think what's going on with zoned storage in general, and ZNS and SMR specifically, is well suited for where a lot of the data growth is happening. And certainly we're going to see a lot of that at the edge. >> Well, Phil, it's always great to talk to somebody who's been in the same industry for 30 years and is excited about today and the future, as excited as they have been throughout their whole careers. So that really bodes well for you, bodes well for Western Digital, and we'll just keep hoping the smart people that you guys have over there keep working on the software and the physics and the mechanical engineering and keep moving this stuff along. It's really just amazing and just relentless. >> Yeah, it is relentless. What's exciting to me in particular, Jeff, is that we've driven storage advancements largely through, as I said, a number of engineering disciplines, and those are still going to be important going forward: the chemistry, the physics, the electrical, the hardware capabilities. But I think, as is widely recognized in the industry, it's a diminishing curve. I mean, the amount of energy, the amount of engineering effort, investment, the cost and complexity of these products to get to that next capacity step, is getting more difficult, not less. And so things like zoned storage, where we now bring intelligent data placement to this paradigm, are what I think makes this current juncture that we're at very exciting. >> Right, right. Well, it's applied AI, right? Ultimately you're going to have more and more compute power driving the storage process and how that stuff is managed. As more cycles become available and they're cheaper, and ultimately compute gets cheaper and cheaper, as you said, you guys just keep finding new ways to move the curve. And we didn't even get into the totally new material science, which is also coming down the pike at some point in time. >> Yeah, very exciting times. >> It's been great to catch up with you. I really enjoy the Western Digital story; I've been fortunate to sit in on a couple of chapters, so again, congrats to you, and we'll continue to watch and look forward to our next update. Hopefully it won't be another four years. >> Okay, thanks Jeff, I really appreciate the time. >> All right, thanks a lot. All right, he's Phil, I'm Jeff, you're watching theCUBE. Thanks for watching, we'll see you next time.

Published Date : Aug 25 2020



Silvano Gai, Pensando | Future Proof Your Enterprise 2020


 

>> Narrator: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, and welcome to this CUBE conversation. I'm Stu Miniman and I'm coming to you from our Boston-area studio. We've been digging in with the Pensando team to understand how they're fitting into the cloud, multi-cloud, edge discussion, and I'm really thrilled to welcome to the program a first-time guest, Silvano Gai; he's a fellow with Pensando. Silvano, really nice to see you again, thanks so much for joining us on theCUBE. >> Stuart, it's so nice to see you. We used to work together many years ago, and that was really good, and it is really nice to come to you from Oregon, from Bend, Oregon. A beautiful town in the high desert of Oregon. >> I do love the Pacific Northwest. I miss the planes and the hotels, I should say, I don't miss the planes and the hotels, but going to see some of the beautiful places is something I do miss, and getting to see people in the industry I do like. As you mentioned, you and I crossed paths back through some of the spin-ins, back when I was working for a very large storage company. You were working for Cisco, you were known for writing the book, you were a professor in Italy, and many of the people that worked on some of those technologies were your students. But Silvano, my understanding is you retired, so maybe share for our audience what brought you out of that retirement and into working once again with some of your former colleagues, and on the Pensando opportunity. >> I did retire for a while, I retired in 2011 from Cisco, if I remember correctly. But at the end of 2016, beginning of 2017, some old friends that you may remember and know called me to discuss some interesting ideas, which were basically the seed idea that is behind the Pensando product. The ideas were interesting. What we built, of course, is not exactly the original idea, because, you know, products evolve over time, but I think we have something interesting that is adequate and probably superb for the new way to design the data center network, both for enterprise and cloud. >> All right, and Silvano, I mentioned that you've written a number of books, really the authoritative look when new products have been released in the past. So you've got a new book, "Building a Future-Proof Cloud Infrastructure," and look at you, you've got the physical copy, I've only gotten the soft version. The title is really interesting. Help us understand how Pensando's platform is meeting that future-proof cloud infrastructure that you discuss. >> Well, networks have evolved dramatically in the data center and in the cloud. You know, now the speed of a classical server in the enterprise is probably 25 gigabits; in the cloud we are talking of 100 gigabits of speed for a server, going to 200 gigabits. Now, the backbones are ridiculously fast. We no longer use Spanning Tree and all that stuff, we no longer use the access-aggregation-core design. We switched to Clos networks, and with Clos networks we have a huge, enormous amount of bandwidth, and that is good, but it also implies that it is not easy to do services in a centralized fashion. If you want to do a service in a centralized fashion, what you end up doing is creating a giant bottleneck. There is this word that is being used, trombone or tromboning: you try to funnel all this traffic through the bottleneck, and this is not really going to work. 
The only place that you can really do services is at the edge, and this is not an invention; I mean, even the principle of the cloud is to move everything to the edge and keep the network as simple as possible. So we approach services with the same general philosophy. We try to move services to the edge, as close as possible to the server, basically at the border between the server and the network. And when I say services I mean three main categories of services. The networking services, of course: there is the basic layer two, layer three stuff, plus the bonding, you know, VLANs and what is needed to connect a server to a network. But then there is the overlay, overlays like VXLAN or Geneve, very, very important, basically to build a cloud infrastructure, and those are basically the network services. We can have others, but that sort of is the core of a network service. Some people want to run BGP, some people don't want to run BGP. There may be a VPN or things like that, but that is the core of a network service. Then of course, and we go back to the time we worked together, there are storage services. At that time we were discussing mostly Fibre Channel; now the bus world is clearly NVMe, but it's not just the bus world, it's really a new way of doing storage, and it is very, very interesting. So NVMe kinds of services are very important, and NVMe has a version that is called NVMe-oF, NVMe over Fabrics, which is basically a sort of remote version of NVMe. And then the third, last but not least, and probably the most important category, is security. And when I say that security is very, very important, you know, the fact that security is very important is clear to everybody these days, and I think security has two main branches in terms of services. There is the classical firewall and micro-segmentation, in which you basically try to enforce the fact that only who is allowed to access something can access it. But you don't, at that point, care too much about the privacy of the data. Then there is the other branch, which is encryption, in which you are not trying to decide who can or cannot access the resource, but you are basically caring about the privacy of the data, encrypting the data so that if it is hijacked, snooped or whatever, it cannot be decoded. >> Excellent. So Silvano, absolutely the edge is a huge opportunity. When someone looks at the overall solution and says you're putting something in the edge, you know, they could just say, "This really looks like a NIC." You talked about some of the previous engagements we'd worked on, host bus adapters, smart NICs and the like. There were some things we could build in, but there were limits that we had. So what differentiates the Pensando solution from what we would traditionally think of as an adapter card in the past? >> Well, the Pensando solution has multiple pieces, but in terms of hardware it has two main pieces. There is an ASIC that we call Capri internally. That ASIC is not strictly related to being used only in an adapter form; you can deploy it also in other form factors, in another part of the network, in other embodiments, et cetera. And then there is a card; the card has a PCIe interface and sits in a PCIe slot. So yes, in that sense, somebody can call it a NIC, and since it's a pretty good NIC, somebody can call it a smart NIC. 
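To make the overlay service described above a bit more concrete, here is a minimal Python sketch of a VXLAN-style encapsulation, assuming a simplified packet model: an inner Ethernet frame is prefixed with an 8-byte VXLAN header carrying a 24-bit VNI, and the result would then travel inside an outer UDP/IP packet to the remote tunnel endpoint. The helper names are invented for the example and the outer headers are omitted; on a card like the one discussed here, this kind of encapsulation is done in hardware at wire speed rather than in host software.

```python
# Minimal illustration of VXLAN-style overlay framing (per RFC 7348).
# Outer Ethernet/IP/UDP headers are intentionally omitted; a real data path
# builds them too, and hardware offload would do all of this inline.
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """8-byte VXLAN header: flags byte (0x08 = VNI valid), 24-bit VNI, reserved bytes."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24      # flags byte followed by 3 reserved bytes
    vni_and_reserved = vni << 8          # 24-bit VNI followed by 1 reserved byte
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

def encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Wrap an inner Ethernet frame for transport across the underlay.

    Returned payload = VXLAN header + original frame; in practice this sits
    inside an outer UDP/IP packet addressed to the remote VTEP.
    """
    return vxlan_header(vni) + inner_frame

if __name__ == "__main__":
    inner = b"\x00" * 14 + b"tenant packet"   # stand-in for an Ethernet frame
    overlay_payload = encapsulate(inner, vni=5001)
    print(len(overlay_payload), "bytes, destined to UDP port", VXLAN_PORT)
```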
We don't really like those two terms; we prefer to call it a DSC, a domain-specific card, but the real term that I like to use is domain-specific hardware, and I like to use domain-specific hardware because it's the same term that Hennessy and Patterson use in a beautiful piece of literature, their Turing Award lecture. It's on the internet, it's public; I really ask everybody to go and try to find it and listen to that beautiful piece of modern literature on computer architecture, the Turing Award lecture of Hennessy and Patterson. They introduced the concept of domain-specific hardware, and they also explain the justification for why it is now important to look at domain-specific hardware. And the justification, basically in a nutshell, and we can go deeper if you're interested, but in a nutshell it is that SPECint, the single-threaded performance measurement of a CPU, is not growing fast at all; it is only growing nowadays by a few percent a year, maybe 4% per year. And with this slow growth in the single-threaded performance of a core, you know, the cores need to be really used for user applications, for customer applications, and everything else can be moved to some domain-specific hardware that can do it in a much better fashion. And by no means do I imply that the DSC is the best example of domain-specific hardware. The best example of domain-specific hardware is in front of all of us, and that is GPUs. And not GPUs for graphics processing, which are also important, but GPUs used basically for artificial intelligence, for machine learning inference. You know, that is a piece of hardware that has shown that something can be done with performance that a general-purpose processor cannot match. >> Yeah, it's interesting, right. If you turn back the clock 10 or 15 years ago, I used to be in arguments, and you'd say, "Do you build an offload, or do you let it happen in software?" And I was always like, "Oh, well, Moore's law will mean that, you know, the software solution will always win, because if you bake it in hardware, it's too slow." It's a very different world today, and you talk about how fast things speed up. From your customer standpoint though, often some of those architectural things are something that I've looked to my suppliers to take care of. Speak to the use case: what does this all mean from a customer standpoint, what are some of those early use cases that you're looking at? >> Well, as always, you get a bit surprised by the use cases, in the sense that you start to design a product thinking that some of the coolest things will be the dominant use cases, and then you discover that something you had never really thought about has the most interesting use case. One that we have thought about since day one, but that is really becoming super interesting, is telemetry. Basically, measuring everything in the network and understanding what is happening in the network. I was speaking with a friend the other day, and the friend was asking me, "Oh, but we have had SNMP for many, many years; what is the difference between SNMP and telemetry?" And the difference, to me, the real difference, is that in SNMP, or in many of these management protocols, you involve a management plane, you involve a control plane, and then you go to read something that is in the data plane. But the process is so inefficient that you cannot really get a huge volume of data, and you cannot get it frequently enough, with enough performance. 
Doing telemetry means thinking about the data path, building a data path that is capable of not only measuring everything in real time, but also sending out those measurements without involving anything else, without involving the control path and the management path, so that the measurement becomes really very efficient and the data that you stream out becomes really usable, actionable data in real time. So telemetry is clearly the first one; it is important. One that, honestly, we had built but we weren't thinking was going to have so much success is what we call bidirectional ERSPAN. Basically, it is just the capability of copying data and sending the data that the card sees to a station. And that is very, very useful for replacing what are called TAP networks, which are just networks that many customers put in parallel to the real network, only to observe the real network and to be able to troubleshoot and diagnose problems in the real network. So these two features, telemetry and ERSPAN, which are basically troubleshooting features, are the two features that are getting the most traction at the beginning. >> You're talking about real-time things like telemetry. You know, the applications and the integrations that you need to deal with are so important. Back in some of the previous start-ups that you'd done, it was getting ready for, say, how do we optimize for virtualization; today you talk cloud-native architectures, streaming, very popular, very modular, often container-based solutions, and things change constantly. You look at some of these architectures: it's not a single thing that goes on for a long period of time, but lots of things that happen over shorter periods of time. So what integrations do you need to do, and architecturally, how do you build things to make them, as you say, future-proof for these kinds of cloud architectures? >> Yeah, what I mentioned were just the two low-hanging fruit, if you want, the first two low-hanging fruit of this architecture. But basically, the two that come immediately after, and where there is a huge amount of value, are the distributed stateful firewall, with micro-segmentation support, which is a huge topic in itself, so important nowadays that it is absolutely fundamental to being able to build a cloud. That is very important. And the second one is wire-rate encryption. There is so much demand for privacy, and so much demand to encrypt the data, not only between data centers but now also inside the data center. And when you look at a large bank, for example: a large bank is no longer a single organization. A large bank is multiple organizations that are compartmentalized by law, that need to keep things separate by law, by regulation, by SEC regulation. And if you don't have encryption, and if you don't have a distributed firewall, it is really very difficult to achieve that. And then, you know, there are other applications. We mentioned storage, NVMe, and that is a very nice application, and then we have even more if you go to look at load balancing between servers, doing compression for storage, and other possible applications. But I sort of lost your real question. >> So, just part of the pieces, when you look at integrations that Pensando needs to do, for maybe some of the applications that you would tie into, do any of those come to mind? >> Yeah, well, for sure. It depends; I see two main branches again. One is the cloud providers, and one is the enterprises. 
For the cloud providers, basically these cloud providers have a huge management infrastructure that is already built, and they just want the card to adapt to it, to be controllable by this huge management infrastructure. They already know which rules they want to send to the card, they already know which features they want to enable on the card. They already have all that; they just want the card to provide the data plane performance for that particular feature. So they're going to build something particular that is specific to that cloud provider, that adapts to that cloud provider's architecture. We want the flexibility of having an API on the card, like a REST API or gRPC, with which they can easily program, monitor and control the card. When you look at the enterprise, the situation is different. Enterprises are looking at two or three things. The first thing is a complete solution. They don't have the management infrastructure that a cloud provider has built. They want a complete solution that has the card and the management station and all that is required to have a working solution from day one, which is absolutely correct in an enterprise environment. They also want integration with the tools that they already have. If you look at mainstream enterprises, one dominant presence is clearly VMware virtualization, in terms of ESX and vSphere and NSX. And so most of the customers are asking us to integrate with VMware, which is a very reasonable demand. And then of course there are other players, not so much in the virtualization space, but for example in the data collection space and the data analysis space, and for sure Pensando doesn't want to reinvent the wheel there, doesn't want to build a data collector or a data analysis engine or whatever; there is a lot of work, and there is a lot out there, so integration with things like Splunk, for example, is kind of natural for Pensando. >> Excellent. So wait, you talked about some of the places where Pensando doesn't need to reinvent the wheel, and you talked through a lot of the different technology pieces. If I had to have you pull out one, what would you say is the biggest innovation that Pensando has built into the platform? >> Well, the biggest innovation is this P4 architecture. And the P4 architecture was a sort of gift that was given to us, in the sense that it was not invented for what we use it for. P4 was basically invented to have programmable switches. The first big P4 company was clearly Barefoot, which was then acquired by Intel, and Barefoot built a programmable switch. But if you look at the reality of today, most of the people want the network to be super easy. They don't want to program anything into the network. They want to program everything at the edge; they want to put all the intelligence and the programmability at the edge. So we borrowed the P4 architecture, which is a fantastic programmable architecture, and we implemented it there. It's also easier because the bandwidth is clearly more limited at the edge compared to the core of a network. And that P4 architecture gives us a huge advantage. If you, tomorrow, come up with the Stuart Encapsulation Super Duper Technology, I can implement in Capri the Stuart, whatever it was called, Super Duper Encapsulation Technology, even though when I designed the ASIC I didn't know that encapsulation existed. 
It's the data plane programmability, the capability to program the data plane, and to program the data plane while maintaining wire-speed performance, which I think is the biggest benefit of Pensando. >> All right, well, Silvano, thank you so much for sharing your journey with Pensando so far. Really interesting to dig into it, and we absolutely look forward to following the progress as it goes. >> Stuart, it's been really a pleasure to talk with you. I hope to talk with you again in the near future. Thank you so much. >> All right, and thank you for watching theCUBE, I'm Stu Miniman, thanks for watching. (upbeat music)
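As a loose analogy for the data-plane programmability described above, here is a small Python sketch of a P4-style match-action table, assuming an invented packet layout and table. The point of the sketch is that the pipeline machinery stays fixed while new behavior, such as a brand-new encapsulation, is added later as an action function plus a table entry. It is only a conceptual model; real P4 programs are compiled onto the ASIC and run at wire speed, and all names here are made up for illustration.

```python
# Conceptual model of a P4-style match-action pipeline: the pipeline itself is
# fixed, while headers, table entries, and actions are supplied as "programs".
# Purely illustrative; names and packet layout are invented for the example.

from typing import Callable, Dict, Tuple

Packet = Dict[str, object]            # parsed header fields plus payload
Action = Callable[[Packet], Packet]

class MatchActionTable:
    def __init__(self):
        self.entries: Dict[Tuple, Action] = {}

    def add_entry(self, key: Tuple, action: Action):
        """The control plane installs entries; the data plane only looks them up."""
        self.entries[key] = action

    def apply(self, packet: Packet, key_fields: Tuple[str, ...]) -> Packet:
        key = tuple(packet.get(f) for f in key_fields)
        action = self.entries.get(key, lambda p: p)   # default action: pass through
        return action(packet)

# A new encapsulation added *after* the pipeline was built:
def encap_super_duper(packet: Packet) -> Packet:
    packet["outer_header"] = {"proto": "super-duper", "tenant": packet["vlan"]}
    return packet

if __name__ == "__main__":
    tunnel_table = MatchActionTable()
    # Match on (ingress_port, vlan) and apply the new encapsulation.
    tunnel_table.add_entry((1, 100), encap_super_duper)

    pkt = {"ingress_port": 1, "vlan": 100, "payload": b"hello"}
    out = tunnel_table.apply(pkt, key_fields=("ingress_port", "vlan"))
    print(out["outer_header"])        # {'proto': 'super-duper', 'tenant': 100}
```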

Published Date : Jun 17 2020



How The Trade Desk Reports Against Two 320-node Clusters Packed with Raw Data


 

hi everybody thank you for joining us today for the virtual Vertica BBC 2020 today's breakout session is entitled Vertica and en mode at the trade desk my name is su LeClair director of marketing at Vertica and I'll be your host for this webinar joining me is Ron Cormier senior Vertica database engineer at the trade desk before we begin I encourage you to submit questions or comments during the virtual session you don't have to wait just type your question or comment in the question box below the slides and click submit there will be a Q&A session at the end of the presentation we'll answer as many questions as we're able to during that time any questions that we don't address we'll do our best to answer them offline alternatively you can visit vertical forums to post your questions there after the session our engineering team is planning to join the forums to keep the conversation going also a quick reminder that you can maximize your screen by clicking the double arrow button in the lower right corner of the slide and yes this virtual session is being recorded and will be available to view on demand this week we'll send you a notification as soon as it's ready so let's get started over to you run thanks - before I get started I'll just mention that my slide template was created before social distancing was a thing so hopefully some of the images will harken us back to a time when we could actually all be in the same room but with that I want to get started uh the date before I get started in thinking about the technology I just wanted to cover my background real quick because I think it's peach to where we're coming from with vertically on at the trade desk and I'll start out just by pointing out that prior to my time in the trade desk I was a tech consultant at HP HP America and so I traveled the world working with Vertica customers helping them configure install tune set up their verdict and databases and get them working properly so I've seen the biggest and the smallest implementations and everything in between and and so now I'm actually principal database engineer straight desk and and the reason I mentioned this is to let you know that I'm a practitioner I'm working with with the product every day or most days this is a marketing material so hopefully the the technical details in this presentation are are helpful I work with Vertica of course and that is most relative or relevant to our ETL and reporting stack and so what we're doing is we're taking about the data in the Vertica and running reports for our customers and we're an ad tech so I did want to just briefly describe what what that means and how it affects our implementation so I'm not going to cover the all the details of this slide but basically I want to point out that the trade desk is a DSP it's a demand-side provider and so we place ads on behalf of our customers or agencies and ad agencies and their customers that are advertised as brands themselves and the ads get placed on to websites and mobile applications and anywhere anywhere digital advertising happens so publishers are what we think ocean like we see here espn.com msn.com and so on and so every time a user goes to one of these sites or one of these digital places and an auction takes place and what people are bidding on is the privilege of showing and add one or more ads to users and so this is this is really important because it helps fund the internet ads can be annoying sometimes but they actually help help are incredibly helpful in how we get much much 
of our content and this is happening in real time at very high volumes so on the open Internet there is anywhere from seven to thirteen million auctions happening every second of those seven to thirteen million auctions happening every second the trade desk bids on hundreds of thousands per second um so that gives it and anytime we did we have an event that ends up in Vertica that's that's one of the main drivers of our data volume and certainly other events make their way into Vertica as well but that wanted to give you a sense of the scale of the data and sort of how it's impacting or how it is impacted by sort of real real people in the world so um the uh let's let's take a little bit more into the workload and and we have the three B's in spades late like many many people listening to a massive volume velocity and variety in terms of the data sizes I've got some information here some stats on on the raw data sizes that we deal with on a daily basis per day so we ingest 85 terabytes of raw data per day and then once we get it into Vertica we do some transformations we do matching which is like joins basically and we do some aggregation group buys to reduce the data and make it clean it up make it so it's more efficient to consume buy our reporting layer so that matching in aggregation produces about ten new terabytes of raw data per day it all comes from the it all comes from the data that was ingested but it's new data and so that's so it is reduced quite a bit but it's still pretty pretty high high volume and so we have this aggregated data that we then run reports on on behalf of our customers so we have about 40,000 reports per day oh that's probably that's actually a little bit old and older number it's probably closer to 50 or 55,000 reports per day at this point so it's I think probably a pretty common use case for for Vertica customers it's maybe a little different in the sense that most of the reports themselves are >> reports so they're not it's not a user sitting at a keyboard waiting for the result basically we have we we have a workflow where we do the ingest we do this transform and then and then once once all the data is available for a day we run reports on behalf of our customer to let me have our customers on that that daily data and then we send the reports out you via email or we drop them in a shared location and then they they look at the reports at some later point of time so it's up until yawn we did all this work on on enterprise Vertica at our peak we had four production enterprise clusters each which held two petabytes of raw data and I'll give you some details on on how those enterprise clusters were configured in the hardware but before I do that I want to talk about the reporting workload specifically so the the reporting workload is particularly lumpy and what I mean by that is there's a bunch of work that becomes available bunch of queries that we need to run in a short period of time after after the days just an aggregation is completed and then the clusters are relatively quiet for the remaining portion of the day that's not to say they are they're not doing anything as far as read workload but they certainly are but it's much less reactivity after that big spike so what I'm showing here is our reporting queue and the spike is is when all those reports become a bit sort of ailable to be processed we can't we can't process we can't run the report until we've done the full ingest and matching and aggregation for the day and so right around 1:00 or 2:00 
a.m. UTC time every day that's when we get this spike and the spike we affectionately called the UTC hump but basically it's a huge number of queries that need to be processed sort of as soon as possible and we have service levels that dictate what as soon as possible means but I think the spike illustrates our use case pretty pretty accurately and um it really as we'll see it's really well suited for pervert icky on and we'll see what that means so we've got our we had our enterprise clusters that I mentioned earlier and just to give you some details on what they look like there they were independent and mirrored and so what that means is all four clusters held the same data and we did this intentionally because we wanted to be able to run our report anywhere we so so we've got this big queue over port is big a number of reports that need to be run and we've got these we started we started with one cluster and then we got we found that it couldn't keep up so we added a second and we found the number of reports went up that we needed to run that short period of time and and so on so we eventually ended up with four Enterprise clusters basically with this with the and we'd say they were mirrored they all had the same data they weren't however synchronized they were independent and so basically we would run the the tailpipe line so to speak we would run ingest and the matching and the aggregation on all the clusters in parallel so they it wasn't as if each cluster proceeded to the next step in sync with which dump the other clusters they were run independently so it was sort of like each each cluster would eventually get get consistent and so this this worked pretty well for for us but it created some imbalances and there was some cost concerns that will dig into but just to tell you about each of these each of these clusters they each had 50 nodes they had 72 logical CPU cores a half half a terabyte of RAM a bunch of raid rated disk drives and 2 petabytes of raw data as I stated before so pretty big beefy nodes that are physical physical nodes that we held we had in our data centers we actually reached these nodes so so it was on our data center providers data centers and the these were these these were what we built our business on basically but there was a number of challenges that we ran into as we as we continue to build our business and add data and add workload and and the first one is is some in ceremony can relate to his capacity planning so we had to prove think about the future and try to predict the amount of work that was going to need to be done and how much hardware we were going to need to satisfy that work to meet that demand and that's that's just generally a hard thing to do it's very difficult to verdict the future as we can probably all attest to and how much the world has changed and even in the last month so it's a it's a very difficult thing to do to look six twelve eighteen eighteen months into the future and sort of get it right and and and what people what we tended to do is we reach or we tried to our art plans our estimates were very conservative so we overbought in a lot of cases and not only that we had to plan for the peak so we're planning for that that that point in time that those number of hours in the early morning when we had to we had all those reports to run and so that so so we ended up buying a lot of hardware and we actually sort of overbought at times and then and then as the hardware were days it would kind of come into it would come into maturity 
and we have our our our workload would sort of come approach matching the demand so that was one of the big challenges the next challenge is that we were running on disk you can we wanted to add data in sort of two dimensions the only dimensions that everybody can think about we wanted to add more columns to our big aggregates and we wanted to keep our big aggregates for for longer periods of time so both horizontally and vertically we wanted to expand the datasets but we basically were running out of disk there was no more disk in and it's hard to add a disc to Vertica in enterprise mode not not impossible but certainly hard and and one cannot add discs without adding compute because enterprise mode the disk is all local to each of the nodes for most most people you can do not exchange with sands and other external rays but that's there are a number of other challenges with that so um adding in order to add disk we had to add compute and that basically meant kept us out of balance we're adding more compute than we needed for the amount of disk so that was the problem certainly physical nodes getting them the order delivered racked cables even before we even start such Vertica there's lead times there and and so it's also long commitment since we like I mentioned me Lisa hardware so we were committing to these nodes these physical servers for two or three years at a time and I mentioned that can be a hard thing to do but we wanted to least to keep our capex down so we wanted to keep our aggregates for a long period of time we could have done crazy things or more exotic things to to help us with this if we had to in enterprise mode we could have started to like daisy chain clusters together and that would have been sort of a non-trivial engineering effort because we would need to then figure out how to migrate data source first to recharge the data across all the clusters and we had to migrate data from one cluster to another cluster hesitation and we would have to think about how to aggregate run queries across clusters so if you assured data set spans two clusters it would have had to sort of aggregated within each cluster maybe and then build something on top the aggregated the data from each of those clusters so not impossible things but certainly not easy things and luckily for us we started talking about two Vertica about separation of compute and storage and I know other customers were talking to Vertica as we were people had had these problems and so Vertica inyeon mode came to the rescue and what I want to do is just talk about nyan mode really briefly for for those in the audience who aren't familiar but it's basically Vertigo's answered to the separation of computing storage it allows one to scale compute and or storage separately and and this there's a number of advantages to doing that whereas in the old enterprise days when you add a compute you added stores and vice-versa now we can now we can add one or the other or both according to how we want to and so really briefly how this works this slide this figure was taken directly from the verdict and documentation and so just just to talk really briefly about how it works the taking advantage of the cloud and so in this case Amazon Web Services the elasticity in the cloud and basically we've got you seen two instances so elastic cloud compute servers that access data that's in an s3 bucket and so three three ec2 nodes and in a bucket or the the blue objects in this diagram and the difference is a couple of a couple of big 
differences one the data no longer the persistent storage of the data the data where the data lives is no longer on each of the notes the persistent stores of the data is in s3 bucket and so what that does is it basically solves one of our first big problems which is we were running out of disk the s3 has for all intensive purposes infinite storage so we can keep much more data there and that mostly solved one of our big problems so the persistent data lives on s3 now what happens is when a query runs it runs on one of the three nodes that you see here and assuming we'll talk about depo in a second but what happens in a brand new cluster where it's just just spun up the hardware is the query will will run on those ec2 nodes but there will be no data so those nodes will reach out to s3 and run the query on remote storage so that so the query that the nodes are literally reaching out to the communal storage for the data and processing it entirely without using any data on on the nodes themselves and so that that that works pretty well it's not as fast as if the data was local to the nodes but um what Vertica did is they built a caching layer on on each of the node and that's what the depot represents so the depot is some amount of disk that is relatively local to the ec2 node and so when the query runs on remote stores on the on the s3 data it then queues up the data for download to the nodes and so the data will get will reside in the Depot so that the next query or the subsequent subsequent queries can run on local storage instead of remote stores and that speeds things up quite a bit so that that's that's what the role of the Depot is the depot is basically a caching layer and we'll talk about the details of how we can see your in our Depot the other thing that I want to point out is that since this is the cloud another problem that helps us solve is the concurrency problem so you can imagine that these three nodes are one sort of cluster and what we can do is we can spit up another three nodes and have it point to the same s3 communal storage bucket so now we've got six nodes pointing to the same data but we've you isolated each of the three nodes so that they act as if they are their own cluster and so vertical calls them sub-clusters so we've got two sub clusters each of which has three nodes and what this has essentially done it is it doubled the concurrency doubled the number of queries that can run at any given time because we've now got this new place which new this new chunk of compute which which can answer queries and so that has given us the ability to add concurrency much faster and I'll point out that for since it's cloud and and there are on-demand pricing models we can have significant savings because when a sub cluster is not needed we can stop it and we pay almost nothing for it so that's that's really really important really helpful especially for our workload which I pointed out before was so lumpy so those hours of the day when it's relatively quiet I can go and stop a bunch of sub clusters and and I will pay for them so that that yields nice cost savings let's be on in a nutshell obviously engineers and the documentation can use a lot more information and I'm happy to field questions later on as well but I want to talk about how how we implemented beyond at the trade desk and so I'll start on the left hand side at the top the the what we're representing here is some clusters so there's some cluster 0 r e t l sub cluster and it is a our primary sub cluster so when you 
get into the world of eon there's primary Club questions and secondary sub classes and it has to do with quorum so primary sub clusters are the sub clusters that we always expect to be up and running and they they contribute to quorum they decide whether there's enough instances number a number of enough nodes to have the database start up and so these this is where we run our ETL workload which is the ingest the match in the aggregate part of the work that I talked about earlier so these nodes are always up and running because our ETL pipeline is always on we're internet ad tech company like I mentioned and so we're constantly getting costly running ad and there's always data flowing into the system and the matching is happening in the aggregation so that part happens 24/7 and we wanted so that those nodes will always be up and running and we need this we need that those process needs to be super efficient and so what that is reflected in our instance type so each of our sub clusters is sixty four nodes we'll talk about how we came at that number but the infant type for the ETL sub cluster the primary subclusters is I 3x large so that is one of the instance types that has quite a bit of nvme stores attached and we'll talk about that but on 32 cores 240 four gigs of ram on each node and and that what that allows us to do I should have put the amount of nvme but I think it's seven terabytes for anything me storage what that allows us to do is to basically ensure that our ETL everything that this sub cluster does is always in Depot and so that that makes sure that it's always fast now when we get to the secondary subclusters these are as mentioned secondary so they can stop and start and it won't affect the cluster going up or down so they're they're sort of independent and we've got four what we call Rhian subclusters and and they're not read by definition or technically they're not read only any any sub cluster can ingest and create your data within the database and that'll all get that'll all get pushed to the s3 bucket but logically for us they're read only like these we just most of these the work that they happen to do is read only which it is which is nice because if it's read only it doesn't need to worry about commits and we let we let the primary subclusters or ETL so close to worry about committing data and we don't have to we don't have to have the all nodes in the database participating in transaction commits so we've got a for read subclusters and we've got one EP also cluster so a total of five sub clusters each so plus they're running sixty-four nodes so that gives us a 320 node database all things counted and not all those nodes are up at the same time as I mentioned but often often for big chunks of the days most of the read nodes are down but they do all spin up during our during our busy time so for the reading so clusters we've got I three for Excel so again the I three incidents family type which has nvme stores these notes have I think three and a half terabytes of nvme per node we just rate it to nvme drives we raid zero them together and 16 cores 122 gigs of ram so these are smaller you'll notice but it works out well for us because the the read workload is is typically dealing with much smaller data sets than then the ingest or the aggregation workbook so we can we can run these workloads on on smaller instances and leave a little bit of money and get more granularity with how many sub clusters are stopped and started at any given time the nvme doesn't persist the 
data on it isn't persisted remember you stop and start this is an important detail but it's okay because the depot does a pretty good job in that in that algorithm where it pulls data in that's recently used and the that gets pushed out a victim is the data that's least reasons use so it was used a long time ago so it's probably not going to be used to get so we've got um five sub-clusters and we have actually got to two of those so we've got a 320 node cluster in u.s. East and a 320 node cluster in u.s. West so we've got a high availability region diversity so and their peers like I talked about before they're they're independent but but yours they are each run 128 shards and and so with that what that which shards are is basically the it's similar to segmentation when you take those dataset you divide it into chunks and though and each sub cluster can concede want the data set in its entirety and so each sub cluster is dealing with 128 shards it shows 128 because it'll give us even distribution of the data on 64 node subclusters 60 120 might evenly by 64 and so there's so there's no data skew and and we chose 128 because the sort of ginger proof in case we wanted to double the size of any of the questions we can double the number of notes and we still have no excuse the data would be distributed evenly the disk what we've done is so we've got a couple of raid arrays we've got an EBS based array that they're catalog uses so the catalog storage location and I think we take for for EBS volumes and raid 0 them together and come up with 128 gigabyte Drive and we wanted an EPS for the catalog because it we can stop and start nodes and that data will persist it will come back when the node comes up so we don't have to run a bunch of configuration when the node starts up basically the node starts it automatically joins the cluster and and very strongly there after it starts processing work let's catalog and EBS now the nvme is another raid zero as I mess with this data and is ephemeral so let me stop and start it goes away but basically we take 512 gigabytes of the nvme and we give it to the data temp storage location and then we take whatever is remaining and give it to the depot and since the ETL and the reading clusters are different instance types they the depot is is side differently but otherwise it's the same across small clusters also it all adds up what what we have is now we we stopped the purging data for some of our big a grits we added bunch more columns and what basically we at this point we have 8 petabytes of raw data in each Jian cluster and it is obviously about 4 times what we can hold in our enterprise classes and we can continue to add to this maybe we need to add compute maybe we don't but the the amount of data that can can be held there against can obviously grow much more we've also built in auto scaling tool or service that basically monitors the queue that I showed you earlier monitors for those spikes I want to see as low spikes it then goes and starts up instances one sub-collector any of the sub clusters so that's that's how that's how we we have compute match the capacity match that's the demand also point out that we actually have one sub cluster is a specialized nodes it doesn't actually it's not strictly a customer reports sub clusters so we had this this tool called planner which basically optimizes ad campaigns for for our customers and we built it it runs on Vertica uses data and Vertica runs vertical queries and it was it was wildly successful um so we 
wanted to have some dedicated compute and beyond witty on it made it really easy to basically spin up one of these sub clusters or new sub cluster and say here you go planner team do what you want you can you can completely maximize the resources on these nodes and it won't affect any of the other operations that were doing the ingest the matching the aggregation or the reports up so it gave us a great deal of flexibility and agility which is super helpful so the question is has it been worth it and without a doubt the answer is yes we're doing things that we never could have done before sort of with reasonable cost we have lots more data specialized nodes and more agility but how do you quantify that because I don't want to try to quantify it for you guys but it's difficult because each eon we still have some enterprise nodes by the way cost as you have two of them but we also have these Eon clusters and so they're there they're running different workloads the aggregation is different the ingest is running more on eon does the number of nodes is different the hardware is different so there are significant differences between enterprise and and beyond and when we combine them together to do the entire workload but eon is definitely doing the majority of the workload it has most of the data it has data that goes is much older so it handles the the heavy heavy lifting now the query performance is more anecdotal still but basically when the data is in the Depot the query performance is very similar to enterprise quite close when the data is not in Depot and it needs to run our remote storage the the query performance is is is not as good it can be multiples it's not an order not orders of magnitude worse but certainly multiple the amount of time that it takes to run on enterprise but the good news is after the data downloads those young clusters quickly catch up as the cache populates there of cost I'd love to be able to tell you that we're running to X the number of reports or things are finishing 8x faster but it's not that simple as you Iran is that you it is me I seem to have gotten to thank you you hear me okay I can hear you now yeah we're still recording but that's fine we can edit this so if I'm just talking to the person the support person he will extend our recording time so if you want to maybe pick back up from the beginning of the slide and then we'll just edit out this this quiet period that we have sir okay great I'm going to go back on mute and why don't you just go back to the previous slide and then come into this one again and I'll make sure that I tell the person who yep perfect and then we'll continue from there is that okay yeah sound good all right all right I'm going back on yet so the question is has it been worth it and for us the answer has been a resounding yes we're doing things that we never could have done at reasonable cost before and we got more data we've got this Y note this law has nodes and in work we're much more agile so how to quantify that um well it's not quite as simple and straightforward as you might hope I mean we still have enterprise clusters we've got to update the the four that we had at peak so we've still got two of those around and we got our two yawn clusters but they're running different workloads and they're comprised of entirely different hardware the dependence has I've covered the number of nodes is different for sub-clusters so 64 versus 50 is going to have different performance the the workload itself the aggregation is aggregating 
more columns on yon because that's where we have disk available the queries themselves are different they're running more more queries on more intensive data intensive queries on yon because that's where the data is available so in a sense it is Jian is doing the heavy lifting for the cluster for our workload in terms of query performance still a little anecdotal but like when the queries that run on the enterprise cluster the performance matches that of the enterprise cluster quite closely when the data is in the Depot when the data is not in a Depot and Vertica has to go out to the f32 to get the data performance degrades as you might expect it can but it depends on the curious all things like counts counts are is really fast but if you need lots of the data from the material others to realize lots of columns that can run slower I'm not orders of magnitude slower but certainly multiple of the amount of time in terms of costs anecdotal will give a little bit more quantifying here so what I try to do is I try to figure out multiply it out if I wanted to run the entire workload on enterprise and I wanted to run the entire workload on e on with all the data we have today all the queries everything and to try to get it to the Apple tab so for enterprise the the and estimate that we do need approximately 18,000 cores CPU cores all together and that's a big number but that's doesn't even cover all the non-trivial engineering work that would need to be required that I kind of referenced earlier things like starting the data among multiple clusters migrating the data from one culture to another the daisy chain type stuff so that's that's the data point now for eon is to run the entire workload estimate we need about twenty thousand four hundred and eighty CPU cores so more CPU cores uh then then enterprise however about half of those and partly ten thousand of both CPU cores would only run for about six hours per day and so with the on demand and elasticity of the cloud that that is a huge advantage and so we are definitely moving as fast as we can to being on all Aeon we have we have time left on our contract with the enterprise clusters or not we're not able to get rid of them quite yet but Eon is certainly the way of the future for us I also want to point out that uh I mean yawn is we found to be the most efficient MPP database on the market and what that refers to is for a given dollar of spend of cost we get the most from that zone we get the most out of Vertica for that dollar compared to other cloud and MPP database platforms so our business is really happy with what we've been able to deliver with Yan Yan has also given us the ability to begin a new use case which is probably this case is probably pretty familiar to folks on the call where it's UI based so we'll have a website that our customers can log into and on that website they'll be able to run reports on queries through the website and have that run directly on a separate row to get beyond cluster and so much more latent latency sensitive and concurrency sensitive so the workflow that I've described up until this point has been pretty steady throughout the day and then we get our spike and then and then it goes back to normal for the rest of the day this workload it will be potentially more variable we don't know exactly when our engineers are going to deliver some huge feature that is going to make a 1-1 make a lot of people want to log into the website and check how their campaigns are doing so we but Yohn really helps us with 
this because we can add a capacity so easily we cannot compute and we can add so we can scale that up and down as needed and it allows us to match the concurrency so beyond the concurrency is much more variable we don't need a big long lead time so we're really excited about about this so last slide here I just want to leave you with some things to think about if you're about to embark or getting started on your journey with vertically on one of the things that you'll have to think about is the no account in the shard count so they're kind of tightly coupled the node count we determined by figuring like spinning up some instances in a single sub cluster and getting performance smaller to finding an acceptable performance considering current workload future workload for the queries that we had when we started and so we went with 64 we wanted to you want to certainly want to increase over 50 but we didn't want to have them be too big because of course it costs money and so what you like to do things in power to so 64 nodes and then the shard count for the shards again is like the data segmentation is a new type of segmentation on the data and the start out we went with 128 it began the reason is so that we could have no skew but you know could process the same same amount of data and we wanted to future-proof it so that's probably it's probably a nice general recommendation doubleness account for the nodes the instance type and and how much people space those are certainly things you're going to consider like I was talking about we went for they I three for Excel I 3/8 Excel because they offer good good Depot stores which gives us a really consistent good performance and it is all in Depot the pretty good mud presentation and some information on on I think we're going to use our r5 or the are for instance types for for our UI cluster so much less the data smaller so much less enter this on Depot so we don't need on that nvm you stores the reader we're going to want to have a reserved a mix of reserved and on-demand instances if you're if you're 24/7 shop like we are like so our ETL subclusters those are reserved instances because we know we're going to run those 24 hours a day 365 days a year so there's no advantage of having them be on-demand on demand cost more than reserve so we get cost savings on on figuring out what we're going to run and have keep running and it's the read subclusters that are for the most part on on demand we have one of our each sub Buster's is actually on 24/7 because we keep it up for ad-hoc queries your analyst queries that we don't know when exactly they're going to hit and they want to be able to continue working whenever they want to in terms of the initial data load the initial data ingest what we had to do and now how it works till today is you've got to basically load all your data from scratch there isn't a great tooling just yet for data populate or moving from enterprise to Aeon so what we did is we exported all the data in our enterprise cluster into park' files and put those out on s3 and then we ingested them into into our first Eon cluster so it's kind of a pain we script it out a bunch of stuff obviously but they worked and the good news is that once you do that like the second yon cluster is just a bucket copy in it and so there's tools missions that can help help with that you're going to want to manage your fetches and addiction so this is the data that's in the cache is what I'm referring to here the data that's in the default and so like I 
You're also going to want to manage your fetches and evictions; the data that's in the cache, the data that's in the depot, is what I'm referring to here. Like I talked about, we have our ETL cluster, which has the most recent data that's just been ingested and the recent data that's been aggregated, so really recent data. We wouldn't want anybody logging into that ETL cluster and running queries on big aggregates that go back one or three years, because that would invalidate the cache: the depot would start pulling in that historical data, accessing the historical data and evicting the recent data, which would slow down the ETL pipelines. We didn't want that, so we need to make sure that users, whether they're service accounts or human users, are connecting to the right subcluster. We just set the users up with IPs and target groups to point them at the right subclusters. It was pretty straightforward, but definitely something to think about.

Lastly, if you're like us and you're going to want to stop and start nodes, you're going to have to have a service that does that for you. We built a very simple tool that basically monitors the queue and stops and starts subclusters accordingly. We're hoping we can work with Vertica to have it be a little more driven by the cloud configuration itself; for us that's all Amazon, and we would love it if we could have it scale with AWS itself.

A couple of things to watch out for when you're working with Eon. The first is system table queries on storage-layer metadata, and the thing to be careful of is that the storage-layer metadata is replicated: there's a copy for each of the subclusters that are out there. We have the ETL subcluster and our read subclusters, so for each of the five subclusters there is a copy of all the data in the storage_containers system table and all the data in the partitions system table. So when you want to use these system tables for analyzing how much data you have, or for any other analysis, make sure that you filter your query on node name. For us the node name is less than or equal to node 64, because each of our subclusters is 64 nodes, so we limit it to the 64-node ETL subcluster. Otherwise, without this filter, we would get 5x the values for counts and that sort of thing.

And lastly, there is a problem that we're still working on and thinking about, which is DC table data for subclusters that are stopped. When the instances are stopped, literally the operating system is down and there's no way to access it, so it takes the DC table data with it. So after my subclusters scale up in the morning and then scale back down, I can't run DC table queries on what performed well, and where, and that sort of thing, because that data is local to those nodes. It's something to be aware of, and we're working on a solution, an implementation, to try to pull that data out of all the nodes, those read-only nodes that stop and start all the time, and bring it into some other kind of repository, perhaps another Vertica cluster, so that we can run analysis and monitoring even when those nodes are down. That's it. Thanks for taking the time to listen to my presentation, I really appreciate it.
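As a concrete illustration of the storage-layer metadata point: an unfiltered count against these system tables comes back multiplied by the number of subclusters, so the query needs a node_name filter. The sketch below assumes a hypothetical node-name prefix (v_analytics_node...), placeholder credentials, and the vertica-python client; adjust all of those to your own environment.

```python
import vertica_python

conn_info = {"host": "eon-etl-node1", "port": 5433, "user": "dbadmin",
             "password": "...", "database": "analytics"}   # placeholders

# Restrict the storage-layer metadata to the 64 nodes of one subcluster
# (here the ETL subcluster). Node names are zero-padded, so they sort
# lexically and a simple <= comparison works.
QUERY = """
    SELECT schema_name, projection_name, SUM(used_bytes) AS used_bytes
    FROM v_monitor.storage_containers
    WHERE node_name <= 'v_analytics_node0064'   -- hypothetical prefix
    GROUP BY schema_name, projection_name
    ORDER BY used_bytes DESC
"""

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    cur.execute(QUERY)
    for schema, projection, used_bytes in cur.fetchall():
        print(schema, projection, used_bytes)
```

Without the WHERE clause, a five-subcluster database like the one described here would report roughly five times the real totals.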
>> Thank you, Ron, that was a tremendous amount of information. Thank you for sharing that with everyone. We have some questions that have come in that I would like to present to you, Ron, if you have a couple of minutes. Let's jump right in with the first one: loading 85 terabytes of data per day is a pretty significant amount. What format does that data come in, and what does that load process look like? >> Yeah, great question. So the format is tab-separated files that are gzip compressed, and the reason for that is basically historical: we don't have many tabs in our data, and this is how the data gets compressed and moved off of our bidders, the things that generate most of this data. So it's TSV, gzip compressed. As for how we load it, I would say we actually have kind of a Cadillac loader, from a couple of different perspectives. One is that we've got this orchestration layer that's homegrown, managing the logs, the data that gets loaded into Vertica. We accumulate data, then we take some files and push them out, redistribute them, across the ETL nodes in the cluster, so we're literally pushing the files to the nodes. We then run a COPY statement to ingest the data into the database, and then we remove the files from the nodes themselves. So there's a little bit of extra data movement, which we may think about changing in the future as we move more and more to Eon. The really nice thing about this, especially for the enterprise clusters, is that the COPY statements are really fast. The COPY statement uses memory like any other query, and the performance of the COPY statement is really sensitive to the amount of available memory, so since the data is local to the nodes, literally in the data directory I referenced earlier, it can access that data from the NVMe stores, the COPY statement runs very fast, and then that memory is available to do something else. So we pay a little bit of cost in terms of latency, in terms of downloading the data to the nodes. As we move more and more to Eon we might start ingesting directly from S3 instead of copying to the nodes first; we'll see about that. But that's how we load the data. >> Interesting, works great. Thanks, Ron. Another question: what was the biggest challenge you found when migrating from on-prem to AWS? >> Yeah, so a couple of things come to mind. The first was the bulk data load. It was kind of a pain, like I referenced in that last slide, only because we didn't have tools built to do this, so we had to script some stuff out. It wasn't overly complex, but it's just a lot of data to move, even starting with two petabytes, and you have to make sure there's no missed data, no gaps, while moving it off the enterprise cluster. What we did is we exported it to local disk on the enterprise clusters, then we pushed it up to S3, and then we ingested it into Eon. So it's a lot of data to move around, and you have to take an outage at some point and stop loading data while you do that final catch-up phase. That was a challenge, sort of a one-time challenge. The other thing, not that we're still dealing with it, but it was a challenge, is that Eon is still a relatively new product for Vertica. One of the big advantages of Eon is that it allows us to stop and start nodes, and recently Vertica has gotten quite good at stopping and starting nodes. For a while there it took a really long time to bring a node back up, and it could be invasive, but we worked with the engineering team to really reduce that, and now it's not really an issue that we think too much about. >> Hey, thanks.
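Since stopping and starting nodes came up again here, below is a very rough sketch of the kind of homegrown helper described earlier in the talk: something that watches for pending work and starts or stops a read subcluster's EC2 instances accordingly. The instance IDs, thresholds, polling interval, and especially the pending_work() check are all stand-ins, and a real version would also have to tell Vertica to start or stop the database nodes, which this sketch glosses over.

```python
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")           # assumed region

READ_SUBCLUSTER_INSTANCES = ["i-0123456789abcdef0"]          # placeholder IDs
START_THRESHOLD = 10      # queued jobs before waking the subcluster up
IDLE_CYCLES_TO_STOP = 6   # consecutive quiet checks before stopping it


def pending_work() -> int:
    """Stand-in for whatever queue signal you have: an SQS queue depth,
    a scheduler table, a report backlog, and so on."""
    raise NotImplementedError


def subcluster_running() -> bool:
    resp = ec2.describe_instances(InstanceIds=READ_SUBCLUSTER_INSTANCES)
    states = [inst["State"]["Name"]
              for res in resp["Reservations"] for inst in res["Instances"]]
    return bool(states) and all(state == "running" for state in states)


def main() -> None:
    idle_cycles = 0
    while True:
        queued = pending_work()
        if queued >= START_THRESHOLD and not subcluster_running():
            ec2.start_instances(InstanceIds=READ_SUBCLUSTER_INSTANCES)
            idle_cycles = 0
        elif queued == 0 and subcluster_running():
            idle_cycles += 1
            if idle_cycles >= IDLE_CYCLES_TO_STOP:
                ec2.stop_instances(InstanceIds=READ_SUBCLUSTER_INSTANCES)
                idle_cycles = 0
        else:
            idle_cycles = 0
        time.sleep(60)


if __name__ == "__main__":
    main()
```

The point is less the specific loop than the idea Ron describes: because Eon keeps the data on S3, read subclusters can be treated as disposable capacity that only runs, and only costs money, when there is work for it.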
>> Towards the end of the presentation you had said that you've got 128 shards, but your subclusters are usually around 64 nodes, and you talked about a ratio of two to one. Why is that, and if you were to do it again, would you use 128 shards? >> Ah, good question. The reason why is that we wanted to future-proof ourselves. Basically, we wanted to make sure that the number of shards was evenly divisible by the number of nodes. I could have done that with 64, I could have done that with 128, or any other multiple of 64, but we went with 128 to try to protect ourselves in the future, so that if we wanted to double the number of nodes in the ETL subcluster specifically, we could do that: double from 64 to 128, and then each node would have just one shard to deal with, so no skew. For the second part of the question, if I had to do it over again, I think I would have stuck with 128. We've been running this cluster for more than 18 months now and we haven't needed to increase the number of nodes, so in that sense it's been a little bit of extra overhead having more shards, but it gives us the peace of mind that we can easily double and not have to worry about it. So I think two to one is a nice place to start, and you may even consider three to one or four to one if you're expecting really rapid growth, if you're just getting started with Eon and your business and your data sets are small now but you expect them to grow significantly. >> Great, thank you, Ron. That's all the questions we have for today. If you do have others, please feel free to send them in and we will respond directly via email, and again, our engineers will be available on the Vertica forums, where you can continue the discussion with them. I want to thank Ron for the great presentation, and also the audience for your participation and questions. Please note that a replay of today's event and a copy of the slides will be available on demand shortly, and of course we invite you to share this information with your colleagues. Again, thank you. This concludes this webinar; have a great day.
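On that shard-count answer, the future-proofing rule is just divisibility: a shard count only spreads evenly across node counts that divide it. A tiny illustration, with the candidate subcluster sizes assumed purely for the example:

```python
SHARD_COUNT = 128   # chosen up front, as described above

# Candidate subcluster sizes you might scale to later (assumed for illustration).
for nodes in (48, 64, 96, 128):
    if SHARD_COUNT % nodes == 0:
        print(f"{nodes} nodes: {SHARD_COUNT // nodes} shard(s) per node, no skew")
    else:
        print(f"{nodes} nodes: {SHARD_COUNT} shards do not divide evenly, expect skew")
```

Starting at three or four shards per node, as suggested for fast-growing deployments, extends the same headroom by another doubling or two.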

Published Date : Mar 30 2020


Chris Fox, Oracle | Empowering the Autonomous Enterprise of the Future


 

(upbeat music) >> Welcome back to theCUBE everybody. This is Dave Vellante. We've been covering the transformation of Oracle Consulting and really its rebirth. And I'm here with Chris Fox, who's the Group Vice President for Enterprise Cloud Architects and Chief Technologist for the North America Tech Cloud at Oracle. Chris, thanks so much for coming on theCUBE. >> Thanks Dave, glad to be here. >> So I love this title. I mean years ago there was no such thing as a Cloud Architect, certainly there were Chief Technologists but so you are really-- Those are your peeps, is that right? >> That's right. That's right. That's really, my team and I, that's all we do. So our focus is really helping our customers take this journey from when they were on premise to really transforming with cloud. And when we think about cloud, really for us, it's a combination. It's our hybrid cloud which happens to be on premise and then of course the true public cloud like most people are familiar with. So, very exciting journey and frankly I've seen just a lot of success for our customers. >> interesting that you hear conversations like, "Oh every company is a software company" which by the way we believe. Everybody's got a some kind of SaaS offering, but it really used to be the application, heads within organizations that had a lot of the power, still do, but of course you have cloud native developers etc. And now you have this new role of Cloud Architects, they've got to align, essentially have to provide infrastructure and capabilities so that you can be agile from a development standpoint. I wonder if you can talk about that dynamic of how the roles have evolved in the last several years. >> Yeah, you know it's very interesting now because as Oracle we spend a lot of our time with those applications owners. As a leader in SaaS right now, SaaS ERP, HCM. You just start walking through the list, they're transforming their organizations. They're trying to make their lives, much more efficient, better for their employees or customers etc. On the other side of the spectrum, we have the cloud native development teams and they're looking at better ways to deploy, develop applications, roll out new features at scale, roll out new pipelines. But Dave, what I think we're seeing at Oracle though, because we're so connected with SaaS and then we're also connected with the traditional applications that have run the business for years, the legacy applications that have been servicing us for 20 years and then the cloud native developers. So what my team and I are constantly focused on now is things like digital transformation and really wiring up all three of these across. So if we think of like a customer outcome, like I want to have a package delivered to me from a retailer, that actual process flow could touch a brand new cloud native site from e-commerce. It could touch essentially, maybe a traditional application that used to be on prem that's now on the cloud and then it might even use some new SaaS application maybe for maybe a procurement process or delivery vehicle and scheduling. So what my team does, we actually connect all three. So, what I always mention to my team and all of our customers, we have to be able to service all three of those constituents and really think about process flows. So I take the cloud native developer, we help them become efficient. We take the person who's been running that traditional application and we help them become more efficient. 
And then we have the SaaS applications which are now rolling out new features on a quarterly basis and the whole new delivery model. But the real key is connecting all three of these into a business process flow that makes the customer's life much more efficient. >> So what you're saying is that these Cloud Architects and the sort of modern day Chief Technologists, they're multi tool players. It's not just about cloud, it's about connecting that cloud to, whether the system's on prem or other clouds. Is that right? >> It is. You know and one thing that we're seeing too Dave, is that we know it's multi cloud. So it could be Oracle's cloud, hopefully it's always Oracle's cloud, but we don't expect that. So as architects, we certainly have to take a look at what is it that we're trying to optimize? What's the outcome we're looking for? And then be able to work across these teams, and I think what makes it probably most fun and exciting, on one day in one morning, let's say, you could be talking to the cloud native developer team. Talking about Kubernetes, CI/CD pipelines, all the great technologies that help us roll out applications and features faster. Then you'll go to a traditional, maybe Oracle E-Business suite job. This is something that's been running on prem maybe for 20 years, and it's really still servicing the business. And then you have another team that maybe is rolling out a SaaS application from Oracle. And literally all three teams are connected by a process flow. So the question is, how do we optimize all three on behalf of either the customer, the employee, the supplier? And that's really the job for the Oracle Cloud Architect. Which I think, really good, that's different than the other cloud because for the most part, we actually do offer SaaS, we offer platform, we offer infrastructure and we offer the hybrid cloud on prem. So it's a common conversation. How do we optimize all these? >> So I want to get into this cloud conversation a little bit. You guys are used to this term last mover advantage. I got to ask you about it. How is being last an advantage? But let me start there. >> Yeah, that's a great question. I mean, so frankly speaking I think that-- So Oracle has been developing, what's interesting is our SaaS applications for many, many, many years, and where we began this journey is looking at SaaS. And then we started with platform. Right after that we started saying how do we augment SaaS? This OCI for us or Oracle Cloud Infrastructure Gen 2 could be considered a last mover advantage. What does that mean? We join this cloud journey later than the others but because of our heritage, of the workloads we've been running, right? We've been running enterprise scale workloads for years, the cloud itself has been phenomenal, right? It's easier to use, pay for what you use, elastic etc. These are all phenomenal features, fell. And based on our enterprise heritage it wasn't delivering resilience at scale, even for like the traditional applications we've known on prem forever. People always say, "Chris we want to get out of the data center. "We're going zero data center." And I always say, "Well, how are you going to handle that back office stuff?" Right? The stuff that's really big, it's cranky, doesn't handle just, instances dying or things going away too easily. It needs predictable performance. It needs scale. It absolutely needs security and ultimately a lot of these applications truly have relied on an Oracle database. 
The Oracle database has it's own specific characteristics that it needs to run really well. So we actually looked at the cloud and we said, let's take the first generation clouds, which are doing great, but let's add the features that specifically, a lot of times, the Oracle workload needed in order to run very well and in a cost effective manner. So that's what we mean when we say, last mover advantage. We said, let's take the best of the clouds that are out there today. Let's look at the workloads that, frankly Oracle runs and has been running for years, what our customers needed and then let's build those features right into this next version of the cloud, we can service the enterprise. So our goal, honestly what's interesting is, even that first discussion we had about cloud native, and legacy applications, and also the new SaaS applications, we built a cloud that handles all three use cases, at scale resiliently in a very secure manner, and I don't know of any other cloud that's handling those three use cases, all in, we'll call it the same tendency for us at Oracle. >> Let's unpack that a little bit and get into, sort of, trying to understand the strategy and I want to frame it. So you were the last really to enter the cloud market, let's sort of agree on that. >> Chris: Yup. >> And you kind of built it from the ground up. And it's just too expensive now. The CapEx required to get into cloud is just astronomical. Now, even for a SaaS company, there's no sense. If you're a new SaaS company, you're going to run it in the cloud. Somebody else's cloud. There are some SaaS companies that of course run their own data centers but they're fewer and further between. But so, and I've also said that your advantage relative to the hyper scalers is that you've got this big SaaS estate and it somewhat insulates you, actually more than somewhat. Largely insulates you from the race to the bottom. On compute and storage, cost per bit kind of thing. But my question is, why was it was it important for Oracle, and is it important for Oracle and it's customers, that it had to participate in IaaS and PaaS and SaaS? Why not just the last two layers of that? What does that give you from a strategic advantage standpoint and what does that do for your customer? >> Yeah, great question. So the number one reason why we needed to have all three was that we have so many customers to today that are in a data center. They're running a lot of our workloads on premise and they absolutely are trying to find a better way to deliver a lower cost services to their customers. And, so, we couldn't just say let's just-- everyone needs to just become net new. Everyone just needs to ditch the old and go just to brand new alone. Too hard, too expensive at times. So we said, let's give us customers the ultimate amount of choice. So, let's even go back again to that developer conversation in SaaS. If you didn't have IaaS, we couldn't help customers achieve a zero data center strategy with their traditional application. We'll call it Peoplesoft, or JD Edwards or E-Business suite or even-- there's some massive applications that are running on the Oracle cloud right now that are custom applications built on the Oracle database. What they want is they said, "Give me the lowest ASP to get predictable performance IaaS" I'll run my app's tier on this. 
Number two, give me a platform service for database 'cause frankly, I don't really want to run your database, like, with all the manual effort, I want someone to automate, patching, scale up and down, and all these types of features like the pilot should have given us. And then number three, I do want SaaS over time. So we spend a lot of time with our customers, really saying, "how do I take this traditional application, run it on IaaS and PaaS?" And then number two, "let's modernize it at scale." Maybe I want to start peeling off functionality and running them as cloud native services right alongside, right? That's something again, that we're doing at scale, and other people are having a hard time running these traditional workloads on prem in the cloud. The second part is they say, "You know, I've got this legacy traditional ERP. Been servicing we well or maybe a supply chain system. Ultimately I want to get out of this. How do I get to SaaS?" And we say, "Okay, here's the way to do this. First, bring into the cloud, run it on IaaS and PaaS. And then selectively, I call it cloud slicing. Take a piece of functionality and put it into SaaS." For ERP, it might be something like start with GL, a new chart of accounts in ERP SaaS. And then slowly over a number of your journey as needed, adopt the next module. So this way, I mean, I'll just say this is the fun part of as an architect, our jobs, we're helping customers move to the cloud at scale, we're helping them do it at their rate, with whatever level of change they want. And when they're ready for SaaS, we're ready for them. And I would just say the other IaaS providers, here's the challenge we're seeing Dave, is that they're getting to the cloud, they're doing a little bit of modernization, but they want PaaS, they also want to ultimately get to SaaS, and frankly, those other clouds don't offer them. So they're kind of in this we're stuck on this lift and shift. But then we want to really move and modernize and go to SaaS. And I would say that's what Oracle is doing right now for enterprises. We're really helping them move these traditional workloads to the cloud IaaS and PaaS. And then number two, they're moving to SaaS when they're ready. And even when you get to SaaS, everyone says, "You know what, leave it as as vanilla as possible, but I want to make myself differentiated." In that case, again, IaaS and PaaS, coupled alongside a SaaS environment, you can build your specific differentiation. And then you leave the ERP pristine, so it can be upgraded constantly with no impact to your specific sidebar applications. So, I would say that the best clouds in the world, I mean, I think you're going to see a lot of the others are trying to, either SaaS providers trying to grow a PaaS, or maybe some of the IaaS players are trying to add SaaS. So, I think you're going to see this blending more and more because customers are asking for the flexibility For either or all three. But I will say that-- >> How can I get PaaS and SaaS-minus. >> Absolutely, I mean, what are you doing there? You're offering choice. There's not a question in my mind that Cisco is a huge customer of ours, they have a product that is one of their SaaS applications running Tetration on the Oracle Cloud. It actually doesn't run any Oracle. It's all cloud native applications. Natively built with a number of open source components. They run just IaaS. That's it, the Tetration product, and it runs fast. 
The Gen 2 cloud has a great architecture underneath it, flattened fast network. By far, for us, we feel like we really gotten into the guts of IaaS and made it run more efficiently. Other customers say, "I've got a huge Oracle footprint in the data center, help me get it out." So up to the cloud that they go, and they say I don't want just IaaS because that means I'm writing all the automation, like I have to manage all the patching. And this is where for us platform services really help because we give them the automation at scale, which allows their people to do other things, that may be more impactful for the business. >> I want to ask you about, the automation piece. And you guys have made the statement that your Gen 2 cloud is fundamentally different than how other clouds work, Gen 1 clouds. And the Gen 1 clouds which are evolving, the hyper scalars are evolving, but how is Oracle's Gen 2 cloud fundamentally different? >> Yeah. I think that one of the most basic elements of the cloud itself was that for us, we had to start with the security and the network. So if you imagine that those two components really, A, could dictate speed and performance, plus doing it in a secure fashion. The two things that you'll see an awful lot about for us, is that we've embedded not only security at every level. But we've even separated off what we call, every cloud, you have a number of compute instances and then you have storage, right? In the middle, you have a network. However, to become a cloud, and to offer the elastic scale and the multiple sharing of resources, you have to have something called a control plane. What we've done is we've actually extracted the control plane out into its own separate instance of a running machine. Other clouds actually have the control plane inside of there running compute cores. Now, what does that do? Well, the fact of the matter is, we assume that the control plane and the network should be completely separate from what you run on your cloud. So if you run a virtual machine, or if you run a bare metal instance, there's no Oracle software running on it. We actually don't trust customers, and we actually tell the customers don't trust us, either. So by separating out the control plane, and all the code that runs that environment off of the running machine, you get more cores meaning like you have-- There's no Oracle tax for running this environment. It's a separate conmputer for each one, the control plane. Number two, it's more secure. We actually don't have any running code on that machine, if you had a bare metal instance. So therefore, there's no way for one machine in the cloud to infect another machine if the control plane was compromised. The second part of the network, the guys who have been building this cloud, Don Johnson, a lot of the guys came from other clouds before and they said, "yYou know the one thing we have to do is make a we call it Flattened Fast Clause Network that really is never oversubscribed." So you'll constantly see and people always ask me same question, "Dave, why is the performance faster if its the same VM shape? "Like I don't understand why it's going faster, like high performance computing." And the reason again a lot of times is the network itself is that it's just not oversubscribed. It's constantly flowing all the data, there's no such thing as congestion on the network, which can happen. The last part, we actually added 52 terabytes of local storage to every one of those compute nodes. 
So therefore, there's a possibility you don't even have to traverse the network to do some really serious work on the local machine. So you add these together, the idea is make the network incredibly fast, separate out the control plane and run the software and security layer separate from the entire node where all the customers work is being done. Number three, give the customers more compute, by obviously having us offload it to a separate machine. And the last thing is put local storage and everything is what's called NVMe storage. Whether it's local or remote, everything's NVMe, though the IOPS we get are really off the charts. And again, it shows up in our benchmarks. >> Yeah, so you're getting, atomic access to memory. But in your control plane, you describe that control plane that's running. Sorry to geek out everybody. But I'm kind of curious, you know. You got me started, Chris. So that's control-- >> Yeah, that's good. >> the Oracle cloud or runs. Where's it live? >> It's essentially separated from the compute node. We actually have it in between, there's a compute node that all the work is done from the customer, could be on like a Kubernetes container or VM, whatever it might be. The control plane literally is separate. And it lives right next to the actual compute node the customer is using. So it's actually embedded on a SmartNIC, it's a completely different cores. It's a different chipset, different memory structure, everything. And it does two things. It helps us control what happens up in the customers compute nodes in VMs. And it also helps us virtualize the network down as well. So it literally, the control plane is separate and distinct. It's essentially a couple SmartNICS. >> And then how does Autonomous fit into this whole architecture? I'm speaking by the way for that description, I mean, it's nuanced, but it's important. I'm sure you having this conversation with a lot of cloud architects and chief technologists, they want to know this stuff, and they want to know how it works. And then, obviously, we'll talk about what the business impact is. But talk about Autonomous and where that fit. >> Yeah, so as Larry says that there are two products that really dictate the future of Oracle and our success with our customers. Number one is ERP-SaaS. The second one is Autonomous Database. So the Autonomous Database, what we've done is really taken a look at all the runtime operations of an Oracle database. So tuning, patching, securing all these different features, and what we've done is taken the best of the Oracle database, the best of something called Exadata which we run on the cloud, which really helps a lot of our customers. And then we've wrapped it with a set of automation and security tools to help it really manage itself, tune itself, patch itself, scale up and down, independent between compute and storage. So, why that's important though, is that really our goal is to help people run the Oracle database as they have for years but with far less effort, and then even not only far less effort, hopefully, a machine plus man, out of the equation we always talk about is man plus machine is greater than man alone. So being assisted by artificial intelligence and machine learning to perform those database operations, we should provide a better service to our customers with far less costs. >> Yeah, the greatest chess player in the world is a combination of man and machine, you know that? >> You know what? It makes sense. 
It makes sense because, there's a number of things that we can do as humans that are just too difficult to program. And then there are other things where machines are just phenomenal, right? I mean, there's no-- Think of Google Maps, you ask it wherever you want to go. And it'll tell you in a fraction of a second, not only the best route, but based on traffic from maybe the last couple of years. right now, we don't have autonomous cars, right, that are allowed to at least drive fully autonomous yet, it's coming. But in the meantime, a human could really work through a lot of different scenarios it was hard to find a way to do that in autonomous driving. So I do believe that it's going to be a great combination. Our hope and goal is that the people who have been running Oracle databases, how can we help them do it with far less effort and maybe spend more time on what the data can do for the organization, right? Improve customer experience, etc. Versus maybe like, how do I spin up a table? One of our customers is a huge consumer. They said, "our goal is how do we reduce the time to first table?" Meaning someone in the business just came up with an idea? How do I reduce the time to first table. For some of our customers, it can take months. I mean, if you were going to put in a new server, find a place in the data center, stand up a database, make the security controls, right and etc. With the autonomous database, I could spin one up right here, for us and, and we could start using it and it would be secure, which is utmost and paramount. It would scale up and down, meaning like just based on workload, as I load data into it, it would tune itself, it would help us with the idea of running more efficiently, which means less cores, which means also less cost. And then the constant security patches that may come up because of different threats or new features. It would do that potentially on its own if you allow it. Obviously some people want to watch you know what exactly it's going to do first. Do regression testing. But it's an exciting product because I've been working with the Oracle database for about 20 years now. And to see it run in this manner, it's just phenomenal. And I think that's the thing, a lot of the database teams have seen. Pretty amazing work. >> So I love this conversation. It's hardcore computer science, architecture, engineering. But now let's end with by up leveling this. We've been talking, a lot about Oracle Consulting. So let's talk about the business impact. So you go into customers, you talk to the cloud architects, the chief technologist, you pass that test. Now you got to deliver the business impact. Where does Oracle consulting fit with regard to that, and maybe you could talk about sort of where you guys want to take this thing. >> Yeah, absolutely. I mean, so, the cloud is great set of technologies, but where Oracle consulting is really helping us deliver is in the outcome. One of the things I think that's been fantastic working with the Oracle consulting team is that cloud is new. For a lot of customers who've been running these environments for a number of years, there's always some fear and a little bit of trepidation saying, "How do I learn this new cloud?" I mean, the workloads, we're talking about deeper, like tier zero, tier one, tier two, and all the way up to Dev and Test and DR, Oracle Consulting does really, a couple of things in particular, number one, they start with the end in mind. 
And number two, that they start to do is they really help implement these systems. And, there's a lot of different assurances that we have that we're going to get it done on time, and better be under budget, 'cause ultimately, again, that's something that's really paramount for us. And then the third part of it a lot of it a lot of times is run books, right? We actually don't want to just live at our customers environments. We want to help them understand how to run this new system. So training and change management. A lot of times Oracle Consulting is helping with run books. We usually will, after doing it the first time, we'll sit back and let the customer do it the next few times, and essentially help them through the process. And our goal at that point is to leave, only if the customer wants us to but ultimately, our goal is to implement it, get it to go live on time, and then help the customer learn this journey to the cloud. And without them, frankly, I think these systems are sometimes too complex and difficult to do on your own, maybe the first time especially because like I say, they're closing the books, they might be running your entire supply chain. They run your entire HR system or whatever they might be. Too important to leave to chance. So they really help us with helping the customer become live and become very competent and skilled, because they can do it themselves. >> But Chris, we've covered the gamut. We're talking about, architecture, went to NVMe. We're talking about the business impact, all of your automation, run books, loved it. Loved the conversation, but to leave it right there but thanks so much for coming on theCUBE and sharing your insights, great stuff. >> Absolutely, thanks Dave, and thank you for having me on. >> All right, you're welcome. And thank you for watching everybody. This is Dave Vellante for theCUBE. We are covering the Oracle North America Consulting transformation and its rebirth in this digital event. Keep it right there. We'll be right back. (upbeat music)

Published Date : Mar 25 2020


Eric Herzog, IBM Storage | CUBE Conversation February 2020


 

(upbeat funk jazz music) >> Hello, and welcome to theCUBE Studios in Palo Alto, California for another CUBE Conversation, where we go in depth with thought leaders driving innovation across tech industry. I'm your host, Peter Burris. What does every CIO want to do? They want to support the business as it evolves and transforms, using data as that catalyst for better customer experience, improved operations, and more profitable options. But to do that we have to come up with a way of improving the underlying infrastructure that makes all this possible. We can't have a situation where we introduce more complex applications in response to richer business needs and have that translated into non-scalable underlying technology. CIOs in 2020 and beyond have to increasingly push their suppliers to make things simpler. And that's true in all domains, but perhaps especially storage, where the explosion of data is driving so many of these changes. So what does it mean to say that storage can be made more simple? Well to have that conversation we're going to be speaking with Eric Herzog, CMO and VP of Global Channels at IBM Storage, about, quite frankly, an announcement that IBM's doing to specifically address that question, making storage simpler. Eric, thanks very much for coming back to theCUBE. >> Great, thank you. We love to be here. >> All right, I know you got an announcement to talk about, but give us the update. What's going on with IBM Storage? >> Well, I think the big thing is, clients have told us, storage is too complex. We have a multitude of different platforms, an entry product, a mid-range product, a high-end product, then we have to traverse to the cloud. Why can't we get a simple, easy to use, but very robust feature set? So at IBM Storage with this FlashSystem announcement, we have a family that traverses entry, mid-range, enterprise and automatically can go out to a hybrid multicloud environment, all driven across a common platform, common API, common software, our award-winning Spectrum Virtualize, and innovative technologies around, whether it be cyber-resiliency, performance, incredible performance, ease of use, easier and easier to use. For example, we can do AI-based automated tiering from one flash array to another, or from storage class memory to flash. Innovation, at the same time driving better value out of the storage but not charging a lot of extra money for these features. In fact, our FlashSystems announcement, the platforms, depending on the configuration, can be as much as 50% lower than our previous generation. Now that's delivering value, but at the same time we added enhanced features, for example, the capability of even better container support than we already had in our older platform. Or our new FlashCore Modules that can deliver performance in a cluster of up to 17.2 million IOPS, up from our previous performance of 15. Yet, as I said before, delivering that enterprise value and those enterprise data services, in this case I think you said, depending on the config, up to as much as 50% less expensive than some of our previous generation products. >> So let me unpack that a little bit. So, historically, when you look at, or even today, when you look at how storage product lines are set up, they're typically set up for one footprint for the low end, one or more footprints in the mid-range, and then one or more footprints at the high-end. 
And those are differentiated by the characteristics of the technologies being employed, the function and services that are being offered, and the prices and financial arrangements that are part of it. Are you talking about, essentially, a common product line that is differentiated only by the configuration needs of the volume and workloads? >> Exactly. The FlashSystem traverses entry, mid-range, enterprise, and can automatically get you out to a hybrid multicloud environment, same APIs, same software, same management infrastructure. Our Storage Insights product, which is a could-based storage manager and predictive analytics, works on the entry product, at no charge, mid-range product at no charge, the enterprise product at no charge, and we've even added, in that solution, support for non-IBM platforms, again. So, delivering more value across a standard platform with a common API, a common software. Remember, today's storage is growing exponentially. Are the enterprise customers getting exponentially more storage admins? No. In fact, many of the big enterprises, after the downturn of '08 and '09 had to cut back on storage resources. They haven't hired back to how many storage resources they had in 2007 or '8. They've gotten back to full IT, but a lot of those guys are DevOps people or other functions, so, the storage admins and the IT infrastructure admins have to manage extra petabytes, extra exabytes depending on the type of company. So one platform that can do that and traverse out to the cloud automatically, gives you that innovation and that value. In fact, two of our competitors, just as example, do the same thing, have four platforms. Two other have three. We can do it with one. Simple platform, common API, common storage management, common interface, incredible performance, cyber-resiliency, but all built in something that's a common data management infrastructure with common data software, yet continuing to innovate as we've done with this release of the FlashSystem family. >> OK, so talk about the things that, common API, common software, also, I presume, common, the core module, that FlashCore Module that you have, common across the family as well? >> Almost all the family. At the very entry space we still do use interstandard SSDs but we can get as low as a street price for all-flash config of $16,000 for an all-flash array. Two, three years ago that would've been unheard of. And, by the way, it had six lines of availability, same software interface and API as a system that could go up to millions of dollars at the way high end, right? And anything in between. So common ease of use, common management, simple to manage, simple to deploy, simple to use, but not simple in the value proposition. Reduce the TCO, reduce the ROI, reduce the operational manpower, they're overtaxed as it is. So by making this across the portfolio with the FlashSystem and go out to the hybrid multicloud but bringing in all this high technology such as our FlashCore Modules and, as I said, at a reduced price to the previous generation. What more could you ask for? >> OK, so you've got some promises that you made in 2019 that you're also actually realizing. One of my favorite ones, something I think is pretty important, is storage class memory. Talk about how some of those 2019 promises are being realized in this announcement. 
>> So what we did is, when we announced our first FlashSystem family in 2018 using our new NVMe FlashCore Modules, we had an older FlashSystem family for several years that used, you know, the standard SaaS interface. But our first NVMe product was announced in the summer of 2018. At that time we said, all the way back then, that in early '20 we would be start shipping storage class memory. Now, by the way, those FlashSystems NVMe products that we announced back then, actually can still use storage class memory, so, we're protecting the investment of our installed base. Again, innovation with value on the installed base. >> A very IBM thing to do. >> Yes, we want to take care of the installed base, we also want to have new modern technologies, like storage class memory, like improved performance and capacity in our FlashCore Modules where we take off the shelf Flash and create our own modules. Seven year media warranty, up to 17.2 million IOPS, 17 mites of latency, which is 30% better than our next nearest competitor. By the way, we can create a 17 million IOP config in only eight rack U. One of our competitors gets close, 15 million, but it takes them 40 rack U. Again, operational manpower, 40 rack U's harder to manage, simplicity of deployment, it's harder to deploy all that in 40 rack U, we can do it in eight. >> And pricing. >> Yes. And we've even brought out now, a preconfigured rack. So what we call the FlashSystem 9200R built into the rack with a switching infrastructure, with the storage you need, IBM services will deploy it for you, that's part of the deal, and you can create big solutions that can scale dramatically. >> Now R stands for hybrid? >> Rack. >> Rack. Well talk to me about some of the hybrid packaging that you're bringing out for hybrid cloud. >> Sure, so, from a hybrid cloud perspective, our Spectrum Virtualize software, which sits on-prem, entry, mid-range and at the upper end, can traverse to a cloud called Spectrum Virtualize for Cloud. Now, one of the keys things of Spectrum Virtualize, both on-prem and our cloud version, is it supports not only IBM arrays, but through a storage virtualisation technology, over 450 arrays from multi-vendors, and in short our competition. So we can take our arrays, and automatically go out to the cloud. We can do a lot of things. Cloud air gapping, to help with malware and ransonware protection, DR, snapshots and replicas. Not only can the new FlashSystem family do that, to Spectrum Virtualize on-prem and then out, but Spectrum Virtualize coming on our FlashSystem portfolio can actually virtualize non-IBM arrays and give them the same enterprise functionality and in this case, hybrid cloud technology, not only for us, but for our competitors products as well. One user interface. Now talk about simple. Our own products, again one family, entry, mid-range and enterprise traversing the cloud. And by the way, for those of you who are heterogeneous, we can deliver those enterprise class services, including going out to a hybrid multi-cloud configuration, for our competitors products as well. One user interface, one throat to choke, one support infrastructure with our Storage Insights platform, so it's a great way to make things easier, cut the CAPEX and OPEX, but not cut the innovation. We believe in value and innovation, but in an easy deploy methodology, so that you're not overly complex. And that is killing people, the complexity of their solutions. >> All right. 
So there's a couple of things about cloud, as we move forward, that are going to be especially interesting. One of them is going to be containers. Everybody's talking about, and IBM's been talking about, you've been talking about this, we've talked about this a number of times, about how containers and storage and data are going to come together. How do you see this announcement supporting those emerging and evolving need for container-based applications in the enterprise. >> So, first of all, it's often tied to hybrid multi-cloudness. Many of the hybrid cloud configurations are configured on a container based environment. We support Red Hat OpenShift. We support Kubernetes environments. We can provide on these systems at no charge, persistent storage for those configurations. We also, although it does require a backup package, Spectrum Protect, the capability of backing up that persistent storage in an OpenShift or Kubernetes environment. So really it's critical. Part of our simplicity is this FlashSystem platform with this technology, can support bare metal workloads, virtualised workloads, VMware, HyperV, KVM, OVM, and now container workloads. And we do see, for the next coming years, think about bare metal. Bare metal is as old as I am. That's pretty old. Well we got tons of customers still got bare metal applications, but everyone's also gone virtualized. So it's not, are we going to have one? It's you're going to have all three. So with the FlashSystems family, and what we have with Spectrum Virtualized software, what we have with our container support, we need with bare metal support, incredible performance, whatever you need, VMware integration, HyperV integration, everything you need for a virtualized environment, and for a container environment, we have everything too. And we do think the, especially the mid to big accounts, are going to try run all three, at least for the next couple of years. This gives you a platform that can do that, at the entry point, up to the high end, and then out to a hybrid multi-cloud environment. >> With that common software and APIs across. Now, every year that you and I have talked, you've been especially passionate about the need for turning the crank, and evolving and improving the nature of automation, which is another one of the absolute necessities, as we start thinking about cloud. How is this announcement helping to take that next step, turn the crank in automation? >> So a couple of things. One is our support now for Ansible, so offering that Ansible support, integrates into the container management frameworks. Second thing is, we have a ton of AI-type specific based technology built into the FlashSystem platform. First is our cloud based storage and management predictive analytics package, Storage Insights. The base version comes for free across our whole portfolio, whether it be entry, mid-range or high-end, across the whole FlashSystems family. It gives you predictive analytics. If you really do have a support problem, it eases the support issues. For example, instead of me saying, "Peter send me those log files." Guess what? We can see the log files. And we can do it right there while you're on the phone. You've got a problem? Let's make it easier for you to get it solved. So Storage Insights across AI based, predictive analytics, performance, configuration issues, all predicatively done, so AI based. Secondly, we've integrated AI in to our Spectrum Virtualize product. 
So as exemplar, easier to your technology, can allow you to tier data from storage class memory to Flash, as an example, and guess what it does? It automatically knows based on usage patterns, where the data should go. Should it be on the storage class memory? Should it be on Flash core modules? And in fact, we can create a configuration, we have Flash core modules and introduce standard SSDs, which are both Flash, but our Flash core modules are substantially faster, much better latency, like I said, 30% better than the next nearest competition, up to 17.2 million IOPS. The next closest is 15. And in fact, it's interesting, one of our competitors has used storage class memory as a read cache. It dramatically helps them. But they go from 250 publicly stated mites of latency, to 125. With this product, the FlashSystem, anything that uses our Flash core modules, our FlashSystems semi 200, our FlashSystem 9200 product, and the 9200-R product. We can do 70 mites of latency, so almost twice as fast, without using storage class memory. So think what that storage class memory will offer. So we can create hybrid configurations, with StorageClass and Flash, you could have our Flash core modules, and introduce standard SSDs if you want, but it's all AI based. So we have AI based in our Storage Insights, predictive analytics, management and support infrastructure. And we have predictive analytics in things like our Easy Tier. So not only do we think storage is a critical foundation for the AI application workload and use case, which it is, but you need to imbue your storage, which we've done across FlashSystems, including what we've done with our cloud edition, because Spectrum Virtualize has a cloud edition, and an on-prem edition, seamless transparency, but AI in across that entire platform, using Spectrum Virtualize. >> All right, so let me summarize. We've got an absolute requirement from enterprise, to make storage simpler, which requires simple product families with more commonality, where that commonality delivers great value, and at the same time the option to innovate, where that innovation's going to create value. We have a lot simpler set of interfaces and technologies, as you said they're common, but they are more focused on the hybrid cloud, the multi-cloud world, that we're working in right now, that brings more automation and more high-quality storage services to bear wherever you are in the enterprise. So I've got to ask you one more question. I'm a storage administrator, or a person who is administering data, inside the infrastructure. I used to think of doing things this way, what is the one or two things that I'm going to do differently as a consequence of this kind of an announcement? >> So I think the first one, it's going to reduce your operational expenses and your operational man power, because you have a common API, a common software platform, a common foundation for data management and data movement, it's not going to be as complex for you to pull your storage configurations. Second thing, you don't have to make as many choices between high-end workloads, mid-range workloads, and entry workloads. Six lines across the board. Enterprise class data services across the board. So when you think simple, don't think simple as simplistic, low-end. This is a simple to use, simple deploy, simple to manage product, with extensive innovation and a price that's- >> So simple to secure? >> And simple to secure. Data rest encryption across the portfolio. 
And in fact, for those that use our FlashCore Modules, there's no performance hit on encryption, and no performance hit on data compression. So it can help you shrink the actual amount you need to buy from us, which sounds sort of crazy, that a storage company would do that, but with our data reduction technologies, compression being one of them, there's no performance hit, you can compress compressible workloads, and now, anything with a FlashCore Module, which by the way happens to be FIPS 140-2 certified, there's no excuse not to encrypt, because encryption, as you know, has had a performance hit in the past. Now, our 7200, our 5100 FlashSystem, and our FlashSystem 9200 and 9200R, there's no performance hit on encrypting, so it gives you that extra resiliency that you need in a storage world, and you don't give anything up on compression, which helps you shrink how much you end up buying from IBM. So that's the type of innovation we deliver, in a simple to use, easy to deploy, easy to manage package, with incredible innovative value brought into a very innovative solution, across the board, not just, let's innovate at the high end, you know what I mean? We're trying to make that innovation spread, which, by the way, makes it easier for the storage guy. >> Well, look, in a world, even inside a single enterprise, you're going to have branch offices, you're going to have local sites, the edge. You can't let the bad guys in on a lesser platform that then can hit data on a higher end platform. So the days of presuming that there's this great differentiation in the tiers are slowly coming to an end, as everything becomes increasingly integrated. >> Well, as you've pointed out many times, data is the asset, not just the most valuable one. It is the asset of today's digital enterprise, and it doesn't matter whether you're a global Fortune 500 or you're a (mumble). Everybody is a digital enterprise these days, big, medium or small. So cyber resiliency is important, cutting costs is important, being able to modernize and optimize your infrastructure, simply and easily. The small guys don't have a storage guy, and a network guy, and a server guy, they have the IT guy. And even the big guys, who used to have hundreds of storage admins in some cases, don't have hundreds any more. They've got a lot of IT people, but they cut back, so these storage admins and infrastructure admins in these global enterprises, they're managing 10, 20 times the amount of storage they managed even two or three years ago. So, simple, across the board, and of course hybrid multicloud is critical to these configurations. >> Eric, it's a great announcement, congratulations to IBM on actually delivering on what your promises are. Once again, great to have you on theCUBE. >> Great, thank you very much Peter. >> And thanks to you, again, for participating in this CUBE conversation, I'm Peter Burris, see you next time. (upbeat, jazz music)
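To make the Easy Tier discussion above concrete: the idea is that observed I/O heat, not an administrator, decides whether an extent sits on storage class memory or on Flash Core Modules. The toy loop below illustrates that usage-pattern-driven placement; it is not IBM's actual Easy Tier algorithm, and the tier names, extent counts, and promotion window are made up for the example.

```python
from collections import Counter

# Toy model of usage-pattern-driven tiering (NOT IBM Easy Tier itself):
# the hottest extents are promoted to the small, fast tier each cycle.

SCM_CAPACITY_EXTENTS = 4        # assumed: a tiny fast tier, for illustration only
access_counts = Counter()       # I/O "heat" per extent for the current window
placement = {}                  # extent_id -> "scm" or "flash"

def record_io(extent_id):
    """Called on every read/write; builds the heat map for this window."""
    access_counts[extent_id] += 1
    placement.setdefault(extent_id, "flash")   # new extents land on flash first

def rebalance():
    """Periodic background task: keep the hottest extents on the fast tier."""
    hottest = {e for e, _ in access_counts.most_common(SCM_CAPACITY_EXTENTS)}
    for extent in placement:
        placement[extent] = "scm" if extent in hottest else "flash"
    access_counts.clear()       # start a fresh measurement window

# Simulate a skewed workload: extent 7 is hot, the rest are touched once.
for _ in range(100):
    record_io(7)
for extent in range(20):
    record_io(extent)
rebalance()
print(placement[7])             # -> scm: the hottest extent was promoted
```

A real implementation also weighs migration cost and lets old heat decay, but the shape of the loop, measure, rank, move, repeat, is the same.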

Published Date : Feb 12 2020

Patrick Smith, Pure Storage & Eric Greffier, Cisco | Cisco Live EU Barcelona 2020


 

>> Announcer: Live from Barcelona, Spain, it's theCUBE! Covering Cisco Live 2020. Brought to you by Cisco and its ecosystem partners. >> Welcome back, this is theCUBE's live coverage of Cisco Live 2020, here in Barcelona. Our third year of the show, over 17,000 in attendance between the Cisco people, their large partner ecosystem, and the customers, I'm Stu Miniman, my cohost for this segment is Dave Vellante. John Furrier's scouring the show for all of the news at the event, and joining us, we have two first time guests on the program, first, sitting to my left is Patrick Smith, who is the field CTO for EMEA with Pure Storage. Sitting to his left is Eric Greffier, who is the managing director of EMEAR specialists with Cisco, so you have a slightly larger region than Patrick does, gentlemen, thanks so much for joining us. >> Patrick: Great to be here. >> All right, so, we know this show, we were talking that broad ecosystem, and of course Cisco in the data center group has very strong storage partnerships, highlighted by their converged infrastructure stacks. I wrote my research many many years ago, Cisco's brilliant job was when they entered the server market, they made sure that that fragmented storage ecosystem, they made partnerships across the board. And of course, when Pure's ascendancy with the flash era made the stack, so helping to paint those data centers orange with your Cisco partnership, so Patrick, give us the update here, 2020, what's interesting and important to know about Pure Storage and Cisco customer base? >> You know, we continue to see significant adoption of FlashStack, our converged infrastructure with Cisco. Driving just great interest and great growth, both for Pure and for Cisco with the UCS platform, and the value that the customers see in FlashStack, bringing together storage, networking and compute together with overall automation of the stack, and that really gives customers fantastic time to value. And that's what they're looking for in this day and age. >> All right, and Eric, what differentiates the partnership with Pure, versus, as you said, you do work with many of the storage companies out there. >> Well, we had a baby together, it was called FlashStack, and it was couple of years ago now, and as you said, I think the key element for us is really to have those CVDs, those Cisco Validated Designs together, and FlashStack was a great addition to our existing partnership at that time, talking about a couple of years ago. And of course, with the flash technology of Pure, we've seen the demand that we'd say going and going, and it has been amazing, amazing trajectory together. >> But talk a little bit more about the CVDs, the different use cases that you're seeing. You don't have to go through all 20, but maybe pick a couple of your favorite children. >> Well, just to make sure that people understand what CVD means, it's Cisco Validated Design, and this is kind of an outcome in the form of a document, which is available for customers and partners, which is the outcome of the partnership from R&D to R&D, which is just telling customers and partners what they need to order and have in it to fit all of this together for a specific business outcome. And the reason why we have multiple CVDs, is we have one CVD per use case. So the more use cases we have together, the more the CVD's precise, and you just have to follow the CVD design principles. 
Of course, on the latest ones, and maybe Patrick can say a word, we've been of course doing things regarding analytics and AI, because this is a big demand right now, so maybe Patrick, you want to say a word on this? >> Yeah, you guys were first with the AI and bringing AI and storage together with your partnership with Nvidia, so maybe double down on that. >> The FlashBlade was our move into building a storage platform for AI and modern analytics, and we've seen tremendous success with that in lots of different verticals. And so with Cisco we launched FlashStack for AI, which brings together FlashBlade, networking, and Cisco's fantastic compute platform with capability for considerable scale of Nvidia GPUs. So an in-a-box capability to really deliver fast time to market solutions for the growing world of analytics and modern AI. People want quick insight into the vast amounts of data we have, and so FlashStack for AI is really important for us being able to deliver as part of the Cisco ecosystem, and provide customers with a platform for success. >> What's happening with modernization, generally, but specifically in Europe? Obviously Cisco, long history in Europe, Pure, you've got a presence here, good presence, but obviously much newer. Larger proportion, far larger proportion is in North America, so it's a real opportunity for you guys. What are you seeing in terms of modernization of infrastructure, and apps in the European community? >> Modernization I think is particularly important, and it's more and more seen under the guise of digital transformation, because investing in infrastructure just doesn't get the credit that sometimes it deserves. But the big push there is really all around simpler infrastructure, easier management, and the push for automation. Organizations don't want to have large infrastructure support teams who are either installing or managing their environments in a heavy touch way, and so the push towards automation, not just at the infrastructure layer, but all the way up the stack, is really key. And you know, we were talking earlier, behind us we have the DevNet sessions here, all about how customers of Cisco, and by correlation Pure, can really optimize the management of their environment, use technology like Intersight, like Ansible and others, to really minimize the overhead of managing technology, deliver services faster to customers and be more agile. In this always-on world that we live in, there's no time to really add a human to the cycle of managing infrastructure. >> I think we've been very proud over the years, because this notion of converged infrastructure, the promise was to simplify and modernize the data centers. Before, it was like, "Everything needs to get connected to anything," and along came this notion of a pod, everything converged, "We've done the job for you, mister customer, just think about adding some pods." This has been the promise for the last 10 years, and we've been very proud, almost, to have created this market, but it wouldn't have been possible without the partnership with the storage players, and with Pure, we've been one step further in terms of simplifying things for customers.
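The automation Smith describes above, Intersight, Ansible, and taking the human out of day-to-day management, usually ends up wired into a pipeline rather than run by hand. The fragment below is one hedged way to do that from Python: dry-run a playbook, then apply it only if the check passes. The playbook and inventory file names are assumptions for the example; the playbook itself would use whatever storage and compute modules the team has standardized on.

```python
import subprocess
import sys

# Hypothetical file names; substitute the playbook and inventory your team uses.
PLAYBOOK = "provision_flashstack_volumes.yml"
INVENTORY = "inventory/lab.ini"

def run_playbook(check_only=False):
    """Run the playbook; --check asks Ansible to report changes without applying them."""
    cmd = ["ansible-playbook", "-i", INVENTORY, PLAYBOOK]
    if check_only:
        cmd.append("--check")
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    if run_playbook(check_only=True) != 0:     # dry run first
        sys.exit("dry run failed; not applying changes")
    sys.exit(run_playbook())                   # then apply for real
```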
>> I love the extension you're talking about, because absolutely converged infrastructure was supposed to deliver on that simplicity, and it was, let's think of the entire rack as a unit of how we manage it, but with today's applications, with the speed of change happening in the environment, we've gone beyond human speed, and so therefore if we don't have the automation that you were talking about, we can't keep up with what the business needs to be able to do there. >> Yeah, that's what it's all about, it's the rapid rate of change. Whether it's business services, whether it's supporting developers in the developer environment, more and more our customers are becoming software development organizations, their developers are a key resource, and making them as efficient as possible is really important, so being able to quickly spin up development environments, new environments for developers, using snapshot technology, giving them the latest sets of data to test their applications on, is really central to enabling and empowering the developer. >> You know, you talk about Cisco's play and kind of creation of the converged infrastructure market, and I think that's fair, by the way. Others may claim it, but I think the mantle goes to you. But there were two friction points, or headwinds, that we pointed out early in the day. The first was organizational, the servers team, the storage team, the network team didn't speak together, then a practitioner told us one day, "Look, you want to solve that problem, put it in and watch what happens," 'cause if you try to figure out the organization you'll never get there, and that sort of took care of itself. The other was the channel. The channel likes things separate, they can add value, they have this sort of box-selling mentality, so I wonder if you could update us on what the mindset is in the channel, and how that's evolved. >> Yeah, it's a great question. I think the channel actually really likes the simplicity of a converged infrastructure to sell, it's a very simple message, and it really empowers the channel to take, to your point about organization, the full stack, all in one sellable item, and so they don't have to fight for the different components, it's one consistent unit that they sell as a whole, and so I think it simplifies the channel, and actually, we find that customers are actively seeking out, it's shown by our growth with FlashStack, that customers are actually seeking out the channel partners who are selling FlashStack. >> Yeah, and do you think the channel realizes, "Wow, we really do have to go up the stack, add more value, do things like partner with"? >> Well, for most of the partners, they were heavily specialized on storage or compute or network, so for most of them, supporting the converged infrastructure was to be able to put a foot into another market, which was an expansion for them, which was part number one. Part number two, maybe the thing that we've been missing: since the beginning we had APIs around all those platforms, but I don't believe that in the early days, I'm talking about five years ago, they could really build something upon the converged infrastructure. Now, if you go through the DevNet area here at Cisco Live, you will see that I think this is the time now for them to understand, and really build new services on top of it, so I believe the value for the channel is pretty obvious now, more than ever.
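The snapshot-driven developer environments Smith mentioned a moment ago follow a simple pattern: snapshot the production volume, then present a writable copy of that snapshot to the development stack. The sketch below only shows the shape of that workflow; the REST endpoints, payloads, response fields, and token are hypothetical placeholders, not Pure's or Cisco's actual API.

```python
import requests

# Hypothetical management endpoint and token; placeholders, not a real API.
API = "https://storage-mgmt.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

def refresh_dev_copy(prod_volume, dev_volume):
    """Snapshot prod, then clone the snapshot over the dev volume (sketch only)."""
    # Response shape ({"name": ...}) is assumed for the sketch.
    snap = requests.post(f"{API}/volumes/{prod_volume}/snapshots",
                         headers=HEADERS, timeout=30).json()["name"]
    requests.post(f"{API}/volumes/{dev_volume}",
                  json={"source_snapshot": snap, "overwrite": True},
                  headers=HEADERS, timeout=30).raise_for_status()
    return snap

# A nightly job like this gives developers yesterday's production data to test against:
# refresh_dev_copy("erp-db-prod", "erp-db-dev")
```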
>> Well yeah, it's a great point, you don't usually hear converged infrastructure and infrastructure as code in the same conversation, but the maturation of the platforms underneath are bringing things together. >> They really are, in the same way that IT organizations are freeing up more time to focus up the stack on automation and added value, the same is true of the partners. It's interesting the corollary between the two. >> So I have a question on your act two, so what got us here the last 10 years, both firms were disruptors. Cisco came in and disrupted the compute space, it was misunderstood, "Cisco getting into servers, "that'll never work!" "Well, really not getting into servers, "we're changing the game." "Ah, okay," 10 years later. Pure, all-flash, really created some havoc in the industry, injected a ton of flash into the data center, practically drove a truck through the legacy business. Okay, so very successful. What's act two for you guys, what do you envision, disruptors, are you more incrementalists, I'd love to hear your thoughts on that. >> I start, Patrick. Probably for us, phase two is what you heard yesterday morning, I think Liz Anthony did a great speech regarding Cisco Intersight Workload Optimizer, sorry for the name, this is a bit long, but what it means is now we truly connect the infrastructure to the application performance, and the fact that we can place and discuss about converged infrastructure but in the context of what truly matters for customers, which is application, this is the first time ever you're going to see such amount of R&D put into bringing the two worlds together. So this is just the beginning, but I think this was probably for me yesterday one of the most important announcement ever. And by the way, Pure is coming with this announcement, so if you as a customer buy Cisco Intersight Workload Optimizer, you'll get everything you need to know about Pure and if you have to move things around the storage area, you know the tool will be doing it for you. So we are really the two of us in this announcement, so Patrick, if you want to? >> No, I mean as Eric mentioned, Intersight's important for Cisco, it's important for us, we're very proud to be early integrators as a third party into Intersight to allow that simple management, but you know, as you talk about the future, we were viewed as disruptors when we first came to market with flash array, and we consider still ourselves to be disruptors and innovators, and the amount of our revenue that we invest in innovation, in what is a really focused product portfolio, I think is showing benefits, and you've seen the announcements over the last six months or so with FlashArray//C, bringing all the benefits of flash to tier two applications, and just the interest that that has generated is huge. In the world of networking with NVMe, we have a fabric in RoCEv2, just increasing the performance for business applications that will have fantastic implications for things like SAP, time and performance-critical databases, and then what we announced with direct memory with adding SCM as a read cache onto flash array as well. Really giving customers investment protection for what they bought from us already, because they can, as you well know, Evergreen gives customers an asset that continues to appreciate in value, which is completely the opposite. 
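Smith's point about storage-class memory as a read cache rests on a familiar effect: if a small, much faster tier absorbs most reads, average latency falls sharply. The toy LRU cache below illustrates the mechanism; the 10 and 150 microsecond figures and the workload shape are arbitrary stand-ins, not measurements of any product.

```python
import random
from collections import OrderedDict

SCM_LATENCY_US, FLASH_LATENCY_US = 10, 150     # arbitrary illustrative numbers

class ReadCache:
    """Tiny LRU cache standing in for an SCM read cache in front of flash."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def read(self, block):
        if block in self.entries:               # hit: served from the fast tier
            self.entries.move_to_end(block)
            return SCM_LATENCY_US
        self.entries[block] = True              # miss: read from flash, then cache it
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)    # evict the least recently used block
        return FLASH_LATENCY_US

random.seed(0)
cache = ReadCache(capacity=100)
# Skewed workload: 90% of reads hit 50 "hot" blocks, 10% go anywhere else.
blocks = [random.randrange(50) if random.random() < 0.9 else random.randrange(50, 5000)
          for _ in range(10_000)]
avg = sum(cache.read(b) for b in blocks) / len(blocks)
print(f"average read latency: {avg:.0f} microseconds")   # far closer to 10 than 150
```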
>> And you're both sort of embracing that service consumption model, I mean Cisco's becoming a very large proportion of your business, you guys have announced some actual straight cloud plays, you've built an array inside of AWS, which is pretty innovative, so. >> Yes, and as well as the cloud play with Cloud Block Store in AWS, there's Pure as a service, which takes that cloud-like consumption model and allows a customer to run it in their own data center without owning the assets, and that's really interesting, because customers have got used to the cloud-like consumption model, and paying as an OpEx rather than CapEx, and so bringing that into their own facility, and only paying for the data you have written, really does change the game in terms of how they consume and think about their storage environments. >> Patrick, we'd just love to get your viewpoint, you've been talking to a lot of customers this week, you said you've been checking out the DevNet zone, for people that didn't make it to the show here, what have they been missing, what would their peers be telling them in the hallway conversations? >> There's a huge amount, as we've been talking about, there's a huge amount on automation, and actually we see it as we go into customers, the number of people we're now talking to who are developers, but not developers developing business applications, but developers developing code for managing infrastructure, is key, and you see it all around the DevNet zone. And then, the focus on containers, I've been talking about it for a long time, and containers is so important for enterprises going forward. We have a great play in that space, and I think as we roll forward, the next three to five years, containers is just going to be the important technology that will be prevalent across enterprises large and small. >> Dave: Yeah, we agree. >> Eric and Patrick, thank you so much for giving us the update, congratulations on all the progress and definitely look forward to keeping an eye on your progress. >> Thanks very much. >> All right, Dave Vellante and I will be back with much more here from Cisco Live 2020 in Barcelona, thanks for watching theCUBE. (techno music)
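The "only paying for the data you have written" model Smith describes is easy to reason about: metering samples written capacity over the month and bills the average against a committed floor. The rate and commitment figures below are invented for illustration and are not Pure's actual pricing.

```python
# Invented figures for illustration only; not actual service pricing.
RATE_PER_TIB_MONTH = 20.0     # currency units per effective TiB-month (assumption)
COMMITTED_TIB = 100           # minimum monthly commitment (assumption)

def monthly_bill(daily_written_tib):
    """Bill the average written capacity for the month, floored at the commitment."""
    average_tib = sum(daily_written_tib) / len(daily_written_tib)
    return max(average_tib, COMMITTED_TIB) * RATE_PER_TIB_MONTH

# Usage grows from 90 to 148 TiB over a 30-day month.
samples = [90 + 2 * day for day in range(30)]
print(f"bill this month: {monthly_bill(samples):.2f}")
```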

Published Date : Jan 29 2020

Eric Herzog, IBM Storage | CUBE Conversation December 2019


 

(funky music) >> Hello and welcome to theCUBE Studios in Palo Alto, California for another CUBE conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host Peter Burris. Well, as I sit here in our CUBE studios, 2020's fast approaching, and every year as we turn the corner on a new year, we bring in some of our leading thought leaders to ask them what they see the coming year holding in the particular technology domain in which they work. And this one is no different. We've got a great CUBE guest, a frequent CUBE guest, Eric Herzog, the CMO and VP of Global Channels, IBM Storage, and Eric's here to talk about storage in 2020. Eric? >> Peter, thank you. Love being here at theCUBE. Great solutions. You guys do a great job on educating everyone in the marketplace. >> Well, thanks very much. But let's start really quickly, quick update on IBM Storage. >> Well, been a very good year for us. Lots of innovation. We've brought out a new Storwize family in the entry space. Brought out some great solutions for big data and AI solutions with our Elastic Storage System 3000. Support for backup in container environments. We've had persistent storage for containers, but now we can back it up with our award-winning Spectrum Protect and Protect Plus. We've got a great set of solutions for the hybrid multicloud world for big data and AI and the things you need to get cyber resiliency across your enterprise in your storage estate. >> All right, so let's talk about how folks are going to apply those technologies. You've heard me say this a lot. The difference between business and digital business is the role that data plays in a digital business. So let's start with data and work our way down into some of the trends. >> Okay. >> How are, in your conversations with customers, 'cause you talk to a lot of customers, is that notion of data as an asset starting to take hold? >> Most of our clients, whether it be big, medium, or small, and it doesn't matter where they are in the world, realize that data is their most valuable asset. Their customer database, their product databases, what they do for service and support. It doesn't matter what the industry is. Retail, manufacturing. Obviously we support a number of other IT players in the industry that leverage IBM technologies across the board, but they really know that data is the thing that they need to grow, they need to nurture, and they always need to make sure that data's protected or they could be out of business. >> All right, so let's now, starting with that point, in the tech industry, storage has always kind of been the thing you did after you did your server, after you did your network. But there's evidence that as data starts taking more center stage, more enterprises are starting to think more about the data services they need, and that points more directly to storage hardware, storage software. Let's start with that notion of the ascension of storage within the enterprise. >> So with data as their most valuable asset, what that means is storage is the critical foundation. As you know, if the storage makes a mistake, that data's gone. >> Right. >> If you have a malware or ransomware attack, guess what? Storage can help you recover. In fact, we even got some technology in our Spectrum Protect product that can detect anomalous activity and help the backup admin or the storage admins realize they're having a ransomware or malware attack, and then they could take the right corrective action. 
So storage is that foundation across all their applications, workloads, and use cases that optimizes it, and with data as the end result of those applications, workloads, and use cases, if the storage has a problem, the data has a problem. >> So let's talk about what you see as in that foundation some of the storage services we're going to be talking most about in 2020. >> Eric: So I think one of the big things is-- >> Oh, I'm sorry, data services that we're going to be talking most about in 2020. >> So I think one of the big things is the critical nature of the storage to help protect their data. People when they think of cyber security and resiliency think about keeping the bad guy out, and since it's not an issue of if, it's when, chasing the bad guy down. But I've talked to CIOs and other executives. Sometimes they get the bad guy right away. Other times it takes them weeks. So if you don't have storage with the right cyber resiliency, whether that be data at rest encryption, encrypting data when you send it out transparently to your hybrid multicloud environment, whether malware and ransomware detection, things like air gap, whether it be air gap to tape or air gap to cloud. If you don't think about that as part of your overall security strategy, you're going to leave yourself vulnerable, and that data could be compromised and stolen. So I can almost say that in 2020, we're going to talk more about how the relationship between security and data and storage is going to evolve, almost to the point where we're actually going to start thinking about how security can be, it becomes almost a feature or an attribute of a storage or a data object. Have I got that right? >> Yeah, I mean, think of it as storage infused with cyber resiliency so that when it does happen, the storage helps you be protected until you get the bad guy and track him down. And until you do, you want that storage to resist all attacks. You need that storage to be encrypted so they can't steal it. So that's a thing, when you look at an overarching security strategy, yes, you want to keep the bad guy out. Yes, you want to track the bad guy down. But when they get in, you'd better make sure that what's there is bolted to the wall. You know, it's the jewelry in the floor safe underneath the carpet. They don't even know it's there. So those are the types of things you need to rely on, and your storage can do almost all of that for you once the bad guy's there till you get him. >> So the second thing I want to talk about along this vein is we've talked about the difference between hardware and software, software-defined storage, but still it ends up looking like a silo for most of the players out there. And I've talked to a number of CIOs who say, you know, buying a lot of these software-defined storage systems is just like buying not a piece of hardware, but a piece of software as a separate thing to manage. At what point in time do you think we're going to start talking about a set of technologies that are capable of spanning multiple vendors and delivering a more broad, generalized, but nonetheless high function, highly secure storage infrastructure that brings with it software-defined, cloud-like capabilities. >> So what we see is the capability of A, transparently traversing from on-prem to your hybrid multicloud seamlessly. They can't, it can't be hard to do. It's got to happen very easily. 
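The anomalous-activity detection Herzog mentions above, a backup layer noticing that a client suddenly rewrites far more data than usual, can be pictured with a very small statistical check. This is only a sketch of the general idea, not how Spectrum Protect actually implements it; the three-sigma threshold and the sample change rates are invented for illustration.

```python
from statistics import mean, stdev

def looks_anomalous(history_gb, today_gb, sigmas=3.0):
    """Flag a backup whose changed-data volume is far outside its own baseline.

    history_gb: changed GB per day for this client's previous backups.
    sigmas:     how many standard deviations above the mean counts as suspicious.
    """
    baseline = mean(history_gb)
    spread = stdev(history_gb) or 1.0        # avoid a zero divisor on a flat history
    return (today_gb - baseline) / spread > sigmas

# A client that normally changes about 50 GB/day suddenly rewrites 800 GB,
# the kind of churn that mass encryption by ransomware tends to produce.
normal_days = [48, 52, 47, 55, 50, 49, 51]
print(looks_anomalous(normal_days, 53))      # False: ordinary day
print(looks_anomalous(normal_days, 800))     # True: alert the backup admin
```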
The cloud is a target, and by the way, most mid-size enterprise and up don't use one cloud, they use many, so you've got to be able to traverse those many, move data back and forth transparently. Second thing we see coming this year is taking the overcomplexity of multiple storage platforms coupled with hybrid cloud and merging them across. So you could have an entry system, mid-range system, a high-end system, traversing the cloud with a single API, a single data management platform, performance and price points that vary depending on your application workload and use case. Obviously you use entry storage for certain things, high-end storage for other things. But if you could have one way to manage all that data, and by the way, for certain solutions, we've got this with one of our products called Spectrum Virtualize. We support enterprise-class data service including moving the data out to cloud not only on IBM storage, but over 450 other arrays which are not IBM-logoed. Now, that's taking that seamlessness of entry, mid-range, on-prem enterprise, traversing it to the cloud, doing it not only for IBM storage, but doing it for our competitors, quite honestly. >> Now, once you have that flexibility, now it introduces a lot of conversations about how to match workloads to the right data technologies. How do you see workloads evolving, some of these data-first workloads, AI, ML, and how is that going to drive storage decisions in the next year, year and a half, do you think? >> Well, again, as we talked about already, storage is that critical foundation for all of your data needs. So depending on the data need, you've got multiple price points that we've talked about traversing out to the cloud. The second thing we see is there's different parameters that you can leverage. For example, AI, big data, and analytic workloads are very dependent on bandwidth. So if you can take a scalable infrastructure that scales to exabytes of capacity, can scale to terabytes per second of bandwidth, then that means across a giant global namespace, for example, we've got with our Spectrum Scale solutions and our Elastic Storage System 3000 the capability of racking and stacking two rack U at a time, growing the capacity seamlessly, growing the performance seamlessly, providing that high-performance bandwidth you need for AI, analytic, and big data workloads. And by the way, guess what, you could traverse it out to the cloud when you need to archive it. So looking at AI as a major force in the coming, not just next year, but in the coming years to go, it's here to stay, and the characteristics that IBM sees that we've had in our Spectrum Scale products, we've had for years that have really come out of the supercomputing and the high-performance computing space, those are the similar characteristics to AI workloads, machine workloads, to the big data workloads and analytics. So we've got the right solution. In fact, the two largest supercomputers on this planet have almost an exabyte of IBM storage focused on AI, analytics, and big data. So that's what we see traversing everywhere. And by the way, we also see these AI workloads moving from just the big enterprise guys down into small shops, as well. So that's another trend you're going to see. The easier you make that storage foundation underneath your AI workloads, the more easy it is for the big company, the mid-size company, the small company all to get into AI and get the value. 
The small companies have to compete with the big guys, so they need something, too, and we can provide that starting with a little simple two rack U unit and scaling up into exabyte-class capabilities. >> So all these new workloads and the simplicity of how you can apply them nonetheless is still driving questions about how the storage hierarchies evolved. Now, this notion of the storage hierarchy's been around for, what, 40, 50 years, or something like that. >> Eric: Right. >> You know, tape and this and, but there's some new entrants here and there are some reasons why some of the old entrants are still going to be around. So I want to talk about two. How do you see tape evolving? Is that, is there still need for that? Let's start there. >> So we see tape as actually very valuable. We've had a real strong uptick the last couple years in tape consumption, and not just in the enterprise accounts. In fact, several of the largest cloud providers use IBM tape solutions. So when you need to provide incredible amounts of data, you need to provide primary, secondary, and I'd say archive workloads, and you're looking at petabytes and petabytes and petabytes and exabytes and exabytes and exabytes and zetabytes and zetabytes, you've got to have a low-cost platform, and tape provides still by far the lowest cost platform. So tape is here to stay as one of those key media choices to help you keep your costs down yet easily go out to the cloud or easily pull data back. >> So tape still is a reasonable, in fact, a necessary entrant in that overall storage hierarchy. One of the new ones that we're starting to hear more about is storage-class memory, the idea of filling in that performance gap between external devices and memory itself so that we can have a persistent store that can service all the new kinds of parallelism that we're introducing into these systems. How do you see storage-class memory playing out in the next couple years? >> Well, we already publicly announced in 2019 that in 2020, in the first half, we'd be shipping storage-class memory. It would not only working some coming systems that we're going to be announcing in the first half of the year, but they would also work on some of our older products such as the FlashSystem 9100 family, the Storwize V7000 gen three will be able to use storage-class memory, as well. So it is a way to also leverage AI-based tiering. So in the old days, flash would tier to disk. You've created a hybrid array. With storage-class memory, it'll be a different type of hybrid array in the future, storage-class memory actually tiering to flash. Now, obviously the storage-class memory is incredibly fast and flash is incredibly fast compared to disk, but it's all relative. In the old days, a hybrid array was faster than an all hard drive array, and that was flash and disk. Now you're going to see hybrid arrays that'll be storage-class memory and with our easy tier function, which is part of our Spectrum Virtualize software, we use AI-based tiering to automatically move the data back and forth when it's hot and when it's cool. Now, obviously flash is still fast, but if flash is that secondary medium in a configuration like that, it's going to be incredibly fast, but it's still going to be lower cost. The other thing in the early years that storage-class memory will be an expensive option from all vendors. It will, of course, over time get cheap, just the way flash did. >> Sure. >> Flash was way more expensive than hard drives. 
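Herzog's point just before the tape discussion, growing an AI data platform "two rack U at a time," is easy to put into numbers: aggregate bandwidth and capacity scale roughly linearly with node count under a global namespace. The per-building-block figures below are assumptions made up for the arithmetic, not published ESS 3000 specifications.

```python
# Assumed, illustrative per-2U-building-block figures (NOT vendor specifications).
NODE_BANDWIDTH_GBPS = 40      # GB/s of read bandwidth per 2U node (assumption)
NODE_CAPACITY_TB = 300        # usable TB per 2U node (assumption)

def nodes_needed(target_bandwidth_gbps, target_capacity_tb):
    """How many 2U blocks satisfy both a bandwidth and a capacity target."""
    for_bandwidth = -(-target_bandwidth_gbps // NODE_BANDWIDTH_GBPS)  # ceiling division
    for_capacity = -(-target_capacity_tb // NODE_CAPACITY_TB)
    return max(for_bandwidth, for_capacity)

# Example: a training cluster that wants 1 TB/s of read bandwidth and 5 PB usable.
print(nodes_needed(1000, 5000))   # -> 25 nodes, driven by the bandwidth target
```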
Over time it, you know, now it's basically the same price as what were the old 15,000 RPM hard drives, which have basically gone away. Storage-class over several years will do that, of course, as well, and by the way, it's very traditional in storage, as you, and I've been around so long and I've worked at hard drive companies in the old days. I remember when the fast hard drive was a 5400 RPM drive, then a 7200 RPM drive, then a 10,000 RPM drive. And if you think about it in the hard drive world, there was almost always two to three different spin speeds at different price points. You can do the same thing now with storage-class memory as your fastest tier, and now a still incredibly fast tier with flash. So it'll allow you to do that. And that will grow over time. It's going to be slow to start, but it'll continue to grow. We're there at IBM already publicly announcing. We'll have products in the first half of 2020 that will support storage-class memory. >> All right, so let's hit flash, because there's always been this concern about are we going to have enough flash capacity? You know, is enough going to, enough product going to come online, but also this notion that, you know, since everybody's getting flash from the same place, the flash, there's not going to be a lot of innovation. There's not going to be a lot of differentiation in the flash drives. Now, how do you see that playing out? Is there still room for innovation on the actual drive itself or the actual module itself? >> So when you look at flash, that's what IBM has funded on. We have focused on taking raw flash and creating our own flash modules. Yes, we can use industry standard solid state disks if you want to, but our flash core modules, which have been out since our FlashSystem product line, which is many years old. We just announced a new set in 2018 in the middle of the year that delivered in a four-node cluster up to 15 million IOPS with under 100 microseconds of latency by creating our own custom flash. At the same time when we launched that product, the FlashSystem 9100, we were able to launch it with NVME technology built right in. So we were one of the first players to ship NVME in a storage subsystem. By the way, we're end-to-end, so you can go fiber channel of fabric, InfiniBand over fabric, or ethernet over fabric to NVME all the way on the back side at the media level. But not only do we get that performance and that latency, we've also been able to put up to two petabytes in only two rack U. Two petabytes in two rack U. So incredibly rack density. So those are the things you can do by innovating in a flash environment. So flash can continue to have innovation, and in fact, you should watch for some of the things we're going to be announcing in the first half of 2020 around our flash core modules and our FlashSystem technology. >> Well, I look forward to that conversation. But before you go here, I got one more question for you. >> Sure. >> Look, I've known you for a long time. You spend as much time with customers as anybody in this world. Every CIO I talk to says, "I want to talk to the guy who brings me "or the gal who brings me the great idea." You know, "I want those new ideas." When Eric Herzog walks into their office, what's the good idea that you're bringing them, especially as it pertains to storage for the next year? >> So, actually, it's really a couple things. One, it's all about hybrid and multicloud. You need to seamlessly move data back and forth. It's got to be easy to do. 
Entry platform, mid-range, high-end, out to the cloud, back and forth, and you don't want to spend a lot of time doing it and you want it to be fully automated. >> So storage doesn't create any barriers. >> Storage is that foundation that goes on and off-prem and it supports multiple cloud vendors. >> Got it. >> Second thing is what we already talked about, which is because data is your most valuable asset, if you don't have cyber-resiliency on the storage side, you are leaving yourself exposed. Clearly big data and AI, and the other thing that's been a hot topic, which is related, by the way, to hybrid multiclouds, is the rise of the container space. For primary, for secondary, how do you integrate with Red Hat? What do you do to support containers in a Kubernetes environment? That's a critical thing. And we see the world in 2020 being trifold. You're still going to have applications that are bare metal, right on the server. You're going to have tons of applications that are virtualized, VMware, Hyper-V, KVM, OVM, all the virtualization layers. But you're going to start seeing the rise of the container admin. Containers are not just going to be the purview of the devops guy. We have customers that talk about doing 10,000, 20,000, 30,000 containers, just like they did when they first started going into the VM worlds, and now that they're going to do that, you're going to see customers that have bare metal, virtual machines, and containers, and guess what? They may start having to have container admins that focus on the administration of containers because when you start doing 30, 40, 50,000, you can't have the devops guy manage that 'cause you're deploying it all over the place. So we see containers. This is the year that containers starts to go really big-time. And we're there already with our Red Hat support, what we do in Kubernetes environments. We provide primary storage support for persistency containers, and we also, by the way, have the capability of backing that up. So we see containers really taking off in how it relates to your storage environment, which, by the way, often ties to how you configure hybrid multicloud configs. >> Excellent. Eric Herzog, CMO and vice president of partner strategies for IBM Storage. Once again, thanks for being on theCUBE. >> Thank you. >> And thanks for joining us for another CUBE conversation. I'm Peter Burris. See you next time. (funky music)
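The persistent container storage Herzog describes is consumed in Kubernetes or OpenShift through an ordinary PersistentVolumeClaim against a CSI-backed storage class. A minimal sketch with the standard Kubernetes Python client follows; the class name ibm-flashsystem-block is an assumption for the example, and a real cluster would use whatever class its installed CSI driver actually exposes.

```python
from kubernetes import client, config

# Assumes a reachable cluster and a CSI driver exposing this class name (hypothetical).
STORAGE_CLASS = "ibm-flashsystem-block"

def request_volume(name, size="100Gi", namespace="default"):
    """Create a PVC; the CSI driver provisions the backing volume on the array."""
    config.load_kube_config()              # or load_incluster_config() inside a pod
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name=STORAGE_CLASS,
            resources=client.V1ResourceRequirements(requests={"storage": size}),
        ),
    )
    return client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=pvc)

if __name__ == "__main__":
    request_volume("db-data")
```

The same claim works whether the class is backed by an on-prem array or a cloud edition of the same software, which is the point of keeping one API across the hybrid estate.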

Published Date : Dec 29 2019

Greg Tinker, SereneIT | CUBEConversation, November 2019


 

(upbeat music) >> Hi, and welcome to another CUBEConversation where we go in-depth into the topics that are most important to the technology industry with the thought leaders who are actually getting the work done. I'm Peter Burris, and we've got a great conversation today, and it all starts with the idea of how do you get smart people outside of your organization, in service organizations, to help you achieve your outcomes? It's a challenge, because as we become more dependent upon services, we discover that service companies are often trying to sell us bills of goods or visions that aren't solving our exact problem. There's a new breed of service company that's really fascinated by your problem, and wants to solve it. Starts with engineering, starts with value add, and then leads to other types of potential relationships and activities. So what do those service companies look like? Well, to have that conversation, we've got Greg Tinker, who is the CTO and founder of Serene IT. Greg, welcome back to theCUBE. >> Thank you very much Peter, glad to be here. >> So tell us a little bit about Serene IT. >> So Serene IT is, well, we call it a next generation VAR. So what do I mean by that? We mean that we are an engineering-first firm, so our staff is big, we're across the U.S., we have multiple branches and we just went international into Canada, with Serene IT Canada. We have other international branches that are coming online next year. So with that being said though, the key to our growth, the key to our success, is the fact that we're an engineering firm first. We have very few sales staff. Our sales staff are more of an account management style, more of a nurturer or a farmer, we would call it, versus a hunter, meaning someone going out, because the customers are coming to us with their problems because they need a smart engineering bench to help them. They're not looking for somebody else to bring them a SKU, or resell them a product. That can be easily done by some of the large conglomerates that are already out there, not to mention, spend 30 seconds on Google, you can pretty much buy anything you want. >> Yeah, and you know, Fred Brooks said a million years ago, when I was, even before I got into computer science, wrote "The Mythical Man-Month", and made the observation that the solution to a hard problem typically is not more people, >> Right. >> It's working smarter, and working more with the right people. So tell us a little about how you're able to find the right people from the industry, and bring them together to turn them into the right team. >> It's a great question, Peter, so I've been very fortunate. I loved my career at Hewlett Packard. I left on good terms because I saw a problem in the industry that I wanted to go and tackle head-on. It's easy for people to sit back and talk about it, it's more difficult to actually go and try to solve the problem, and I'm trying to solve the problem. The problem is, there's a lot of VARs out there that bring very low value today, they bring a lot of resale. And that's great for those clients that just know what they want. The vast majority of customers don't know what they want today because the technologies are so advanced, they need help to get from where they were, a legacy model, to a more modern software-defined ecosystem. >> And the business problems are so complex. >> Yes.
>> It's that combination of complex business problems, 'cause your competitors and your customers are pushing you, and now advanced technologies that have to be marshaled to solve those problems. >> That's exactly right, so with that being said, I set out to build an engineering firm, and resale would be something later, but we lead with the engineering consulting to solve those business problems for our clients. And so our engineering bench is comprised of engineers from Cisco, from Dell, from HPE, from a lot of big conglomerates that everybody knows. But when you work in this industry, in the labs of these big conglomerates, me coming from HPE, when you do that, you get a lot of friends across the pillars. >> Sure. >> You build networks. >> You build networks. And quite frankly, it's the Marvell lab guys that own QLogic today. We all know each other, and with that being said, some of these guys want to go out and try to solve these big problems with companies like myself, and so with that being said, that's how we're building Serene IT, is engineering-first, and we have a very large technical bench today. Just think about it, the company came online in 2017 with just two, so today, we are significantly bigger than that. We're approaching a 50-plus headcount, and we continue to expand with multiple branches, and our growth rate almost doubles every six months. And it's something I'm having a great deal of fun doing. The key thing here though is solving business problems and helping customers. >> Well, let's talk about that, because every IT organization faces the challenge that they've been so focused on the hardware assets for so long, or the application assets. Now they're trying to focus on the data assets, but they find themselves often in conflict with the business. They're not doing a particularly good job of translating a business opportunity into a technology solution still. >> True. >> You've got these great engineers. How are you getting them to also speak business, so that you facilitate that domain expertise about the business so it can be turned into a reliable technology solution? >> Like any good engineering firm, you have to have levels, right? So we have a NOC all the way up to level four, and our level four engineers are our master technologists that are usually patent published or some varied nature thereof, with usually a multitude of Master ASE certifications to be able to state the fact that they are level four. We also have some college kids that are coming up that are wanting to learn with us, which is good. But I want to tell you on that same point though, we only allow those elite, the level three, the level four guys, to be in front of our clients, because they've been in this industry a long time. Like myself, we can understand the business problems, as well as the technology problems, and help a client go from zero to hero. That's what we do well. >> So you're bringing in people who have been business people, but have strong engineering backgrounds >> Correct. >> In product domains, in service domains, in the industry, and you're bringing them together and saying, let's go back to being engineers that can still talk business. >> That's exactly it, that's the key differentiator with us, is the fact that we're not talking just SEs; a lot of others, in our mind, have SEs they call engineers. We don't hire anyone that can't put fingers on a keyboard.
If they can't make magic happen on a keyboard, they're of no value to us, they're of no value to our clients, which is what they need help with. So if we're not able to sit down and have a conversation and pull out a laptop and make some magic happen with, you name it, Ansible, Puppet, shell, SaltStack, and that's just on the automation side, code logic, C code, we've got all the cool stuff in that space. But if we can't sit down and write Python, Ruby on Rails and whatnot, and make something tangible to a client in very short order, we didn't do our job. >> So a lot of companies that I've experienced, a lot of customers I've talked to, have what I would call the "goldilocks" problem with their service providers. By that I mean, some of their service providers don't have the technical chops, they just throw numbers at it, so they're too cold. Some of their service providers are too smart, or pushing too hard, and they get suspicious of them. How do you be that just right, stay focused on the problem, bringing the other team, the engineers or the IT folks that you're working with, along with you, so you get that natural technology transfer, so the business gets the capability that it can run and you can go do something else? >> So that's a good point, Peter. I mean, we're still working out some of those details, I'll tell you, to be honest with you on that stuff. >> Everybody is. >> Yeah. We're getting better at it, you know, with customers. If we get too aggressive, and tell the customers this is what's wrong with your problem, this is where you need to go, we call their baby ugly, it puts a lot of contention right at the onset, so it causes problems. So we have to be very cognizant of what they have, and where they want to go, and show them where we're going and why we're doing it, and not just focus on "You did it the wrong way". We don't want to focus on that. That's already done, that ship's already sailed, why bash it? I tell my engineers, don't talk negative, there's no good going to come of it. Focus on what you have, and where you need to go with it, and how we're going to get there. Keep it a positive message, and you'll find they'll be more receptive, and it's working for our team. >> Well, I'll tell you, one of the things I've heard about Serene IT is that you guys especially developed competencies in technologies that have worked in the past. >> You can say that. >> It seems as though one of the things you're able to do is you're able not to make something so new and so distinct that the client can't see how they can possibly operate it without you. You're taking a lot of open-source, a lot of established tried-and-true technologies, and using your smarts to put them together in new and interesting ways so the customer says, "Oh, that was smart, that was smart. I can do that, oh yes, now I get it." Is that, am I mis-characterizing you guys? >> No, you're not, you're actually spot-on. We actually have one of the largest ZFS file systems on the planet right now with 142 million users hitting it and-- >> ZFS? >> Yeah, it's old school. >> With 142 million, okay. >> Yeah, it's old-school, but what's old is new again, we're just putting a new wrapper around it. It worked great in its day, but you take that old technology, the file system itself that's been around for a long time, one of the biggest file systems at 128 bit. You take that file system and you put that on today's Red Hat, Caldera, SUSE, name your favorite. You put that on a big machine, a Linux machine today, a large-scale box like an HP DL380 with NVMe drives, with a back-end data store like a 3PAR or Primera, or whatever you want on the back end, with big Fibre Channel, and you'd be surprised what we can do with that thing. So we're able to keep customers' costs down by showing them we can take an old-school technology and make it far bigger than you ever imagined, and give you more horsepower at less cost, and customers are really receptive to that. Now, is that perfect for every footprint? No, that was a unique situation. Not everybody's got 142 million users. (chuckles)
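The "old tech, new wrapper" Tinker describes is just ZFS running on a current Linux host. The sketch below drives the standard zpool and zfs commands from Python; the NVMe device paths, pool name, and quota are placeholders for whatever a real server presents, and actually running it requires root and devices you are willing to overwrite.

```python
import subprocess

# Placeholder device paths; substitute the NVMe namespaces on the real host.
DATA_DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

def sh(*cmd):
    """Run a command and fail loudly; fine for a provisioning sketch."""
    subprocess.run(cmd, check=True)

def build_pool(pool="tank"):
    # Stripe across two mirrored pairs of NVMe drives.
    sh("zpool", "create", pool,
       "mirror", DATA_DEVICES[0], DATA_DEVICES[1],
       "mirror", DATA_DEVICES[2], DATA_DEVICES[3])
    # Inline compression is cheap on modern CPUs and shrinks what gets stored.
    sh("zfs", "set", "compression=lz4", pool)
    # One dataset per exported share keeps quotas and snapshots manageable.
    sh("zfs", "create", "-o", "quota=10T", f"{pool}/home")

# build_pool()   # requires root and real, empty devices
```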
You put that on a big machine, a Linux machine today, a large scale like an HPDL380 with NVME drives with a back-end data store, like a 3PAR or Primäre, or name whatever you want on the back end with a big fiber channel, you'd be surprised what we can do with that thing. So we're able to keep customers' costs down by showing them we can take a old-school technology and make it far bigger than you ever imagined, and give you more horsepower and at less cost, and customers are really receptive to that. Now is that perfect for every footprint? No, that was a unique situation. Not everybody's got 142 million users.(chuckles) >> Well, that's true. And so let me build on that, because the other thing that the CIOs I talk to and senior IT people and also business people, increasingly, is they want to make sure that the solution works now, but that it's not going to end-of-life options for them. >> Yeah. >> How do you do this using tried-and-true technologies combined into new and interesting ways, in a way that still nonetheless gives customers future growth options or future application options? >> I'm not a fan of vendor-locking, I'm not a fan of Franken-monsters. Our team of engineers, we have a mandate that they do not build anything like that, I won't approve it. Because I don't want to have a customer locked in to Serene IT. That was never the intent. We want them to choose us, we want them to come to our team and get our value, so we can show them how to grow their business, and do it in a nice, sustainable way, so we can show their staff how to support it. That takes us into our managed services component. Most of the big things we design and do, we're what we call an adaptive managed services, an AMS model. What do I mean by that statement? We're not a WITO. What's a WITO, you ask? It's a "Walk In, Take Over". That's the big boys, that's the DXEs of the world, that's the Assentras, that's what they do. And they do that well. We're not here to compete with that. But what we're here to do is say, to a company or business, whoever they might be, you probably don't need us to take over everything in your IT shop, and really, we're not going to be the best at that, nor are they in some cases, the other vendors. I'll tell you, you know your business the best. We know infrastructure the best, and we can show you where you can build your skillsets up and get better at it. We can automate a lot of it and show you how to manage the automation, and there'll be certain key points that maybe you guys don't want to own for various reasons, and we will manage just that key component, and we do that today with a lot of our big clients. >> Greg Tinker, CTO and founder of Serene IT, thanks very much for being on theCUBE. >> Thank you, Peter. >> And once again, I want to thank you for participating in this CUBEConversation. Until next time. (upbeat music)

Published Date : Nov 6 2019


Randy Arseneau & Steve Kenniston, IBM | CUBEConversation, August 2019


 

from the silicon angle media office in Boston Massachusetts it's the queue now here's your host David on tape all right buddy welcome to this cute conversation my name is Dave Ville on time or the co-host of the cube and we're gonna have a conversation to really try to explore does infrastructure matter you hear a lot today I've ever since I've been in this business I've heard Oh infrastructure is dead hardware is dead but we're gonna explore that premise and with me is Randy Arsenault and Steve Kenaston they're both global market development execs at IBM guys thanks for coming in and let's riff thanks for having us Dave so here's one do I want to start with the data we were just recently at the MIT chief data officer event 10 years ago that role didn't even exist now data is everything so I want to start off with you here this bro my data is the new oil and we've said you know what data actually is more valuable than oil oil I can put in my car I can put in my house but I can't put it in both data is it doesn't follow the laws of scarcity I can use the same data multiple times and I can copy it and I can find new value I can cut cost I can raise revenue so data in some respects is more valuable what do you think right yeah I would agree and I think it's also to your point kind of a renewable resource right so so data has the ability to be reused regularly to be repurposed so I would take it even further we've been talking a lot lately about this whole concept that data is really evolving into its own tier so if you think about a traditional infrastructure model where you've got sort of compute and network and applications and workloads and on the edge you've got various consumers and producers of that data the data itself has those pieces have evolved the data has been evolving as well it's becoming more complicated it's becoming certainly larger and more voluminous it's better instrumented it carries much more metadata it's typically more proximal with code and compute so the data itself is evolving into its own tier in a sense so we we believe that we want to treat data as a tier we want to manage it to wrap the services around it that enable it to reach its maximum potential in a sense so guys let's we want to make this interactive in a way and I'd love to give you my opinions as well as links are okay with that but but so I want to make an observation Steve if you take a look at the top five companies in terms of market cap in the US of Apple Google Facebook Amazon and of course Microsoft which is now over a trillion dollars they're all data companies they've surpassed the bank's the insurance companies the the Exxon Mobil's of the world as the most valuable companies in the world what are your thoughts on that why is that I think it's interesting but I think it goes back to your original statement about data being the new oil the and unlike oil Ray's said you can you can put it in house what you can't put it in your car you also when it's burnt it's gone right but with data you you have it around you generate more of it you keep using it and the more you use it and the more value you get out of it the more value the company gets out of it and so as those the reason why they continue to grow in value is because they continue to collect data they continue to leverage that data for intelligent purposes to make user experiences better their business better to be able to go faster to be able to new new things faster it's all part of part of this growth so data is one of the superpowers 
the other superpower of course is machine intelligence or what everybody talks about as AI you know it used to be that processing power doubling every 18 months was what drove innovation in the industry today it's a combination of data which we have a lot of it's AI and cloud for scaling we're going to talk about cloud but I want to spend a minute talking about AI when I first came into this business AI was all the rage but we didn't have the amount of data that we had today we don't we didn't have the processing power it was too expensive to store all this data that's all changed so now we have this emerging machine intelligence layer being used for a lot of different inks but it's sort of sitting on top of all these workloads that's being injected into databases and applications it's being used to detect fraud to sell us more stuff you know in real time to save lives and I'm going to talk about that but it's one of these superpowers that really needs new hardware architectures so I want to explore machine intelligence a little bit it really is a game changers it really is and and and tying back to the first point about sort of the the evolution of data and the importance of data things like machine learning and adaptive infrastructure and cognitive infrastructure have driven to your point are a hard requirement to adapt and improve the infrastructure upon which that lives and runs and operates and moves and breathes so we always had Hardware evolution or development or improvements and networks and the basic you know components of the infrastructure being driven again by advances in material science and silicon etc well now what's happening is the growth and importance and and Dynamis city of data is far outpacing the ability of the physical sciences to keep pace right that's a reality that we live in so therefore things like you know cognitive computing machine learning AI are kind of bridging the gap almost between the limitations we're bumping up against in physical infrastructure and the immense unlocked potential of data so that intermediary is really where this phenomenon of AI and machine learning and deep learning is happening and you're also correct in pointing out that it's it's everywhere I mean it's imbuing every single workload it's transforming every industry and a fairly blistering pace IBM's been front and center around artificial intelligence in cognitive computing since the beginning we have a really interesting perspective on it and I think we bring that to a lot of the solutions that we offer as well Ginni Rometty a couple years ago actually use the term incumbent disruptors and when I think of that I think about artificial intelligence and I think about companies like the ones I mentioned before that are very valuable they have data at their core most incumbents don't they have data all over the place you know they might have a bottling plant at the core of the manufacturing plant or some human process at the core so to close that gap artificial intelligence from the incumbents the appointees they're gonna buy that from companies like IBM they're gonna you know procure Watson or other AI tools and you know or maybe you know use open-source AI tools but they're gonna then figure out how to apply those to their business to do whatever fraud detection or recommendation engines or maybe even improve security and we're going to talk about this in detail but Steve this there's got to be new infrastructure behind that we can't run these new workloads on infrastructure that 
was designed 30 40 years ago exactly I mean I think I am truly fascinated by with this growth of data it's now getting more exponential and why we think about why is it getting more exponential it's getting more exponential because the ease at which you can actually now take advantage of that data it's going beyond the big financial services companies the big healthcare companies right we're moving further and further and further towards the edge where people like you and I and Randi and I have talked about the maker economy right I want to be able to go in and build something on my own and then deliver it to either as a service as a person a new application or as a service to my infrastructure team to go then turn it on and make something out of that that infrastructure it's got to come down in cost but all the things that you said before performance reliability speed to get there intelligence about data movement how do we get smarter about those things all of the underlying ways we used to think about how we managed protect secure that it all has evolved and it's continuing to evolve everybody talks about the journey the journey to cloud why does that matter it's not just the cloud it's also the the componentry underneath and it's gonna go much broader much bigger much faster well and I would just add just amplify what Steve said about this whole maker movement one of the other pressures that that's putting on corporate IT is it's driving essentially driving product development and innovation out to the end to the very edge to the end user level so you have all these very smart people who are developing these amazing new services and applications and workloads when it gets to the point where they believe it can add value to the business they then hand it off to IT who is tasked with figuring out how to implement it scale it protect it secured debt cetera that's really where I believe I um plays a key role or where we can play a key role add a lot of values we understand that process of taking that from inception to scale and implementation in a secure enterprise way and I want to come back to that so we talked about data as one of the superpowers an AI and the third one is cloud so again it used to be processor speed now it's data plus AI and cloud why is cloud important because cloud enables scale there's so much innovation going on in cloud but I want to talk about you know cloud one dot o versus cloud two dot o IBM talks about you know the new era of cloud so what was cloud one dot o it was largely lift and shift it was taking a lot of crap locations and putting him in the public cloud it was a lot of tests in dev a lot of startups who said hey I don't need to you know have IT I guess like the cube we have no ID so it's great for small companies a great way to experiment and fail fast and pay for you know buy the drink that was one dot o cloud to dot all to datos is emerging is different it's hybrid it's multi cloud it's massively distributed systems distributed data on Prem in many many clouds and it's a whole new way of looking at infrastructure and systems design so as Steve as you and I have talked about it's programmable so it's the API economy very low latency we're gonna talk more about what that means but that concept of shipping code to data wherever it lives and making that cloud experience across the entire infrastructure no matter whether it's on Prem or in cloud a B or C it's a complicated problem it really is and when you think about the fact that you know the big the 
big challenge we started to run into when we were talking about cloud one always shadow IT right so folks really wanted to be able to move faster and they were taking data and they were actually copying it to these different locations to be able to use it for them simply and easily well once you broke that mold you started getting away from the security and the corporate furnance that was required to make sure that the business was safe right it but it but it but following the rules slowed business down so this is why they continued to do it in cloud 2.0 I like the way you position this right is the fact that I no longer want to move data around moving data it within the infrastructure is the most expensive thing to do in the data center so if I can move code to where I need to be able to work on it to get my answers to do my AI to do my intelligent learning that all of a sudden brings a lot more value and a lot more speed and speed as time as money rate if I can get it done faster I get more valuable and then just you know people often talk about moving data but you're right on you the last thing you want to do is move data in just think about how long it takes to back up the first time you ever backed up your iPhone how long it took well and that's relatively small compared to all the data in a data center there's another subtext here from a standpoint of cloud 2.0 and it involves the edge the edge is a new thing and we have a belief inside of wiki bond and the cube that we talk about all the time that a lot of the inference is going to be done at the edge what does that mean it means you're going to have factory devices autonomous vehicles a medical device equipment that's going to have intelligence in there with new types of processors and we'll talk about that but a lot of the the inference is that conclusions were made real-time and and by the way these machines will be able to talk to each other so you'll have a machine to machine communication no humans need to be involved to actually make a decision as to where should I turn or you know what should be the next move on the factory floor so again a lot of the data is gonna stay in place now what does that mean for IBM you still have an opportunity to have data hubs that collect that data analyze it maybe push it up to the cloud develop models iterate and push it back down but the edge is a fundamentally new type of approach that we've really not seen before and it brings in a whole ton of new data yeah that's a great point and it's a market phenomenon that has moved and is very rapidly moving from smartphones to the enterprise right so right so your point is well-taken if you look in the fact is we talked earlier that compute is now proximal to the data as opposed to the other way around and the emergence of things like mesh networking and you know high bandwidth local communications peer-to-peer communications it's it's not only changing the physical infrastructure model and the and the best practices around how to implement that infrastructure it's also fundamentally changing the way you buy them the way you consume them the way you charge for them so it's it's that shift is changing and having a ripple effect across our industry in every sense whether it's from the financial perspective the operational perspective the time to market perspective it's also and we talked a lot about industry transformation and disruptors that show up you know in an industry who work being the most obvious example and just got an industry from the 
from the bare metal and recreate it they are able to do that because they've mastered this new environment where the data is king how you exploit that data cost-effectively repeatably efficiently is what differentiates you from the pack and allows you to create a brand new business model that that didn't exist prior so that's really where every other industry is going you talking about those those those big five companies in North America that are that are the top top companies now because of data I often think about rewind you know 25 years do you think Amazon when they built Amazon really thought they were going to be in the food service business that the video surveillance business the drone business all these other book business right maybe the book business right but but their architecture had to scale and change and evolve with where that's going all around the data because then they can use these data components and all these other places to get smarter bigger and grow faster and that's that's why they're one of the top five this is a really important point especially for the young people in the audience so it used to be that if you were in an industry if you were in health care or you were in financial services or you were in manufacturing you were in that business for life every industry had its own stack the sales the marketing the R&D everything was wired to that industry and that industry domain expertise was really not portable across businesses because of data and because of digital transformations companies like Amazon can get into content they can get into music they can get it to financial services they can get into healthcare they can get into grocery it's all about that data model being portable across those industries it's a very powerful concept that you and I mean IBM owns the weather company right so I mean there's a million examples of traditional businesses that have developed ways to either enter new markets or expand their footprint in existing markets by leveraging new sources of data so you think about a retailer or a wholesale distributor they have to very accurately or as accurately as possible forecast demand for goods and make sure logistically the goods are in the right place at the right time well there are million factors that go into that there's whether there's population density there's local cultural phenomena there's all sorts of things that have to be taken into consideration previously that would be near impossible to do now you can sit down again as an individual maker I can sit down at my desk and I can craft a model that consumes data from five readily available public api's or data sets to enhance my forecast and I can then create that model execute it and give it to two of my IT guy to go scale-out okay so I want to start getting into the infrastructure conversation again remember the premise of this conversation it doesn't read for structure matter we want to we want to explore that oh I start at the high level with with with cloud multi-cloud specifically we said cloud 2.0 is about hybrid multi cloud I'm gonna make a statements of you guys chime in my my assertion is that multi cloud has largely been a symptom of multi-vendor shadow IT different developers different workloads different lines of business saying hey we want to we want to do stuff in the cloud this happened so many times in the IT business all and then I was gonna govern it how is this gonna be secure who's got access control on and on and on what about compliance what about 
security then they throw it over to IT and they say hey help us fix this and so itea said look we need a strategy around multi cloud it's horses for courses maybe we go for cloud a for our collaboration software cloud B for the cognitive stuff cloud C for the you know cheap and deep storage different workloads for different clouds but there's got to be a strategy around that so I think that's kind of point number one and I T is being asked to kind of clean up this stuff but the future today the clouds are loosely coupled there may be a network that connects them but there's there's not a really good way to take data or rather to take code ship it to data wherever it lives and have it be a consistent well you were talking about an enterprise data plane that's emerging and that's kind of really where the opportunity is and then you maybe move into the control plane and the management piece of it and then bring in the edge but envision this mesh of clouds if you will whether it's on pram or in the public cloud or some kind of hybrid where you can take metadata and code ship it to wherever the data is leave it there much smaller you know ship five megabytes of code to a petabyte of data as opposed to waiting three months to try to ship you know petabytes to over the network it's not going to work so that's kind of the the spectrum of multi cloud loosely coupled today going to this you know tightly coupled mesh your guys thoughts on that yeah that's that's a great point and and I would add to it or expand that even further to say that it's also driving behavioral fundamental behavioral and organizational challenges within a lot of organizations and large enterprises cloud and this multi cloud proliferation that you spoke about one of the other things that's done that we talked about but probably not enough is it's almost created this inversion situation where in the past you'd have the business saying to IT I need this I need this supply chain application I need this vendor relationship database I need this order processing system now with the emergence of this cloud and and how easy it is to consume and how cost-effective it is now you've got the IT guys and the engineers and the designers and the architects and the data scientists pushing ideas to the business hey we can expand our footprint and our reach dramatically if we do this so you've get this much more bi-directional conversation happening now which frankly a lot of traditional companies are still working their way through which is why you don't see you know 100% cloud adoption but it drives those very productive full-duplex conversations at a level that we've never seen before I mean we encounter clients every day who are having these discussions are sitting down across the table and IT is not just doesn't just have a seat at the table they are often driving the go-to-market strategy so that's a really interesting transformation that we see as well in addition to the technology so there are some amazing things happening Steve underneath the covers and the plumbing and infrastructure and look at we think infrastructure matters that's kind of why we're here we're infrastructure guys but I want to make a point so for decades this industry is marked to the cadence of Moore's law the idea that you can double processing speeds every 18 months disk drive processors disk drives you know they followed that curve you could plot it out the last ten years that started to attenuate so what happened is chip companies would start putting more cores 
on to the real estate well they're running out of real estate now so now what's happening is we've seen this emergence of alternative processors largely came from mobile now you have arm doing a lot of offload processing a lot of the storage processing that's getting offloaded those are ARM processors in video with GPUs powering a lot of a lot of a is yours even seeing FPGAs they're simple they're easy them to spin up Asics you know making a big comeback so you've seen these alternative processes processors powering things underneath where the x86 is and and of course they're still running applications on x86 so that's one sort of big thing big change in infrastructure to support this distributed systems the other is flash we saw flash basically take out spinning disk for all high-speed applications we're seeing the elimination of scuzzy which is a protocol that sits in between the the the disk you know and the rest of the network that's that's going away you're hearing things like nvme and rocky and PCIe basically allowing stores to directly talk to the so now a vision envision this multi-cloud system where you want to ship metadata and code anywhere these high speed capabilities interconnects low latency protocols are what sets that up so there's technology underneath this and obviously IBM is you know an inventor of a lot of this stuff that is really gonna power this next generation of workloads your comments yeah I think I think all that 100% true and I think the one component that we're fading a little bit about it even in the infrastructure is the infrastructure software right there's hardware we talked a lot talked about a lot of hardware component that are definitely evolving to get us better stronger faster more secure more reliable and that sort of thing and then there's also infrastructure software so not just the application databases or that sort of thing but but software to manage all this and I think in a hybrid multi cloud world you know you've got these multiple clauses for all practical purposes there's no way around it right marketing gets more value out of the Google analytic tools and Google's cloud and developers get more value out of using the tools in AWS they're gonna continue to use that at the end of the day I as a business though need to be able to extract the value from all of those things in order to make different business decisions to be able to move faster and surface my clients better there's hardware that's gonna help me accomplish that and then there are software things about managing that whole consetta component tree so that I can maximize the value across that entire stack and that stack is multiple clouds plus the internal clouds external clouds everything yeah so it's great point and you're seeing clear examples of companies investing in custom hardware you see you know Google has its own ship Amazon its own ship IBM's got you know power 9 on and on but none of this stuff works if you can't manage it so we talked before about programmable infrastructure we talked about the data plane and the control plane that software that's going to allow us to actually manage these multiple clouds as at least a quasi single entity you know something like a logical entity certainly within workload classes and in Nirvana across the entire you know network well and and the principal or the principle drivers of that evolution of course is containerization right so the containerization phenomenon and and you know obviously with our acquisition of red hat we're now 
very keenly aware and acutely plugged into the whole containerization phenomenon which is great we're you're seeing that becoming almost the I can't think of us a good metaphor but you're seeing containerization become the vernacular that's being spoken in multiple different types of reference architectures and use case environments that are vastly different in their characteristics whether they're high throughput low latency whether they're large volume whether they're edge specific whether they're more you know consolidated or hub-and-spoke models containerization is becoming the standard by which those architectures are being developed and with which they're being deployed so we think we're very well-positioned working with that emerging trend and that rapidly developing trend to instrument it in a way that makes it easier to deploy easier to instrument easier to develop so that's key and I want to sort of focus now on the relevance of IBM one side one thing that we understand because that that whole container is Asian think back to your original point Dave about moving data being very expensive and the fact that the fact that you want to move code out to the data now with containers microservices all of that stuff gets a lot easier development becomes a lot faster and you're actually pushing the speed of business faster well and the other key point is we talked about moving code you know to the data as you move the code to the data and run applications anywhere wherever the data is using containers the kubernetes etc you don't have to test it it's gonna run you know assuming you have the standard infrastructure in place to do that and the software to manage it that's huge because that means business agility it means better quality and speed alright let's talk about IBM the world is complex this stuff is not trivial the the more clouds we have the more edge we have the more data we have the more complex against IBM happens to be very good at complex three components of the innovation cocktail data AI and cloud IBM your customers have a lot of data you guys are good with data it's very strong analytics business artificial intelligence machine intelligence you've invested a lot in Watson that's a key component business and cloud you have a cloud it's not designed to compete not knock heads and the race to zero with with the cheap and deep you know storage clouds it's designed to really run workloads and applications but you've got all three ingredients as well you're going hard after the multi cloud world for you guys you've got infrastructure underneath you got hardware and software to manage that infrastructure all the modern stuff that we've talked about that's what's going to power the customers digital transformations and we'll talk about that in a moment but maybe you could expand on that in terms of IBM's relevance sure so so again using the kind of maker the maker economy metaphor bridging from that you know individual level of innovation and creativity and development to a broadly distributed you know globally available work loader or information source of some kind the process of that bridge is about scale and reach how do you scale it so that it runs effectively optimally is easily managed Hall looks and feels the same falls under the common umbrella of services and then how do you get it to as many endpoints as possible whether it's individuals or entities or agencies or whatever scale and reach iBM is all about scale and reach I mean that's kind of our stock and trade we we 
are able to take solutions from small kind of departmental level or kind of skunkworks level and make them large secure repeatable easily managed services and and make them as turnkey as possible our services organizations been doing it for decades exceptionally well our product portfolio supports that you talk about Watson and kind of the cognitive computing story we've been a thought leader in this space for decades I mean we didn't just arrive on the scene two years ago when machine learning and deep learning and IO ste started to become prominent and say this sounds interesting we're gonna plant our flag here we've been there we've been there for a long time so you know I kind of from an infrastructure perspective I kind of like to use the analogy that you know we are technology ethos is built on AI it's built on cognitive computing and and sort of adaptive computing every one of our portfolio products is imbued with that same capability so we use it internally we're kind of built from AI for AI so maybe that's the answer to this question of it so what do you say that somebody says well you know I want to buy you know my flash storage from pure AI one of my bi database from Oracle I want to buy my you know Intel servers from Dell you know whatever I want to I want to I want control and and and I gotta go build it myself why should I work with IBM do you do you get that a lot and how do you respond to that Steve I think I think this whole new data economy has opened up a lot of places for data to be stored anywhere I think at the end of the day it really comes down to management and one of the things that I was thinking about as you guys were we're conversing is the enterprise class or Enterprise need for things like security and protection that sort of thing that rounds out the software stack in our portfolio one of the things we can bring to the table is sure you can go by piece parts and component reform from different people that you want right and in that whole notion around fail-fast sure you can get some new things that might be a little bit faster that might be might be here first but one of the things that IBM takes a lot of pride was a lot of qual a lot of pride into is is the quality of their their delivery of both hardware and software right so so to me even though the infrastructure does matter quite a bit the question is is is how much into what degree so when you look at our core clients the global 2,000 right they want to fail fast they want to fail fast securely they want to fail fast and make sure they're protected they want to fail fast and make sure they're not accidentally giving away the keys to the kingdom at the end of the day a lot of the large vendor a lot of the large clients that we have need to be able to protect their are their IP their brain trust there but also need the flexibility to be creative and create new applications that gain new customer bases so the way I the way I look at it and when I talk to clients and when I talk to folks is is we want to give you them that while also making sure they're they're protected you know that said I would just add that that and 100% accurate depiction the data economy is really changing the way not only infrastructure is deployed and designed but the way it can be I mean it's opening up possibilities that didn't exist and there's new ones cropping up every day to your point if you want to go kind of best to breed or you want to have a solution that includes multi vendor solutions that's okay I mean the whole idea 
of using again for instance containerization thinking about kubernetes and docker for instance as a as a protocol standard or a platform standard across heterogeneous hardware that's fine like like we will still support that environment we believe there are significant additive advantages to to looking at IBM as a full solution or a full stack solution provider and our largest you know most mission critical application clients are doing that so we think we can tell a pretty compelling story and I would just finally add that we also often see situations where in the journey from the kind of maker to the largely deployed enterprise class workload there's a lot of pitfalls along the way and there's companies that will occasionally you know bump into one of them and come back six months later and say ok we encountered some scalability issues some security issues let's talk about how we can develop a new architecture that solves those problems without sacrificing any of our advanced capabilities all right let's talk about what this means for customers so everybody talks about digital transformation and digital business so what's the difference in a business in the digital business it's how they use data in order to leverage data to become one of those incumbent disruptors using Ginny's term you've got to have a modern infrastructure if you want to build this multi cloud you know connection point enterprise data pipeline to use your term Randy you've got to have modern infrastructure to do that that's low latency that allows me to ship data to code that allows me to run applet anywhere leave the data in place including the edge and really close that gap between those top five data you know value companies and yourselves now the other piece of that is you don't want to waste a lot of time and money managing infrastructure you've got to have intelligence infrastructure you've got to use modern infrastructure and you've got to redeploy those labor assets toward a higher value more productive for the company activities so we all know IT labor is a chop point and we spend more on IT labor managing Leung's provisioning servers tuning databases all that stuff that's gotta change in order for you to fund digital transformations so that to me is the big takeaway as to what it means for customer and we talked about that sorry what we talked about that all the time and specifically in the context of the enterprise data pipeline and within that pipeline kind of the newer generation machine learning deep learning cognitive workload phases the data scientists who are involved at various stages along the process are obviously kind of scarce resources they're very expensive so you can't afford for them to be burning cycles and managing environments you know spinning up VMs and moving data around and creating working sets and enriching metadata that they that's not the best use of their time so we've developed a portfolio of solutions specifically designed to optimize them as a resource as a very valuable resource so I would vehemently agree with your premise we talked about the rise of the infrastructure developer right so at the end of the day I'm glad you brought this topic up because it's not just customers it's personas Pete IBM talks to different personas within our client base or our prospect base about why is this infrastructure important to to them and one of the core components is skill if you have when we talk about this rise of the infrastructure developer what we mean is I need to be able to build 
composable intelligent programmatic infrastructure that I as IT can set up not have to worry about a lot of risk about it break have to do in a lot of troubleshooting but turn the keys over to the users now let them use the infrastructure in such a way that helps them get their job done better faster stronger but still keeps the business protected so don't make copies into production and screw stuff up there but if I want to make a copy of the data feel free go ahead and put it in a place that's safe and secure and it won't it won't get stolen and it also won't bring down the enterprise's is trying to do its business very key key components - we talked about I infused data protection and I infused storage at the end of the day it's what is an AI infused data center right it needs to be an intelligent data center and I don't have to spend a lot of time doing it the new IT person doesn't want to be troubleshooting all day long they want to be in looking at things like arm and vme what's that going to do for my business to make me more competitive that's where IT wants to be focused yeah and it's also we just to kind of again build on this this whole idea we haven't talked a lot about it but there's obviously a cost element to all this right I mean you know the enterprise's are still very cost-conscious and they're still trying to manage budgets and and they don't have an unlimited amount of capital resources so things like the ability to do fractional consumption so by you know pay paper drink right buy small bits of infrastructure and deploy them as you need and also to Steve's point and this is really Steve's kind of area of expertise and where he's a leader is kind of data efficiency you you also can't afford to have copy sprawl excessive data movement poor production schemes slow recovery times and recall times you've got a as especially as data volumes are ramping you know geometrically the efficiency piece and the cost piece is absolutely relevant and that's another one of the things that often gets lost in translation between kind of the maker level and the deployment level so I wanted to do a little thought exercise for those of you think that this is all you know bromide and des cloud 2.0 is also about we're moving from a world of cloud services to one where you have this mesh which is ubiquitous of of digital services you talked about intelligence Steve you know the intelligent data center so all these all these digital services what am I talking about AI blockchain 3d printing autonomous vehicles edge computing quantum RPA and all the other things on the Gartner hype cycle you'll be able to procure these as services they're part of this mesh so here's the thought exercise when do you think that owning and driving your own vehicle is no longer gonna be the norm right interesting thesis question like why do you ask the question well because these are some of the disruptions so the questions are designed to get you thinking about the potential disruptions you know is it possible that our children's children aren't gonna be driving their own car it's because it's a it's a cultural change when I was 16 year olds like I couldn't wait but you started to see a shifted quasi autonomous vehicles it's all sort of the rage personally I don't think they're quite ready yet but it's on the horizon okay I'll give you another one when will machines be able to make better diagnosis than doctors actually both of those are so so let's let's hit on autonomous and self-driving vehicles first I agree 
they're not there yet I will say that we have a pretty thriving business practice and competency around working with a das providers and and there's an interesting perception that a das autonomous driving projects are like there's okay there's ten of them around the world right maybe there's ten metal level hey das projects around the world what people often don't see is there is a gigantic ecosystem building around a das all the data sourcing all the telemetry all the hardware all the network support all the services I mean building around this is phenomenal it's growing at a had a ridiculous rate so we're very hooked into that we see tremendous growth opportunities there if I had to guess I would say within 10 to 12 years there will be functionally capable viable autonomous vehicles not everywhere but they will be you will be able as a consumer to purchase one yeah that's good okay and so that's good I agree that's a the time line is not you know within the next three to five years all right how about retail stores will well retail stores largely disappeared we're we're rainy I was just someplace the other day and I said there used to be a brick-and-mortar there and we were walking through the Cambridge Tseng Galleria and now the third floor there's no more stores right there's gonna be all offices they've shrunken down to just two floors of stores and I highly believe that it's because you know the brick you know the the retailers online are doing so well I mean think about it used to be tricky and how do you get in and and and I need the Walmart minute I go cuz I go get with Amazon and that became very difficult look at places like bombas or Casper or all the luggage plate all this little individual boutique selling online selling quickly never having to have to open up a store speed of deployment speed of product I mean it's it's it's phenomenal yeah and and frankly if if Amazon could and and they're investing billions of dollars and they're trying to solve the last mile problem if Amazon could figure out a way to deliver ninety five percent of their product catalog Prime within four to six hours brick-and-mortar stores would literally disappear within a month and I think that's a factual statement okay give me another one will banks lose control traditional banks lose control of the payment systems you can Moselle you see that banks are smart they're buying up you know fin tech companies but right these are entrenched yeah that's another one that's another one with an interesting philosophical element to it because people and some of its generational right like our parents generation would be horrified by the thought of taking a picture of a check or using blockchain or some kind of a FinTech coin or any kind of yeah exactly so Bitcoin might I do my dad ask you're not according I do I don't bit going to so we're gonna we're waiting it out though it's fine by the way I just wanted to mention that we don't hang out in the mall that's actually right across from our office I want to just add that to the previous comment so there is a philosophical piece of it they're like our generation we're fairly comfortable now because we've grown up in a sense with these technologies being adopted our children the concept of going to a bank for them will be foreign I mean it will make it all have no context for the content for the the the process of going to speak face to face to another human it just say it won't exist well will will automation whether its robotic process automation and other 
automation 3d printing will that begin to swing the pendulum back to onshore manufacturing maybe tariffs will help to but again the idea that machine intelligence increasingly will disrupt businesses there's no industry that's safe from disruption because of the data context that we talked about before Randy and I put together a you know IBM loves to use were big words of transformation agile and as a sales rep you're in the field and you're trying to think about okay what does that mean what does that mean for me to explain to my customer so he put together this whole thing about what his transformation mean to one of them was the taxi service right in the another one was retail so and not almost was fencers I mean you're hitting on on all the core things right but this transformation I mean it goes so deep and so wide when you think about exactly what Randy said before about uber just transforming just the taxi business retailers and taxis now and hotel chains and that's where the thing that know your customer they're getting all of that from data data that I'm putting it not that they're doing work to extract out of me that I'm putting in so that autonomous vehicle comes to pick up Steve Kenaston it knows that Steve likes iced coffee on his way to work gives me a coupon on a screen I hit the button it automatically stops at Starbucks for me and it pre-ordered it for me you're talking about that whole ecosystem wrapped around just autonomous vehicles and data now it's it's unbeliev we're not far off from the Minority Report era of like Anthem fuck advertising targeted at an individual in real time I mean that's gonna happen it's almost there now I mean you just use point you will get if I walk into Starbucks my phone says hey why don't you use some points while you're here Randy you know so so that's happening at facial recognition I mean that's all it's all coming together so and again underneath all this is infrastructure so infrastructure clearly matters if you don't have the infrastructure to power these new workloads you're drugged yeah and I would just add and I think we're all in agreement on that and and from from my perspective from an IBM perspective through my eyes I would say we're increasingly being viewed as kind of an arms dealer and that's a probably a horrible analogy but we're being used we're being viewed as a supplier to the providers of those services right so we provide the raw materials and the machinery and the tooling that enables those innovators to create those new services and do it quickly securely reliably repeatably at a at a reasonable cost right so it's it's a step back from direct engagement with consumer with with customers and clients and and architects but that's where our whole industry is going right we are increasingly more abstracted from the end consumer we're dealing with the sort of assembly we're dealing with the assemblers you know they take the pieces and assemble them and deliver the services so we're not as often doing the assembly as we are providing the raw materials guys great conversation I think we set a record tends to be like that so thank you very much for no problem yeah this is great thank you so much for watching everybody we'll see you next time you're watching the cube
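A recurring idea in the conversation above is that in cloud 2.0 much of the inference happens at the edge, with the raw telemetry staying in place and only compact results moving upstream. The following is a generic sketch of that pattern, not IBM's implementation; the simulated sensor, the simple threshold "model", and the publish step are all hypothetical stand-ins.

```python
"""Generic sketch of edge-side inference: score telemetry locally and
ship only compact summaries upstream, leaving the raw data in place.
The sensor read is simulated and the publish step just prints; both are
hypothetical stand-ins, not any vendor's API."""
import json
import random
import statistics
import time
from collections import deque

WINDOW = deque(maxlen=600)   # rolling window of recent readings
ANOMALY_Z = 3.0              # flag readings more than 3 sigma from the mean


def read_sensor() -> float:
    # Simulated reading; a real agent would pull from device telemetry.
    return random.gauss(50.0, 2.0)


def publish_summary(summary: dict) -> None:
    # Placeholder for an MQTT/HTTPS push of a few hundred bytes.
    print(json.dumps(summary))


def is_anomaly(value: float) -> bool:
    """Trivial local 'model': is this reading an outlier for the window?"""
    if len(WINDOW) < 30:
        return False
    mean = statistics.mean(WINDOW)
    stdev = statistics.pstdev(WINDOW) or 1e-9
    return abs(value - mean) / stdev > ANOMALY_Z


def main(iterations: int = 600) -> None:
    for _ in range(iterations):
        value = read_sensor()
        if is_anomaly(value):
            # Only the verdict leaves the edge; the raw stream stays local.
            publish_summary({"ts": time.time(), "value": value, "anomaly": True})
        WINDOW.append(value)


if __name__ == "__main__":
    main()
```

In a real deployment the local model would be trained centrally and pushed down to the edge device, which is the "ship code to the data" direction the conversation keeps coming back to.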

Published Date : Aug 8 2019


#scaletowin with Infinidat


 

(orchestral music) >> Hi everybody, my name is Dave Vellante, and welcome to this special CUBE community event. You know, customers are on a digital journey. They're trying to transform themselves into a digital business. What's the difference between a business and a digital business? Well, we think it's the way in which they use data. So we're here with a company, Infinidat, who's all about using data at multi-petabyte scale. We have news, we have announcements, we're gonna drill down with subject matter experts, and we're gonna start with Brian Carmody, who's the chief technology officer of Infinidat. Brian, it's good to see you again. >> Good to see you too, Dave. And I can't believe it's been a year. >> It has been a year since we last sat down. If you had to summarize, Brian, the last twelve months in one word, what would it be? >> How about two words, "insane growth". >> Insane growth, okay. >> Yes, yes. >> Talk about that. >> Yeah so, as of this morning at least, Infinidat has a hair over 4.6 exabytes of customer data under management, which is just insanely cool, and I'm not sure if I counted all of the zeroes properly, but it looks like it's around 180 trillion IOs served to happy customers so far as of this morning. >> Some mind-boggling numbers, so let me ask you a question. Is this growth coming from, sort of, traditional workloads? Is it new workloads, is it a mix? >> Oh, that's a great question. So you know, early in the Infinidat ramp, our early traction was with core banking, transaction processing applications. It was all about consolidation and replacing rows of VNXes with a single floor tile, Infinibox. But in the past year, virtually all of our growth has been an expansion outside of that core, and it's a movement into greenfield applications. So basically, obviously our customers are going into hardcore digital transformation, and this kind of changes the types of workloads that we're looking at, that we're supporting, but it also changes the value proposition. Consolidation and stuff like that is all about the bottom line, it's about making storage more efficient, but once we get into the digital transformation, these greenfield applications, which is what most of our new growth is, it's actually all about using your digital infrastructure as a revenue generating machine for opening up new markets, new opportunities, new applications, et cetera. >> So when people talk about cloud native, that would be an example, using cloud native tool chains, that's what's happening on your systems. Is that correct? >> Yeah, absolutely. And I can give you some examples. So I recently spent a day with a group of engineers that are working with autonomous vehicle sensor data. So this is telemetry coming off of self-driving cars. And they're working with these ridiculously large, like multi-petabyte data sets, and the purpose of this system is to make the vehicles smarter, and more resistant to collisions, and ultimately more safe. A little bit before that, me and a bunch of other people from the team spent a day with another partner, they're also working with sensor data, but they're doing biometrics off of wearables. So they've perfected an algorithm that can, in real time, detect a heart attack from your pulse. And it will immediately dispatch an ambulance to your geolocation, where hopefully your arm is still connected to your body.
And immediately send your electronic medical health records to that nearest hospital, and only then you get a video call on your phone from a doctor who says, hey, are you sitting down? You're gonna be fine, you're having a heart attack, and an ambulance is gonna be there in two minutes. And the whole purpose of this is just to shave precious minutes off of that critical period of getting a person who's having a heart attack the medical care they need. >> Yeah, I'd say that's a non-traditional workload. And the impact is saving lives, that's awesome. Now let's talk a little bit about your journey. You know, our friends at Gartner, they do these Magic Quadrants, a lot of people don't like 'em, I happen to think they're quite useful as a guidepost. You guys have always been strong on the vision, and you've been executing. Where are you today in that quadrant? >> Yeah, it's an extreme honor. Gartner elevated us into the Leaders Quadrant last year, so customers take that very, very seriously. And the ability-to-execute axis is, what Gartner says is, are you influencing the market? Are you causing the incumbents to change their strategies? And with our disruptive pricing, with our availability guarantees, our SLAs and stuff like that, Gartner felt like we met the criteria. And it's a huge honor, and we absolutely have our customers to thank for that, because the Magic Quadrant isn't about what you tell Gartner, it's about what your customers tell Gartner.
Some of them are in managed service providers. Some of them are SaaS, and then they have on premises storage arrays, and elastic data fabric is Infinidat's solution that glues all of that together. It turns it into a single platform that spans on premises, colo, Infinidat powered managed service providers, Google, Amazon and Azure, and it glues it into a single platform for running workloads, so over the course of these presentations, we're gonna drill down into some of the enabling technologies that make this possible, but the net net, is that it is a brand new, next generation data plane for let's say for example, within a customer data center it allows customers to cluster multiple Infiniboxes together into what we call availability zones, and then manage that as a single entity. And that scales from a petabyte up to an exabyte of capacity per data center, and typically a customer would have one availability zone per data center and then one availability zone that can span multiple clouds, so that's the data plane. The control plane is the ability to manage all of this, no matter where the data lives, no matter where the workload is or needs to be and to manage it with a single pane of glass. And those are the kind of pieces of enabling technology that we're gonna unpack in the technical sessions. >> Two questions on that if I may. So you've got the data plane and the control plane, if I want to plug in to some other control plane, you know VMware control plane for instance, your API based architecture allows me to do that? Is that correct? >> Oh yeah, it's application aware, so for instance if you're running a VMware environment or a Kubernetes environment, it seamlessly integrates into that, and you manage it from a single API endpoint, and it's elastic, it scales up and down, and it's infinite and immortal. And probably the biggest problem that this solves for customers is it makes data migrations obsolete. It gives us the ability to decouple the data lifecycle from the hardware refresh lifecycle, which is a game changer for customers. >> I think you just answered my second question, which is what makes this unique? And that's at least one aspect of course. >> Yeah, I mean that's the, data migrations are the bane of customers' existence. And the larger the customer is, the more filer and array sprawl they have, the more of a data migration headache they have. So when we kicked this project off five years ago, our call to action, the kernel of an idea that became elastic data fabric, was find a way to make it so that the next generation of infrastructure engineers that are graduating from college right now will never know what a data migration is, and make it a story that old men in our industry talk about. >> Well that's huge because it is the bane of customers' existences. Very expensive, minimum $50,000 per migration, and many, many months, thanks Brian, for kicking this off, we've got a lot of ground to cover, and so we're gonna get into it now. We're gonna get into the news, we're gonna double click on some of the technologies and architectures, we're gonna hear from customers. And then it's your turn, we're gonna jump into the crowd chat and hear from you, so keep it right there. We'll be right back, right after this short break. (calming music) We're back with Doc D'Errico, the CMO of Infinidat. We're gonna talk about agility and manageability. Good to see you Doc. >> Good to see you again, Dave.
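To make the data plane and control plane Brian describes a little more concrete, here is a minimal sketch of what provisioning a volume into an availability zone through a single management endpoint could look like from a client script. The endpoint URL, paths, field names, and token are illustrative assumptions only, not Infinidat's actual API.

```python
# Hypothetical sketch of a control-plane call: the client asks for capacity in an
# availability zone and the control plane decides which box in that zone serves it.
# All endpoint paths and field names below are assumptions for illustration.
import requests

API = "https://fabric.example.com/api/v1"          # assumed single management endpoint
HEADERS = {"Authorization": "Bearer <token>"}      # placeholder credential

def create_volume(zone: str, name: str, size_tib: int) -> dict:
    payload = {"availability_zone": zone, "name": name, "size_tib": size_tib}
    resp = requests.post(f"{API}/volumes", json=payload, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()    # e.g. {"id": "...", "state": "online", "zone": zone}

if __name__ == "__main__":
    print(create_volume("dc1-az1", "oracle-data-01", size_tib=64))
```

The point of the single endpoint is that the caller never names an individual box; placement, migration, and hardware refresh stay behind the API, which is what makes the data lifecycle independent of the hardware lifecycle.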
>> All right, let's start in reverse order, let's start with manageability. What's your story there? >> Sure, happy to do that, you know Dave, we get great feedback from our customers on how simple and easy our systems are to manage. We have products like Infinimetrics which give them a lot of insights into the system. We have APIs, very simple and easy to use. But our customers keep asking for more insights into their environment, leveraging the analytics that we already do, now you've also heard just now about our elastic data fabric, which is our vision, Infinidat's vision for the data center, not just for today, but into the future. And our first instantiation of that vision in answering those customer responses, is a new cloud based platform, initially to provide some better monitoring and analytics, but then you're going to go into data migrations, auto provisioning, storage availability zones, and really your whole customer experience with Infinidat. >> So for my understanding, this is a SAS solution, is that correct? >> It is, it's a secure, multi site solution, so in other words, all of your Infinidat systems, wherever they are around the world, all visible through a single pane of glass. But the cloud based system gives us a lot of great power too, it gives us the agility to provide faster development and rapid enhancement based on feedback and feature requests. It also then provides you customizable dashboards in your system, dashboards that we can create very rapidly, giving you advisors and insights into a variety of different things. And we have lots of customers who are already engaged in using this. >> So I'm interested in this advisors and insights, my understanding is you guys got a data lake in the backend. You're mining that data, performing analytics on it. What kinds of benefits do customers get out of that? >> Well they can search into things, like abandoned volumes within their system. Tracking the growth of their storage environment. Configuration errors, like asymmetric ports and paths, or even just performance behaviors, like abnormal latencies or bandwidth patterns. >> So when you're saying abandoned volumes, your talking about like, reclaiming wasted space? >> Absolutely. >> To be able to reuse it. I mean people in the old days have done that because of a log structured file and they had to do it for performance, but you're doing it to give back money to the customers, is that right? >> That's exactly right, you know customers very often get requests from business units to spin off additional volume sets for whether it be a test environment or some specific application that they're running for some period of time. And then when they spin down the environment they sometimes leave the data set there thinking that they might need it again in the not so distant future, and then it sort of dies on the vine, it sits there taking up space and it's never used again, so we give them insights into when the last time things were accessed, how often it's accessed, what the IO patterns are, how many copies there might be, with snapshots and things like that. >> You mentioned strong customer feedback. Everybody says they get great customer feedback. But you've been with a lot of companies. How is this different, and what specifically is that feedback? >> Yeah, the analytics and insights are very unique, this is exactly what customers have been asking for from other vendors. Nobody does it, you know we're hearing such great stories about the impact on their costs. 
Like the capacity utilization, reclaiming all that abandoned capacity, being able to put new workloads on and grow their environment without having to pay any additional costs is exciting to them. Identifying and correcting configuration issues, getting ahead of performance problems before they occur. Our customers are already saving time and money by leveraging this in our environment. >> All right let's pivot to agility. You've got Flex, what's your story there? What is Flex? >> Well Dave, imagine a world if you will, if you didn't have to worry about hardware anymore, right, it sounds like a science fiction story but it's not. >> Sounds like cloud. >> It sounds like cloud, and people have been migrating to the cloud and in the public cloud environment, we have a solution that we talked about a year ago called Neutrix Cloud, providing a sovereign based storage solution so that you can get the resilience and the performance of Infinibox or Infiniguard in your system today, but people want that experience on premises, so for the on premise experience, we're announcing Infinibox Flex, and Infiniguard Flex, an environment where, you don't have to worry about the hardware, you manage your data, we'll manage the hardware, and you get to pay for what you use as you need it. You can scale up and down, we'll guarantee the availability. 100% availability, and with this environment, you'll get free hardware for life. >> Okay a lot of questions, so this sounds like your on prem cloud, right, you're bringing that cloud experience to the data, wherever it lives, you say you can scale up and scale down, how does that work, you're over provisioning, or, and you're not charging me for what I don't use, can you give us some details there? >> Well just like with an Infinibox, we're going to try to provide the customer with the Infinibox that they need not just for today, but for tomorrow. We're gonna work with the customer to look into the future and try to determine what are their performance requirements and capacity requirements over time. The customer will have the ability to manage the data configuration and the allocation of the storage and add or remove storage as they need it. As they need it, as they scale up, we'll bill them based on the daily average, just like the cloud experience, and if, as they reduce, same thing, it will adjust the daily average and bill accordingly. >> Am I right, the customer will make some minimum commitment, and then if they go over that, you'll charge 'em for it, if they don't, then you won't charge 'em for it, is that correct? >> If they go over it, we'll charge them for the period they go over, if they continue to use it forever, we'll charge them that. If they reduce it back, then we'll charge them the reduced amount. >> So that gives them the flexibility there and the agility. Okay 100% availability, what's behind that? >> You know, we have a seven nines reliability metric that we manage to on a day to day basis. We have customers who have been running systems for years without any noticeable downtime, and when you have seven nines, that's 3.16 seconds of downtime per year. Right, the life cycle of an IO timeout is much longer than that, so effectively from the customer's application perspective, it's 100% available. We're willing to put our money where our mouth is. So if you experience downtime that's caused by our system at any time during that monthly period, you get the next month for free for the entire capacity.
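The seven nines figure Doc quotes works out as simple arithmetic; a quick check in Python, where the only assumption is the length of the year (a 365.25-day year gives the 3.16 seconds quoted above):

```python
# Downtime budget implied by an availability level: (1 - availability) * seconds in a year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60      # 31,536,000 seconds, assuming a 365-day year

def downtime_seconds(nines: int) -> float:
    availability = 1 - 10 ** (-nines)      # seven nines -> 0.9999999
    return (1 - availability) * SECONDS_PER_YEAR

print(round(downtime_seconds(7), 2))       # ~3.15 seconds of downtime per year
print(round(downtime_seconds(5), 2))       # ~315 seconds (about 5 minutes), for comparison
```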
>> Okay, so that's a guarantee that you're making. >> That's a guarantee. >> Okay, read the fine print. But it sounds like the fine print is just what you said it is. >> It's pretty straight forward. >> Free hardware for life. Free, like a puppy? (laughs) >> No, free like in free, free meaning you're paying for the service, we're providing the capacity for you to put your data, and every three years, we will refresh that entire system with new hardware. And the minimum is three years, if you prefer because of your business practices to change that cycle, we'll work with you to find the time that makes the most sense. >> So I could do four years or five years if I wanted. >> You could do four years or five years. You could do three years and three months. And you'll get the latest and greatest hardware. We'll also, by the way provide the data migration services which is part of this cloud vision. So your not going to have to do any of the work. You're not going to have to pay for additional capital expense so that you have two sets of hardware on the floor for six months to a year while you do migration and work it into your schedules. We'll do that entire thing transparently for you in your environment, completely non disruptive to you. >> So you guys are all about petabyte scale. Hard enterprise problems, this isn't a mom and pop sort of small business solution, where do you see this play? Obviously service providers are gonna eat this stuff up. Give us some -- >> Yeah you know, service providers is a great opportunity for this. It's also a wonderful opportunity for Infiniverse. But any large scale environment this should be a shoo-in. And you know what, even if you're in a small scale environment that has a need that you wanna maintain that environment on premises, you're small scale, you wanna take advantage of your data more. You know you're going to grow your environment, but you're not quite sure how you're gonna do it. Or you have these sporadic workloads. Perhaps in the finance industry, you know we're in tax season right now, taxes just ended half a month ago right, there are plenty of businesses who need additional capacity for maybe four months of the year, so they can scale up for those four months and then scale back down. >> Okay, give us the bottom line on the customer impact. >> So the customer impact is really all about greater agility, the ability to provide that capacity and flexible model without big impact to their overall budget over the course of the year. >> All right Doc, thank you very much. Appreciate your time and the insight. >> It's my pleasure, Dave. >> All right, let's year from the customer, and we'll be right back. Right after this short break. >> Michael Gray is here, he's the chief technology officer of Boston based Thrive, Michael, good to see you. Thanks for coming on. >> Hey, glad to be here. >> So tell us about Thrive, what are you guys all about? >> You know, Thrive started almost 20 years ago as a traditional managed service provider. But really in the past four to five years transformed into a next generation managed service provider, primarily now, we're focusing on cyber security, cloud hosting and public cloud hosting, as well as disaster recovery. To me, and this is something that's primary to Thrive's focus, is application enablement. We're an application enablement company. 
So if your application is best run in Azure, then we wanna put it there, a lot of times we'll find that just due to business problems or legacy technologies, we have to build private clouds. Or even for security reasons, we want to build private cloud, or purely just because we're running into a lot of public cloud refugees. You know they didn't realize a lot of the, maybe incidental fees along the way actually climbed up to be a fairly big budget number. So you know, we wanna really look at people's applications and enable them to be high performance but also highly secure. >> So I'm curious as to when you brought in Infinidat, what the business impact was economically. There's all the sort of non TCO factors that I wanna explore, so was it the labor costs that got reduced, did you redeploy those resources? Was it actually the hardware, or? >> First and foremost, and you know this is going back many years, and I think I would say this is true for any data center cloud provider. The minute the phone rings and someone says my storage is slow, we're losing money. Okay, because we've had to pick up the phone and someone needs to address that. We have eliminated all storage performance help desk issues, it's now one thing I don't need to think about anymore. We know that we can rely on our performance. And we know we don't need to worry about that on a day to day basis, and that is not in question. Now the other thing is really, as we started to expand our Infinidat footprint geographically, we suddenly started to realize, not only do we have this great foundation built but we can leverage an investment we made to do things that we couldn't do before. Maybe we could do them but they required another piece of technology, maybe we could do them but they required some more licensing. Something like that, but really when we started the standardization, we did it for operational efficiency reasons, and then suddenly realized that we had other opportunities here. And I have to hand it to Infinidat. They're actually the ones that helped us craft this story. Not only is this just a solid foundation but it's something you can build on top of. >> Has that been your experience, that it's sort of reduced or eliminated traditional storage bottlenecks? >> Oh absolutely, and you know I mentioned before that storage forms have now become an afterthought to me. You know, and a little bit the way we look at our storage platform is from a performance standpoint, not a capacity standpoint, we can throw whatever we want at the Infinidat, and sort of the running joke internally is that we'll just smile and say is that all you got? >> You mean like mix workloads so you don't have to sort of tune each array for a particular workload? >> Yeah, and you know I can image that as someone who might be listening to what I'm saying, well hey come on, it can't really be that good. And I'm telling you from seeing it day to day, again you can just throw the workloads at it, and it will do what it says it does. You don't see that everyday, now as far as capacity goes, there's this capacity on demand model, which we're a huge fan of, they also have some other models, the flex model, which is very useful for budgeting purposes, what I will tell you is you have to sacrifice at least one floor tile for Infinidat, it's very off putting first on day one, and I remember my reaction. 
But again, as I was saying earlier, when you start peeling back the pieces of the technology and why these things are, and the different flexibility on the financial side, you realize this actually isn't a downside, it's an upside. >> We're gonna talk performance with Craig Hebbert who's vice president with Infinidat, he focuses on strategic accounts, Craig, thanks for coming on. >> Thanks for having me. >> All right, so let's talk performance, everybody talks about performance, they have their benchmarking, everybody's throwing Flash at the problem, you guys, you use Flash, but you didn't hop on that all Flash bandwagon, why and how are you different? >> Great question, we get it a lot with our customers. So we innovated, we spent over five years looking at the big picture, what the box would need today. What it would need in the future, and how would we arrive there by doing it economically? And so as you said, we use a small amount of Flash, that's a small percentage, two, three, four percent of the total box, but we do it by having a foundation that nobody else has, instead of throwing hardware at the solution, we have some specific mechanisms that nobody else has, we have a trie, which is a multi value structure that allows us to dynamically trace and track all of the IOs that come into the box, we ship intelligence. Everybody else ships dumb blocks of data. And so their only course of action to adopt new strategies is to bolt on the latest and greatest media. I've had a lot of experience at other companies where they've tried to shoehorn in new techniques whether it be a NAS blade into an existing storage box or whether it be thin provisioning after the fact. And things that are done sort of like after the design is done never pan out very well. And the beauty with Infinibox is that all our protocols work the same way. iSCSI, NAS, block, it is all structured the same way. And that makes performance equal over all those protocols. And it makes it also easy to manage via the same API structure. >> So you're claiming that you can give equivalent or better performance with a combination of Flash and Spinning Disk than your competitors who are all Flash. Can you kind of add some color to that? >> Absolutely, so we use DRAM, all of our writes are ingested into the box through DRAM. We have 130 microsecond latency. Which is actually the lowest latency that fiber channel can attain, and so we're able to do things very, very quickly, it's 800 times faster than NAND which is what our competitors are using. We have no raid structure on the SSD at all. So as things flow out of DRAM and go onto the SSD, our SSD is faster than everybody else's. Even though we use the same media, there's a mechanism there that we optimize. We write in large sequential blocks to the SSD. So the wear rate isn't the same as what our competitors are using, so everything we do is with an optimization, both for the present data and also the recall, and one of the things that culminates in a massive success for us, how we have those three tiers of data, but how we're able to outperform all Flash arrays, is that we do something, we hold data in cache for a massive amount of time, the average time data is held in cache in something like a VMAX is something like 13 seconds, the maximum is 28, we hold things for an astounding five minutes, and what that allows us to do is put profiles around things and remove randomness, randomness is something that's plagued data storage vendors for years. Whether it's random writes or random reads.
If you can remove that randomness, then you can write out to what are the slowest spinning disks out there, the Nearline SAS drives, but they're the fastest disks for sequential read, so if everything you write out is sequential, you can use the lowest cost disk, the Nearline SAS disk, and maximize their performance. And it's that technology, it's those patterns, 138 patterns that allow us to do all of these 38 steps in the process which augment our ability to serve customers data at a vastly reduced price. >> So your secret sauce is architecture intelligence as you call it, and then you're able to provide lower cost media, and of course if Flash were lower cost, you'd be able to use that. There's no reason that you couldn't. Is that correct? >> We could but we wouldn't gain anything from it. A lot of customers say to us, why aren't you using more Flash, why don't you build an all Flash array? Why don't you use NVMe? And actually the next version of the software will ship NVMe capable as well as storage class memory capable. Why we don't do it is because we don't need it. Our customers have often said to us why don't you use 16 gig fiber channel or 32. And we haven't made that move because we don't move bottlenecks, we give customers a solution which is an end to end appliance, and so when we refresh the software stack, and we change the config with that, we make sure that the fiber channel is upgraded, we make sure that the ports, the InfiniBand, everything comes with an uplift so there's not just one single area of a bottleneck. We could use more SSD but it would just be more money and we wouldn't be able to give you any more performance than we are today. >> So you have some hard news today. Tell us about that. >> Yeah I will. So we are a software company, and going back to the gen one I was here on day one when we started selling in the United States, when the first box was released it was 300,000 IOPS, Moshe said he wanted a million IOPS without changing the platform. We got up to about 900,000, that's a massive increase by just software tweaks, and so what we do is once the product has gone through its second year we go back and we optimize and we reevaluate. Which is what we did in the fall of 2018. And we were able to give a 30% uplift to our existing customers just with software tweaks in that area, so now we move to another config where we will introduce the 16, the 32 gig fiber channel cards and NVMe over Fabrics and storage class memory and all those things that are up and coming, but we don't need to utilize those until the price point drops. Right now if we did that, we'd just be like everybody else, and we would be driving up the price point, we're making the box ready to adopt those when the price point becomes accessible to our customers. >> Okay, last question, you spent a lot of time with strategic accounts, financial services, healthcare, insurance, what are some of the most pressing problems that you're hearing from them that you guys are helping them solve? >> It's a great question, so we see people with sprawl, managing many, many arrays, one of our competitors for instance for Splunk, they'll give you one array with one interface for the hot indexes, another mid tier array with another interface for the warm indexes. >> Brute force. >> Yeah, and then they'll give you a bunch of cold storage on the back end with another disparate interface, all three of them are managed separately and you can't even control them from the same API.
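Going back to the randomness point Craig made a moment ago, here is a minimal conceptual sketch of write coalescing: hold writes in cache, then flush them sorted by address so the slow nearline disks only ever see large sequential passes. It illustrates the general technique only, not Infinidat's actual implementation; the class name and threshold are invented for the example.

```python
# Conceptual write-coalescing cache: absorb random writes, then flush them sorted by
# block address so the backend sees one large sequential write instead of many random ones.
class CoalescingCache:
    def __init__(self, backend_write, flush_threshold=1024):
        self.backend_write = backend_write     # callable(block_address, data)
        self.flush_threshold = flush_threshold
        self.pending = {}                      # block_address -> latest data

    def write(self, block_address, data):
        self.pending[block_address] = data     # rewrites of the same block are absorbed in cache
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        for address in sorted(self.pending):   # sorting turns a random pattern into a sequential one
            self.backend_write(address, self.pending[address])
        self.pending.clear()

if __name__ == "__main__":
    log = []
    cache = CoalescingCache(lambda addr, data: log.append(addr), flush_threshold=4)
    for addr in [907, 12, 512, 13]:            # random-looking arrivals
        cache.write(addr, b"x")
    print(log)                                 # flushed in address order: [12, 13, 512, 907]
```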
So what customers like about us, and just Splunk is one example. So we come in with just one 19 inch array and one rack, the hot indexes are handled by the DRAM, the warm indexes are handled by the SSD, and cold data's right there on the Nearline Sass drives. So they see from us this powerful, all encompassing solution that's better, faster, and cheaper. We sell on real, not effective, and so when encryption and things like this get turned on, the price point doesn't go up with Infinidat customers. They already know what they're buying. Everything else is just cream. And it's massive for economical reasons, as well as technological reasons. >> Excellent, Craig, thank you. >> Thank you very much for having me. >> Okay keep it right there everybody. We'll be right back after this short break. (calming music) We're back with Ken Steinhart who's a field CTO with Infinidat, Ken, good to see you again. >> Great to see you Dave, it's been a long while. >> It sure has, thanks for coming back on the CUBE here. So you have the customer perspective. You've worked with a lot of customers. You've been a customer, availability, high availability, obviously important, especially in the context of storage. What's Infinidat's story there? >> Well high availability's been a cornerstone for Infinidat obviously from the beginning. And it's really driven some pretty amazing things. Not the least of which has been seven nines of availability proven by the product. What's new and different now, is we're extending that with the ability to do active active clustering and it's the real deal, we're talking about the ability to have the exact same volume now at synchronous distances, presenting itself to both sites as if it were just a single volume. Now this is technology that's based upon the existing synchronous replication and Infinisnap technology that Infinidat has already had, and this is gonna provide always on, continuous operation, even able to be resilient against site failures, component failures, storage failures, server failures, whatever, we will provide true zero RPO and true zero RTO at distance, and it's able to provide the ability to provide consistency also by using a very lightweight witness which presents itself as a third, completely separate fault domain to be able to see both sites to ensure the integrity of information, while being able to read and write simultaneously at two sites to what logically looks like one single volume. This is gonna be supported with all the major cluster software and server environments. And it's incredibly easy to deploy. So that's really the first point associated with this. >> So let me follow up on that, so a lot of people talk about active active, a lot of companies. How is this specifically different? >> It's different in that it is going to be able to now change the economics, first and foremost. Up until now, typically, people have had to trade off between RPO, RTO and cost, and usually you can get two of the three to be positive but not all three. It's sort of like if you buy a car. RPO equates to the quality of the solution, RTO equates to the speed or time, cost is cost. 
If you buy a car, if it's good and it's fast it won't be cheap, if it's good and it's cheap, it won't be fast, and if it's fast and it's cheap it won't be good, so we're able to break that paradigm for the first time here, and we're gonna be able to now take the economics of multi site, disaster tolerant, cluster type solutions and do it at costs that are comparable to what most people would do for just a single site implementation. >> And your secret sauce there is the architecture, it's the software behind it. >> Well it's actually a key point, the software is standard and included. And it's all about the software, this is an extension of the existing synchronous replication technology that Infinidat has had, standard and included, no additional costs, no separate quirky gateways or anything, being able to now have one single volume logically presented to two different sites in real time continuously for high availability. >> So what's the customer impact? >> The customer impact is continuous operation at economics that are comparable to what single site solutions have typically looked like. And that's just gonna be huge, we see this as possibly bringing multi site disaster tolerance and active active clustering to people that have never been able to afford it or didn't think they could afford it previously. That really brings us to the third part of this. The last piece is that, when you take an architecture such as Infinidat with Infinibox, that has been able to demonstrate seven nines of availability, and now you can couple that across synchronous distances to two data centers or two completely different sites, we are now able to offer a 100% uptime guarantee. Something that statistically hasn't really been particularly practical in the past, for a vendor to talk about, but we're now able to do it because of the technology that this architecture affords our customers. >> So guarantee as in, when I read the fine print, what does it say? >> Obviously we'll give the opportunity for our customers to read the fine print. But basically it's saying we're gonna stand behind this product relative to its ability to deliver for them, and obviously this is something customers we think are gonna be very, very excited about. >> Ken, thanks so much for coming on the CUBE, appreciate it. >> Pleasure's mine, Dave. As always. >> Great to see you. Okay, thank you for watching, keep it right there. We'll be right back, right after this short break. (calming music) Okay we're back for the wrap up with Brian Carmody. Brian, let's geek out a little bit. You guys are technologists, let's start with the software tech that we heard about today. What are the takeaways? >> Sure, so there's a huge amount of content in here, and software is most of it, so we have, first is R5. This is the latest software release for Infinibox. It improves performance, it improves availability with active active, it introduces non disruptive data mobility which is a game changer for customers for manageability and agility. Also as part of that, we have the availability of Infiniverse, which is our cloud based analytics and monitoring platform for Infinidat products, but it's also the next generation control plane that we're building. And when we talk about our roadmap, it's gonna grow into a lot more than it is today, so it's a very strategic product for us. But yeah, that's the net net on software. >> Okay, so but the software has to run on some underlying hardware, so what are the innovations there?
>> Yeah, so I'm not sure if I'd call 'em innovations, I mean in our model, hardware is boring and commoditized and really all the important stuff happens in software. But we have listened, customers have asked us for it, we are delivering, 16 gigabit fiber channel is a standard option, and we're also giving an option for 32 gig fiber channel and 25 gig ethernet, which are, again, things that customers have been asking for, and we've delivered, and also while we're on the topic of protocols and stuff like that, we're also demonstrating our NVMe over fabrics implementation, which is deployed with select customers right now, it is the world's fastest NVMe over fabrics implementation, it is a round trip latency of 52 microseconds which is half the time, roundtrip for us, is half the time that it takes a NAND Flash cell to recall its data, forgetting about the software stack on the round trip, that's gonna be available in the future for all of our customers, general availability via a software only update. >> That's incredible, all right, so net that out, what does that mean for the road map? >> Oh sure, so basically with our road map, we're laying out a very ambitious vision for the next 18 months of how to give customers ultimately what they are screaming for which is help us evolve our on premises storage from old school storage arrays and turn them into elastic data center scale clouds in my own data centers, and then come up and give us an easy, seamless way to integrate that into our public cloud and our off premises technologies, and that's where we're gonna be. Starting today, and taking us out the next 18 months. >> Well we covered a lot of ground today. Pretty remarkable, congratulations on the announcements. We covered all the abilities, even performance ability. We'll throw that one in there. So thank you for that, final word? >> The final word is probably just a message to our customers to say thank you, and for trusting us with your data. We take that covenant very seriously. And we hope that, with all of this work that we've done, you feel we're delivering on our promise of value, to help you enable competitive advantage and do it at multi petabyte scale. >> Great, all right thank you Brian. And thank you, now it's your turn. Hop into the crowd chat, we've got some questions for you, you can ask questions of the experts that are on the call. Thanks everybody for watching. This is Dave Vellante signing out from the CUBE.
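A rough sanity check on that latency claim, using a general industry ballpark rather than a number from this interview: a TLC NAND page read is commonly on the order of 100 microseconds, so a 52 microsecond network round trip is indeed roughly half of it.

```python
# Back-of-the-envelope check on the NVMe-over-Fabrics round-trip claim above.
nvmeof_round_trip_us = 52        # quoted round-trip latency
nand_page_read_us = 100          # assumed ballpark for a TLC NAND page read (not from the interview)

print(nvmeof_round_trip_us / nand_page_read_us)   # ~0.5, i.e. roughly half a NAND read
```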

Published Date : May 8 2019

Joel Dedrick, Toshiba | CUBEConversation, February 2019


 

(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation. >> Hi, I'm Peter Burris, and welcome again, to another Cube Conversation from our studios here in beautiful Palo Alto, California. With every Cube Conversation, we want to bring smart people together, and talk about something that's relevant and pertinent to the industry. Now, today we are going to be talking about the emergence of new classes of cloud provider, who may not be the absolute biggest, but nonetheless crucial in the overall ecosystem of how they're going to define new classes of cloud services to an expanding array of enterprise customers who need that. And to have that conversation, and some of the solutions that class of cloud service provider is going to require, we've got Joel Dedrick with us today. Joel is the Vice President and General Manager of Networks Storage Software, Toshiba Memory America. Joel, welcome to theCube. >> Thanks, very much. >> So let's start by, who are you? >> My name's Joel Dedrick, I'm managing a new group at Toshiba Memory America, involved with building software that will help our customers create a cloud infrastructure that's much more like those of the Googles and Amazons of the world. But, but without the enormous teams that are required if you're building it all yourself. >> Now, Toshiba is normally associated with a lot of hardware. The software angle is, how does software play into this? >> Well, Flash is changing rapidly, more rapidly than maybe the average guy on the street realizes, and one way to think about this is inside of an SSD there's a processor that is not too far short of the average Xeon in compute power, and it's busy. So there's a lot more work going on in there than you might think. We're really bringing that up a level and doing that same sort of management across groups of SSDs to provide a network storage service that's simple to use and simple to understand, but under the hood, we're pedaling pretty fast. Just as we are today in the SSDs. >> So the problem that I articulated up front was the idea that we're going to see, as we get greater specialization in enterprise needs from cloud, there's going to be greater numbers of different classes of cloud service provider. Whether that be SaaS or whether that be by location, by different security requirements, whatever else it might be. What is the specific issue that this emerging class of cloud service provider faces as they try to deliver really high quality services to these new, more specialized end users? >> Well let me first, kind of define terms. I mean, cloud service provider can mean many things. In addition to someone who sells infrastructure as a service or platform as a service, we can also think about companies that deliver a service to consumers through their phone, and have a data center backing that, because of the special requirements of those applications. So we're serving that panoply of customers. They face a couple of issues that are a result of the trajectory of Flash and storage of late. And one of those is that, we as Flash manufacturers have an innovator's dilemma, that's a term we use here in the valley, that I think most people will know. Our products are too good, they're too big, they're too fast, they're too expensive, to be a good match to a single compute node. And so you want to share them.
And so the game here is can we find a way to share this really performant, you know, this million-IOPS dragon across multiple computers without losing that performance. So that's sort of step one, is how do we share this precious resource. Behind that is even a bigger one, that takes a little longer to explain. And that is, how do we optimize the use of all the resources in the data center in the same way that the Googles and Amazons do, by moving work around between machines in a very fluid and very rapid way. To do that, you have to have the storage visible from everywhere and you have to be able to run any instance anywhere. That's a tall order, and we don't solve the whole problem, but we're a necessary step. And the step we provide is we'll take the storage out of the individual compute nodes and serve it back to you over your network, but we won't lose the performance that you're used to having it locally attached. >> Okay, so let's talk about the technical elements required to do this. Describe from the SSD, from the Flash node, up. I presume it's NVME? >> Um hm, so, NVME, I'm not sure if all of our listeners today really know how big a deal that is. There have been two block storage command sets. Sets of fundamental commands that you give to a block storage device, in my professional lifetime. SCSI was invented in 1986, back when high performance storage was two hard drives attached to your ribbon cable in your PC. And it's lasted up until now, and it's still, if you go to a random data center, and take a random storage wire, it's going to be transporting the SCSI command set. NVME, what, came out in 2012? So 25 years later, the first genuinely new command set. There's an alphabet soup of transports. The interfaces and formats that you can use to transport SCSI around would fill pages, and we would sort of tune them out, and we should. We're now embarking on that same journey again, except with a command set that's ideal for Flash. And we've sort of given up on or left behind the need to be backward compatible with hard discs. And we said, let's build a command set and interface that's optimum for this new medium, and then let's transport that around. NVME over Fabrics is the first transport for the NVME command set, and so what we're doing is building software that allows you to take a conventional X86 compute node with a lot of NVME drives and wrap our software around it and present it out to your compute infrastructure, and make it look like locally attached SSDs, at the same performance as locally attached SSDs, which is the big trick, but now you get to share them optimally. We do a lot of optimal things inside the box, but they ultimately don't matter to customers. What customers see is, I get to have the exact size and performance of Flash that I need at every node, for exactly the time I need it. >> So I'm a CTO at one of these emerging cloud companies, I know that I'm not going to be adding a million machines a year, maybe I'm only going to be adding 10,000 maybe I'm only adding 50,000, 100,000. So I can't afford the engineering staff required to build my own soup to nuts set of software. >> You can't roll it all yourself. >> Okay, so, how does this fit into that? >> This is the assembly kit for the lowest layer of that. We take the problem of turning raw SSDs into a block storage service and solve it for you. We have a very sharp line there. We aren't trying to be a filer or we're not trying to be EMC here. It's a very simple, but fast and rugged storage service box.
It interfaces to your provisioning system, to your orchestration system, to your telemetry systems, and no two of those are alike. So there's a fair amount of customization still involved, but we stand ready to do that. You can Tinker Toy this together yourself. >> Toshiba. >> Yeah, Toshiba does, yes. So, that's the problem we're solving. Is we're enabling the optimum use of Flash, and maybe subtly, but more importantly in the end we're allowing you to dis-aggregate it, so that you no longer have storage pinned to a compute node, and that enables a lot of other things, that we've talked about in the past. >> Well, that's a big feature of the cloud operating model, is the idea that any application can address any resource and any resource can address any application. And you don't end up with dramatic or significant barriers in the infrastructure, is how you provision those instances and operate those instances. >> Absolutely, the example that we see all the time, and the service providers that are providing some service through your phone, is they all have a time of day rush, or a Christmas rush, some sort of peaks to their work loads, and how do they handle the peaks, how do they handle the demand peaks? Well today, they buy enough compute hardware to handle the peak, and the rest of the year it sits idle. And this can be 300% pretty easily, and you can imagine the traffic to a shopping site Black Friday versus the rest of the year. If the customer gets frustrated and goes away, they don't come back. So you have data centers worth of machines doing nothing. And then over on the other side of the house you have the machine learning crew, who could use infinite compute resource, but they don't have a time demand, it just runs 24/7. And they can't get enough machines, and they're arguing for more budget, and yet we have 100s of 1,000s of machines doing nothing. I mean that's a pretty big piece of bait right there. >> Which is to say that, the ML guys can't use the retail guys or retail resources and the retail resources can't use the ML, and what we're trying to do is make it easier for both sides to be able to utilize the resources that are available on both sides. >> Exactly so, exactly so, and that requires more than, one of the things that requires is any given instance's storage can't be pinned to some compute node. Otherwise you can't move that instance. It has to be visible from anywhere. There's some other things that need to work in order to move instances around your data center under load, but this is a key one, and it's a tough one. And it's one that to solve it, without ruining performance is the hard part. We've had, network storage isn't a new thing, that's been goin' on for a long time. Network storage at the performance of a locally mounted NVME drive is a tough trick. And that's the new thing here. >> But it's also a tool kit, so that, that, what appears to be a locally mounted NVME drive, even though it may be remote, can also be oriented into other classes of services. >> Yes >> So how does this, for example, I'm thinking of Kubernetes clusters, stateless, still having storage that's really fast, still really high performing, very reliable, very secure. How do you foresee this technology supporting and even catalyzing changes to that Kubernetes, that Docker class of container workloads? >> Sure, so for one, we implement the interface to Kubernetes. And Kubernetes is a rapidly moving target. I love their approach. They have a very fast version clock.
Every month or two there's a new version. And their support attitude is if you're not within the last version or two, don't call. You know, keep up. And that's sort of not the way the storage world has worked. So our commitment is to connect to that, and make that connection stay put, as you follow a moving target. But then, where this is really going is the need for really rapid provisioning. In other words, it's not the model of the IT guy sitting at a keyboard attaching a disc to a stack of machines that's running some application, and coming back in six months to see if it's still okay. As we move from containerized services to serverless kind of ideas. In the serverless world, the average lifespan of an application's 20 seconds. So we better spool it up, load the code, get its state, run, and kill it pretty quickly, millions of times a minute. And so, you need to be light of foot to do that. So we've poured in a lot of energy behind the scenes, into making software that can handle that sort of a dynamic environment. >> So how does this, the resource that allows you to present a distant NVME drive, as mounting it locally, how does that catalyze other classes of workloads? Or how does that catalyze new classes of workloads? You mentioned ML, are there other workloads that you see on the horizon that will turn into services from this new class of cloud provider? >> Well I think one big one is the serverless notion. And to digress on that a little bit. You know, in the classic enterprise, the assignment of work to machines lasts for the life of the machine. That group of machines belong to engineering, those are accounting machines, and so on. And no IT guy in his right mind would think of running engineering code on the accounting machine or whatever. In the cloud we don't have a permanent assignment there, anymore. You rent a machine for a while, and then you give it back. But the user's still responsible for figuring out how many machines or VMs he needs. How much storage he needs, and doing the calculation, and provisioning all of that. In the serverless world, the user gives up all of that. And says, here's the set of calculations I want to do, trigger it when this happens, and you Mr. Cloud Provider figure out does this need to be sharded out 500 ways or 200 ways to meet my performance requirements. And as soon as these are done, turn 'em back off again, on a timescale of tenths of seconds. And so, what we're enabling is the further movement in the direction of taking the responsibility for provisioning and scaling out of the user's hands and making it automatic. So we let users focus on what they want to do, not how to get it done. >> This really is not an efficiency play, when you come right down to it. This is really changing the operating model, so new classes of work can be performed, so that the overall computer infrastructure, the overall infrastructure becomes more effective and matches to the business needs better. >> It's really both. There's a tremendous efficiency gain, as we talked about with the ML versus the marketplace. But there's also, things you just can't do without an infrastructure that works this way, and so, there's an aspect of efficiency and an aspect of, man, this is just something we have to do to get to the next level of the cloud. >> Excellent, so do you anticipate this portends some changes to Toshiba's relationship with different classes of suppliers? >> I really don't.
Toshiba Memory Corporation is a major supplier of both Flash and SSDs, to basically every class of storage customer, and that's not going to change. They are our best friends, and we're not out to compete with them. We're serving really an unmet need right now. We're serving a relatively small group of customers who are cloud first, cloud always. They want to operate in the sort of cloud style. But they really can't, as you said earlier, they can't invent it all soup to nuts with their own engineering, they need some pieces to come from outside. And we're just trying to fill that gap. That's the goal here. >> Got it, Joel Dedrick, Vice President and General Manager Networks Storage Software, Toshiba Memory America. Thanks very much for being on theCube. >> My pleasure, thanks. >> Once again this is Peter Burris, it's been another Cube Conversation, until next time.
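As a rough illustration of the host side of what Joel describes — discovering an NVMe over Fabrics target on the network and connecting to it so its namespaces show up like locally attached drives — a minimal sketch using the standard Linux nvme-cli tool might look like the following. The address, port, and NQN are placeholders, RDMA is assumed as the transport, and this shows the general NVMe-oF mechanism rather than any particular Toshiba product.

```python
# Hypothetical host-side sketch: discover an NVMe-oF target and connect to it so the
# remote namespace appears as a local /dev/nvmeXnY block device. Values are placeholders.
import subprocess

TARGET_ADDR = "192.0.2.10"                              # placeholder target IP
TARGET_PORT = "4420"                                    # conventional NVMe-oF service port
TARGET_NQN = "nqn.2019-02.com.example:subsystem1"       # placeholder NVMe Qualified Name

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT])
run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN, "-a", TARGET_ADDR, "-s", TARGET_PORT])
run(["nvme", "list"])    # the fabric-attached namespace now lists next to any local drives
```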

Published Date : Feb 28 2019


Liran Zvibel & Andy Watson, WekaIO | CUBE Conversation, December 2018


 

(cheery music) >> Hi I'm Peter Burris, and welcome to another CUBE Conversation from our studios in Palo Alto, California. Today we're going to be talking about some new advances in how data gets processed. Now it may not sound exciting, but when you hear about some of the performance capabilities, and how it liberates new classes of applications, this is important stuff, now to have that conversation we've got Weka.IO here with us, specifically Liran Zvibel is the CEO of Weka.IO, and joined by Andy Watson, who's the CTO of Weka.IO. Liran, Andy, welcome to the cube. >> Thanks. >> Thank you very much for having us. >> So Liran, you've been here before, Andy, you're a newbie, so Liran, let's start with you. Give us the Weka.IO update, what's going on with the company? >> So 2018 has been a grand year for us, we've had great market adoption, so we've spent last year proving our technology, and this year we have accelerated our commercial successes, we've expanded to Europe, we've hired quite a lot of sales in the US, and we're seeing a lot of successes around machine learning, deep learning, and life sciences data processing. >> And you've hired a CTO. >> And we've hired the CTO, Andy Watson, which I am excited about. >> So Andy, what's your pedigree, what's your background? >> Well I've been around a while, got the scars on my back to show it, mostly in storage, dating back to even Auspex before NetApp, but probably best known for the years I spent at NetApp, was there from '95 through 2007, kind of the glory years, I was the second CTO at NetApp, as a matter of fact, and that was a pretty exciting time. We changed the way the world viewed shared storage, I think it's fair to say, at NetApp, and it feels the same here at Weka.IO, and that's one of the reasons I'm so excited to have joined this company, because it's the same kind of experience of having something that is so revolutionary that quite often, whether it's a customer, or an analyst like yourself, people are a little skeptical, they find it hard to believe that we can do the things that we do, and so it's gratifying when we have the data to back it up, and it's really a lot of fun to see how customers react when they actually have it in their environment, and it changes their workflow and their life experience. >> Well I will admit, I might be undermining my credibility here, but I will admit that back in the mid 90s I was a little bit skeptical about NetApp, but I'm considerably less skeptical about Weka.IO, just based on the conversations we've had, but let's turn to that, because there are classes of applications that are highly dependent on very large numbers of small files being able to be moved very very rapidly, like machine learning, so you mentioned machine learning, Liran, talk a little bit about some of the market success that you're having, some of those applications' successes. >> Right so machine learning actually works extremely well for us for two reasons.
For one big reasons, machine learning is being performed by GPU servers, so a server with several GPU offload engines in them, and what we see with this kind of server, a single GPU server replaces ten or tens of CPU based servers, and what we see that you actually need, the IO performance to be ten or tens times what the CPU servers has been, so we came up with a way of providing significantly higher, so two orders of magnitude higher IO to a single client on the one hand, and on the other hand, we have sold the data performance from the metadata perspective, so we can have directories with billions of files, we can have the whole file system with trillions of files, and when we look at the autonomous driving problem, for examples, if you look at the high end car makers, they have eight cameras around the cars, these cameras take small resolution, because you don't need a very high resolution to recognize the line, or a cat, or a pedestrian, but they take them at 60 frames per second, so 30 minutes, you get about the 100k files, traditional filers could put in the directory, but if you'd like to have your cars running in the Bay Area, you'd like to have all the data from the Bay Area in the single directory, then you would need the billions of file directories for us, and what we have heard from some of our customers that have had great success with our platform is that not only they get hundreds of gigabytes of small file read performance per second, they tell us that they take their standard time to add pop from about two weeks before they switched to us down to four hours. >> Now let's explore that, because one of the key reasons there is the scalability of the number of files you can handle, so in other words, instead of having to run against a limit of the number of files that they can typically run through the system, saturate these GPUs based on some other storage or file technology, they now don't have to stop and set up the job again and run it over and over, they can run the whole job against the entire expansive set of files, and that's crucial to speeding up the delivery of the outcome, right? >> Definitely, so what they, these customers used to do before us, they would do a local caching, cause NFS was not fast enough for them, so they would copy the data locally, and then they would run them over on the local file system, because that has been the pinnacle of performance of recent year. We are the only storage currently, I think we'll actually be the first wave of storage solutions where a shared platform built for NVME is actually faster than a local file system, so we'd let them go through any file, they don't have to pick initially what files goes to what server, and also we are even faster than the traditional caching solutions. >> And imagine, having to collect the data and copy it to the local server, application server, and do that again and again and again for a whole server farm, right? So it's bad enough to even do it once, to do it many times, and then to do it over and over and over and over again, it's a huge amount of work. >> And a lot of time? >> And a lot of time, and cumulatively that burden, it's going to slow you down, so that makes a big big difference and secondly, as Liran was explaining, if you put 100,000 files in a directory of other file systems, that is stressful. 
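Andy's remark about 100,000 files in a single directory being stressful is, at bottom, a metadata-rate problem. The sketch below is a generic micro-benchmark of that effect — not an IO500 run and not Weka.IO code; the mount path and file count are assumptions — timing how fast a file system can create and then stat files in one directory.

```python
import os
import time

# Hypothetical scratch directory on the file system under test.
TARGET_DIR = "/mnt/shared/metadata_bench"
NUM_FILES = 100_000

def time_op(label: str, fn) -> None:
    start = time.perf_counter()
    fn()
    rate = NUM_FILES / (time.perf_counter() - start)
    print(f"{label}: {rate:,.0f} ops/s")

def create_files() -> None:
    for i in range(NUM_FILES):
        # Empty files: the cost is almost entirely the metadata update.
        with open(os.path.join(TARGET_DIR, f"f{i:07d}"), "w"):
            pass

def stat_files() -> None:
    for i in range(NUM_FILES):
        os.stat(os.path.join(TARGET_DIR, f"f{i:07d}"))

if __name__ == "__main__":
    os.makedirs(TARGET_DIR, exist_ok=True)
    time_op("create", create_files)
    time_op("stat", stat_files)
```

On a design with a serialized metadata path, the create rate collapses long before the data path is busy, which is the behavior being described here.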
You want to put more than 100,000 files in a directory of other file systems, that is a tragedy, and we routinely can handle millions of files in a directory, doesn't matter to us at all because just like we distribute the data, we also distribute the metadata, and that's completely counter to the way the other file systems are designed because they were all designed in an era where their focus was on the physical geometry of hard disks, and we have been designed for flash storage. >> And the metadata associated with the distribution of that data typically was in a one file, in one place, and that was the master serialization problem when you come right down to it. So we've got a lot of ML workloads, very large number of files, definitely improved performance because of the parallelism through your file system, in the as I said, the ML world. Let's generalize this. What does this mean overall, you've kind of touched upon it, but what does it mean overall for the way that customers are going to think about storage architectures in the future as they are combining ML and related types of workloads with more traditional types of things? What's the impact of this on storage? >> So if you look at how people architect their solutions around storage recently, you have four different kind of storage systems. If you need the utmost performance, you're going to DAS, Fusion IO had a run, perfecting DAS and then the whole industry realized it. >> Direct attached storage. >> Direct attached storage, right, and then the industry realized hey it makes so much sense, they create a standard out of it, created NVME, but then you're wasting a lot of capacity, and you cannot manage it, you cannot back it up, and then if you need it as some way to manage it, you would put your data over SAN, actually our previous company was XAV storage that IBM acquired, vast majority of our use cases are actually people buying block, and then they overlay a local file system over it because it gets you so much higher performance then if you must get, but you don't get, you cannot share the data. Now, if you put it on a filer, which is Neta, or Islon, or the other solutions, you can share the data but your performance is limited, and your scalability is limited as Andy just said, and if you had to scale through the roof- >> With a shared storage approach. >> With a shared storage approach you had to go and port your application to an object storage which is an enormous feat of engineering, and tons of these projects actually failed. We actually bring the new kind of storage, which is assured storage, as scalable as an object storage, but faster than direct attach storage, so looking at the other traditional storage systems of the last 20 or 30 years, we actually have all the advantages people would come to expect from the different categories, but we don't have any of the downsides. >> Now give us some numbers, or do you have any benchmarks that you can talk about that kind of show or verify or validate this kind of vision that you've got, that Weka's delivering on? >> Definitely, but the i500? >> Sure, sure, we recently actually published our IO500 performance results at the SE1800, SE18 event in Dallas, and there are two different metrics- >> So fast you can go back in time? >> Yes, exactly, there are two different metrics, one metric is like an aggregate total amount of performance, it's a much longer list. 
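Earlier in this exchange Liran describes GPU servers that need ten or more times the IO of CPU-based servers, and customers who used to stage data onto local disk before training. A minimal sketch of the alternative — reading many small files in parallel straight from a shared POSIX namespace — is shown below; the mount path, file layout, and worker count are assumptions, not Weka.IO client code.

```python
import concurrent.futures
import pathlib
import time

# Assumed mount point and layout on a shared, POSIX-compatible namespace.
DATASET_ROOT = pathlib.Path("/mnt/shared/training/camera_frames")

def read_frame(path: pathlib.Path) -> bytes:
    # Each camera frame is a small file; open/close metadata work plus one
    # small read dominates the cost.
    with open(path, "rb") as f:
        return f.read()

def load_batch(paths, workers: int = 64) -> list:
    # Issue many small reads concurrently instead of staging data to local disk.
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_frame, paths))

if __name__ == "__main__":
    frames = sorted(DATASET_ROOT.glob("*.jpg"))[:4096]
    start = time.perf_counter()
    batch = load_batch(frames)
    elapsed = time.perf_counter() - start
    mb = sum(len(b) for b in batch) / 1e6
    print(f"read {len(batch)} files, {mb:.1f} MB in {elapsed:.2f}s")
```

Because each step opens, reads, and closes a small file, the open() calls dominate, which is why distributed metadata matters as much as raw bandwidth for this workload.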
I think the one that's more interesting is the one where it's the 10-client version, which we like to focus on because we believe that the most important area for a customer to focus on is how much IO can you deliver to an individual application server? And so this part of the benchmark is most representative of that, and on that rating, we were able to come in second well, after you filter out the irrelevant results, which, that's a separate process. >> Typical of every benchmark. >> Yes exactly, of the relevant meaningful results, we came in second behind the world's largest and most expensive supercomputer at Oak Ridge, the SUMMIT system. So they have a 40 rack system, and we have a half, or maybe a little bit more than half, one rack system of industry standard hardware running our software. So compare that, the cost of our hardware footprint and so forth is much less than a million dollars. >> And what was the differential between the two? >> Five percent. >> Five percent? So okay, sound of jaw dropping. 40 rack system at Oak Ridge? Five percent more performance than you guys running on effectively a half rack of like a supermicro or something like that? >> Oh and it was the first time we ran the benchmark, we were just learning how to run it, so those guys are all experts, they had IBM in there at their elbow helping them with all their tuning and everything, this was literally the first time our engineers ran the benchmark. >> Is a large feature of that the fact that Oak Ridge had to get all that hardware to get the physical IO necessary to run serial jobs, and you guys can just do this parallel on a relatively standard IO subset, NVME subset? >> Because beyond that, you have to learn how to use all those resources, right? All the tuning, all the expertise, one of the things people say is you need a PhD to administer one of those systems, and they're not far off, because it's true that it takes a lot of expertise. Our systems are dirt simple. >> Well you got to move the parallelism somewhere, and either you create it yourself, like you do at Oak Ridge, or you do it using your guys' stuff, through a file system. >> Exactly, and what we are showing that we have tremendously higher IO density, and we actually, what we're showing, that instead of using a local file system, that where most of them were created in the 90s, in the serial way of thinking, of optimizing over hard drives, if now you say, hey, NVME devices, SSDs are beasts at running 4k IOs, if you solve the networking problem, if the network is not the bottleneck anymore, if you just run all your IOs as much parallelized workload over 4k IOs, you actually get much higher performance than what you could get, up until we came, the pinnacle of performance, which is a local file system over a local device. >> Well so NFS has an effective throughput limitation of somewhere around a gigabyte, so if you've got a bunch of GPUs that are each wanting four, five, 10 gigabytes of data coming in, you're not saturating them out of an effective one gigabyte throughput rate, so it's almost like you've got the New York City Waterworks coming in to some of these big file systems, and you got like your little core sink that's actually spitting the data out into the GPUs, have I got that right? 
>> Good analogy, if you are creating a data lake, and then you're going to sip at it with some tiny little straw, it doesn't matter how much data you have, you can't really leverage the value of all that data that you've accumulated, if you're feeding it into your compute farm, GPU or not, because if you're feeding it into that farm slowly, then you'll never get to it all, right? And meanwhile more data's coming in every day, at a faster rate. It's an impossible situation, so the only solution really is to increase the rate by which you access the data, and that's what we do. >> So I could see how you're making the IO bandwidth junkies at Oak Ridge, or would make them really happy, but the other thing that at least I find interesting about Weka.IO is as you just talked about is that, that you've come up with an approach that's specifically built for SSD, you've moved the parallelism into the file system, as opposed to having it be somewhere else, which is natural, because SSD is not built to persist data, it's built to deliver data, and that suggests as you said earlier, that we're looking at a new way of thinking about storage as a consequence of technologies like Weka, technologies like NVME. Now Andy, you came from NetApp, and I remember what NetApp did to the industry, when it started talking about the advantages of sharing storage. Are we looking at something similar happening here with SSD and NVME and Weka? >> Indeed, I think that's the whole point, it's one of the reasons I'm so excited about it. It's not only because we have this technology that opens up this opportunity, this potential being realized. I think the other thing is, there's a lot of features, there's a lot of meaningful software that needs to be written around this architectural capability, and the team that I joined, their background, coming from having created XIV before, and the almost amazing way they all think together and recognize the market, and the way they interact with customers allows the organization to address realistically customer requirements, so instead of just doing things that we want to do because it seems elegant, or because the technology sparkles in some interesting way, this company, and it remains me of NetApp in the early days, and it was a driver of NetApp's big success, this company is very customer-focused, very customer driven. So when customers tell us what they're trying to do, we want to know more. Tell us in detail how you're trying to get there. What are your requirements? Because if we understand better, then we can engineer what we're doing to meet you there, because we have the fundamental building blocks. Those are mostly done, now what we're trying to do is add the pieces that allow you to implement it into your workflow, into your data center, or into your strategy for leveraging the cloud. >> So Liran, when you're here in 2019, we're having a similar conversation with this customer focus, you've got a value proposition to the IO bandwidth junkies, you can give more, but what's next in your sights? Are you going to show how this for example, you can get higher performance with less hardware? 
>> So we are already showing how you can get higher performance with less hardware, and I think as we go forward, we're going to have more customers embracing us for more workloads, so what we see already, they get us in for either the high end of their life sciences or their machine learning, and then people working around these people realize hey, I could get some faster speed as well, and then we start expanding within these customers and we get to see more and more workloads where people like us and we can start telling stories about them. The other thing that we have natural to us, we run natively in the cloud, and we actually let you move your workload seamlessly between your on-premises and the cloud, and we are seeing tremendous interest about moving to the cloud today, but not a lot of organizations already do it. I think 19 and forward, we are going to see more and more enterprises considering seriously moving to the cloud, cause we have almost 100% of our customers PFCing, cloudbursting, but not a lot of them using them. I think as time passes, all of them that has seen it working, when they did the initial test, will start leveraging this, and getting the elasticity out of the cloud, because this is what you should get out of the cloud, so this is one way for expansion for us. We are going to spend more resources into Europe, which we have recently started building the team, and later in that year also, JPAC. >> Gentlemen, thanks very much for coming on theCUBE and talking to us about some new advances in file systems that are leading to greater performance, less specialized hardware, and enabling new classes of applications. Liran Zvibel is the CEO of Weka.IO, Andy Watson is the CTO of Weka.IO, thanks for being on theCUBE. >> Thank you very much. >> Yeah, thanks a lot. >> And once again, I'm Peter Burris, and thanks very much for participating in this CUBE Conversation, until next time. (cheery music)
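Liran's earlier point about treating NVMe SSDs as devices that excel at many parallel 4 KiB operations can be illustrated with a small read-only sketch. It is not Weka.IO's IO path — just a hedged example using a thread pool and positional reads against an assumed large file; the block size, thread count, and path are all placeholders.

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical large file on the file system or device under test.
PATH = "/mnt/shared/bigfile.bin"
BLOCK = 4096              # 4 KiB reads, which SSDs service very efficiently
READS_PER_WORKER = 10_000
WORKERS = 32

def worker(fd: int, file_size: int) -> None:
    for _ in range(READS_PER_WORKER):
        # Pick a block-aligned offset; pread takes its own offset, so the
        # threads never contend on a shared file position.
        offset = random.randrange(0, file_size - BLOCK) & ~(BLOCK - 1)
        os.pread(fd, BLOCK, offset)

if __name__ == "__main__":
    fd = os.open(PATH, os.O_RDONLY)
    size = os.fstat(fd).st_size
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        futures = [pool.submit(worker, fd, size) for _ in range(WORKERS)]
        for f in futures:
            f.result()            # surface any worker errors
    elapsed = time.perf_counter() - start
    print(f"{WORKERS * READS_PER_WORKER / elapsed:,.0f} random 4 KiB reads/s")
    os.close(fd)
```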

Published Date : Dec 14 2018

SUMMARY :

Liran Zvibel, CEO of Weka.IO, and Andy Watson, its newly hired CTO, join Peter Burris to discuss the company's commercial momentum in 2018 and its shared file system built for NVMe flash. They explain why GPU-based machine learning and life-sciences workloads need an order of magnitude more IO per client than CPU servers, how distributing metadata as well as data lets a single directory hold millions or billions of files, and why a shared namespace can now outperform local file systems and local caching. They also describe Weka.IO's IO500 results from the Dallas supercomputing event, where roughly half a rack of industry-standard hardware came within five percent of Oak Ridge's SUMMIT system on the 10-client benchmark, and preview cloud bursting, European expansion, and later JPAC.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Andy | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
Liran | PERSON | 0.99+
30 minutes | QUANTITY | 0.99+
ten | QUANTITY | 0.99+
Andy Watson | PERSON | 0.99+
Liran Zvibel | PERSON | 0.99+
2019 | DATE | 0.99+
Oak Ridge | ORGANIZATION | 0.99+
Europe | LOCATION | 0.99+
Weka.IO | ORGANIZATION | 0.99+
100,000 files | QUANTITY | 0.99+
Five percent | QUANTITY | 0.99+
IBM | ORGANIZATION | 0.99+
40 rack | QUANTITY | 0.99+
four hours | QUANTITY | 0.99+
two | QUANTITY | 0.99+
December 2018 | DATE | 0.99+
Dallas | LOCATION | 0.99+
US | LOCATION | 0.99+
2007 | DATE | 0.99+
Bay Area | LOCATION | 0.99+
hundreds of gigabytes | QUANTITY | 0.99+
last year | DATE | 0.99+
two reasons | QUANTITY | 0.99+
Palo Alto, California | LOCATION | 0.99+
billions of file directories | QUANTITY | 0.99+
NetApp | ORGANIZATION | 0.99+
more than 100,000 files | QUANTITY | 0.99+
one file | QUANTITY | 0.99+
second | QUANTITY | 0.99+
this year | DATE | 0.99+
NVME | ORGANIZATION | 0.99+
mid 90s | DATE | 0.99+
one metric | QUANTITY | 0.99+
one place | QUANTITY | 0.99+
millions of files | QUANTITY | 0.98+
90s | DATE | 0.98+
five | QUANTITY | 0.98+
Weka | ORGANIZATION | 0.98+
tens | QUANTITY | 0.98+
first time | QUANTITY | 0.98+
eight cameras | QUANTITY | 0.98+
two different metrics | QUANTITY | 0.98+
single directory | QUANTITY | 0.98+
trillions of files | QUANTITY | 0.98+
one | QUANTITY | 0.97+
SE1800 | EVENT | 0.97+
less than a million dollars | QUANTITY | 0.97+
a half | QUANTITY | 0.97+
JPAC | ORGANIZATION | 0.97+
one way | QUANTITY | 0.97+
CUBE Conversation | EVENT | 0.96+
10-client | QUANTITY | 0.96+
tens times | QUANTITY | 0.96+
60 frames per second | QUANTITY | 0.96+
Today | DATE | 0.96+
NetApp | TITLE | 0.96+
two orders | QUANTITY | 0.95+
four | QUANTITY | 0.95+
almost 100% | QUANTITY | 0.94+

David Flynn, Hammerspace | AWS re:Invent 2018


 

>> Live from Las Vegas. It's theCUBE. Covering AWS re:Invent 2018. Brought to you by Amazon Web Services, Intel and their ecosystem partners. >> And welcome back to our continuing coverage here on theCUBE of AWS re:Invent, we're on day three of three days of wall to wall coverage that we've brought you here from the Sands Expo along with David Vellante, I'm John Walls. Glad you're with us here, we're joined by David Flynn from Hammerspace, and David, good afternoon to you. >> Good afternoon. >> Been quite a year for you, right? >> Yeah. >> This has been something else. Set us up a little bit about where you've been, the journey you're on right now with Hammerspace and maybe for folks at home who aren't familiar, a little bit about what you do. >> So Hammerspace is all about data agility. We believe that data should be like the air you breathe, where you need it, when you need it, without having to think about it. Today, data's managed by copying it between the sundry different types of storage. And that's 'cause we're managing data through the storage system itself. What we want is for data to simply be there, when you need it. So it's all about data agility. >> I need to know more. So let's talk about some of your past endeavors. Fusion-io we watched you grow that company from just an idea. You solved the block storage problem, you solved the performance problems, amazing what you guys did with that company. My understanding is you're focused on file. >> That's right. >> Which is a much larger-- >> Unstructured data in general file and object. >> So a much larger proportion of the data that's out there. >> Yes. >> What's the problem that you guys are going after? >> Well at Fusion-io and this was pre-flash, now flash everybody takes it for granted. When we started it didn't really exist in the data center. And if you're using SAN, most likely it's for performance. And there's a better way to get performance with flash down in the server. Very successful with that. Now the problem is, people want the ease of managablility of having a global name space of file and object name space. And that's what we're tackling now because file is not native in the Cloud. It's kind of an afterthought. And all of these different forms of storage represents silos into which you copy data, from on-prem into cloud, between the different types of storage, from one site to another. This is what we're addressing with virtualizing the data, putting powerful metadata in control of how that data's realized across multiple data centers across the different types of storage, so that you see it as a single piece of data regardless of where it lives. >> Okay so file's not a first class citizen. You're making copies, moving data all over the place. You got copy creep going on. >> It's like cutting off Hydra's head. When you manage data by copying it you're just making more of it and that's because the metadata's down with the data. Every time you make a copy, it's a new piece of data that needs to be managed. >> So talk more about the metadata structure, architecture, what you guys are envisioning? >> Fundamentally, the technology is a separate metadata control plane that is powerful enough to present data as both file and object. 
And takes that powerful metadata, and puts it in control of where the data is realized, both in terms of what data center it's in, as well as what type of storage it's on, allowing you to tap into the full dynamic range of the performance of server-attached flash, of course Fusion-io, very near and dear to my heart, getting tens of millions of I-ops and tens of gigabytes per second, you can't do that across the network. You have to have the data be very agile, and be able to be promoted into the server. And then be able to manage it all the way to global scale between whole different data centers. So that's the magic of being able to cover the full dynamic range performance to capacity, scale and distance, and have it be that same piece of data that's simply instantiated, where you need it, when you need it, based on the power of the metadata. >> So when you talk about object, you talk about a simplified means of interacting, it's a get-put paradigm right? >> That's right. >> So that's something that you're checking up? >> That's right, ultimately you need to also have random read and write semantics and very high performance, and today, the standard model is you put your data in object storage and then you have your application rewritten to pull it down, store it on some local storage, to work with it and then put it back. And that's great for very large-scale applications, where you can invest the effort to rewrite them. But what about the world where they want the convenience of, the data is simply there, in something that you can mount as a file system or access as object, and it can be at the highest performance of random IO against local flash, all the way to cold in the Cloud where it's cheap. >> I get it so it's like great for Shutterfly 'cause they've got the resources to rewrite the application but for everybody else. >> That's right, and that's why the web scalers pioneered the notion of object storage and we helped them with the local block to get very, very high performance. So that bifurcated world, because the spectrum got stretched so wide that a single size fits all no longer works. So you have to kind of take object on the capacity, distance and scale side, and block, local on the performance side. But what I realized early on, all the way back to Fusion-io is that it is possible to have a shared namespace, both file system and object, that can span that whole spectrum. But to do that you have to provide really powerful metadata as a separate service that has the competency to actually manage the realization of the data across the infrastructure. >> You know David you talk about data agility, so that's what we're all about right? We're all about being agile. Just conceptually today, a lot more data than you've ever had to deal with before. In a lot more places. >> It's a veritable forest. >> With a lot more demands, so just fundamentally, how do you secure that agility. How can you provide that kind of reliability and agility, in that environment, like the challenge for you. >> Oh yeah. Well the challenge really goes back to the fact that the network storage protocols haven't had innovation for like 20 years because of the world of NAS being so dominant by a few players, well one. There really hasn't been a lot of innovation. Y'know NFSv3 three has been around for decades. NFSv4 didn't really happen. It was slower and worse off. 
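The distinction Flynn draws above — get-put object semantics versus random read/write file semantics — can be made concrete with a toy example. Neither class below is Hammerspace's API or any real object store's SDK; they are in-memory stand-ins that exist only to contrast the two access models.

```python
import io

class ToyObjectStore:
    """Whole-object semantics: you PUT or GET an entire object at a time."""

    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data     # replaces the whole object

    def get(self, key: str) -> bytes:
        return self._objects[key]     # returns the whole object

def patch_object_style(store: ToyObjectStore, key: str,
                       offset: int, patch: bytes) -> None:
    # Changing a few bytes means round-tripping the entire object.
    blob = bytearray(store.get(key))
    blob[offset:offset + len(patch)] = patch
    store.put(key, bytes(blob))

def patch_file_style(f: io.BufferedRandom, offset: int, patch: bytes) -> None:
    # File semantics: seek and write just the bytes you need, in place.
    f.seek(offset)
    f.write(patch)

if __name__ == "__main__":
    store = ToyObjectStore()
    store.put("dataset/part-0000", b"x" * 1_000_000)
    patch_object_style(store, "dataset/part-0000", 500, b"PATCH")

    with open("/tmp/part-0000", "w+b") as f:      # hypothetical local path
        f.write(b"x" * 1_000_000)
        patch_file_style(f, 500, b"PATCH")
```

Updating a few bytes under whole-object semantics means rereading and rewriting the entire object, which is exactly the round trip applications avoid when they get true file semantics alongside object access.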
At the heart of the storage networking protocols for presenting a file system, it hadn't even been enhanced to be able to communicate across hostile networks. So how are you going to use that at the kind of scale and distance of cloud, right? So what I did, after leaving Fusion-io, was I went and teamed up with the world's top experts. We're talking here about Trent Micklebus, the Linux Kernel author and maintainer of the storage networking stack. And we have spent the last five plus years fixing the fundamental plumbing that makes it possible to bring the shared file semantic into something that becomes cloud native. And that really is two things. One is about the ability to scale, both performance, capacity, in the metadata and in the data. And you couldn't do that before because NAS systems fundamentally have the metadata and data together. Splitting the two allows you to scale them both. So scale is one. Also the ability to secure it over large distances and networks, the ability to operate in an eventually consistent, to work across multiple datacenters. NAS had never made the multi-datacenter leap. Or the securing it across other networks, it just hadn't got there. But that is actually secondary compared to the fact that the world of NAS is very focused on the infrastructure guys and the storage admin. And what you have to do is elevate the discussion to be about the data user and empower them with powerful metadata to do self service. And as a service so that they can completely automate all of the concerns about the infrastructure. 'Cause if there's anything that's cloud, it's being able to delegate and hand off the infrastructure concerns, and you simply can't do that when you're focused at it from a storage administration and data janitorial kind of model. >> So I want to pause for a second and just talk to our audience and just stress how important it is to pay attention to this man. So there's no such thing as a sure thing in business. But there is one sure thing that is if David Flynn's involved you're going to disrupt something so you disrupted Scuzzy, the horrible storage stack. So when you hear things today like NVME and CAPPY and Atomic Rights and storage class memory, you got it all started. Fusion-io. >> That's right. >> And that was your vision that really got that started up. When I used to talk to people about that they would say I'm crazy, and you educated myself and Floyer and now you see it coming to fruition today. So you're taking aim at decades old infrastructure and protocols called NAS, and trying to do the same thing at Cloud scale, which is obviously something you know a lot about. >> That's right. I mean if you think about it. The spectrum of data, goes from performance on the one hand to ease of manageability, distance and scale, cost capacity versus cost performance. And that's inherent to our physical universe because it takes time to propagate information to a distance and to get ease of manageability to encode things very, very tight to get capacity efficiency, takes time, which works against performance. And as technology advances the spectrum only gets wider, and that's why we're stuck to the point of having to bifurcate it, that performance is locally attached flash. And that's what I pioneered with flash in the server in NVME. I told everybody, EMC, SAN, it sucks. If you want performance put flash in the server. Now we're saying if you want ease of use and manageability there's a better way to do that than NAS, and even object storage. 
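The metadata/data split Flynn describes follows a general pattern: a control plane that hands out layouts while data services move the bytes. The sketch below is only an illustration of that pattern — the class names, node names, and layout scheme are invented for the example and do not describe Hammerspace's protocol.

```python
from dataclasses import dataclass

@dataclass
class Extent:
    data_node: str    # which data service holds these bytes (invented names)
    offset: int
    length: int

class MetadataService:
    """Control plane: knows where data lives, never moves the bytes itself."""

    def __init__(self):
        self._layouts = {}

    def set_layout(self, path: str, extents: list) -> None:
        self._layouts[path] = extents

    def get_layout(self, path: str) -> list:
        return self._layouts[path]

class DataNode:
    """Data plane: simple, fast byte storage."""

    def __init__(self):
        self._blocks = {}

    def write(self, offset: int, data: bytes) -> None:
        self._blocks[(offset, len(data))] = data

    def read(self, offset: int, length: int) -> bytes:
        return self._blocks[(offset, length)]

def client_read(path: str, mds: MetadataService, nodes: dict) -> bytes:
    # One small metadata round trip, then direct (and parallelizable) reads.
    return b"".join(nodes[e.data_node].read(e.offset, e.length)
                    for e in mds.get_layout(path))

if __name__ == "__main__":
    mds = MetadataService()
    nodes = {"flash-01": DataNode(), "cloud-01": DataNode()}
    nodes["flash-01"].write(0, b"hot half ")
    nodes["cloud-01"].write(0, b"cold half")
    mds.set_layout("/datasets/run42",
                   [Extent("flash-01", 0, 9), Extent("cloud-01", 0, 9)])
    print(client_read("/datasets/run42", mds, nodes))
```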
It's to separate the metadata as a distinct control plane that is put in charge of managing data through very rich and powerful metadata, and that puts the data owner in control of their data. Not just across different types of storage in the performance capacity spectrum, but also across on-prem and in the Cloud, and across multi-cloud. 'Cause the Cloud after all is just another big storage silo. And given the inertia of data, they've got you by the balls when they've got all the data there. (laughing) I'm sorry, I know I'm at AWS I should be careful what I say. >> Well this is live. >> Yeah, okay so they can't censor us, right. So just like the storage vendors of yesteryear, would charge you an arm and a leg when their arrays were out of service, to get out of your service, because they knew that if you were trying to extend the service life of that, that that's because it was really hard for you to get the data off of it because you had to suffer application downtime and all of that. In the same fashion, when you have your data in the Cloud, the egress costs are so expensive. And so this is all about putting the data owner in control of the data by giving them a rich powerful metadata platform to do that. >> You always want to have strategies that give you flexibility, exit strategies if things don't work out, so that's fascinating. I know we got to wrap, but give us the low-down on the company, the funding, what can you share with us. Go-to-market, et cetera. >> So it's a tightly held company. I was very successful financially. So from that point of view we're... >> Self-funded. >> Self-funded, funded from angels. I made some friends with Fusion-io right? So from that point of view yeah, it's the highest power team you can get. I mean these are great guys, the Linux Kernel maintainer on the storage networking stack. This was a heavy lift because you have to fix the fundamental plumbing in the way storage networking works so that you can, it's like a directories service for data, and then all the management service. This has been a while in the making, but it's that foundational engineering. >> You love heavy lifts. >> I love hard problems. >> I feel like I mis-introduced you, I should have said the great disruptor is what I should have said. >> Well, we'll see. I think disrupting the performance side was a pure play and very easy. Disrupting the ease of use side of the data spectrum, that's the fun one that's actually so transformative because it touches the people that use the data. >> Well best of luck. It was really, I'm excited for ya. >> Thanks for joining us David. Appreciate the time. David Flynn joined up from Hammerspace, and back with more on theCUBE at AWS re:Invent. (upbeat music)
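Flynn's point about egress charges creating data gravity is easy to make concrete with rough arithmetic. The per-gigabyte rate and dataset size below are assumptions for illustration, not any provider's actual price list.

```python
# Illustrative only: the rate and size below are assumptions, not any
# provider's published pricing.
EGRESS_RATE_PER_GB = 0.09   # assumed $/GB to move data out of a public cloud
DATASET_TB = 500            # assumed dataset size in decimal terabytes

def egress_cost(dataset_tb: float, rate_per_gb: float) -> float:
    return dataset_tb * 1_000 * rate_per_gb

if __name__ == "__main__":
    print(f"Moving {DATASET_TB} TB out would cost roughly "
          f"${egress_cost(DATASET_TB, EGRESS_RATE_PER_GB):,.0f}")
```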

Published Date : Nov 29 2018

SUMMARY :

David Flynn, previously of Fusion-io and now leading Hammerspace, talks with Dave Vellante and John Walls at AWS re:Invent 2018 about data agility for unstructured file and object data. He argues that managing data by copying it between silos only multiplies the data to be managed, and describes Hammerspace's separate metadata control plane, which presents the same data as both file and object and decides where it is realized — from server-attached NVMe flash for performance to cheap cloud capacity, on premises or across clouds. He also explains why NAS protocols stalled for decades, how splitting metadata from data enables scale, security across hostile networks, and multi-datacenter operation, and why cloud egress costs make data mobility a control-plane problem. The company is tightly held and self-funded, built with Linux kernel storage-networking expertise.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
David | PERSON | 0.99+
David Flynn | PERSON | 0.99+
David Vellante | PERSON | 0.99+
Amazon Web Services | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
John Walls | PERSON | 0.99+
Trent Micklebus | PERSON | 0.99+
20 years | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
Las Vegas | LOCATION | 0.99+
Today | DATE | 0.99+
Intel | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
two | QUANTITY | 0.99+
three days | QUANTITY | 0.99+
today | DATE | 0.99+
tens of millions | QUANTITY | 0.99+
Sands Expo | EVENT | 0.98+
both | QUANTITY | 0.98+
Hammerspace | ORGANIZATION | 0.98+
Linux Kernel | TITLE | 0.97+
one | QUANTITY | 0.96+
one site | QUANTITY | 0.96+
Shutterfly | ORGANIZATION | 0.95+
single piece | QUANTITY | 0.91+
day three | QUANTITY | 0.9+
tens of gigabytes per second | QUANTITY | 0.89+
single size | QUANTITY | 0.87+
decades | QUANTITY | 0.87+
last five plus years | DATE | 0.85+
Fusion-io | TITLE | 0.83+
Invent | EVENT | 0.82+
a second | QUANTITY | 0.8+
NFSv4 | TITLE | 0.79+
one sure thing | QUANTITY | 0.78+
AWS re:Invent 2018 | EVENT | 0.76+
Hammerspace | TITLE | 0.76+
I-ops | QUANTITY | 0.75+
NVME | TITLE | 0.74+
both file | QUANTITY | 0.74+
NFSv3 three | TITLE | 0.73+
first class | QUANTITY | 0.73+
EMC | ORGANIZATION | 0.73+
CAPPY | TITLE | 0.72+
Hydra | ORGANIZATION | 0.7+
Fusion-io | ORGANIZATION | 0.69+
re:Invent | EVENT | 0.65+
Scuzzy | PERSON | 0.61+
Fusion- | ORGANIZATION | 0.6+
Atomic | TITLE | 0.58+
io | TITLE | 0.52+
2018 | TITLE | 0.51+
Floyer | ORGANIZATION | 0.49+
re | EVENT | 0.4+

Patrick Osborne, HPE | CUBE Conversation, November 2018


 

>> From the SiliconANGLE Media Office in Boston, Massachusets, it's theCUBE. Now, here's your host, Dave Vellante. >> Hi everybody, welcome to this preview of HPE's, Discover Madrid storage news. We're gonna unpack that. My name is Dave Vellante and Hewlett Packard Enterprise has a six-month cadence of shows. They have one in the June timeframe in Las Vegas, and then one in Europe. This year, again, it's in Madrid and you always see them announce products and innovations coinciding with those big user shows. With me here is Patrick Osborne who's the Vice President and General Manager of Big Data and Secondary Storage at HPE. Patrick, great to see you again. >> Great to be here, love theCUBE, thanks for having us. >> Oh, you're very welcome. So let's, let's unpack some of these announcements. You guys, as I said, you're on this six-month cadence. You've got sort of three big themes that you're vectoring into, maybe you could start there. >> Yeah, so within HP Storage and Big Data where, you know, where our point of view is around intelligent storage and intelligent data management and underneath that we've kind of vectored in on three pillars that you talked about. AI driven, so essentially bringing the intelligence, self-managing, self-healing, to all of our storage platforms, and big-data platforms, built for the Cloud, right? We've got a lot of use cases, and user stories, and you've seen from an HPE perspective, Hybrid Cloud, you know, is a big investment we're making in addition to the edge. And the last is delivering all of our capabilities, from product perspective, solutions and services as a service, right? So GreenLake is something that we started a few years ago and being able to provide that type of elastic, you know, purchasing experience for our customers is gonna weave itself in further products and solutions that we announce. >> So I like your strategy around AI. AI of course gets a lot of buzz these days. You guy are taking a practical approach. The Nimble acquisition gave you some capabilities there in predictive maintenance. You've pushed it into your automation capabilities. So let's talk about the hard news specifically around InfoSight. >> Yeah, so InfoSight is an incredible platform and what you see is that we've been not only giving customers richer experiences on top of InfoSight that go further up into the stack so we're providing recommendation engines so we've got this whole concept of Cross-stack Analytics that go from, you know, your app and your virtualization layer through the physical infrastructure. So we've had a number of pieces of that, that we're announcing to give very rich, AI-driven guidance, to customers, you know, to fix specific problems. We're also extending it to more platforms. Right, we just announced last week the ability to run InfoSight on our server platforms, right? So we're starting off on a journey of providing that which we're doing at the storage and networking layer weaving in our server platform. So essentially platforms like ProLiant, Synergy, Apollo, all of our value compute platforms. So we are, we're doing some really cool stuff not only providing the experience on new platforms, but richer experiences certainly around performance bottlenecks on 3PAR so we're getting deeper AI-driven recommendation engines as well as what we call an AI-driven resource planner for Nimble. So if you take a look at it from a tops-down view this isn't AI marketing. 
We're actually applying these techniques and machine learning within our install base in our fleet which is growing larger as we extend support from our platforms that actually make people's lives easier from a storage administration perspective. >> And that was a big part of the acquisition that IP, that machine intelligence IP. Obviously you had to evaluate that and the complexity of bringing it across the portfolio. You know we live in this API-driven world, Nimble was a very modern platform so that facilitated that injection of that intelligence across the platform and that's what we're seeing now isn't it. >> Yeah, absolutely. You go from essentially tooling up these platforms for this very rich telemetry really delivering a differentiated support experience that takes a lot of the manual interactions and interventions from a human perspective out of it and now we're moving in with these three announcements that we've made into things that are doing predictive analytics, recommendations and automation at the end of the day. So we're really making, trying to make people's lives easier from an admin perspective and giving them time back to work on higher value activities. >> Well let's talk about Cloud. HP doesn't have a public Cloud like an Amazon or an Azure, you partner with those guys, but you have Cloud Volumes, which is Cloud-like, it's actually Cloud from a business model perspective. Explain what Cloud Volumes is and what's the news here? >> Yeah, so, we've got a great service, it's called HPE Cloud Volumes and you'll see throughout the year us extending more user stories and experiences for Hybrid Cloud, right. So we have CloudBank, which focuses on secondary storage, Cloud Volumes is for primary storage users, so it is a Cloud, public Cloud adjacent storage as a service and it allows you to go into the portal, into your credentials. You can enter in your credit card number and essentially get storage as a service as an adjacent, or replacement data service for, for example, EBS from Amazon. So you're able to stand up storage as a service within a co-location facility that we manage and it's completely delivered as a service and then our announcement for that is that, so what we've done in the Americas is you can essentially apply compute instances from the public Cloud to that storage, so it's in a co-location facility it's very close from a latency standpoint to the public Cloud. Now we're gonna be extending that service into Europe, so UK, Ireland, and for the EMEA users as well as now we can also support persistent storage work loads for Docker and Kubernetes and this is a big win for a lot of customers that wanna do continuous improvement, continuous development, and use those containerized frameworks and then you can essentially, you know, integrate with your on-prem storage to your off-prem and then pull in the compute from the Cloud. >> Okay so you got that, write once, run anywhere sort of model. I was gonna ask you well why would I do this instead of EBS, I think you just answered that question. It's because you now can do that anywhere, hybrid is a key theme here, right? >> Yeah, also too from a resiliency perspective, performance, and durability perspective, the service that we provide is, you know, certainly six-nines, very high performant, from a latency perspective. We've been in the enterprise-storage game for quite some time so we feel we've got a really good service just from the technology perspective as well. 
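Consuming the Docker and Kubernetes persistent-storage support Patrick mentions looks, from the application side, like claiming capacity against a storage class. The sketch below uses the standard Kubernetes Python client; the storage class name, claim size, and namespace are assumptions, and the real class name depends on the volume driver configured for the Cloud Volumes service in a given cluster.

```python
from kubernetes import client, config

# Assumed storage class name; substitute whatever class the Cloud Volumes
# driver exposes in your cluster.
STORAGE_CLASS = "hpe-cloud-volumes"

def create_claim(namespace: str = "default") -> None:
    config.load_kube_config()  # use load_incluster_config() when running in a pod
    claim = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name=STORAGE_CLASS,
            resources=client.V1ResourceRequirements(
                requests={"storage": "100Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=claim)

if __name__ == "__main__":
    create_claim()
```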
>> And the European piece, I presume a lot of that is, well of course, GDPR, the fines went into effect in May of 2018. There's a lot of discussion about okay, data can't leave a particular locality, it's especially onerous in Europe, but probably other places as well. So there's a, there's a data locality governance compliance angle here too, is there not? >> Yeah, absolutely, and for us if you take a specific industry like healthcare, you know, for example, so you have to have pretty clear line of sight for your data provenance so it allows us to provide the service in these locations for a healthcare customer, or a healthcare ISV, you know, SAS provider to be able to essentially point to where that data is, you know, and so for us it's gonna be an entrance into that vertical for hybrid Cloud use cases. >> Alright so, so again, we've got the AI-driven piece, the Cloud piece, I see as a service, which is the third piece, I see Cloud as one, and as a service is one-A, it's almost like a feature of Cloud. So let's unpack that a little bit. What are you announcing in as a service and what's your position there? >> Yeah, so our vision is to be able to provide, and as a service experience, for almost everything we have that we provide our customers. Whether it's an individual product, whether it's a solution, or actually like a segment, right? So in the space that I work in, in Big Data and secondary service, secondary storage, backup is a service, for example, right, it's something that customers want, right? They don't want to be able to manage that on their own by piece parts, architect the whole thing, so what we're able to do is provide your primary storage, your secondary storage, your backup ISV, so in this case we're gonna be providing backup as a service through GreenLake with Vim. And then we even can bring in your Cloud capacity, so for example, Azure Blob Storage which will be your tertiary storage, you know, from an archive perspective. So for us it really allows us to provide customers an experience that, you know, is more of an, it's an experienced, Cloud is a destination, we're providing a multi-Cloud, a Hybrid-Cloud experience not only from a technology perspective, but also from a purchasing flex up, flex down, flex out experience and we're gonna keep on doing that over and over for the next, you know, foreseeable future. >> So you've been doing GreenLake for awhile here-- >> Yeah, absolutely. >> So how's that going and what's new here? >> Yeah, so that's been going great. We have well over, I think at this point, 500 petabytes on our management under GreenLake and so the service is, it's interesting when you think about it, when we were designing this we thought, just like the public Cloud, the compute as a service would take off, but from our perspective I think one of the biggest pain points for customers is managing data, you know, storage and Big Data, so storage as a service has grown very rapidly. So these services are very popular and we'll keep on iterating on them to create maximum velocity. 
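The flex-up, flex-down purchasing model behind GreenLake amounts to metering usage against a committed baseline. The formula and numbers below are purely illustrative assumptions — not HPE pricing — sketching how a monthly charge could be derived from capacity samples.

```python
def monthly_charge(samples_tib, committed_tib, rate_per_tib):
    """Metered-billing sketch: pay for the committed baseline plus the
    average measured usage above it. All inputs are illustrative."""
    avg_used = sum(samples_tib) / len(samples_tib)
    overage = max(0.0, avg_used - committed_tib)
    return (committed_tib + overage) * rate_per_tib

if __name__ == "__main__":
    weekly_samples = [180, 185, 190, 220, 260, 240, 210]  # TiB, made-up meter data
    bill = monthly_charge(weekly_samples, committed_tib=200, rate_per_tib=25)
    print(f"illustrative monthly charge: ${bill:,.2f}")
```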
One of the other things that's interesting about some of these accounting rules that have taken place, is that customers seed to us the, the ability to do architecture, right, so we're essentially creating no Snowflakes for our customers and they get better outcomes from a business perspective so we help them with the architecture, we help them with planning an architecture of the actual equipment and then they get a very defined business outcome in SLA that they pay for as a service, right? So it's a win-win across the board, is really good. >> Okay, so no Snowflakes as in, not everything's custom-- >> Absolutely. >> And then that, so that lowers not only your cost, it lowers the customer's cost. So let's take an example like that, let's take backup as a service which is part of GreenLake. How does that work if I wanna engage with you on backup as a service? >> Yeah, so we have a team of folks in Pointnext that can engage like very far up in the front end, right, so they say, hey, listen, I know that I need to do a major re-architecture for my secondary storage, HPE, can you help me out? So we provide advisory services, we have well-known architectures that fit a set of well-known mission critical, business critical applications at a typical customer site so we can drive that all the way from the inception of that project to implementation. We can take more customized view, or a road-mapped approach to customers where they want to bite off a little bit at a time and use things like Flex Capacity, and then weave in a full GreenLake implementation so it's very flexible in terms of the way we can implement it. So we can go soup to nuts, or we can get down to a very small granular pieces of infrastructure. >> Just sticking on data protection for a second, I saw a stat the other day, it's a fairly well, you know, popular, often quoted stat, it was Gartner I think, is 50% of customers are gonna change their backup platform by like 2023 or something. And you think about, and by the way, I think that's a legitimate stat and when you talk to customers about why, well things are changing, the Cloud, Multicloud, things like GDPR, Ransomware, digital transformation, I wanna get more out of my data then just insurance, my backup then just insurance, I wanna do analytics. So there's all these other sort of evolving things. I presume your backup as a service is evolving with that? >> Absolutely. >> What are you seeing there? >> Yeah, we're definitely seeing that the secondary storage market is very dynamic in terms of the expectations from customers, are, you know, they're changing, and changing very rapidly. And so not only are providing things like GreenLake and backup as a service we're also seeking new partners in this space so one of the big announcements that we'll make at Discover is we are doing a pretty big amplification of our partnership in an OEM relationship with Cohesity, right, so a lot of customers are looking for a secondary platform from a consolidation standpoint, so being able to run a number of very different disparate workloads from a secondary storage perspective and make them, you know, work. So it's a great platform scale-out. It's gonna run on a number of our HPE platforms, right, so we're gonna be able to provide customers that whole solution from HPE partnering with Cohesity. So, you know, in general this secondary storage market's hot and we're making some bets in our ecosystem right now. >> You also have Big Data in your title so you're responsible for that portfolio. 
I know Apollo in the HPC world has been at a foothold there. There's a lot of synergies between high-performance computing and Big Data-- >> Absolutely. >> What's going on in the Big Data world? >> Yeah, so Big Data is one of our fastest growing segments within HPE. I'd say Big Data and Analytics and some of the things that are going on with AI, and commercial high-performance applications. So for us we're, we have a new platform that we're announcing, our Gen10 version of Apollo 4200, it's definitely the workhorse of our Apollo server line for applications like, Cloudera, Hortonworks, MapR, we see Apache Spark, Kafka, a number of these as well as some of these newer workloads around HPC, so TensorFlow, Caffe, H2O, and so that platform allows us with a really good compute memory and storage mix, from a footprint perspective, and it certainly scales into rack-level infrastructure. That part of the business for us is growing very quickly. I think a lot of customers are using these Big Data Analytics techniques to transform their business and, you know, as we go along and help them it certainly, it's been a really cool ride to see all this implemented at customer sites. >> You know with all this talk about sort of Big Data and Analytics, and Cloud, and AI, you sort of, you know, get lost, the infrastructure kinda gets lost, but you know, the plumbing still matters, right, and so underneath this. So we saw the flash trend, and that really had a major impact on certainly the storage business specifically, but generally, the overall marketplace, I mean, you really, it'd be hard to support a lot of these emerging workloads without flash and that stack continues to evolve, the pyramid if you will. So you've got flash memory now replacing much of the spinning disk space, you've got DRAM which obviously is the most expensive, highest performance, and there seems to be this layer emerging in the middle, this storage-class memory layer. What are you guys doing there? Is there anything new there? >> Yeah, so we've got a couple things cooking in that space. In general, like when you talk about the infrastructure it is important, right, and we're trying to help customers not only by providing really good product in scalable infrastructure, things like Apollo, you know our system's Nimble 3PAR. We're also trying to provide experience around that too. So, you know, combining things like InfoSight, InfoSight on storage, InfoSight on servers and Apollo for Big Data workloads is something that we're gonna be delivering in the future. The platforms really matter. So we're gonna be introducing NVME and storage class memory into our, what we feel is the industry-leading portfolio for our, for flash storage. So between Nimble and 3PAR we'll have, those platforms will be, and they're NVME ready and we'll be making some product announcements on the availability of that type of medium. So if you think about using it in a platform like 3PAR, right, industry leading from a performance perspective allows to get sub 200 millisecond performance for very mission-critical latency intolerant applications and it's a great architecture. It scales in parallel, active, active, active, right, so you can get quite a bit of performance from a very, a large 3PAR system and we're gonna be introducing NVME into that equation as a part of this announcement. 
>> So, we see this as critical, for years, in the storage business, you talk about how storage is growing, storage is growing, storage is growing, and we'd show the charts upper to the right, and, but it always like yeah, and somehow you gotta store it, you gotta manage it, you might have to move it, it's a real pain. The whole equation is changing now because of things like flash, things like GPU, storage class memory, NVME, now you're seeing, and of course all this ML and deep learning tech, and now you're seeing things that you're able to do with the data that you've never been able to do before-- >> Absolutely. >> And emerging use cases and so it's not just lots of data, it's completely new use cases and it's driving new demands for infrastructure isn't it? >> Absolutely, I mean, there's some macro economic tailwinds that we had this year, but HP had a phenomenal year this year and we're looking at some pretty good outlooks into next year as well. So, yeah, from our perspective the requirement for customers, for latency improvements, bandwidth improvements, and total addressable capacity improvements is, never stops, right? So it's always going on and it's the data pipeline is getting longer. The amount of services and experiences that you're tying on to, existing applications, keeps on augmenting, right? So for us there's always new capabilities, always new ways that we can improve our products. We use for things like InfoSight, and a lot of the predictive Analytics, we're using those techniques for ourselves to improve our customers experience with our products. So it's been, it's a very, you know, virtual cycle in the industry right now. >> Well Patrick, thanks for coming in to theCube and unpacking these announcements at Discover Madrid. You're doing a great job sort of executing on the storage plan. Every time I see you there's new announcements, new innovations, you guys are hittin' all your marks, so congratulations on that. >> HPE, intelligent storage, intelligent data management, so if you guys have data needs you know where to come to. >> Alright, thanks again Patrick. >> Great, thank you so much. >> Talk to you soon. Alright, thanks for watching everybody. This is Dave Vellante from theCUBE. We'll see ya next time. (upbeat music)

Published Date : Nov 27 2018

SUMMARY :

Patrick Osborne, VP and GM of Big Data and Secondary Storage at HPE, previews the storage news HPE is announcing at Discover Madrid with Dave Vellante. The announcements follow three themes: AI-driven operations, with InfoSight extending to HPE servers and adding cross-stack analytics, deeper recommendation engines for 3PAR, and an AI-driven resource planner for Nimble; built for the cloud, with HPE Cloud Volumes expanding into the UK and Ireland and adding persistent storage for Docker and Kubernetes; and as-a-service delivery through GreenLake, including backup as a service and an expanded OEM partnership with Cohesity. Osborne also covers the Apollo 4200 Gen10 for big-data and analytics workloads and the planned introduction of NVMe and storage-class memory into the 3PAR and Nimble portfolio.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Patrick | PERSON | 0.99+
Dave Vellante | PERSON | 0.99+
Europe | LOCATION | 0.99+
Madrid | LOCATION | 0.99+
Patrick Osborne | PERSON | 0.99+
Boston | LOCATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Las Vegas | LOCATION | 0.99+
Ireland | LOCATION | 0.99+
HPE | ORGANIZATION | 0.99+
six-month | QUANTITY | 0.99+
50% | QUANTITY | 0.99+
HP | ORGANIZATION | 0.99+
May of 2018 | DATE | 0.99+
Americas | LOCATION | 0.99+
November 2018 | DATE | 0.99+
UK | LOCATION | 0.99+
Hewlett Packard Enterprise | ORGANIZATION | 0.99+
next year | DATE | 0.99+
Discover | ORGANIZATION | 0.99+
Apollo | ORGANIZATION | 0.99+
Nimble | ORGANIZATION | 0.99+
last week | DATE | 0.99+
500 petabytes | QUANTITY | 0.99+
third piece | QUANTITY | 0.99+
this year | DATE | 0.99+
This year | DATE | 0.99+
EBS | ORGANIZATION | 0.99+
three announcements | QUANTITY | 0.98+
Discover Madrid | ORGANIZATION | 0.98+
June | DATE | 0.98+
Cohesity | ORGANIZATION | 0.98+
InfoSight | ORGANIZATION | 0.98+
one | QUANTITY | 0.98+
Gartner | ORGANIZATION | 0.98+
GDPR | TITLE | 0.97+
Big Data | ORGANIZATION | 0.97+
SAS | ORGANIZATION | 0.96+
Kafka | TITLE | 0.96+
Cloud | TITLE | 0.96+
One | QUANTITY | 0.95+
Synergy | ORGANIZATION | 0.95+
SiliconANGLE Media Office | ORGANIZATION | 0.95+
Cloud Volumes | TITLE | 0.94+
few years ago | DATE | 0.93+
Massachusets | LOCATION | 0.93+
EMEA | ORGANIZATION | 0.91+
Apache | ORGANIZATION | 0.91+
GreenLake | ORGANIZATION | 0.91+
Vim | ORGANIZATION | 0.85+
six-nines | QUANTITY | 0.84+
Pointnext | ORGANIZATION | 0.83+
GreenLake | TITLE | 0.83+
MapR | TITLE | 0.82+
three | QUANTITY | 0.79+
ProLiant | ORGANIZATION | 0.79+
theCUBE | ORGANIZATION | 0.79+