Kim Leyenaar, Broadcom | SuperComputing 22
(Intro music) >> Welcome back. We're live here from SuperComputing 22 in Dallas. I'm Paul Gillin for SiliconANGLE on theCUBE, with my guest host Dave Nicholson. Our guest for this segment is Kim Leyenaar, a storage performance architect at Broadcom. And the topic of this conversation is networking, it's connectivity. How does that relate to the work of a storage performance architect? >> Well, that's a really good question. I have been focused on storage performance for about 22 years. But even if we're talking about just storage, all of the components have a really big impact on how quickly you can ultimately access your data: the switches, the memory bandwidth, the expanders, the different protocols that you're using. And a big part of it is actually Ethernet, because as you know, data's not siloed anymore. You have to be able to access it from anywhere in the world. >> Dave: So wait, you're telling me that we're just not living in a CPU-centric world now? >> Ha ha ha. >> Because it is sort of interesting. When we talk about supercomputing and high performance computing, we're always talking about clustering systems. So how do you connect those systems? Isn't that kind of your wheelhouse? >> Kim: It really is. >> Dave: At Broadcom. >> It is Broadcom's wheelhouse. We are all about interconnectivity, and we own the interconnectivity. Years ago it was, 'Hey, buy this new server because we've added more cores or we've got better memory.' But now you've got all this siloed data, and we've got this software-defined kind of environment now, these composable environments where, hey, if you need more networking, just plug this in, or just go here and allocate yourself more. So what we're seeing is these silos of 'here's our compute, here's your networking, here's your storage.' And so, how do you put those all together? The thing is interconnectivity. That's really what we specialize in. I'm really happy to be here to talk about some of the things that we do to enable high performance computing. >> Paul: Now we're seeing a new breed of AI computers being built, with multiple GPUs and very large amounts of data being transferred between them. And the interconnect really has become a bottleneck. Is that something that Broadcom is working on alleviating? >> Kim: Absolutely. There are a lot of different standards that we work with to define, so that we can make sure that we work everywhere. So whether you're a dentist's office that's deploying one server, or we're talking about these hyperscalers that have thousands or tens of thousands of servers, we're working on making sure that the next generation is able to outperform the previous generation. Not only that, but we've found that with these siloed things, if you add more storage but that means we're going to eat up six cores using it, it's not really as useful. So Broadcom's really been focused on trying to offload the CPU. So we're offloading data security, data protection, we do packet sniffing ourselves, and things like that.
So no longer do we rely on the CPU to do that kind of processing for us. We become very smart devices all on our own, so that they work very well in these kinds of environments. >> Dave: So give us an example. I know a lot of the discussion here has been around using Ethernet as the connectivity layer. >> Yes. >> In the past, people would think about supercomputing as exclusively being InfiniBand based. >> Ha ha ha. >> But give us an idea of what Broadcom is doing in the Ethernet space. What are the advantages of using Ethernet? >> Kim: So we've made two really big announcements. The first one is our Tomahawk 5 Ethernet switch. It's a 400 gig Ethernet switch. And the other thing we announced was our Thor. These are our network controllers that also support up to 400 gig each as well. Those two alone, it's amazing to me how much data we're able to transfer with those. But not only that, they're super intelligent controllers too. And then we realized, hey, we're managing all this data, let's go ahead and offload the CPU. So we actually adopted the RoCE standard. That's one of the things that puts us above InfiniBand: Ethernet is ubiquitous, it's everywhere, and InfiniBand is primarily owned by just one or two companies. It's also a lot more expensive. So Ethernet is everywhere, and now with the RoCE standard that we're working along with, it does what you're talking about much better than its predecessors. >> Tell us about the RoCE standard. I'm not familiar with it, and I'm sure some of our listeners are not. What is the RoCE standard? >> Kim: Ha ha ha. So it's RDMA over Converged Ethernet. I'm not a RoCE expert myself, but I am an expert on how to offload the CPU. And one of the things it does is, instead of using the CPU to transfer the data from user space over to the next server, we actually do it ourselves. We will take it, we will move it across the wire, and we will put it in that remote computer. And we don't have to ask the CPU to do anything or get involved in that. So it's a big savings. >> Yeah, in a nutshell, because there are parts of the InfiniBand protocol that are essentially embedded in RDMA over Converged Ethernet. So... >> Right. >> So if you can leverage kind of the best of both worlds, but have it in an Ethernet environment which is already ubiquitous, it seems like it's kind of democratizing supercomputing and HPC. And I know you guys are big partners with Dell, as an example, and you work with all sorts of other people. >> Kim: Yeah. >> But let's say somebody is going to be doing Ethernet for connectivity. You also offer switches? >> Kim: We do, actually. >> So that's another piece of the puzzle. >> That's a big piece of the puzzle. So we just released our Atlas 2 switch. It is a PCIe Gen 5 switch. And... >> Dave: What does that mean? What does Gen 5 mean? >> Oh, Gen 5 PCIe, it's kind of the magic connectivity right now. We talk about the Sapphire Rapids release as well as the Genoa release. I know those have been talked about a lot here; I've been walking around and everybody's talking about it. Well, those enable the Gen 5 PCIe interfaces.
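Before the conversation turns to PCIe bandwidth, a quick aside on the RoCE offload Kim described above: with RDMA, the sending side describes a transfer and the adapter writes the data straight into the remote server's memory, without asking the remote CPU to copy anything. One portable way to see that one-sided model from application code is MPI's RMA interface. The sketch below illustrates the semantics only, not Broadcom's offload implementation; the window size and rank roles are arbitrary choices for the example.

```c
#include <mpi.h>
#include <stdio.h>

/* One-sided communication sketch: rank 0 writes into rank 1's exposed
 * memory window with MPI_Put. On an RDMA-capable transport (such as
 * RoCE), the target's CPU does not have to touch the data in flight.
 * Run with at least two ranks, e.g. mpirun -np 2. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) { MPI_Finalize(); return 0; }

    enum { N = 1024 };                 /* illustrative window size (ints) */
    int window[N];
    for (int i = 0; i < N; i++) window[i] = 0;

    /* Every rank exposes its buffer as a window others may access. */
    MPI_Win win;
    MPI_Win_create(window, N * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);             /* open an access epoch */
    if (rank == 0) {
        int payload[N];
        for (int i = 0; i < N; i++) payload[i] = i;
        /* One-sided write: the data lands in rank 1's window directly. */
        MPI_Put(payload, N, MPI_INT, 1 /* target rank */,
                0 /* displacement */, N, MPI_INT, win);
    }
    MPI_Win_fence(0, win);             /* close the epoch; transfer complete */

    if (rank == 1)
        printf("rank 1 sees window[42] = %d\n", window[42]);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Whether the put truly bypasses the remote CPU depends on the NIC and the MPI library's transport; on a RoCE-capable adapter it can be serviced in hardware, which is the savings Kim is pointing to.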
So we've been able to double the bandwidth from Gen 4 up to Gen 5. In order to support that, we now have our Atlas 2 PCIe Gen 5 switch. And it allows you to connect... especially around here we're talking about artificial intelligence and machine learning, and a lot of these are relying on the GPU and the DPU that you see a lot of people talking about enabling. So by putting these switches in the servers, you can connect multitudes of not only NVMe devices but also these GPUs and these CPUs. Besides that, we also have the storage component of it too. To support that, we just recently released our 9500 series HBAs, which support 24 gig SAS. And this is kind of a big deal for some of our hyperscalers that say, 'Hey, look, our next generation, we're putting a hundred hard drives in.' A lot of it is maybe for cold storage, but by giving them that 24 gig bandwidth, and by having these 24 gig SAS expanders, that allows these hyperscalers to build up their systems. >> Paul: And how are you supporting the HPC community at large? And what are you doing that's exclusively for supercomputing? >> Kim: Exclusively for? So we're doing the interconnectivity really for them. You can have as much compute power as you want, but these are very data hungry applications, and a lot of that data is not sitting right in the box. A lot of that data is sitting in some other country, or in some other city, or just in the box next door. So you have to be able to move that data around. There's a new concept where they say do the compute where the data is, and then the other way is to move the data around, which is sometimes a lot easier. So we're allowing you to move that data around. For that, we have our Tomahawk switches, we've got our Thor NICs, and of course we've got the really wide pipe. Our new 9500 series HBA and RAID controllers are doing 28 gigabytes a second through the one controller, and that's on protected data. So we can actually have the high availability protected data of RAID 5 or RAID 6 or RAID 10 in the box, giving you 27 gigabytes a second. And the latency we're seeing off of this is unheard of too: we have a write cache latency that is sub 8 microseconds, which is lower than most of the NVMe drives that you see available today. So we're able to support these applications that require really low latency as well as data protection. >> Dave: So often when we talk about the underlying hardware, it's a game of whack-a-mole, chase the bottleneck. And you've mentioned PCIe 5; a lot of folks who will be implementing Gen 5 PCIe are coming off of three, not even four. >> Kim: I know. >> So they're not just getting a last-generation-to-this-generation bump, they're getting a two-generation bump. >> Kim: They are. >> Is it the case that it would never make sense to use a next gen or current gen card in an older generation bus because of the mismatch in performance? Are these things all designed to work together? >> Uh... That's a really tough question. I want to say no, it doesn't make sense. It really makes sense just to move things forward and buy a card that's made for the bus it's in.
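For readers who want the arithmetic behind the Gen 4 to Gen 5 step and that 28 gigabytes a second figure, here is a rough back-of-the-envelope sketch. The per-lane rates and 128b/130b encoding are published PCIe characteristics; everything else about the snippet, including treating the result as a per-direction ceiling before protocol overhead, is just an illustration rather than a Broadcom number.

```c
#include <stdio.h>

/* Back-of-the-envelope PCIe bandwidth per direction for an x16 link.
 * Gen 4 signals at 16 GT/s per lane, Gen 5 at 32 GT/s, both with
 * 128b/130b line encoding (128 payload bits per 130 bits on the wire). */
int main(void) {
    const double encoding = 128.0 / 130.0;
    const int lanes = 16;

    double gen4 = 16.0 * encoding / 8.0 * lanes;  /* GB/s, roughly 31.5 */
    double gen5 = 32.0 * encoding / 8.0 * lanes;  /* GB/s, roughly 63.0 */

    printf("PCIe Gen 4 x16: ~%.1f GB/s per direction\n", gen4);
    printf("PCIe Gen 5 x16: ~%.1f GB/s per direction\n", gen5);
    return 0;
}
```

Packet headers and flow control take a further cut, so a RAID controller moving 27 to 28 gigabytes a second of protected data through a Gen 4 x16 slot is running close to the ceiling of the bus, which is the point Kim makes next about the 9500 being a x16 Gen 4 design.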
However, that's not always the case. So for instance, our 9500 controller is a Gen 4 PCIe device, but what we did is double the PCIe width, so it's a x16. Even though it's a Gen 4, it's a x16, so we're getting really good bandwidth out of it. As I said before, we're getting 27.8, almost 28 gigabytes a second of bandwidth out of that by doubling the PCIe bus. >> Dave: But they work together, it all works together? >> It all works together. You can put our Gen 4 and a Gen 5 together all day long and they work beautifully. Yeah, we do work to validate that. >> We're almost out of time, but I want to ask you a more nuts-and-bolts question about storage. We've heard for years that the areal density of the hard disk has been reached and there's really no way to go further, no way to make the disk any denser. What does the future of the hard disk look like as a storage medium? >> Kim: Multi-actuator, actually. We're seeing a lot of multi-actuator. I was surprised to see it come across my desk, because our 9500 actually does support multi-actuator. And it was really neat, after I've been working with hard drives for 22 years. I remember when they could do 30 megabytes a second, and that was amazing. That was like, wow, 30 megabytes a second. Then about 15 years ago they hit around 200 to 250 megabytes a second, and they stayed there. They haven't gone anywhere. What they have done is increase the density so that you can have more storage. So you can easily go out and buy a 15 to 30 terabyte drive, but you're not going to get any more performance. So what they've done is add multiple actuators. Each one of these can do its own streaming, and each one of these can actually do its own seeking. So you can get two and four, and I've even seen talk about eight actuators per disk. I think that's still theory, but they could implement those. So that's one of the things we're seeing. >> Paul: Old technology somehow finds a way to remain current. >> It does. >> It does, even in the face of new alternatives. Kim Leyenaar, Storage Performance Architect at Broadcom, thanks so much for being here with us today. >> Thank you so much for having me. >> This is Paul Gillin with Dave Nicholson here at SuperComputing 22. We'll be right back. (Outro music)
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
(upbeat music) (logo swooshing) >> Good morning and welcome back to Dallas, ladies and gentlemen. We are here with theCUBE, live from Supercomputing 2022. David, my cohost, how are you doing? Exciting, day two, feeling good? >> Very exciting. Ready to start off the day. >> Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >> Thank you for having us. >> Thank you for having us. >> I'm excited that you're starting off the day, because we've been hearing a lot of rumors about Ethernet as the fabric for HPC, but we really haven't done a deep dive yet during the show. You all seem all in on Ethernet. Tell us about that. Armando, why don't you start? >> Yeah, when you look at Ethernet, customers are asking for flexibility and choice. So when you look at HPC, InfiniBand's always been around, right? But where Ethernet's coming in is really our commercial and enterprise customers. Not everybody wants to be in the top 500; what they want to do is improve their job time and improve their latency over the network. And when you look at Ethernet, the sweet spot is between 8, 12, 16, 32 nodes. That's a perfect fit for Ethernet in that space and those types of jobs. >> I love that. Pete, you want to elaborate? >> Yeah, sure. I think one of the biggest things you find with Ethernet for HPC is that, if you look at where the different technologies have gone over time, you've had old technologies like ATM, SONET, FDDI, and pretty much everything has now converged toward Ethernet. There are still some technologies such as InfiniBand and Omni-Path out there, but basically they're single source at this point. So what you see is that there is a huge ecosystem behind Ethernet. And you also see that, because Ethernet is used in the rest of the enterprise and in the cloud data centers, it is very easy to integrate HPC-based systems into those environments. So as you move HPC out of academia into enterprise, into cloud service providers, it's much easier to integrate it with the same technology you're already using in those data centers, in those networks. >> So what's the state of the art for Ethernet right now? What's the leading edge, what's shipping now and what's in the near future? You're with Broadcom, you guys designed this stuff. >> Pete: Yeah. >> Savannah: Right. >> Yeah, so leading edge right now, I've got a couple things-- >> Savannah: We love a good stage prop here on theCUBE. >> Yeah, so this is Tomahawk 4. This is what is in production; it's shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 terabits per second. >> David: Okay. >> Which matches any other technology out there. If you look at, say, InfiniBand, the highest they have right now that's just starting to get into production is 25.6 T. So state of the art right now is what we introduced, what we announced in August. This is Tomahawk 5, so this is 51.2 terabits per second. So double the bandwidth of any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency; it actually winds up being a factor of six in efficiency. >> Savannah: Wow. >> 'Cause if you want, I can go into that, but... >> Why not?
>> Well, what I want to know is, please tell me that in your labs you have a poster on the wall that says T5, with some Terminator kind of character. (all laugh) 'Cause that would be cool. If it's not true, just don't say anything. I'll just... >> Pete: This can actually shift into a Terminator. >> Well, so this is from a switching perspective. >> Yeah. >> When we talk about the end nodes, when we talk about creating a fabric, what's the latest in terms of the NICs that are going in there? What speed are we talking about today? >> So as far as SerDes speeds, it tends to be 50 gigabits per second. >> David: Okay. >> Moving to a hundred gig PAM-4. >> David: Okay. >> And we do see a lot of NICs at the 200 gig Ethernet port speed. So that would be four lanes of 50 gig. But we do see that advancing to 400 gig fairly soon, and 800 gig in the future. But state of the art right now for the end node tends to be 200 gig E based on 50 gig PAM-4. >> Wow. >> Yeah, that's crazy. >> Yeah, that is great. My mind is actively blown. I want to circle back to something that you brought up a second ago, which I think is really astute. You talked about HPC moving from academia into enterprise, and you're both seeing this happen. Where do you think we are on the adoption curve and sort of in that cycle? Armando, do you want to go? >> Yeah, well, if you look at the market research, they're actually telling you it's 50/50 now. So Ethernet is at the level of 50%, InfiniBand's at 50%, right? >> Savannah: Interesting. >> Yeah, and so what's interesting to us is customers are coming to us and saying, hey, we want to see flexibility and choice; let's look at Ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom: we have their chips in our lab, we have their switches in our lab. And really what we're trying to do is make it easy and simple to configure the network for, essentially, MPI. And so the goal here with our validated designs is really to simplify this. So if you have a customer that says, hey, I've been on InfiniBand but now I want to go Ethernet, there are going to be some learning curves there. And what we want to do is really simplify that, so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >> Yeah, Pete, talk about that partnership. What does that look like? I mean, are you working with Dell before the T6 comes out? Or do you just say what would be cool is we'll put this in the T6? >> No, we've had a very long partnership, both on the hardware and the software side. Dell's been an early adopter of our silicon. We've worked very closely on SAI and SONiC on the operating system, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we actually have three different product lines within the switching group within Broadcom, we've gotten very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way, when it comes to market, Dell can take it and deliver the exact features they have in the current generation to their customers, to have that continuity. And they also give us feedback on the next-gen features they'd like to see, again in both the hardware and the software. >> So I'm fascinated by... I always like to know, like, what... yeah, exactly.
Look, you start talking about the largest, most powerful supercomputers that exist today, and you start looking at the specs, and there might be 2 million CPU cores, an exaflop of performance. What are the outward limits of T5 in switches, building out a fabric? What does that look like? What are the increments in terms of how many... and I know it's a 'depends' answer, but how many nodes can you support in a scale-out cluster before you need another switch? What does that increment of scale look like today? >> Yeah, so this is 51.2 terabits per second. The most common implementation we see based on this would be with 400 gig Ethernet ports. >> David: Okay. >> So that would be 128 400 gig E ports connected to one chip. Now, if you went to 200 gig, which is kind of the state of the art for the NICs, you can have double that. So in a single hop, you can have 256 end nodes connected through one switch. >> Okay, so this T5, that thing right there, (all laughing) inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what does the form factor look like for where that T5 sits? Is there just one in a chassis, or... what does that look like? >> It tends to be pizza boxes these days. What you've seen overall is that the industry has moved away from chassis for these high-end systems, more toward pizza boxes. And you can have composable systems where, in the past, you would have line cards and the fabric cards that the line cards plug into or interface to. These days what tends to happen is you have a pizza box, and if you want to build up something like a virtual chassis, you use one of those pizza boxes as the fabric card and one of them as the line card. >> David: Okay. >> The most common form factor for this, I'd say for North America, would be a 2RU with 64 OSFP ports. And often each of those OSFPs, which is an 800 gig E or 800 gig port, we've broken out into two 400 gig ports. So yeah, in 2RU, and this is all air cooled, you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy 4RU just so they have the faceplate density, so they can plug in 128 of, say, QSFP112. But it really depends on which optics, and whether you want to have DAC connectivity combined with optics. Those are the two most common form factors. >> And Armando, Ethernet isn't necessarily Ethernet, in the sense that many protocols can be run over it. >> Right. >> I think I have a projector at home that's actually using Ethernet physical connections. So what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center Ethernet, or is this RDMA over converged Ethernet? What are we talking about? >> Yeah, so RDMA, right? When you look at running essentially HPC workloads, you have the MPI protocol, the message passing interface, right? And what you need to do is make sure that that MPI message passing interface runs efficiently on Ethernet. And this is why we want to test and validate all these different things, to make sure that that protocol runs really, really fast on Ethernet.
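As a concrete picture of what 'making sure MPI runs efficiently on Ethernet' can mean, here is a minimal ping-pong latency sketch using standard MPI calls. The message size, iteration count, and the idea of running the same binary over different transports are illustrative assumptions, not the actual Dell or Broadcom validation suite.

```c
#include <mpi.h>
#include <stdio.h>
#include <string.h>

/* Minimal MPI ping-pong between rank 0 and rank 1: reports average
 * one-way latency for small messages. Run with mpirun -np 2, placing
 * the two ranks on different nodes to exercise the fabric. */
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) { MPI_Finalize(); return 0; }

    const int iters = 10000;
    char buf[8];                       /* illustrative small-message payload */
    memset(buf, 0, sizeof(buf));

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)
        /* Each iteration is a round trip, so halve it for one-way latency. */
        printf("avg one-way latency: %.2f us\n", (t1 - t0) / iters / 2.0 * 1e6);

    MPI_Finalize();
    return 0;
}
```

Built with mpicc, the same binary can typically be pointed at plain TCP or at a RoCE-enabled path through the MPI runtime's transport settings (the exact knobs depend on the MPI implementation), which gives the kind of apples-to-apples comparison Armando is describing.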
If you look at MPI, officially it was built to, hey, it was designed to run on InfiniBand, but now, with the great work Broadcom is doing, we can make that work on Ethernet and get the same performance, so that's huge for customers. >> Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a looking-into-the-crystal-ball type, because you essentially get to see the future, knowing what people are trying to achieve moving forward. Talk to us about the future of Ethernet in HPC in terms of AI and ML. Where do you think we're going to be next year, or 10 years from now? >> You want to go first, or you want me to go first? >> I can start, yeah. >> Savannah: Pete feels ready. >> So what I see with Ethernet, starting off on the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. >> That's impressive. >> Pete: Yeah. >> Nicely done, casual humble brag there. That was great, I love that. I'm here for you. >> I mean, I think that's one of the benefits of Ethernet: the ecosystem, the trajectory, the roadmap we've had. You don't see that in any other networking technology. >> David: Moore who? (all laughing) >> So I see that trajectory continuing as far as the switches doubling in bandwidth. I think the protocols are evolving too; especially as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably still focus on RDMA for the supercomputing and AI/ML workloads. But we do see that as you have a mix of applications running on these end nodes, maybe interfacing to the CPUs for some processing, you might use a different mix of protocols. So I'd say it's going to be a doubling of bandwidth over time and an evolution of the protocols. I expect that RoCE is probably going to evolve over time depending on the AI/ML and HPC workloads. I think there's also a big change coming as far as the physical connectivity within the data center. One thing we've been focusing on is co-packaged optics. So right now, this chip is, all the balls on the back here, those are electrical connections. >> How many are there, by the way? 9,000 plus on the back of that-- >> 9,352. >> I love how specific it is. It's brilliant. >> Yeah, so right now all the SerDes, all the signals, are coming out electrically, but we've actually shown, we actually have a version of Tomahawk 4 at 25.6 T that has co-packaged optics. So instead of having electrical output, you actually have optics directly out of the package. And we'll have a version of Tomahawk 5. >> Nice. >> Where it's actually an even smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. >> Wow. Cool. >> So I see the bandwidth, the radixes increasing, the protocols, different physical connectivity. So I think there are a lot of things throughout, and the protocol stack is also evolving. A lot of excitement, a lot of new technology coming to bear. >> Okay, you just threw a carrot down the rabbit hole. I'm only going to chase this one, okay? >> Peter: All right. >> So I think of individual discrete physical connections to the back of those balls. >> Yeah. >> So if there's 9,000, fill in the blank, that's how many connections there are.
How do you do that many optical connections? What's the mapping there? What does that look like? >> So what we've announced for Tomahawk 5 is that it would have FR4 optics coming out. So you'd actually have 512 fiber pairs coming out. Basically, on all four sides you'd have these fiber ribbons that come in and connect; there are actually fibers coming out of the sides there. In this case, I think we would actually have 512 channels, and it would wind up being on 128 actual fiber pairs because-- >> It's miraculous, essentially. >> Savannah: I know. >> Yeah. >> So a lot of people are going to be looking at this and thinking in terms of InfiniBand versus Ethernet. I think you've highlighted some of the benefits of specifically running Ethernet moving forward, as HPC, which sort of trails slightly behind supercomputing as we define it, becomes more pervasive along with AI/ML. What are some of the other things that people might not immediately think about when they think about the advantages of running Ethernet in that environment? Is it about connecting the HPC part of their business into the rest of it? What are the advantages? >> Yeah, I mean, that's a big thing. One of the biggest things Ethernet has, again, is that the data centers, the networks within enterprises, within clouds right now, are run on Ethernet. So now, if you want to add services for your customers, the easiest thing for you to do is drop in clusters that are connected with the same networking technology. If you look at what's happening with some of the other proprietary technologies, in some cases they'll have two different types of networking technologies before they interface to Ethernet. So now you've got to train your technicians, train your sysadmins on two different network technologies. You need to have all the debug technology, all the interconnect for that. Here, the easiest thing is you can use Ethernet; it's going to give you the same performance, and actually, in some cases, we've seen better performance than we've seen with Omni-Path, better than InfiniBand. >> That's awesome. Armando, we didn't get to you, so I want to make sure we get your future hot take. Where do you see the future of Ethernet here in HPC? >> Well, Pete hit on a big thing, which is bandwidth, right? So when you go and train a model in AI, you need to have a lot of data in order to train that model, right? Essentially, you build a model and you choose whatever neural network you want to utilize, but if you don't have a good data set to train that model on, you can't train the model. So if you have bandwidth, you want big pipes, because you have to move that data set from the storage to the CPU. And if you're going to do it on CPU only, fine, but if you do it on accelerators, well, guess what? You need a big pipe in order to get all that data through. And here's the deal: the bigger the pipe you have, the more data, the faster you can train that model. And the faster you can train that model, guess what? The faster you get to some new insight. Maybe it's a new competitive advantage, maybe it's some new way you design a product, but that's the benefit of speed: you want faster, faster, faster. >> It's all about making it faster and easier-- for the users. >> Armando: It is. >> I love that.
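To put rough, purely illustrative numbers on that point (these are not figures from the interview): a hypothetical 10 TB training data set is 80 terabits, so a single 100 Gb/s link needs about 800 seconds just to move it, while a 400 Gb/s pipe cuts the same transfer to roughly 200 seconds before any protocol overhead. When that movement sits between storage and the accelerators on every pass, the link speed shows up directly in time-to-train.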
Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas, steaks, there's a lot going on with that. >> Making me hungry. >> I know, exactly. I'm sitting up here thinking, man, I did not have a big enough breakfast. How did you come up with the name Tomahawk? >> So Tomahawk, I think it just came from a list. We have a Trident product line. >> Savannah: Ah, yes. >> Which is a missile product line. And Tomahawk is kind of the bigger and badder missile, so. >> Savannah: Love this. Yeah, I mean-- >> So you let your engineers... you get to name it? >> Had to ask. >> It's collaborative. >> Okay. >> We want to make sure everyone's in sync with it. >> So it's not the Aquaman trident. >> Right. >> It's the steak Tomahawk. I think we're good now. >> Now that we've cleared that-- >> Now we've cleared that up. >> Armando, Pete, it was really nice to have you both. Thank you for teaching us about the future of Ethernet and HPC. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to theCUBE, live from Dallas. We're here talking all things HPC and supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us. (soft music)
Peter Del Vecchio, Broadcom and Armando Acosta, Dell Technologies | SuperComputing 22
>>You can put this in a conference. >>Good morning and welcome back to Dallas. Ladies and gentlemen, we are here with the cube Live from, from Supercomputing 2022. David, my cohost, how you doing? Exciting. Day two. Feeling good. >>Very exciting. Ready to start off the >>Day. Very excited. We have two fascinating guests joining us to kick us off. Please welcome Pete and Armando. Gentlemen, thank you for being here with us. >>Having us, >>For having us. I'm excited that you're starting off the day because we've been hearing a lot of rumors about ethernet as the fabric for hpc, but we really haven't done a deep dive yet during the show. Y'all seem all in on ethernet. Tell us about that. Armando, why don't you start? >>Yeah. I mean, when you look at ethernet, customers are asking for flexibility and choice. So when you look at HPC and you know, infinite band's always been around, right? But when you look at where Ethernet's coming in, it's really our commercial and their enterprise customers. And not everybody wants to be in the top 500. What they want to do is improve their job time and improve their latency over the network. And when you look at ethernet, you kinda look at the sweet spot between 8, 12, 16, 32 nodes. That's a perfect fit for ethernet and that space and, and those types of jobs. >>I love that. Pete, you wanna elaborate? Yeah, yeah, >>Yeah, sure. I mean, I think, you know, one of the biggest things you find with internet for HPC is that, you know, if you look at where the different technologies have gone over time, you know, you've had old technologies like, you know, atm, Sonic, fitty, you know, and pretty much everything is now kind of converged toward ethernet. I mean, there's still some technologies such as, you know, InfiniBand, omnipath that are out there. Yeah. But basically there's single source at this point. So, you know, what you see is that there is a huge ecosystem behind ethernet. And you see that also, the fact that ethernet is used in the rest of the enterprise is using the cloud data centers that is very easy to integrate HPC based systems into those systems. So as you move HPC out of academia, you know, into, you know, into enterprise, into cloud service providers is much easier to integrate it with the same technology you're already using in those data centers, in those networks. >>So, so what's this, what is, what's the state of the art for ethernet right now? What, you know, what's, what's the leading edge, what's shipping now and what and what's in the near future? You, you were with Broadcom, you guys design this stuff. >>Yeah, yeah. Right. Yeah. So leading edge right now, I got a couple, you know, Wes stage >>Trough here on the cube. Yeah. >>So this is Tomahawk four. So this is what is in production is shipping in large data centers worldwide. We started sampling this in 2019, started going into data centers in 2020. And this is 25.6 tets per second. Okay. Which matches any other technology out there. Like if you look at say, infin band, highest they have right now that's just starting to get into production is 25 point sixt. So state of the art right now is what we introduced. We announced this in August. This is Tomahawk five. So this is 51.2 terabytes per second. So double the bandwidth have, you know, any other technology that's out there. And the important thing about networking technology is when you double the bandwidth, you don't just double the efficiency, it's actually winds up being a factor of six efficiency. Wow. 
Cause if you want, I can go into that, but why >>Not? Well, I, what I wanna know, please tell me that in your labs you have a poster on the wall that says T five with, with some like Terminator kind of character. Cause that would be cool if it's not true. Don't just don't say anything. I just want, I can actually shift visual >>It into a terminator. So. >>Well, but so what, what are the, what are the, so this is, this is from a switching perspective. Yeah. When we talk about the end nodes, when we talk about creating a fabric, what, what's, what's the latest in terms of, well, the kns that are, that are going in there, what's, what speed are we talking about today? >>So as far as 30 speeds, it tends to be 50 gigabits per second. Okay. Moving to a hundred gig pan four. Okay. And we do see a lot of Knicks in the 200 gig ethernet port speed. So that would be, you know, four lanes, 50 gig. But we do see that advancing to 400 gig fairly soon. 800 gig in the future. But say state of the art right now, we're seeing for the end nodes tends to be 200 gig E based on 50 gig pan four. Wow. >>Yeah. That's crazy. Yeah, >>That is, that is great. My mind is act actively blown. I wanna circle back to something that you brought up a second ago, which I think is really astute. When you talked about HPC moving from academia into enterprise, you're both seeing this happen. Where do you think we are on the adoption curve and sort of in that cycle? Armand, do you wanna go? >>Yeah, yeah. Well, if you look at the market research, they're actually telling it's 50 50 now. So ethernet is at the level of 50%. InfiniBand is at 50%. Right. Interesting. Yeah. And so what's interesting to us, customers are coming to us and say, Hey, we want to see, you know, flexibility and choice and hey, let's look at ethernet and let's look at InfiniBand. But what is interesting about this is that we're working with Broadcom, we have their chips in our lab, we have our switches in our lab. And really what we're trying to do is make it easy to simple and configure the network for essentially mpi. And so the goal here with our validated designs is really to simplify this. So if you have a customer that, Hey, I've been in fbe, but now I want to go ethernet, you know, there's gonna be some learning curves there. And so what we wanna do is really simplify that so that we can make it easy to install, get the cluster up and running, and they can actually get some value out of the cluster. >>Yeah. Peter, what, talk about that partnership. What, what, what does that look like? Is it, is it, I mean, are you, you working with Dell before the, you know, before the T six comes out? Or you just say, you know, what would be cool, what would be cool is we'll put this in the T six? >>No, we've had a very long partnership both on the hardware and the software side. You know, Dell has been an early adopter of our silicon. We've worked very closely on SI and Sonic on the operating system, you know, and they provide very valuable feedback for us on our roadmap. So before we put out a new chip, and we have actually three different product lines within the switching group within Broadcom, we've then gotten, you know, very valuable feedback on the hardware and on the APIs, on the operating system that goes on top of those chips. So that way when it comes to market, you know, Dell can take it and, you know, deliver the exact features that they have in the current generation to their customers to have that continuity. 
And also they give us feedback on the next gen features they'd like to see again in both the hardware and the software. >>So, so I, I'm, I'm just, I'm fascinated by, I I, I always like to know kind like what Yeah, exactly. Exactly right. Look, you, you start talking about the largest super supercomputers, most powerful supercomputers that exist today, and you start looking at the specs and there might be 2 million CPUs, 2 million CPU cores, yeah. Ex alop of, of, of, of performance. What are the, what are the outward limits of T five in switches, building out a fabric, what does that look like? What are the, what are the increments in terms of how many, and I know it, I know it's a depends answer, but, but, but how many nodes can you support in a, in a, in a scale out cluster before you need another switch? What does that increment of scale look like today? >>Yeah, so I think, so this is 51.2 terras per second. What we see the most common implementation based on this would be with 400 gig ethernet ports. Okay. So that would be 128, you know, 400 giggi ports connected to, to one chip. Okay. Now, if you went to 200 gig, which is kind of the state of the art for the Nicks, you can have double that. Okay. So, you know, in a single hop you can have 256 end nodes connected through one switch. >>So, okay, so this T five, that thing right there inside a sheet metal box, obviously you've got a bunch of ports coming out of that. So what is, what does that, what's the form factor look like for that, for where that T five sits? Is there just one in a chassis or you have, what does that look >>Like? It tends to be pizza boxes these days. Okay. What you've seen overall is that the industry's moved away from chassis for these high end systems more towards pizza, pizza boxes. And you can have composable systems where, you know, in the past you would have line cards, either the fabric cards that the line cards are plugged into or interface to these days, what tends to happen is you'd have a pizza box, and if you wanted to build up like a virtual chassis, what you would do is use one of those pizza boxes as the fabric card, one of them as the, the line card. >>Okay. >>So what we see, the most common form factor for this is they tend to be two, I'd say for North America, most common would be a two R U with 64 OSF P ports. And often each of those OSF p, which is an 800 gig e or 800 gig port, we've broken out into two 400 gig quarts. Okay. So yeah, in two r u you've got, and this is all air cooled, you know, in two re you've got 51.2 T. We do see some cases where customers would like to have different optics, and they'll actually deploy a four U just so that way they have the face place density, so they can plug in 128, say qsf P one 12. But yeah, it really depends on which optics, if you wanna have DAK connectivity combined with, with optics. But those are the two most common form factors. >>And, and Armando ethernet isn't, ethernet isn't necessarily ethernet in the sense that many protocols can be run over it. Right. I think I have a projector at home that's actually using ethernet physical connections. But what, so what are we talking about here in terms of the actual protocol that's running over this? Is this exactly the same as what you think of as data center ethernet, or, or is this, you know, RDMA over converged ethernet? What, what are >>We talking about? Yeah, so our rdma, right? 
So when you look at, you know, running, you know, essentially HPC workloads, you have the NPI protocol, so message passing interface, right? And so what you need to do is you may need to make sure that that NPI message passing interface runs efficiently on ethernet. And so this is why we want to test and validate all these different things to make sure that that protocol runs really, really fast on ethernet, if you look at NPI is officially, you know, built to, Hey, it was designed to run on InfiniBand, but now what you see with Broadcom and the great work they're doing now, we can make that work on ethernet and get, you know, it's same performance. So that's huge for customers. >>Both of you get to see a lot of different types of customers. I kind of feel like you're a little bit of a, a looking into the crystal ball type because you essentially get to see the future knowing what people are trying to achieve moving forward. Talk to us about the future of ethernet in hpc in terms of AI and ml. Where, where do you think we're gonna be next year or 10 years from now? >>You wanna go first or you want me to go first? I can start. >>Yeah. Pete feels ready. >>So I mean, what I see, I mean, ethernet, I mean, is what we've seen is that as far as on the starting off of the switch side, is that we've consistently doubled the bandwidth every 18 to 24 months. That's >>Impressive. >>Yeah. So nicely >>Done, casual, humble brag there. That was great. That was great. I love that. >>I'm here for you. I mean, I think that's one of the benefits of, of Ethan is like, is the ecosystem, is the trajectory, the roadmap we've had, I mean, you don't see that in any other networking technology >>More who, >>So, you know, I see that, you know, that trajectory is gonna continue as far as the switches, you know, doubling in bandwidth. I think that, you know, they're evolving protocols. You know, especially again, as you're moving away from academia into the enterprise, into cloud data centers, you need to have a combination of protocols. So you'll probably focus still on rdma, you know, for the supercomputing, the a AIML workloads. But we do see that, you know, as you have, you know, a mix of the applications running on these end nodes, maybe they're interfacing to the, the CPUs for some processing, you might use a different mix of protocols. So I'd say it's gonna be doubling a bandwidth over time evolution of the protocols. I mean, I expect that Rocky is probably gonna evolve over time depending on the a AIML and the HPC workloads. I think also there's a big change coming as far as the physical connectivity within the data center. Like one thing we've been focusing on is co-pack optics. So, you know, right now this chip is all, all the balls in the back here, there's electrical connections. How >>Many are there, by the way? 9,000 plus on the back of that >>352. >>I love how specific it is. It's brilliant. >>Yeah. So we get, so right now, you know, all the thirties, all the signals are coming out electrically based, but we've actually shown, we have this, actually, we have a version of Hawk four at 25 point sixt that has co-pack optics. So instead of having electrical output, you actually have optics directly out of the package. And if you look at, we'll have a version of Tomahawk five Nice. Where it's actually even a smaller form factor than this, where instead of having the electrical output from the bottom, you actually have fibers that plug directly into the sides. Wow. Cool. 
So I see, you know, there's, you know, the bandwidth, there's radis increasing protocols, different physical connectivity. So I think there's, you know, a lot of things throughout, and the protocol stack's also evolving. So, you know, a lot of excitement, a lot of new technology coming to bear. >>Okay. You just threw a carrot down the rabbit hole. I'm only gonna chase this one. Okay. >>All right. >>So I think of, I think of individual discreet physical connections to the back of those balls. Yeah. So if there's 9,000, fill in the blank, that's how many connections there are. How do you do that in many optical connections? What's, what's, what's the mapping there? What does that, what does that look like? >>So what we've announced for TAMA five is it would have fr four optics coming out. So you'd actually have, you know, 512 fiber pairs coming out. So you'd have, you know, basically on all four sides, you'd have these fiber ribbons that come in and connect. There's actually fibers coming out of the, the sides there. We wind up having, actually, I think in this case, we would actually have 512 channels and it would wind up being on 128 actual fiber pairs because >>It's, it's miraculous, essentially. It's, I know. Yeah, yeah, yeah, yeah. Yeah. So, so, you know, a lot of people are gonna be looking at this and thinking in terms of InfiniBand versus versus ethernet. I think you've highlighted some of the benefits of specifically running ethernet moving forward as, as hpc, you know, which is sort of just trails slightly behind supercomputing as we define it, becomes more pervasive AI ml. What, what are some of the other things that maybe people might not immediately think about when they think about the advantages of running ethernet in that environment? Is it, is it connecting, is it about connecting the HPC part of their business into the rest of it? What, or what, what are the advantages? >>Yeah, I mean, that's a big thing. I think, and one of the biggest things that ethernet has again, is that, you know, the data centers, you know, the networks within enterprises within, you know, clouds right now are run on ethernet. So now if you want to add services for your customers, the easiest thing for you to do is, you know, the drop in clusters that are connected with the same networking technology, you know, so I think what, you know, one of the biggest things there is that if you look at what's happening with some of the other proprietary technologies, I mean, in some cases they'll have two different types of networking technologies before they interface to ethernet. So now you've got to train your technicians, you train your, your assist admins on two different network technologies. You need to have all the, the debug technology, all the interconnect for that. So here, the easiest thing is you can use ethernet, it's gonna give you the same performance. And actually in some cases we seen better performance than we've seen with omnipath than, you know, better than in InfiniBand. >>That's awesome. Armando, we didn't get to you, so I wanna make sure we get your future hot take. Where do you see the future of ethernet here in hpc? >>Well, Pete hit on a big thing is bandwidth, right? So when you look at train a model, okay, so when you go and train a model in ai, you need to have a lot of data in order to train that model, right? 
So what you do is essentially you build a model, you choose whatever neural network you wanna utilize, but if you don't have a good data set that's trained over that model, you can't essentially train the model. So if you have bandwidth, you want big pipes because you have to move that data set from the storage to the cpu. And essentially, if you're gonna do it maybe on CPU only, but if you do it on accelerators, well guess what? You need a big pipe in order to get all that data through. And here's the deal. The bigger the pipe you have, the more data, the faster you can train that model. So the faster you can train that model, guess what? The faster you get to some new insight, maybe it's a new competitive advantage. Maybe it's some new way you design a product, but that's a benefit of speed you want faster, faster, faster. >>It's all about making it faster and easier. It is for, for the users. I love that. Last question for you, Pete, just because you've said Tomahawk seven times, and I'm thinking we're in Texas Stakes, there's a lot going on with with that making >>Me hungry. >>I know exactly. I'm sitting up here thinking, man, I did not have a big enough breakfast. How do you come up with the name Tomahawk? >>So Tomahawk, I think you just came, came from a list. So we had, we have a tri end product line. Ah, a missile product line. And Tomahawk is being kinda like, you know, the bigger and batter missile, so, oh, okay. >>Love this. Yeah, I, well, I >>Mean, so you let your engineers, you get to name it >>Had to ask. It's >>Collaborative. Oh good. I wanna make sure everyone's in sync with it. >>So just so we, it's not the Aquaman tried. Right, >>Right. >>The steak Tomahawk. I >>Think we're, we're good now. Now that we've cleared that up. Now we've cleared >>That up. >>Armando P, it was really nice to have both you. Thank you for teaching us about the future of ethernet N hpc. David Nicholson, always a pleasure to share the stage with you. And thank you all for tuning in to the Cube Live from Dallas. We're here talking all things HPC and Supercomputing all day long. We hope you'll continue to tune in. My name's Savannah Peterson, thanks for joining us.
Travis Vigil, Dell Technologies | SuperComputing 22
>> How do y'all, and welcome to Dallas, where we're proud to be live from Supercomputing 2022. My name is Savannah Peterson, joined here by my cohost David on theCUBE, and our first guest today is a very exciting visionary. He's a leader at Dell. Please welcome Travis Vigil. Travis, thank you so much for being here. >> Thank you so much for having me. >> How you feeling? >> Okay, I'm feeling like an exciting visionary. You >> Are. That's the idea, that's why we teed you up for that. Great. So tell us, Dell had some huge announcements Yes. Last night. And you get to break it to theCUBE audience. Give us the rundown. >> Yeah. It's a really big show for Dell. We announced a brand new suite of GPU-enabled servers, eight-way, four-way, direct liquid cooling. Really the first time in the history of the portfolio that we've had this much coverage across Intel, AMD, NVIDIA, getting great reviews from the show floor. I had the chance earlier to be in the whisper suite to actually look at the gear. Customers are buzzing over it. That's one thing I love about this show, is the gear is here. >> Yes, it is. It is a haven for hardware nerds. Yes. Like, well, I'll include you in this group, it sounds like, on >> That. Great. Yes. Oh >> Yeah, absolutely. And I know David is as well, so up >> The street. Oh, big, big time. Big time hardware nerd. And just to be clear, for the kids that will be watching these videos Yes. We're not talking about Alienware gaming systems. >> No. Right. >> So they're >> Yay big, yay tall, 200 pounds. >> Give us a price point on one of these things. Retail, suggested retail price. >> Oh, I'm >> More than 10 grand. >> Oh, yeah. Yeah. Try another order of magnitude. Yeah. >> Yeah. So this is the most exciting stuff from an infrastructure perspective. Absolutely. You can imagine. Absolutely. But what is it driving? So talk to us about where you see the world of high performance computing with your customers. What are they doing with this? What do they expect to do with this stuff in the future? >> Yeah. You know, it's a real interesting time, and I know that the provenance of this show is HPC focused, but what we're seeing and what we're hearing from our customers is that AI workloads and traditional HPC workloads are becoming almost indistinguishable. You need the right mix of compute, you need GPU acceleration, and you need the ability to take the vast quantities of data that are being generated and actually gather insight from them. And so if you look at what customers are trying to do with, you know, enterprise-level AI, it's really, how do I classify and categorize my data, but more importantly, how do I make sense of it? How do I derive insights from it? Yeah. And so at the end of the day, you look at what customers are trying to do. It's take all the various streams of data, whether it be structured data, whether it be unstructured data, bring it together and make decisions, make business decisions. And it's a really exciting time, because customers are saying, you know, the same things that research scientists and universities have been trying to do forever with HPC, I want to do on industrial scale, but I want to do it in a way that's more open, more flexible, you know, I call it AI for the rest of us. And customers are here and they want those systems, but they want the ecosystem to support ease of deployment, ease of use, ease of scale.
And that's what we're providing in addition to the systems. We provide, you know, Dell's one of the only providers in the industry that can provide not only the compute, but the networking and the storage, and more importantly, the solutions that bring it all together. Give you one example. We have what we call a validated design for AI. And in that validated design, we put together all of the pieces, provided the recipe for customers, so that they can take what used to be two months to build and run a model. We provide that capability 18 times faster. So we're talking about hours versus months. So >> That's a lot. 18 times faster. I just wanna emphasize that, 18 times faster. And we're talking about orders of magnitude and whatnot up here, that makes a huge difference in what people are able to do. Absolutely. >> Absolutely. And so, I mean, you've been doing this for a while. We've been talking about the deluge of data forever, but it's gotten to the point, and it's, you know, the disparity of the data, the fact that much of it remains siloed. Customers are demanding that we provide solutions that allow them to bring that data together, process it, make decisions with it. So >> Where are we in the adoption cycle? Early? Because we've been talking about AI and ML for a while. Yeah. You mentioned, you know, kind of the leading edge of academia and supercomputing and HPC and what that conjures up in people's minds. Do you have any numbers or any thoughts about where we are in this cycle? How many people are actually doing this in production versus experimenting at this point? Yeah, >> I think it's a reason there's so much interest in what we're doing, and so much demand for not only the systems, but the solutions that bring the systems together, the ecosystem that brings the systems together. We did a study recently and asked customers where they felt they were at in terms of deploying best practices for AI, you know, mass deployment of AI. Only 31% of customers self-reported that they felt they were deploying best practices for their AI deployments. So almost 70% self-reporting, saying we're not doing it right yet. Yeah. And another good stat is, three quarters of customers have fewer than five AI applications deployed at scale in their IT environments today. So, you know, if you think about it as a traditional S curve, I think we're at the first inflection point, and customers are asking, Can I do it end to end? Can I do it with the best of breed in terms of systems? But Dell, can you also use an ecosystem that I know and understand? And I think, you know, another great example of something that Dell is doing is we have focused on ethernet as connectivity for many of the solutions that we put together. Again, you know, the provenance of HPC is InfiniBand, and InfiniBand is a great connectivity option, but there's a lot of care and feeding that goes along with InfiniBand. And the fact that you can do it both with InfiniBand for those, you know, government-scale or university-scale clusters, and more of our enterprise customers can do it with ethernet on premises, it's a great option. >> Yeah. You've got so many things going on. I got to actually check out the million dollar hardware that you have just casually Yeah.
Sitting in your booth. I feel like an event like this is probably one of the only times you can let something like that out. Yeah, yeah. And people would actually know what it is you're working >> With. We actually unveiled it. There was a sheet on it and we actually unveiled it last night. >> Did you get a lot of oohs and ahs? >> You know, you said this was a show for hardware nerds. It's been a long time since I've been at a show where people cheer and ooh and ah when you take the sheet off the hardware, and Yes, yes, >> Yes, it has, and reveal, you had your >> Moment. Exactly, exactly. Our three new systems, >> Speaking of oohs and ahs, I love that. And I love that everyone was excited as we all are about it. It's nice to be home with our nerds. Speaking of applications and excitement, you get to see a lot of different customers across verticals. Is there a sector or space that has you personally most excited? >> Oh, personally most excited, you know, for credibility at home, when the sector is media and entertainment and the movie is one that your children have actually seen, that one gives me credibility. Exciting. You can talk to your friends about it at dinner parties and things like that. I'm like, >> Stuff >> Curing cancer. Marvel movie, at-home cred goes to the Marvel movie. Yeah. But, you know, what really excites me is the variety of applications that AI is being used in. Healthcare, you know, on a serious note, healthcare, genomics, a huge and growing application area that excites me. You know, doing good in the world is something that's very important to Dell. You know, sustainability is something that's very important to Dell. Yeah. So any application related to that is exciting to me. And then, you know, just pragmatically speaking, anything that helps our customers make better business decisions excites me. >> So we are just at the beginning of what I refer to as this rolling thunder of CPU, Yes, next-generation releases. We've seen it recently from AMD, and in the near future it'll be Intel joining the party Yeah. Going back and forth, back and forth, along with that Gen 5 PCIe at the motherboard level. Yep. It's very easy to look at it and say, Wow, previous gen, Wow, double, double, double. It >> Is, double >> It is. However, most of your customers, I would guess a fair number of them, might be not just N minus one, but N minus two looking at an upgrade. So for a lot of people, the upgrade season that's ahead of us is going to be not a doubling, but a four x or eight x in a lot of cases. Yeah. So the quantity of compute from these new systems is going to be a massive increase from where we've been in the recent past, like as in last Tuesday. So is there, you know, this is sort of a philosophical question. We talked a little earlier about this idea of the quantitative versus qualitative difference in computing horsepower. Do we feel like we're at a point where there's gonna be an inflection in terms of what AI can actually deliver? Yeah. Based on current technology just doing it more, better, faster, cheaper? Yeah. Or do we need this leap to quantum computing to get there? >> Yeah. Look, >> I think we're, and I was having some really interesting conversations with customers whose job it is to run very, very large, very, very complex clusters. And we're talking a little bit about quantum computing.
Interesting thing about quantum computing is, you know, I think we're a ways off still. And in order to make quantum computing work, you still need to have classical computing surrounding it. Right. Number one. Number two, with the advances that we're seeing generation on generation with this, what has moved from kind of a two to three year upgrade cycle, to something that, because of all of the technology that's being deployed into the industry, is almost more of a continuous upgrade cycle. I'm personally optimistic that we are on the cusp of a new level of infrastructure modernization. And it's not just the computing power, it's not just the increases in GPUs. Those things are important, but it's things like power consumption, right? One of the ways that customers can do better in terms of power consumption and sustainability is by modernizing infrastructure. To your point, a lot of people are running N minus one, N minus two. The stuff that's coming out now is much more energy efficient. And so I think there's a lot of vectors that we're seeing in the market, whether it be technology innovation, whether it be a drive for energy efficiency, whether it be the rise of AI and ML, whether it be all of the new silicon that's coming into the portfolio, where customers are gonna have a continuous reason to upgrade. I mean, that's my thought. What do you think? >> Yeah, no, I think the objective numbers that are gonna be rolling out Yeah. That are starting to roll out now and in the near future. That's why it's really an exciting time. Yeah. I think those numbers are gonna support your point. Yeah. Because people will look and they'll say, Wait a minute, it used to be a dollar, but now it's $2. That's more expensive. Yeah. But you're getting 10 times as much Yeah. For half of the amount of power, boom. And it's >> Done. Exactly. It's a >> TCO no-brainer. Oh yeah. It gets to the point where you look at this rack of amazing stuff that you have a personal relationship with and you say, I can't afford to keep you plugged in anymore. Yeah. >> And Right. >> The power is such a huge component of this. Yeah. It's huge, huge. >> I mean, it's always a huge issue, but our customers, especially in EMEA with what's going on over there, are saying, you know, I need to upgrade because I need to be more energy efficient. >> Yeah. >> Yeah. We were talking about 20 years from now, so you've been at Dell over 18 years. >> Yeah. It'll be 19 in May. >> Congratulations. Yeah. What a commitment. So 19 years from now, in your second Dell career, Yeah, what are we gonna be able to say then that perhaps we can't say now? >> Oh my gosh. Wow. 19 years from now. >> Yeah. I love this as an arbitrary number too. This is great. Yeah. >> 38-year Dell career. Yeah. >> That might be a record. Yeah. >> And if you'd like to share the winners of Super Bowls and World Series in advance, like the sports almanac from Back to the Future, so we can place our bets and play the >> Powerball, but any >> Point being, Yeah. I mean, what do you think AI is gonna deliver in the next decade? >> Yeah. Look, I mean, there are, you know, global issues that advances in computing power will help us solve.
And, you know, the models that are being built, the ability to generate a digital copy of the analog world and be able to run models and simulations on it, is amazing. Truly. Yeah. You know, I was looking at some, it's a very simple and pragmatic thing, but I think it's an example of what could be. We were with one of our technology providers and they were showing us a digital simulation, you know, a digital twin of a factory for a car manufacturer. And they were saying that, you know, it used to be you had to build the factory, you had to put the people in the factory, you had to run cars through the factory to figure out how you optimize and where everything's placed. Yeah. They don't have to do that anymore. No. Right. They can do it all via simulation, all via a digital copy of analog reality. And so, I mean, I think the possibilities are endless. And, you know, 19 years ago, I had no idea I'd be sitting here so excited about hardware, you know, here we are, baby. I think 19 years from now, hardware still matters. Yeah. You know, hardware still matters. I know software eats the world, but hardware still matters. Gotta run something. Yeah. And we'll be talking about, you know, that same type of example, but at a broader and more global scale. Well, I'm the knucklehead who >> Keeps waving his phone around going, There's one terabyte in here. Can you believe that, one terabyte? Cause when you've been around long enough, it's like >> Insane. You know, like, I've been to NASA, I live in Texas, I've been to NASA a couple times. They talk about how they sent people to the moon on way less, less than >> Far less than our pocket computers. Yeah. It's amazing. >> I am an optimist on where we're going, clearly. >> And we're clearly with an exciting visionary, like we said out of the gate. It's no surprise that people are using Dell's tech to realize their AI ecosystem dreams. Travis, thank you so much for being here with us. David, always a pleasure. And thank you for tuning in to theCUBE live from Dallas, Texas. My name is Savannah Peterson. We'll be back with more supercomputing soon.
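The quick dollars-and-watts exchange near the end of the segment above lends itself to a simple worked example. The sketch below uses the round numbers from the conversation (twice the price, roughly ten times the work, half the power); they are illustrative conversational figures, not measured benchmarks of any particular Dell system.

```python
# Illustrative refresh math using the round numbers from the conversation above:
# the new generation costs 2x, delivers ~10x the work, and draws half the power.
# These are conversational figures, not benchmark results.

systems = {
    "previous gen": {"price": 1.0, "relative_perf": 1.0, "power_kw": 1.0},
    "new gen":      {"price": 2.0, "relative_perf": 10.0, "power_kw": 0.5},
}

for name, s in systems.items():
    cost_per_work = s["price"] / s["relative_perf"]
    energy_per_work = s["power_kw"] / s["relative_perf"]
    print(f"{name:>12}: cost/unit-of-work = {cost_per_work:.2f}, "
          f"energy/unit-of-work = {energy_per_work:.2f}")

# The new generation works out to ~0.20x the cost and ~0.05x the energy per unit
# of work, which is the TCO argument for refreshing N-1 or N-2 hardware.
```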
SC22 Karan Batta, Kris Rice
>> Welcome back to Supercloud22, #Supercloud22. This is Dave Vellante. In 2019, Oracle and Microsoft announced a collaboration to bring interoperability between OCI, Oracle Cloud Infrastructure, and Azure clouds. It was Oracle's initial foray into so-called multi-cloud, and we're joined by Karan Batta, who's the Vice President for Product Management at OCI, and Kris Rice, who is the Vice President of Software Development at Oracle Database. And we're going to talk about how this technology's evolving and whether it fits our view of what we call supercloud. Welcome gentlemen, thank you. >> Thanks for having us. >> So you recently, just last month, announced the new service. It extends the initial partnership with Microsoft, the Oracle Interconnect with Azure, and you refer to this as a secure private link between the two clouds. It crosses 11 regions around the world, under two milliseconds data transmission, sounds pretty cool. It enables customers to run Microsoft applications against data stored in Oracle databases without any loss in efficiency or presumably performance. So we use this term supercloud to describe a service or sets of services built on hyperscale infrastructure that leverages the core primitives and APIs of an individual cloud platform, but abstracts that underlying complexity to create a continuous experience across more than one cloud. Is that what you've done? >> Absolutely. I think it starts at the top layer, in terms of just making things very simple for the customer, right. I think at the end of the day we want to enable true workloads running across two different clouds, where you're potentially running maybe the app layer in one and the database layer, or the back end, in another. And the integration I think starts with, you know, making it easy to use. Right. So you can start with things like, okay, can you log into your second or your third cloud with the first cloud provider's credentials? Can you make calls against another cloud using another cloud's APIs? Can you peer the networks together? Can you make it seamless? I think those are all the components that are sort of the ingredients to making a multi-cloud or supercloud experience successful. >> Oh, thank you for that, Karan. So I guess this is a question for Kris. I'm trying to understand what you're really solving for. What specific customer problems are you focused on? What's the service optimized for? Presumably it's database, but maybe you could double click on that. >> Sure. So, I mean, of course it's database. So it's a super fast network, so that we can split the workload across two different clouds, leveraging the best from both. But above the networking, what we had to do is think about what a true multi-cloud, or what you're calling supercloud, experience would be. It's more than just making the network bytes flow. So what we did is we took a look, as Karan hinted at, right, is where is my identity? Where is my observability? How do I connect these things across so it feels native to that other cloud? >> So what kind of engineering do you have to do to make that work? It's not just plugging stuff together. Maybe you could explain in a little bit more detail the resources that you had to bring to bear and the technology behind the architecture. >> Sure. I think it starts with actually what our goal was, right? Our goal was to actually provide customers with a fully managed experience.
So, we have obviously an Azure-like portal and an experience that allows customers to do this, but under the covers, we actually have a fully managed service that manages the networking layer, the physical infrastructure, and it actually calls APIs on both sides of the fence. It actually manages your Azure resources, creates them, but it also interacts with OCI at the same time. And under the covers this service actually takes Azure primitives as inputs, and then it essentially translates them to OCI actions. So, we actually truly integrated this as a service that's essentially built as a PaaS layer on top of these two clouds. >> So, the customer doesn't really care or know, maybe they know cuz they might be coming through an Azure experience, but you can run work on either Azure and or OCI, and it's a common experience across those clouds. Is that correct? >> That's correct. So like you said, the customer does know that there is a relationship with both clouds, but thanks to all the things we built, there's this thing we invented, we created, called a multi-cloud control plane. This control plane does operate against both clouds at the same time to make it as seamless as possible, so that maybe they don't notice. You know, the power of the interconnect is extremely fast networking, as fast as what we could see inside a single cloud, if you think about how big a data center might be from edge to edge in that cloud. Going across the interconnect makes it so that it's no longer important that the workload is spanning two clouds. >> So you say extremely fast networking. I remember, I wrote a piece a long time ago, Larry Ellison loves InfiniBand. I presume we've moved on from that, but maybe not. What is that interconnect? >> Yeah, so it's funny you mention interconnect, you know, my previous history comes from HPC, where we actually, inside OCI today, we've moved from InfiniBand, as is part of Exadata's core, to what we call RoCEv2. So that's just another RDMA network. We actually use it very successfully, not just for Exadata, but we use it for our standard computers that we provide to high performance computing customers. >> And the multi-cloud control plane runs... Where does that live? Does it live on OCI? Does it live on Azure? Yes? >> So it does, it lives on our side. Our side of the house, as part of our Oracle OCI control plane. And it is the veneer that makes these two clouds possible so that we can wire them together. So it knows how to take those Azure primitives and the OCI primitives and wire them together at the appropriate levels. >> Now I want to talk about this PaaS layer. Part of supercloud, we said, to actually make it work you're going to have to have a super PaaS. I know we're taking this term a little far, but it's still instructive in that what we surmised was you're probably not going to just use off-the-shelf, plain old vanilla PaaS, you're actually going to have a purpose-built PaaS to solve for the specific problem. So as an example, if you're solving for ultra low latency, which I think you're doing, you're probably, no offense to my friends at Red Hat, but you're probably not going to develop this on OpenShift. But tell us about that PaaS layer, or what we call the super PaaS layer. >> Go ahead, Kris. >> Well, so you're right. We weren't going to build it out on OpenShift. So we have Oracle OCI, you know, the standard is Terraform. So the back end of everything we do is based around Terraform.
Today, what we've done is we built that control plane and it will be API drivable, it'll be drivable from the UI, and it will let people operate and create primitives across both sides. So, you mentioned developers, developers love automation, right, because it makes our lives easy. We will be able to automate a multi-cloud workload from the ground up. Config is code these days. So we can config an entire multi-cloud experience from one place. >> So, double click, Kris, on that developer experience. What is that like? They're using the same tool set irrespective of which cloud we're running on, and is it specific to this service or is it more generic, across other Oracle services? >> There's two parts to that. So one is, we've only onboarded a portion. So the database portfolio and other services will be coming into this multi-cloud. For the majority of Oracle cloud, the automation, the config layer, is based on Terraform. So using Terraform, anyone can configure everything from a mid-tier to an Exadata, all the way soup to nuts, from the smallest thing possible to the largest. What we've not done yet is integrated truly with the Azure API, from command line drivable. That is coming in the future. It is on the roadmap, it is coming. Then they could get into one tool, but right now they would have half their automation for the multi-cloud config on the Azure tool set and half on the OCI tool set. >> But we're not crazy saying from a roadmap standpoint that will provide some benefit to developers and is a reasonable direction for the industry generally, but Oracle and Microsoft specifically. >> Absolutely. I'm a developer at heart. And so one of the things we want to make sure is that developers' lives are as easy as possible. >> And is there a metadata management layer or intelligence that you've built in to optimize for performance or low latency or cost across the respective clouds? >> Yeah, definitely. I think latency's going to be an important factor. The service that we've initially built isn't going to serve the sort of tens-of-microseconds range, but most of the enterprise applications that are running on top of the database are in the several millisecond range. And we've actually done a lot of work on the networking pairing side to make sure that when we launch these resources across the two clouds we actually pick the right site. We pick the right region, we pick the right availability zone or domain. So we actually do the due diligence under the cover, so the customer doesn't have to do the trial and error and try to find the right latency range. And this is actually one of the big reasons why we only launch the service in the interconnect regions. Even though we have close to 40 regions at this point in OCI, this service is only built for the regions where we have an interconnect relationship with Microsoft. >> Okay, so you started with Microsoft in 2019. You're going deeper now in that relationship. Is there any reason that you couldn't... I mean, technically what would you have to do to go to other clouds? You talked about understanding the primitives and leveraging the primitives of Azure. Presumably if you wanted to do this with AWS or Google or Alibaba, you would have to do similar engineering work, is that correct? Or does what you've developed just kind of port over to any cloud? >> Yeah, that's absolutely correct, Dave. I think Kris talked a lot about the multi-cloud control plane, right?
That's essentially the control plane that goes and does stuff on other clouds. We would have to essentially go and build that level of integration into the other clouds. And I think, as we get more popularity and as more products come online through these services, I think we'll listen to what customers want. Maybe it's the other way around too, Dave, maybe it's the fact that they want to use Oracle cloud but they want to use other complementary services within Oracle cloud. So I think it can go both ways. I think the market and the customer base will dictate that. >> Yeah. So if I understand that correctly, somebody from another cloud, Google Cloud, could say, Hey, we actually want to run this service on OCI cuz we want to expand our market. And if TK gets together with his old friends and figures that out... but we're just hypothesizing here. But, like you said, it can go both ways. And I have another question related to that. So, multi-cloud. Okay, great. Supercloud. How about the edge? Do you ever see a day where that becomes part of the equation? Certainly the near edge would, you know, a Home Depot or Lowe's store or a bank, but what about the far edge, the tiny edge? Can you talk about the edge and where that fits in your vision? >> Yeah, absolutely. Edge is, interestingly, getting fuzzier and fuzzier day by day as a term. Obviously every cloud has their own sort of philosophy on what edge is, right. We have our own. It starts from, if you do want to do far edge, we have devices like RED devices, which are our ruggedized servers that talk back to our control plane in OCI. You could deploy those things, like, into war zones and things like that, underground. But then we also have things like Cloud@Customer, where customers can actually deploy components of our infrastructure, like compute or Exadata, into a facility where they only need that certain capability. And then a few years ago we launched what's now called Dedicated Region. And that actually is a different take on edge in some sense, where you get the entire capability of our public commercial region, but within your facility. So imagine if a customer was to essentially point a finger on a commercial map and say, Hey, look, that region is just mine. Essentially that's the capability that we're providing to our customers, where if you have white space, if you have a facility, if you're exiting out of your data center space, you could essentially place an OCI region within your confines, behind your firewall. And then you could interconnect that to a cloud provider if you wanted to, and get the same multi-cloud capability that you get in a commercial region. So we have all the spectrums of possibilities there.
(lighthearted marimba music)
Karan Batta, Kris Rice | Supercloud22
(upbeat music) >> Welcome back to Supercloud22, #Supercloud22, this is Dave Vellante. In 2019, Oracle and Microsoft announced a collaboration to bring interoperability between OCI, Oracle Cloud Infrastructure and Azure clouds. It was Oracle's initial foray into so-called multi-cloud and we're joined by Karan Batta, who's the vice president for product management at OCI, and Kris Rice, is the vice president of software development at Oracle database. And we're going to talk about how this technology's evolving and whether it fits our view of what we call, Supercloud. Welcome, gentlemen. Thank you. >> Thanks for having us. >> Thanks for having us. >> So you recently just last month announced the new service. It extends on the initial partnership with Microsoft Oracle Interconnect with Azure, and you refer to this as a secure private link between the two clouds across 11 regions around the world. Under two milliseconds data transmission, sounds pretty cool. It enables customers to run Microsoft applications against data stored in Oracle databases without any loss in efficiency or presumably performance. So we use this term Supercloud to describe a service or sets of services built on hyperscale infrastructure that leverages the core primitives and APIs of an individual cloud platform, but abstracts that underlying complexity to create a continuous experience across more than one cloud. Is that what you've done? >> Absolutely. I think, you know, it starts at the, you know, at the top layer in terms of, you know, just making things very simple for the customer, right. I think at the end of the day we want to enable true workloads running across two different clouds, where you're potentially running maybe the app layer in one and the database layer or the back in another, and the integration I think, starts with, you know, making it ease of use. Right? So you can start with things like, okay can you log into your second or your third cloud with the first cloud provider's credentials? Can you make calls against another cloud using another cloud's APIs? Can you peer the networks together? Can you make it seamless? I think those are all the components that are sort of, they're kind of the ingredients to making a multi-cloud or Supercloud experience successful. >> Oh, thank you for that, Karan. So, I guess as a question for Kris is trying to understand what you're really solving for, what specific customer problems are you focused on? What's the service optimized for presumably its database but maybe you could double click on that. >> Sure. So, I mean, of course it's database so it's a super fast network so that we can split the workload across two different clouds leveraging the best from both, but above the networking, what we had to do is we had to think about what a true multi-cloud or what you're calling Supercloud experience would be. It's more than just making the network bytes flow. So what we did is, we took a look as Karan hinted at, right? Is where is my identity? Where is my observability? How do I connect these things across how it feels native to that other cloud? >> So what kind of engineering do you have to do to make that work? It's not just plugging stuff together. Maybe you could explain in a little bit more detail, the resources that you had to bring to bear and the technology behind the architecture? >> Sure. >> I think, you know, it starts with actually, you know, what our goal was, right? Our goal was to actually provide customers with a fully managed experience. 
What that means is we had to basically create a brand new service. So, you know, we have obviously an Azure like portal and an experience that allows customers to do this but under the covers, we actually have a fully managed service that manages the networking layer that the physical infrastructure, and it actually calls APIs on both sides of the fence. It actually manages your Azure resources, creates them, but it also interacts with OCI at the same time. And under the covers this service actually takes Azure primitives as inputs, and then it sort of like essentially translates them to OCI action. So, so we actually truly integrated this as a service that's essentially built as a PaaS layer on top of these two clouds. >> So, so the customer doesn't really care, or know, maybe they know, coz they might be coming through, you know, an Azure experience, but you can run work on either Azure and or OCI, and it's a common experience across those clouds, is that correct? >> That's correct. So, like you said, the customer does know that they know there is a relationship with both clouds but thanks to all the things we built there's this thing we invented, we created called a multi-cloud control plane. This control plane does operate against both clouds at the same time to make it as seamless as possible so that maybe they don't notice, you know, the power of the interconnect is extremely fast networking, as fast as what we could see inside a single cloud, if you think about how big a data center might be from edge to edge in that cloud. Going across the interconnect makes it so that that workload is not important that it's spanning two clouds anymore. >> So you say extremely fast networking. I remember I used to, I wrote a piece a long time ago. Hey, Larry Ellison loves InfiniBand. I presume we've moved on from them, but maybe not. What is that interconnect? >> Yeah, so it's funny, you mentioned interconnect, you know, my previous history comes from HPC where we actually inside inside OCI today, we've moved from, you know, InfiniBand as its part of Exadata's core, to what we call RoCEv2. So that's just another RDMA network. We actually use it very successfully, not just for Exadata but we use it for our standard computers, you know, that we provide to, you know, high performance computing customers. >> And the multi-cloud control plane, runs... Where does that live? Does it live on OCI? Does it live on Azure? Yes? >> So it does. It lives on our side. >> Yeah. >> Our side of the house, and it is part of our Oracle OCI control plane. And it is the veneer that makes these two clouds possible so that we can wire them together. So it knows how to take those Azure primitives and the OCI primitives and wire them at the appropriate levels together. >> Now I want to talk about this PaaS layer. Part of Supercloud, we said, to actually make it work you're going to have to have a super PaaS. I know, we're taking this term a little far but it's still, it's instructive in that, what we, what we surmised was, you're probably not going to just use off the shelf, plain old vanilla PaaS, you're actually going to have a purpose built PaaS to solve for the specific problem. So, as an example, if you're solving for ultra low latency, which I think you're doing, you're probably, no offense to my friends at Red Hat, but you're probably not going to develop this on OpenShift, but tell us about that, that PaaS layer or what we call the super PaaS layer. >> Go ahead, Kris. >> Well, so you're right. 
We weren't going to build it out on OpenShift. So we have Oracle OCI, you know, the standard is Terraform. So the back end of everything we do is based around Terraform. Today, what we've done, is we built that control plane and it will be API drivable. It'll be drivable from the UI and it will let people operate and create primitives across both sides. So you can, you, you mentioned developers developers love automation, right? Because it makes our lives easy. We will be able to automate a multi-cloud workload, from ground up, Config is code these days. So we can Config an entire multi-cloud experience from one place. >> So, double click Kris on that developer experience, you know, what is that like? They're using the same tool set irrespective of, you know, which cloud we're running on is, is it and it's specific to this service or is it more generic across other Oracle services? >> There's two parts to that. So one is the, we've only onboarded a portion. So the database portfolio and other services will be coming into this multi-cloud. For the majority of Oracle cloud the automation, the Config layer is based on Terraform. So using Terraform, anyone can configure everything from a mid tier to an Exadata, all the way soup to nuts from smallest thing possible to the largest. What we've not done yet is is integrated truly with the Azure API, from command line drivable, that is coming in the future. It will be, it is on the roadmap. It is coming, then they could get into one tool but right now they would have half their automation for the multi-cloud Config on the Azure tool set and half on the OCI tool set. >> But we're not crazy saying from a roadmap standpoint that will provide some benefit to developers and is a reasonable direction for the industry generally but Oracle and, and, and Microsoft specifically? >> Absolutely. I'm a developer at heart. And so one of the things we want to make sure is that developers' lives are as easy as possible. >> And, and is there a Metadata management layer or intelligence that you've built in to optimize for performance or low latency or cost across the, the respective clouds? >> Yeah, definitely. I think, you know, latency's going to be an important factor. You know, the, the service that we've initially built isn't going to serve, you know, the sort of the tens of microseconds but most applications that are sort of in, you know, running on top of, the enterprise applications that are running on top of the database are in the several millisecond range. And we've actually done a lot of work on the networking pairing side to make sure that when we launch, when we launch these resources across the two clouds we actually pick the right trial site, we pick the right region, we pick the right availability zone or domain. So we actually do the due diligence under the cover, so the customer doesn't have to do the trial and error and try to find the right latency range, you know, and this is actually one of the big reasons why we only launched this service on the interconnect regions. Even though we have close to, I think, close to 40 regions at this point in OCI, this, this, this service is only built for the regions that we have an interconnect relationship with with Microsoft. >> Okay. So, so you've, you started with Microsoft in 2019 you're going deeper now in that relationship, is there is there any reason that you couldn't, I mean technically what would you have to do to go to other clouds? 
Would you just, you talked about understanding the primitives and leveraging the primitives of Azure. Presumably if you wanted to do this with AWS or Google or Alibaba, you would have to do similar engineering work, is that correct? Or does what you've developed just kind of pour it over to any cloud? >> Yeah, that's, that's absolutely correct, Dave, I think, you know, Kris talked a lot about kind of the multi-cloud control plane, right? That's essentially the, the, the control plane that goes and does stuff on other clouds. We would have to essentially go and build that level of integration into the other clouds. And I think, you know, as we get more popularity and as as more products come online through these services I think we'll listen to what customers want, whether it's you know, maybe it's the other way around too, Dave maybe it's the fact that they want to use Oracle cloud but they want to use other complimentary services within Oracle cloud. So I think it can go both ways. I think, you know, kind of the market and the customer base will dictate that. >> Yeah. So if I understand that correctly, somebody from another cloud Google cloud could say, "Hey, we actually want to run this service on OCI coz we want to expand our market and..." >> Right. >> And if TK gets together with his old friends and figures that out but we're just, you know, hypothesizing here, but but like you said, it can, can go both ways. And then, and I have another question related to that. So you multi-clouds. Okay, great. Supercloud. How about the edge? Do you ever see a day where that becomes part of the equation? Certainly the, the near edge would, you know, a a home Depot or a Lowe's store or a bank, but what about like the far edge, the tiny edge. Do, do you, can you talk about the edge and and where that fits in your vision? >> Yeah, absolutely. I think edge is a interestingly, it's a, it's a it's getting fuzzier and fuzzier day by day. I think there's the term, you know, we, obviously every cloud has their own sort of philosophy in what edge is, right? We have our own, you know, it starts from, you know, if you if you do want to do far edge, you know, we have devices like red devices, which is our ruggedized servers that that talk back to our, our control plane in OCI you could deploy those things in like, you know, into war zones and things like that underground. But then we also have things like Cloud@Customer where customers can actually deploy components of our infrastructure, like Compute or Exadata into a facility where they only need that certain capability. And then a few years ago we launched, you know, what's now called Dedicated Region. And that actually is a, is a different take on edge in some sense where you get the entire capability of our public commercial region, but within your facility. So imagine if, if, if a customer was to essentially point to, you know, point to, point a finger on a commercial map and say, "Hey, look, that region is just mine." Essentially, that's the capability that we're providing to our customers, where if you have a white space if you have a facility if you're exiting out of your data center space you could essentially place an OCI region within your confines behind your firewall. And then you could interconnect that to a cloud provider if you wanted to. and get the same multi-cloud capability that you get in a commercial region. So we have all the spectrums of possibilities there. >> Guys, super interesting discussion. 
It's very clear to us that the next 10 years of cloud ain't going to be like the last 10. There's a whole new layer developing. Data is a big key to that. We see industries getting involved. We obviously didn't get into the Oracle Cerner acquisition, a little too early for that, but we've actually predicted that companies like Cerner, and you've seen it with Goldman Sachs and Capital One, are actually building services on the cloud. So this is a really exciting new area and I really appreciate you guys coming on the Supercloud22 event and sharing your insights. Thanks for your time. >> Thank you very much. >> Thank you very much. >> Okay. Keep it right there. #Supercloud22. We'll be right back with more great content right after this short break. (upbeat music)
Eric Herzog, Infinidat | CUBE Conversation April 2022
(upbeat music) >> Lately Infinidat has been on a bit of a supercycle of product announcements, adding features, capabilities, and innovations to its core platform that are applied across its growing install base. CEO Phil Bollinger has brought in new management and really emphasized a strong and consistent cadence of product releases, a hallmark of successful storage companies. And one of those new executives is a CMO with proven product chops, who seems to bring an energy and an acceleration of product output wherever he lands. Eric Herzog joins us on "theCUBE". Hey, man. Great to see you. Awesome to have you again. >> Dave. Thank you. And of course, for "theCUBE", of course, I had to put on a Hawaiian shirt as always. >> They're back. All right, I love it. (laughs) Watch out for those Hawaiian shirt police, Eric. (both laughing) All right. I want to have you start by, maybe you can make some comments on the portfolio over the past year. You heard my intro, InfiniBox is the core, the InfiniBox SSA, which you announced last year. InfiniGuard, you made some substantial updates in February of this year, a real focus on cyber resilience, which we're going to talk about with Infinidat. Give us the overview. >> Sure. Well, what we've got is, it started really 11 years ago with the InfiniBox. High-end enterprise solution, hybrid oriented, really incredible magic fairy dust around the software and all the software technology. So for example, the Neural Cache technology, which has multiple patents on it, allowed the original InfiniBox to outperform probably 85% of the All-Flash Arrays in the industry. And it still does that today. We also of course had our real, incredible ease-of-use, the whole point of the way it was configured and set up from the beginning, which we continue to make sure we do, is, if you will, a set-it-and-forget-it model. For example, when you install, you don't create LUNs and RAID groups and volumes; it automatically and autonomously configures. And when you add new solutions, AKA additional applications or additional servers, and point it at the InfiniBox, it automatically, and again autonomously, adjusts to those new applications, learning what it needs to configure everything. So you're not setting cache size and queue depth, or stripe size, anything you would performance tune, you don't have to do any of that. So that entire set of software is on the InfiniBox, the InfiniBox SSA II, which we're of course launching today, and then inside of the InfiniGuard platform there's actually an InfiniBox. So the commonality of snapshots, replication, ease of use, all of that is identical across the platform of all-flash array, hybrid array and purpose-built backup secondary storage, and no other vendor has that breadth of product that has the same exact software. Some make a similar GUI, but we're talking literally the same exact software. So once you learn it, all three platforms, even if you don't have them, you could easily buy one of the other platforms that you don't have yet. And once you've got it, you already know how to use it, 'cause you've had one platform to start as an example. So really easy to use from a customer perspective. >> So ever since I've been following the storage business, which has been a long time now, three things that customers want. They want something that is rock solid, dirt cheap and super fast. So performance is something that you guys have always emphasized. I've had some really interesting discussions over the years with Infinidat folks.
How do you get performance? If you're using this kind of architecture, it's been quite amazing. But how does this launch extend or affect performance? Why the focus on performance from your standpoint? >> Well, we've done a number of different things to bolster the performance. We already have industry-leading performance. The regular InfiniBox outperforms 80, 85% of the All-Flash Arrays. Then, with the announcement of the InfiniBox SSA, our first all-flash, a year ago, we took that now to the highest demanding workloads and applications in the industry. So what did it add to the super high end Oracle app or SAP or some custom app that someone's created with Mongo or Cassandra? We can absolutely meet the performance between either the InfiniBox or the InfiniBox all-flash with the InfiniBox SSA. However, we've decided to extend the performance even farther. So we added a whole bunch of new CPU cores into our tri-part configuration. So we don't have two array controllers like many companies do. We actually have three, everything's in threes, which gives us the capability of having our 100% availability guarantee. So we've extended that, now we've optimized. We put additional InfiniBand interconnects between the controllers, we've added the CPU cores, we've taken, if you will, the InfiniBox operating system, Neural Cache and everything else we've had, and what we have done is we have optimized that to take advantage of all those additional cores. This has led us to increase performance in all aspects, IOPS, bandwidth and in fact latency. In latency we now are at 35 microseconds of latency. Real world, not a hero number, but real-world on an array. And when you look end to end, if I'm Mr. Oracle or SAP sitting in the server and I look across that bridge, of course the SAN, and over to the other building, the storage building, that entire traversal can be as fast as 100 microseconds of latency across the entire configuration, not just the storage. >> Yeah. I think that's best in class for an external array. Well, so what's the spectrum you can now hit with the performance ranges? Can you hit all the aspects of the market with the two InfiniBoxes, your original, and then the SSA? >> Yes, even with the original SSA. In fact, we've had one of our end users, who was first an InfiniBox customer, then InfiniBox SSA, actually running for the last two months a beta version of the SSA II. So they've had a beta version, and this customer's running high end Oracle RAC configurations. So they decided, you know what? We're not going to run storage benchmarks. We're going to run only Oracle benchmarks. And in every benchmark, IOPS, latency and bandwidth oriented, we outperformed the next nearest competition. So for example, 57% faster in IOPS, 58% faster in bandwidth, and on the latency side, using real-world Oracle apps, we were three times better performance on the latency aspect, which of course for a high end, high performance workload that's heavily transactional, latency is the most important. But when you look across all three of those aspects, we dramatically outperform. And by the way, that was a beta unit that didn't of course have final code on it yet. So incredible performance angle with the InfiniBox SSA II. >> So I mean you earlier, you were talking about the ease of use. You don't have to provision LUNs and all that sort of nonsense, and you've always emphasized ease-of-use. Can you double click on that a little bit? How do you think about that capability?

And I'm really interested in why you think it's different from other vendors? >> Well, we make sure that, for example, when you install you don't have to do anything. You have to rack and stack, yes, and cable, and of course point the servers at the storage, but the storage just basically comes up. In fact, we have a customer, and it's a public reference, that bought a couple units many years ago and they said they were up and going in about two hours. So how many high-end enterprise storage arrays can be up and going in two hours? Almost none, I mean, basically nobody but us. So we wanted to make sure that we maintain that. When we have customers, one of our big plays, particularly helping with CapEx and OpEx, is because we are so performant we can consolidate. We have a large customer in Europe that took 57 arrays from one of our competitors and consolidated it to five of the original InfiniBox, 57 to 5. They saved about $25 million in capital expense and they're saving about a million and a half a year in operational expense. But the whole point was, as they kept adding more and more servers that were connected to those competitive arrays and pointing them at the InfiniBox, there's no performance tuning. Again, that's all ease-of-use, not only saving on operational expense, but obviously as we know, the headcount for storage admins is way down from its peak, which was probably in 2007. Yet every admin is managing what, 25 to 50 times the amount of storage between 2007 and 2022. So the reality is, the easier it is to use, not only does of course the CIO love it, because the two of us together have probably been doing storage now for close to 80 years would be my guess, I've been doing it for 40, you're a little younger, so maybe we're at 75 to 78. Have you ever met a CIO who used to be a storage admin, ever? >> No. >> And I can't think of one either, so guess what? The easier it is to use, the better. The CIOs know that they need storage. They don't like it. These days they're all software guys. There used to be some mainframe guys in the old days, but they're long gone too. It's all about software. So when you say, not only can we help reduce your CapEx and OpEx, but the operational manpower to run the storage, we can dramatically reduce that because of our ease-of-use that they get, and ease-of-use has been a theme on the software side ever since the Mac came out. I mean, Windows used to be a dog. Now it's easy to use, and you know, every time a Linux distribution comes out, someone's got something that's easier and easier to use. So, the fact that the storage is easy to use, you can turn that directly into, we can help you save on operational manpower and OpEx, and CIOs, again, none of whom were ever storage guys, they love that message. Of course the admins do too, 'cause they're managing 25 to 50 times more storage than they had to manage back in 2007. So the easier it is for them at the tactical level, the storage admin, the storage manager, it's a huge deal. And we've made sure we've maintained that as we've added the SSA, as we brought up the InfiniGuard, as we've continued to push new feature function. We always make it easy to use. >> Yeah. Kind of a follow up on that. Just focus on software. I mean, I would think every storage company today, every modern storage company is going to have more software engineers than hardware engineers. And I think Infinidat obviously is no different. You got a strong set of software, it's across the portfolio. It's all included kind of thing.

I wonder if you could talk about your software approach and how that is different from your competitors? >> Sure, so we started out 11 years ago when Infinidat first got started. That was all about commodity hardware. So while some people will use custom this and custom that, and having worked at two of the biggest storage companies in the world before I came here, yes, I know it's heavily software, but our ratio of hardware engineers to software engineers is even less hardware engineering than our competitors have. So we've had that model, which is why this whole, what we call the set-it-and-forget-it mantra of ease-of-use is critical. We make sure that we've expanded that. For example, we're announcing today our InfiniOps focus, and InfiniOps, all software, allows us to do AIOps both inside of our storage system with our InfiniVerse and InfiniMetrics packages. They're easy to use. They come pre-installed and they manage capacity and performance. We also now have heavy integration with what I'll call data center AIOps vendors, Virtana, ServiceNow, VMware and others. And in that case, we make sure that we expose all of our information out to those AIOps data center apps so that they can report on the storage level. So we've made sure we do that. We have incredible support for the Ansible framework again, which is not only a software statement, but an ease-of-use statement as well. So for the Ansible framework, which is trying to allow an even simpler methodology for infrastructure deployment in companies, we support that extensively and we added some new features. Some more, if you will, what I'll say are more scripts, but they're not really scripts, Ansible hides all that. And we added more of that, whether that be configuration or installations, that a DevOps guy, which of course just made all the storage guys listening to this video have a heart attack, but the DevOps guy could actually configure storage. And I guess for my storage buddies, they can do it without messing up your storage. And that's what Ansible delivers. So between our AIOps focus and what we're doing with InfiniOps, that extends of course this ease-of-use model that we've had and includes that. And all this again, including, we already talked about it a little bit, cyber resilience, Dave, with InfiniSafe. All this is included when you buy it. So we don't piecemeal, which is you get this and then we try to upcharge you for that. We have incredible pricing that delivers this CapEx and OpEx, not just for the array, but for the associated software that goes with it, whether that be Neural Cache, the ease-of-use, the InfiniOps, InfiniSafe. You get all of that packaged together in the way we deploy. From a business perspective now, ease of doing business: you don't cut POs for all kinds of pieces. You cut a PO and you just get all the pieces on the one PO when we deliver it.

>> I was talking yesterday to a VC and we were chatting about AI. And of course, everybody's chasing AI. There's a lot of investment going in there, but the reality is, AI is like containers. It's just getting absorbed into virtually everything. And of course, last year you guys made a pretty robust splash into AIOps. And then with this launch, you're extending that pretty substantially. Tell us a little bit more about the InfiniOps announcement news. >> So the InfiniOps includes our existing in-the-box framework, InfiniVerse, and what we do there. By the way, InfiniVerse has the capability with the telemetry feed. That's how we were able to demo, at our demo today and also at our demo for our channel partner pre-briefing, again, a hundred mics of latency across the entire configuration, not just a hundred mics of latency on storage, which by the way, several of our competitors talk about a hundred mics of latency as their quote hero number. We're talking about a hundred mics of latency from the application through the server, through the SAN and out to the storage. Now that is incredible. But the monitoring for that is part of the InfiniOps packaging, okay. We support, again, DevOps with all the integration that we do, making it easy for the DevOps team, such as with Ansible, and making sure, for the data center people, with our integration with things like VMware and ServiceNow, that the data center people who are obviously often not the storage-centric person can also be managing the entire data center. And whether that is conversing with the storage admin on, we need this or that, or whether they're doing it themselves, again, all that is part of our InfiniOps framework, and we include things like the Ansible support as part of that. So InfiniOps is sort of an overarching theme, and that overarching theme extends to AIOps inside of the storage system, AIOps across the data center, and even integration with, I'll say, something that's not even considered an infrastructure play, but something like Ansible, which is clearly a Red Hat, software-oriented framework that incorporates storage systems and servers or networks in the capability of having DevOps people manage them. And quite honestly, have the DevOps people manage them without screwing them up or losing data or losing configuration, which of course the server guys, the network guys and the storage guys hate when the DevOps guys play with it. But that integration with Ansible is part of our InfiniOps strategy. >> Now let's shift gears a little bit and talk about cybercrime. I mean, it's a topic that we've been on for a long time. I've personally been writing about it now for the last few years. Periodically with my colleagues from ETR, we hit that pretty hard. It's top of mind, and now the House just approved what's called the Better Cybercrime Metrics Act. It was a bipartisan push. I mean, the vote was like 377 to 48, and the Senate approved this bill last year. Once President Biden signs it, it's going to be law and it's going to be put into effect, and you and many others have been active in this space, Infinidat. You announced cyber resilience on your purpose-built backup appliance and secondary storage solution, InfiniGuard, with the launch of InfiniSafe. What are you doing for primary storage with InfiniBox around cyber resilience? >> So the goal, between the InfiniGuard on secondary storage and the InfiniBox and the InfiniBox SSA II, we're launching it now, but InfiniSafe for InfiniBox will work on the original InfiniBox. It's a software only thing. So there's no extra hardware needed. So it's a software only play. So if you have an InfiniBox today, when you upgrade to the latest software, you can have the InfiniSafe reference architecture available to you. And the idea is to support the four key legs of the cybersecurity table from a storage perspective. When you look at it from a storage perspective, there's really four key things that the CISO and the CIO look for. First is immutable snapshot technology. An immutable snapshot can't be deleted, right? You can schedule it. You can do all kinds of different things, but the point is you can't get rid of it.
Second thing of course is an air gap. And there's two types of air gap, a logical air gap, which is what we provide, and physical. The main physical air gapping would be either to tape or of course to what's left of the optical storage market. But we've got a nice logical air gap, and we can even do that logical air gapping remotely. Since most customers often buy, for disaster recovery purposes, multiple arrays, we can then put that air gap not just locally, but we can put the air gap of course remotely, which is a critical differentiator for the InfiniBox, a remote logical air gap. Many other players have logical, we're logical local, but we're going remote. And then of course the third aspect is a fenced forensic environment. That fenced forensic environment needs to be easily set up, so you can determine a known good copy for restoration after you've had a cyber incident. And then lastly is rapid recovery. And we really pride ourselves on this. When you go to our most recent launch in February of the InfiniGuard with InfiniSafe, we were able to demo live a recovery taking 12 minutes and 12 seconds of 1.5 petabytes of backup data from Veeam. Now that could have been any backup data, Commvault, IBM Spectrum Protect, Veritas. We happened to show it with Veeam, but in 12 minutes and 12 seconds. Now on the primary storage side, it depends on whether you're going to try to recover locally or do it remotely, but if it's local, we're looking at something that's going to be 1 to 2 minute recovery, because of the way we do our snapshot technology, we just need to rebuild the metadata tree and boom, you can recover. So that's a real differentiator, but those are the four things that a CISO and a CIO look for from a storage vendor: this immutable snapshot capability, the air gapping capability, the fenced environment capability, and of course this near instantaneous recovery, which we have proven out well with the InfiniGuard on the secondary data sets and backup data sets. >> Yeah. I love the four layer cake. I just want to clarify something on the air gap if I could. So you got, you got a local air gap. You can do a remote air gap with your physical storage. And then you're saying there's, I think, I'm not sure I directly heard that, but then the next layer is going to be tape with the CTA, the Chevy truck access method, right? >> Well, so while we don't actively support tape, to go to that, there's basically two air gap solutions out there that people talk about, either physical, which goes to tape or optical, or logical. We do logical air gapping. We don't do air gapping to tape 'cause we don't sell tape. So we make sure that it's a remote logical air gap going to a secondary DR site. Now, obviously in today's world, no one has a true DR data center anymore, right? All data centers are both active and DR for another site. And because we're so heavily concentrated in the global Fortune 2000, almost all the InfiniBoxes in the field already are set up in a disaster recovery configuration. So using a remote logical air gap is easy for us to do with our InfiniBox SSA II and the whole InfiniBox family. >> And, I get it, you guys don't do tape, but when you say remote, so you've got a local air gap, right? But then you also, you call it a remote logical, but you've got a physical air gap, right?

>> Yeah, they would be physically separated, but when you're not going to tape, because it's fully removable, or optical, then the security analysts consider that type of air gap a logical air gap, even though it's physically remote. >> I understand. You spent a lot of time with the channel as well. I know, and they must be all over this. They must really be climbing onto the whole cyber resiliency. What do you say, do they set up, like a lot of the guys doing managed services as well? I'm just curious. Are there separate processes for the air gap piece than there are for the mainstream production environment, or is it sort of blended together? How are they approaching that? >> So on the InfiniGuard product line, it's blended together, okay. On the InfiniBox with our InfiniSafe reference architecture, you do need to have an extra server where you create a SCSI private VLAN, and with that private VLAN you set up your fenced forensic environment. So it's slightly more complicated. The InfiniGuard is 100% automated. On the InfiniBox we will be pushing that in the future, and we will continue to have releases on InfiniSafe, making it more and more automated. But the air gapping and the fenced forensics right now are a reference architecture configuration, not a click on a GUI. In the InfiniGuard case, our original InfiniSafe, all you do is click on some windows and it just goes and does it. And we're not there yet on the InfiniBox, but we will be there in the future. But it's such a top of mind topic, as you probably see. Last year, Fortune did a survey of the Fortune 500 CEOs, and the number one cited threat, at 66% by the way, was cybersecurity. So one of the key things storage vendors, not just us, but all storage vendors, need to do is convince the CISO that storage is a critical component of a comprehensive cybersecurity strategy. And by having these four things, the rapid recovery, the fenced forensic environment, the air gapping technology and the immutable snapshots, you've got all of the checkbox items that a CISO needs to see. That said, many CISOs still, even today, don't tie storage into a comprehensive cybersecurity strategy, and that's something that the storage industry in general needs to work on with the security community. From a partner perspective, the value is they can sell a full package, so they can go to their end user and say, look, here's what we have for edge protection. Here's what we've got to track the bad guys down once something's happened, or to alert you that something's happened, by having tools like IBM's QRadar and competitive tools to that product line that can traverse the servers and the software infrastructure and try to locate malware, ransomware, akin to the way all of us have Norton or something like Norton on our laptop that is trolling constantly for viruses. So that's sort of software, and then of course storage. And those are the elements that you really need to have in an overall cybersecurity strategy. Right now many companies have not realized that storage is critical. When you think about it, when you talk to people in the security industry, and I know you do, from original intrusion to resolution is 287 days. Well, guess what, if the data sets they're after, whether it be secondary on InfiniGuard or primary within InfiniBox, they're going to trap those things and they're going to take them. They might have trapped those few data sets at day 50, even though they don't even launch the attack until day 200.
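To put those dwell-time numbers in perspective, here is a back-of-the-envelope check in Python using the day-50 and day-200 figures from the conversation; the retention windows are assumptions added for illustration, not Infinidat guidance.

```python
# Back-of-the-envelope: if the intruder touches the data sets at day 50 but the
# attack only detonates around day 200 (industry dwell time runs ~287 days from
# intrusion to resolution), how far back must copies reach to guarantee a clean
# restore point? The retention windows below are illustrative assumptions.

INTRUSION_DAY = 50      # attacker first traps the data sets
ATTACK_DAY = 200        # ransomware actually launches

for retention_days in (14, 30, 90, 180, 365):
    oldest_copy_day = ATTACK_DAY - retention_days
    clean_copy_available = oldest_copy_day < INTRUSION_DAY
    verdict = ("a pre-intrusion copy exists" if clean_copy_available
               else "every retained copy may already be compromised")
    print(f"{retention_days:3d}-day retention: oldest copy from day "
          f"{oldest_copy_day:4d} -> {verdict}")
```

The takeaway is the same point being made in the conversation: short retention windows can leave you with no copy that predates the intrusion, which is why long-lived, immutable copies matter.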
So it's a big deal, which is why storage is so critical and why CISOs and CIOs need to make sure they include it day one. >> It's where the data lives, okay. Eric, wow, a lot of topics we covered. I love the agile sort of cadence. I presume you're not done for the year. Look forward to having you back, and thanks so much for coming on today. >> Great. Thank you, Dave. We of course love being on "theCUBE". Thanks again. And thanks for all the nice things you've been saying about Infinidat, thank you. >> Okay. Yeah, thank you for watching this CUBE conversation. This is Dave Vellante and we'll see you next time. (upbeat music)
Juan Loaiza, Oracle | CUBE Conversation, September 2021
(bright music) >> Hello, everyone, and welcome to this CUBE video exclusive. This is Dave Vellante, and as I've said many times, what people sometimes forget is Oracle's chairman is also its CTO, and he understands and appreciates the importance of engineering. It's the lifeblood of tech innovation, and Oracle continues to spend money on R&D. Over the past decade, the company has evolved its Exadata platform by investing in core infrastructure technology. For example, Oracle initially used InfiniBand, which in and of itself was a technical challenge to exploit for higher performance. That was an engineering innovation, and now it's moving to RoCE to try and deliver best of breed performance by today's standards. We've seen Oracle invest in machine intelligence for analytics. It's converged OLTP and mixed workloads. It's driving automation functions into its Exadata platform for things like indexing. The point is we've seen a consistent cadence of improvements with each generation of Exadata, and it's no secret that Oracle likes to brag about the results of its investments. At its heart, Oracle develops database software, and databases have to run fast and be rock solid. So Oracle loves to throw around impressive numbers, like 27 million 8K IOPS, and analytics scans running at more than a terabyte per second. Look, Oracle's objective is to build the best database platform and convince its customers to run on Oracle, instead of doing it themselves or in some other cloud. And because the company owns the full stack, Oracle has a high degree of control over how to optimize the stack for its database. So this is how Oracle intends to compete with Exadata, Exadata Cloud@Customer and other products, like ZDLRA, against AWS Outposts, Azure Arc and do-it-yourself solutions. And with me, to talk about Oracle's latest innovation with its Exadata X9M announcement, is Juan Loaiza, who's the Executive Vice President of Mission Critical Database Technologies at Oracle. Juan, thanks for coming on theCUBE, always good to see you, man. >> Thanks for having me, Dave. It's great to be here. >> All right, let's get right into it and start with the news. Can you give us a quick overview of the X9M announcement today? >> Yeah, glad to. So, we've had Exadata on the market for a little over a dozen years, and every year, as you mentioned, we make it better and better. And so this year we're introducing our X9M family of products, and as usual, we're making it better. We're making it better across all the different dimensions for OLTP, for analytics, lower costs, higher IOPS, higher throughput, more capacity, so it's better all around, and we're introducing a lot of new software features as well that make it easier to use, more manageable, more highly available, more options for customers, more isolation, more workload consolidation, so it's our usual better and better every year. We're already way ahead of the competition in pretty much every metric you can name, but we're not sitting back. We have the pedal to the metal and we're keeping it there. >> Okay, so as always, you announced some big numbers. You're referencing them. I did in my upfront narrative. You've claimed double to triple digit performance improvements. Tell us, what's the secret sauce that allows you to achieve that magnitude of performance gain? >> Yeah, there's a lot of secret sauce in Exadata.
First of all, we have custom designed hardware, so we design the systems from the top down, so it's not a generic system. It's designed to run database with a specific and sole focus of running database, and so we have a lot of technologies in there. Persistent memory is a really big one that we've introduced that enables super low response times for OLTP. The RoCE, the remote RDMA over convergency ethernet with a hundred gigabit network is a big thing, offload to storage servers is a big thing. The columnar processing of the storage is a huge thing, so there's a lot of secret sauce, most of it is software and hardware related and interesting about it, it's very unique. So we've been introducing more and more technologies and actually advancing our lead by introducing very unique, very effective technologies, like the ones I mentioned, and we're continuing that with our X9 generation. >> So that persistent memory allows you to do a right directly, atomic right directly to memory, and then what, you update asynchronously to the backend at some point? Can you double click on that a little bit? >> Yeah, so we use persistent memory as kind of the first tier of storage. And the thing about persistent memory is persistent. Unlike normal memory, it doesn't lose its contents when you lose power, so it's just as good as flash or traditional spinning disks in terms of storing data. And the integration that we do is we do what's called remote direct memory access, that means the hardware sends the new data directly into persistent memory and storage with no software, getting rid of all the software layers in between, and that's what enables us to achieve this extremely low latency. Once it's in persistent memory, it's stored. It's as good as being in flash or disc. So there's nothing else that we need to do. We do age things out of persistent memory to keep only hot data in there. That's one of the tricks that we do to make sure, because persistent memory is more expensive than flash or disc, so we tier it. So we age data in and out as it becomes hot, age it out as it becomes cold, but once it's in persistent memory, it's as good as being stored. It is stored. >> I love it. Flash is a slow tier now. So, (laughs) let's talk about what this-- >> Right, I mean persistent memory is about an order of magnitude faster. Flash is more than an order of magnitude faster than disk drive, so it is a new technology that provides big benefits, particularly for latency on OLTP. >> Great, thank you for that, okay, we'll get out of the plumbing. Let's talk about what this announcement means to customers. How does all this performance, and you got a lot of scale here, how does it translate into tangible results say, for a bank? >> Yeah, so there's a lot of ways. So, I mentioned performance is a big thing, always with Exadata. We're increasing the performance significantly for OLTP, analytics, so OLTP, 50, 60% performance improvements, analytics, 80% performance improvements in terms of costs, effectiveness, 30 to 60% improvement, so all of these things are big benefits. You know, one of the differences between a server product like Exadata and a consumer product is performance translates in the cost also. If I get a new smartphone that's faster, it doesn't actually reduce my costs, it just makes my experience a little better. 
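As a rough illustration of the hot and cold aging Juan describes, and emphatically not Oracle's actual algorithm, here is a small LRU-style tiering sketch in Python; the tier capacity and access pattern are invented for the example.

```python
# Minimal sketch of hot/cold tiering between a small, fast persistent-memory
# tier and a larger flash tier. Once a write lands in the PMEM tier it is
# treated as durably stored; cold blocks are aged out to flash, hot blocks
# are promoted back in. This is an illustration, not the Exadata design.

from collections import OrderedDict

class TieredStore:
    def __init__(self, pmem_capacity=2):
        self.pmem = OrderedDict()   # block_id -> data, ordered by recency
        self.flash = {}             # block_id -> data
        self.pmem_capacity = pmem_capacity

    def write(self, block_id, data):
        # RDMA-style write lands directly in persistent memory: durable here.
        self.pmem[block_id] = data
        self.pmem.move_to_end(block_id)
        self._evict_cold()

    def read(self, block_id):
        if block_id in self.pmem:          # hot hit, lowest latency
            self.pmem.move_to_end(block_id)
            return self.pmem[block_id]
        data = self.flash.pop(block_id)    # cold hit, promote back to PMEM
        self.write(block_id, data)
        return data

    def _evict_cold(self):
        # Age least-recently-used blocks out to flash; data stays durable.
        while len(self.pmem) > self.pmem_capacity:
            cold_id, cold_data = self.pmem.popitem(last=False)
            self.flash[cold_id] = cold_data

store = TieredStore(pmem_capacity=2)
for blk in ("a", "b", "c"):
    store.write(blk, f"payload-{blk}")
store.read("a")   # "a" was aged out to flash; this read promotes it back
print(sorted(store.pmem), sorted(store.flash))
```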
But with a server product like Exadata, if I have 50% faster, I can translate that into I can serve 50% more users, 50% more workload, 50% more data, or I can buy a 50% smaller system to run the same workload. So, when we talk about performance, it also means lower costs, so if big customers of ours, like banks, telecoms, retailers, et cetera, they can take that performance and turn it into better response times. They can also take that performance and turn it into lower costs, and everybody loves both of those things, so both of those are big benefits for our customers. >> Got it, thank you. Now in a move that was maybe a little bit controversial, you stated flat out that you're not going to bother to compare Exadata cloud and customer performance against AWS Outposts and Azure Stack, rather you chose to compare to RDS, Redshift, Azure SQL. Why, why was that? >> Yeah, so our Exadata runs in the public cloud. We have Exadata that runs in Cloud@Customer, and we have Exadata that runs on Prem. And Azure and Azure Stack, they have something a little more similar to Cloud@Customer. They have where they take their cloud solutions and put them in the customer data center. So when we came out with our new X8, 9M Cloud@Customer, we looked at those technologies and honestly, we couldn't even come up with a good comparison with their equivalent, for example, AWS Outpost, because those products really just don't really run. For example, the two database products that Outposts promote or that Amazon promotes is Aurora for OLTP and Redshift for analytics. Well, those two can't even run at all on their Outposts product. So, it's kind of like beating up on a child or something. (laughs) It doesn't make sense. They're out of our weight class, so we're not even going to compare against them. So we compared what we run, both in public cloud and Cloud@Customer against their best product, which is the Redshifts and the Auroras in their public cloud, which is their most scalable available products. With their equivalent Cloud@Customer, not only does it not perform, it doesn't run at all. Their Premiere products don't run at all on those platforms. >> Okay, but RDS does, right? I think, and Redshift and Azure SQL, right, will run a their version, so you compare it against those. What were the results of the benchmarks when you did made those comparisons? >> Yeah, so compared against their public cloud or Cloud@Customer, we generally get results that are something like 50 times lower latency and close to a hundred times higher analytic throughput, so it's orders of magnitude. We're not talking 50%, we're talking 50 times, so compared to those products, there really is kind of, we're in a different league. It's kind of like they're the middle school little league and we're the professional team, so it's really dramatically different. It's not even in the same league. >> All right, now you also chose to compare the X9M performance against on-premises storage systems. Why and what were those results? >> Yeah, so with the on-premises, traditionally customers bought conventional storage and that kind of stuff, and those products have advanced quite a bit. And again, those aren't optimized. 
Those aren't designed to run database, but some customers have traditionally deployed those, you know, there's less and less these days, but we do get many times faster both on OLTP and analytic performance there, I mean, with analytics that can be up to 80 times faster, so again, dramatically better, but yeah, there's still a lot of on-premise systems, so we didn't want to ignore that fact and compare only to cloud products. >> So these are like to like in the sense that they're running the same level of database. You're not playing games in terms of the versioning, obviously, right? >> Actually, we're giving them a lot of the benefit. So we're taking their published numbers that aren't even running a database, and they use these low-level benchmarking tools to generate these numbers. So, we're comparing our full end-to-end database to storage numbers against their low-level IO tool that they've published in their data sheets, so again, we're trying to give them the benefit of the doubt, but we're still orders of magnitude better. >> Okay, now another claim that caught our attention was you said that 87% of the Fortune 100 organizations run Exadata, and you're claiming many thousands of other organizations globally. Can you paint a picture of the ICP, the Ideal Customer Profile for Exadata? What's a typical customer look like, and why do they use Exadata, Juan? >> Yeah, so the ideal customer is pretty straightforward, customers that care about data. That's pretty much it. (Dave laughs) If you care about data, if you care about performance of data, if you care about availability of data, if you care about manageability, if you care about security, those are the customers that should be looking strongly at Exadata, and those are the customers that are adopting Exadata. That's why you mentioned 87% of the global Fortune 100 have already adopted Exadata. If you look at a lot of industries, for example, pretty much every major bank almost in the entire world is running Exadata, and they're running it for their mission critical workloads, things like financial trading, regulatory compliance, user interfaces, the stuff that really matters. But in addition to the biggest companies, we also have thousands of smaller companies that run it for the same reason, because their data matters to them, and it's frankly the best platform, which is why we get chosen by these very, very sophisticated customers over and over again, and why this product has grown to encompass most of the major corporations in the world and governments also. >> Now, I know Deutsche bank is a customer, and I guess now an engineering partner from the announcement that I saw earlier this summer. They're using Cloud@Customer, and they're collaborating on things like security, blockchain, machine intelligence, and my inference is Deutsch Bank is looking to build new products and services that are powered by your platforms. What can you tell us about that? Can you share any insights? Are they going to be using X9M, for example? >> Yes, Deutsche Bank is a partnership that we announced a few months ago. It's a major partnership. Deutsche Bank is one of the biggest banks in the world. They traditionally are an on-premises customer, and what they've announced is they're going to move almost the entire database estate to our Exadata Cloud@Customer platform, so they want to go with a cloud platform, but they're big enough that they want to run it in their own data center for certain regulatory reasons. 
And so, the announcement that we made with them is they're moving the vast bulk of their data estate to this platform, including their core banking, regulatory applications, so their most critical applications. So, obviously they've done a lot of testing. They've done a lot of trials and they have the confidence to make this major transition to a cloud model with the Exadata Cloud@Customer solution, and we're also working with them to enhance that product and to work in various other fields, like you mentioned, machine learning, blockchain, that kind of project also. So it's a big deal when one of the biggest, most conservative, best respected financial institution in the world says, "We're going all in on this product," that's a big deal. >> Now outside of banking, I know a number of years ago, I stumbled upon an installation or a series of installations that Samsung found out about them as a customer. I believe it's now public, but they've something like 300 Exadatas. So help us understand, is it common that customers are building these kinds of Exadata farms? Is this an outlier? >> Yeah, so we have many large customers that have dozens to hundreds of Exadatas, and it's pretty simple, they start with one or two, and then they see the benefits, themselves, and then it grows. And Samsung is probably the biggest, most successful and most respected electronics company in the world. They are a giant company. They have a lot of different sub units. They do their own manufacturing, so manufacturing's one of their most critical applications, but they have lots of other things they run their Exadata for. So we're very happy to have them as one of our major customers that run Exadata, and by the way, Exadata again, very huge in electronics, in manufacturing. It's not just banking and that kind of stuff. I mean, manufacturing is incredibly critical. If you're a company like Samsung, that's your bread and butter. If your factory stops working, you have huge problems. You can't produce products, and you will want to improve the quality. You want to improve the tracking. You want to improve the customer service, all that requires a huge amount of data. Customers like Samsung are generating terabytes and terabytes of data per day from their manufacturing system. They track every single piece, everything that happens, so again, big deal, they care about data. They care deeply about data. They're a huge Exadata customer. That's kind of the way it works. And they've used it for many years, and their use is growing and growing and growing, and now they're moving to the cloud model as well. >> All right, so we talked about some big customers and Juan, as you know, we've covered Exadata since its inception. We were there at the announcement. We've always stressed the fit in our research with mission critical workloads, which especially resonates with these big customers. My question is how does Exadata resonate with the smaller customer base? >> Yeah, so we talk a lot about the biggest customers, because honestly they have the most critical requirements. But, at some level they have worldwide requirements, so if one of the major financial institutions goes down, it's not just them that's affected, that reverberates through the entire world. There's many other customers that use Exadata. Maybe their application doesn't stop the world, but it stops them, so it's very important to them. 
And so one of the things that we've introduced in our Cloud@Customer and public cloud Exadata platforms is the ability for Oracle to manage all the infrastructure, which enables smaller customers that don't have as much IT sophistication to adopt these very mission critical technology, so that's one of the big advancements. Now, we've always had smaller customers, but now we're getting more and more. We're getting universities, governments, smaller businesses adopting Exadata, because the cloud model for adopting is dramatically simpler. Oracle does all the administration, all the low-level stuff. They don't have to get involved in it at all. They can just use the data. And, on top of that comes our autonomous database, which makes it even easier for smaller customers to adapt. So Exadata, which some people think of as a very high-end platform in this cloud model, and particularly with autonomous databases is very accessible and very useful for any size customer really. >> Yeah, by all accounts, I wouldn't debate Exadata has been a tremendous success. But you know, a lot of customers, they still prefer to roll their own, do it themselves, and when I talk to them and ask them, "Okay, why is that?" They feel it limits their reliance on a single vendor, and it gives them better ability to build what I call a horizontal infrastructure that can support say non-Oracle workloads, so what do you tell those customers? Why should those customers run Oracle database on Exadata instead of a DIY infrastructure? >> Yeah, so that debate has gone on for a lot of years. And actually, what I see, there's less and less of that debate these days. You know, initially customers, many customers, they were used to building their own. That's kind of what they did. They were pretty good at it. What we have shown customers, and when we talk about these major banks, those are the kinds of people that are really good at it. They have giant IT departments. If you look at a major bank in the world, they have tens of thousands of people in their IT departments. These are gigantic multi-billion dollar organizations, so they were pretty good at this kind of stuff. And what we've shown them is you can't build this yourself. There's so much software that we've written to integrate with the database that you just can't build yourself, it's not possible. It's kind of like trying to build your own smartphone. You really can't do it, the scale, the complexity of the problem. And now as the cloud model comes in, customers are realizing, hey, all this attention to building my own infrastructure, it's kind of last decade, last century. We need to move on to more of an as a service model, so we can focus on our business. Let enterprises that are specialized in infrastructure, like Oracle that are really, really good at it, take care of the low-level details, and let me focus on things that differentiate me as a business. It's not going to differentiate them to establish their own storage for database. That's not a differentiator, and they can't do it nearly as well as we can, and a lot of that is because we write a lot of special technology and software that they just can't do themselves, it's not possible. It's just like you can't build your own smartphone. It's just really not possible. >> Now, another area that we've covered extensively, we were there at the unveiling, as well is ZDLRA, Zero Data Loss Recovery Appliance. 
We've always liked this product, especially for mission critical workloads with near zero data loss, where you can justify that. But while we always saw it as somewhat of a niche market, first of all, is that fair, and what's new with ZDLRA? >> Yeah, ZDLRA has been in the market for a number of years. We have some of the biggest corporations in the world running on that, and one of the big benefits has been zero data loss, so again, if you care about data, you can't lose data. You can't restore to last night's backup if something happens. So if you're a bank, you can't restore everybody's data to last night. Suppose you made a deposit during the day. They're like, "Hey, sorry, Mr. Customer, your deposit, "well, we don't have any record of it anymore, "'cause we had to restore to last night's backup," you know, that doesn't work. It doesn't work for airlines. It doesn't work for manufacturing. That whole model is obsolete, so you need zero data loss, and that's why we introduced Zero Data Loss Recovery Appliance, and it's been very successful in the market. In addition to zero data loss, it actually provides much faster restore, much more reliable restores. It's more scalable, so it has a lot of advantages. With our X9M generation, we're introducing several new capabilities. First of all, it has higher capacity, so we can store more backups, keep data for longer. Another thing is we're actually dropping the price of the entry-level configuration of ZDLRA, so it makes it more affordable and more usable by smaller businesses, so that's a big deal. And then the other thing that we're hearing a lot about, and if you read the news at all, you hear a lot about ransomware. This is a major problem for the world, cyber criminals breaking into your network and holding the data for ransom. And so we've introduced what we call cyber vault capabilities in ZDLRA. They help address this ransomware issue that's kind of rampant throughout the world, so everybody's worried about that. There's now regulatory compliance for ransomware that particularly financial institutions have to conform to, and so we're introducing new capabilities in that area as well, which is a big deal. In addition, we now have the ability to have multiple ZDLRAs in a large enterprise, and if something happens to one, we automatically fail over backups to another. We can replicate across them, so it makes it, again, much more resilient with replication across different recovery appliances, so a lot of new improvements there as well. >> Now, is an air gap part of that solution for ransomware? >> No, with an air gap, you really can't air gap your backups if you're continuously streaming changes to them, so you really can't have an air gap there, but you can protect the data. There's a number of technologies to protect the data. For example, one of the things that a cyber criminal wants to do is they want to take control of your data and then get rid of your backups, so you can't restore them. So as a simple example of one thing we're doing is we're saying, "Hey, once we have the data, "you can't delete it for a certain amount of days." So you might say, "For 30 days, "I don't care who you are. "I don't care what privileges you have. "I don't care anything, I'm holding onto that data "for at least 30 days," so for example, a cyber criminal can't come in and say, "Hey, I'm going to get into the system "and delete that stuff or encrypt it," or something like that. So that's a simple example of one of the things that the cyber vault does.
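Here is a minimal Python sketch of that retention-hold idea, a backup object that refuses deletion until its hold window elapses no matter who asks. It is a concept illustration only, not the ZDLRA implementation; the names and the 30-day window are assumptions taken from the conversation.

```python
# Concept sketch of a "cyber vault" style retention hold: once a backup is
# ingested, no caller, administrator or not, can delete it until the hold
# window has elapsed. Not the ZDLRA implementation, just the idea.

from datetime import datetime, timedelta

class RetentionLockedBackup:
    def __init__(self, name, hold_days=30, now=None):
        self.name = name
        created = now or datetime.now()
        self.hold_until = created + timedelta(days=hold_days)
        self.deleted = False

    def delete(self, requested_by, is_admin=False, now=None):
        now = now or datetime.now()
        if now < self.hold_until:
            # Privileges are irrelevant while the hold is active.
            raise PermissionError(
                f"{self.name}: retention hold active until "
                f"{self.hold_until:%Y-%m-%d}, delete refused "
                f"(requested by {requested_by}, admin={is_admin})"
            )
        self.deleted = True

backup = RetentionLockedBackup("core-banking-backup", hold_days=30)
try:
    backup.delete("attacker-with-stolen-admin-creds", is_admin=True)
except PermissionError as err:
    print(err)
```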
>> So, even as an administrator, I can't change that policy? >> That's right, that's one of the goals: it doesn't matter what privileges you have, you can't change that policy. >> Does that eliminate the need for an air gap, or would you not necessarily recommend, would you just have another layer of protection? What's your recommendation on that to customers? >> We always recommend multiple layers of protection, so for example, in our ZDLRA, we support, we offload tape backups directly from the appliance, so a great way to protect the data from any kind of thing is you put it on a tape, and guess what, once that tape is filed away, I don't care what cyber criminal you are, if you're remote, you can't access that data. So, we always promote multiple layers, multiple technologies to protect the data, and tape is a great way to do that. We can also now archive. In addition to tape, we can now archive to the public cloud, to our object storage servers. We can archive to what we call our ZFS appliance, which is a very low cost storage appliance, so there's a number of secondary archive copies that we offload and implement for customers. We make it very easy to do that. So, yeah, you want multiple layers of protection. >> Got it, okay, your tape is your ultimate air gap. ZDLRA is your low RPO device. You've got cloud kind of in the middle, maybe that's your cheap and deep solution, so you have some options. >> Juan: Yes. >> Okay, last question. Summarize the announcement. If you had to mention two or three takeaways from the X9M announcement for our audience today, what would you choose to share? >> I mean, it's pretty straightforward. It's the new generation. It's significantly faster for OLTP, for analytics, significantly better consolidation, more cost-effective. That's the big picture. Also there's a lot of software enhancements to make it better, improve the management, make it more usable, make disaster recovery better. I talked about some of these cyber vault capabilities, so it's improved across all the dimensions, and not in small ways, in big ways. We're talking 50% improvements, 80% improvements. That's a big change, and also we're keeping the price the same, so when you get a 50 or 80% improvement, we're not increasing the price to match that, so you're getting much better value as well. And that's pretty much what it is. It's the same product, even better. >> Well, I love this cadence that we're on. We love having you on these video exclusives. We have a lot of Oracle customers in our community, so we appreciate you giving us the inside scoop on these announcements. Always a pleasure having you on theCUBE. >> Thanks for having me. It's always fun to be with you, Dave. >> All right, and thank you for watching. This is Dave Vellante for theCUBE, and we'll see you next time. (bright music)
Kevin Deierling, NVIDIA and Scott Tease, Lenovo | CUBE Conversation, September 2020
>> Narrator: From theCUBE studios in Palo Alto in Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Hi, I'm Stu Miniman, and welcome to a CUBE conversation. I'm coming to you from our Boston Area studio. And we're going to be digging into some interesting news regarding networking. Some important use cases these days, in 2020, of course, AI is a big piece of it. So happy to welcome to the program. First of all, I have one of our CUBE alumni, Kevin Deierling. He's the Senior Vice President of Marketing with Nvidia, part of the networking team there. And joining him is Scott Tease, someone we've known for a while, but first time on the program, who's the General Manager of HPC and AI, for the Lenovo Data Center Group. Scott and Kevin, thanks so much for joining us. >> It's great to be here Stu. >> Yeah, thank you. >> Alright, so Kevin, as I said, you you've been on the program a number of times, first when it was just Mellanox, now of course the networking team, there's some other acquisitions that have come in. If you could just set us up with the relationship between Nvidia and Lenovo. And there's some news today that we're here to talk about too. So let's start getting into that. And then Scott, you'll jump in after Kevin. >> Yeah, so we've been a long time partner with Lenovo, on our high performance computing. And so that's the InfiniBand piece of our business. And more and more, we're seeing that AI workloads are very, very similar to HPC workloads. And so that's been a great partnership that we've had for many, many years. And now we're expanding that, and we're launching a OEM relationship with Lenovo, for our Ethernet switches. And again, with our Ethernet switches, we really take that heritage of low latency, high performance networking that we built over many years in HPC, and we bring that to Ethernet. And of course that can be with HPC, because frequently in an HPC supercomputing environment, or in an AI supercomputing environment, you'll also have an Ethernet network, either for management, or sometimes for storage. And now we can offer that together with Lenovo. So it's a great partnership. We talked about it briefly last month, and now we're coming to market, and we'll be able to offer this to the market. >> Yeah, yeah, Kevin, we're super excited about it here in Lenovo as well. We've had a great relationship over the years with Mellanox, with Nvidia Mellanox. And this is just the next step. We've shown in HPC that the days of just taking an Ethernet card, or an InfiniBand card, plugging it in the system, and having it work properly are gone. You really need a system that's engineered for whatever task the customer is going to use. And we've known that in HPC for a long time, as we move into workloads, like artificial intelligence, where networking is a critical aspect of getting these systems to communicate with one another, and work properly together. We love from HPC perspective, to use InfiniBand, but most enterprise clients are using Ethernet. So where do we go? We go to a partner that we've trusted for a very long time. And we selected the Nvidia Mellanox Ethernet switch family. And we're really excited to be able to bring that end-to-end solution to our enterprise clients, just like we've been doing for HPC for a while. >> Yeah, well Scott, maybe if you could. I'd love to hear a little bit more about kind of that customer demand that those usages there. 
So you think traditionally, of course, is supercomputing, as you both talked about that move from InfiniBand, to leveraging Ethernet, is something that's been talked about for quite a while now in the industry. But maybe that AI specifically, could you talk about what are the networking requirements, how similar is it? Is it 95% of the same architecture, as what you see in HPC environments? And also, I guess the big question there is, how fast are customers adopting, and rolling out those AI solutions? And what kind of scale are they getting them to today? >> So yeah, there's a lot there of good things we can talk about. So I'd say in HPC, the thing that we've learned, is that you've got to have a fabric that's up to the task. When you're testing an HPC solution, you're not looking at a single node, you're looking at a combination of servers, and storage, management, all these things have to come together, and they come together over InfiniBand fabric. So we've got this nearly a purpose built fabric that's been fantastic for the HPC community for a long time. As we start to do some of that same type of workload, but in an enterprise environment, many of those customers are not used to InfiniBand, they're used to an Ethernet fabric, something that they've got all throughout their data center. And we want to try to find a way to do was, bring a lot of that rock solid interoperability, and pre-tested capability, and bring it to our enterprise clients for these AI workloads. Anything high performance GPUs, lots of inner internode communications, worries about traffic and congestion, abnormalities in the network that you need to spot. Those things happen quite often, when you're doing these enterprise AI solutions. You need a fabric that's able to keep up with that. And the Nvidia networking is definitely going to be able to do that for us. >> Yeah well, Kevin I heard Scott mention GPUs here. So this kind of highlights one of the reasons why we've seen Nvidia expand its networking capabilities. Could you talk a little bit about that kind of expansion, the portfolio, and how these use cases really are going to highlight what Nvidia helped bring to the market? >> Yeah, we like to really focus on accelerated computing applications. And whether those are HPC applications, or now they're becoming much more broadly adopted in the enterprise. And one of the things we've done is, tight integration at a product level, between GPUs, and the networking components in our business. Whether that's the adapters, or the DPU, the data processing unit, which we've talked about before. And now even with the switches here, with our friends at Lenovo, and really bringing that all together. But most importantly, is at a platform level. And by that I mean the software. And the enterprise here has all kinds of different verticals that are going after. And we invest heavily in the software ecosystem that's built on top of the GPU, and the networking. And by integrating all of that together on a platform, we can really accelerate the time to market for enterprises that wants to leverage these modern workloads, sort of cloud native workloads. >> Yeah, please Scott, if you have some follow up there. >> Yeah, if you don't mind Stu, I just like to say, five years ago, the roadmap that we followed was the processor roadmap. We all could tell you to the week when the next Xeon processor was going to come out. And that's what drove all of our roadmaps. 
Since that time what we found is that the items that are making the radical, the revolutionary improvements in performance, they're attached to the processor, but they're not the processor itself. It's things like, the GPU. It's things like that, especially networking adapters. So trying to design a platform that's solely based on a CPU, and then jam these other items on top of it. It no longer works, you have to design these systems in a holistic manner, where you're designing for the GPU, you're designing for the network. And that's the beauty of having a deep partnership, like we share with Nvidia, on both the GPU side, and on the networking side, is we can do all that upfront engineering to make sure that the platform, the systems, the solution, as a whole works exactly how the customer is going to expect it to. >> Kevin, you mentioned that a big piece of this is software now. I'm curious, there's an interesting piece that your networking team has picked up, relatively recently, that the Cumulus Linux, so help us understand how that fits into the Ethernet portfolio? And would it show up in these kind of applications that we're talking about? >> Yeah, that's a great question. So you're absolutely right, Cumulus is integral to what we're doing here with Lenovo. If you looked at the heritage that Mellanox had, and Cumulus, it's all about open networking. And what we mean by that, is we really decouple the hardware, and the software. So we support multiple network operating systems on top of our hardware. And so if it's, for example, Sonic, or if it's our Onyx or Dents, which is based on switch def. But Cumulus who we just recently acquired, has been also on that same access of open networking. And so they really support multiple platforms. Now we've added a new platform with our friends at Lenovo. And really they've adopted Cumulus. So it is very much centered on, Enterprise, and really a cloud like experience in the Enterprise, where it's Linux, but it's highly automated. Everything is operationalized and automated. And so as a result of that, you get sort of the experience of the cloud, but with the economics that you get in the Enterprise. So it's kind of the best of both worlds in terms of network analytic, and all of the ability to do things that the cloud guys are doing, but fully automated, and for an Enterprise environment. >> Yeah, so Kevin, I mean, I just want to say a few things about this. We're really excited about the Cumulus acquisition here. When we started our negotiations with Mellanox, we were still planning to use Onyx. We love Onyx, it's been our IB nodes of choice. Our users love, our are architects love it. But we were trying to lean towards a more open kind of futuristic, node as we got started with this. And Cumulus is really perfect. I mean it's a Linux open source based system. We love open source in HPC. The great thing about it is, we're going to be able to take all the great learnings that we've had with Onyx over the years, and now be able to consolidate those inside of Cumulus. We think it's the perfect way to start this relationship with Nvidia networking. >> Well Scott, help us understand a little more. What you know what does this expansion of the partnership mean? If you're talking about really the full solutions that Lenovo opens in the think agile brand, as well as the hybrid and cloud solutions. 
Is this something then that, is it just baked into the solution, is it a reseller, what should customers, and your your channel partners understand about this? >> Yeah, so any of the Lenovo solutions that require a switch to perform the functionality needed across the solution, are going to show up with the networking from Nvidia inside of it. Reasons for that, a couple of reasons. One is even something as simple as solution management for HPC, the switch is so integral to how we do all that, how we push all those functions down, how we deploy systems. So you've got to have a switch, in a connectivity methodology, that ensures that we know how to deploy these systems. And no matter what scale they are, from a few systems up, to literally thousands of systems, we've got something that we know how to do. Then when we're we're selling these solutions, like an SAP solution, for instance. The customer is not buying a server anymore, they're buying a solution, they're buying a functionality. And we want to be able to test that in our labs to ensure that that system, that rack, leaves our factory ready to do exactly what the customer is looking for. So any of the systems that are going to be coming from us, pre configured, pre tested, are all going to have Nvidia networking inside of them. >> Yeah, and I think that's, you mentioned the hybrid cloud. I think that's really important. That's really where we cut our teeth first in InfiniBand, but also with our Ethernet solutions. And so today, we're really driving a bunch of the big hyper scalars, as well as the big clouds. And as you see things like SAP or Azure, it's really important now that you're seeing Azure stack coming into a hybrid environment, that you have the known commodity here. So we're something that we're built in to many of those different platforms, with our Spectrum ASIC, as well as our adapters. And so now the ability with Nvidia, and Lenovo together, to bring that to enterprise customers, is really important. I think it's a proven set of components that together forms a solution. And that's the real key, as Scott said, is delivering a solution, not just piece parts, we have a platform, that software, hardware, all of it integrated. >> Well, it's great to see you. We've had an existing partnership for a while. I want to give you both the opportunity, anything specific, you've been hearing kind of the customer demand leading up this. Is it people that might be transitioning from InfiniBand to Ethernet? Or is it just general market adoption of new solutions that you have out there? (speakers talk over each other) >> You go ahead and start. >> Okay, so I think that there's different networks for different workloads, is what we've seen. And InfiniBand certainly is going to continue to be the best platform out there for HPC, and often for AI. But as Scott said, the enterprise frequently is not familiar with that, and for various reasons, would like to leverage Ethernet. So I think we'll see two different cases, one where there's Ethernet with an InfiniBand network. And the other is for new enterprise workloads that are coming, that are very AI centric, modern workloads, sort of cloud native workloads. You have all of the infrastructure in place with our Spectrum ASICs, and our Connectx adapters, and now integrated with GPUs, that we'll be able to deliver solutions rather than just compliments. And that's the key. 
>> Yeah, I think Stu, a great example, I think of where you need that networking, like we've been used to an HPC, is when you start looking at deep learning in training, scale out training. A lot of companies have been stuck on a single workstation, because they haven't been able to figure out how to spread that workload out, and chop it up, like we've been doing in HPC, because they've been running into networking issues. They can't run over an unoptimized network. With this new technology, we're hoping to be able to do a lot of the same things that HPC customers take for granted every day, about workload management, distribution of workload, chopping jobs up into smaller portions, and feeding them out to a cluster. We're hoping that we're going to be able to do those exact same things for our enterprise clients. And it's going to look magical to them, but it's the same kind of thing we've been doing forever. With Mellanox, in the past, now Nvidia networking, we're just going to take that to the enterprise. I'm really excited about it. >> Well, it's so much flexibility. We used to look at, it would take a decade to roll out some new generations. Kevin, if you could just give us latest speeds and feeds. If I look at Ethernet, did I see that this has from n gig, all the way up to 400 gig? I think I lose track a little bit of some of the pieces. I know the industry as a whole is driving it. But where are we with the general customer adoption of some of the some of the speeds today? >> Yeah indeed, we're coming up on the 40th anniversary of the first specification of Ethernet. And we're about 4000 times faster now, 40,000 times faster at 400 gigabits, versus 10 megabits. So yeah, we're shipping today at the adapter level, 100 gig, and even 200 gig. And then at the switch level, 400 gig. And people sort of ask, "Do we really need all that performance?" The answer is absolutely. So the amount of data that the GPU can crunch, and these AI workloads, these giant neural networks, it needs massive amounts of data. And then as you're scaling out, as Scott was talking about, much along the lines of InfiniBand Ethernet needs that same level of performance, throughput, latency and offloads, and we're able to deliver. >> Yeah, so Kevin, thank you so much. Scott, I want to give you a final word here. Anything else you want your customers to understand regarding this partnerships? >> Yeah, just a quick one Stu, quick one. So we've been really fortunate in working really closely with Mellanox over the years, and with Nvidia. And now the two together, we're just excited about what the future holds. We've done some really neat things in HPC, with being one of the first watercool an InfiniBand card. We're one of the first companies to deploy Dragonfly topology. We've done some unique things where we can share a single IP adapter, across multiple users. We're looking forward to doing a lot of that same exact kind of innovation, inside of our systems as we look to Ethernet. We often think that as speeds of Ethernet continue to go higher, we may see more and more people move from InfiniBand to Ethernet. I think that now having both of these offerings inside of our lineup, is going to make it really easy for customers to choose what's best for them over time. So I'm excited about the future. >> Alright, well Kevin and Scott, thank you so much. Deep integration and customer choice, important stuff. Thank you so much for joining us. >> Thank you Stu. >> Thanks Stu. >> Alright, I'm Stu Miniman, and thank you. 
Thanks for watching theCUBE. (upbeat music)
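To make the scale-out training Scott describes concrete, here is a minimal PyTorch sketch that spreads one training job across several servers using the NCCL backend, which can run over RoCE-capable Ethernet as well as InfiniBand. The hostnames, interface hints, and launch flags in the comments are placeholders; a real job would add a dataset, checkpointing, and error handling.

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train():
    # NCCL handles the inter-node collectives; over RoCE or InfiniBand it can
    # use RDMA, so gradient exchange largely bypasses the host CPU.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(100):
        x = torch.randn(64, 4096, device=local_rank)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()          # all-reduce of gradients across every node
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    # Typically launched on each node with torchrun, e.g.:
    #   torchrun --nnodes=4 --nproc_per_node=8 \
    #       --rdzv_endpoint=head-node:29500 train.py
    # NCCL_SOCKET_IFNAME / NCCL_IB_HCA can be set to steer traffic onto the
    # high-speed fabric (interface names here are placeholders).
    train()
```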
Paresh Kharya & Kevin Deierling, NVIDIA | HPE Discover 2020
>> Narrator: From around the global its theCUBE, covering HPE Discover Virtual Experience, brought to you by HPE. >> Hi, I'm Stu Miniman and this is theCUBE's coverage of HPE, discover the virtual experience for 2020, getting to talk to Hp executives, their partners, the ecosystem, where they are around the globe, this session we're going to be digging in about artificial intelligence, obviously a super important topic these days. And to help me do that, I've got two guests from Nvidia, sitting in the window next to me, we have Paresh Kharya, he's director of product marketing and sitting next to him in the virtual environment is Kevin Deierling, who is this senior vice president of marketing as I mentioned both with Nvidia. Thank you both so much for joining us. >> Thank you, so great to be here. >> Great to be here. >> All right, so Paresh when you set the stage for us? AI, obviously, one of those mega trends to talk about but just, give us the stages, where Nvidia sits, where the market is, and your customers today, that they think about AI. >> Yeah, so we are basically witnessing a massive changes that are happening across every industry. And it's basically the confluence of three things. One is of course, AI, the second is 5G and IOT, and the third is the ability to process all of the data that we have, that's now possible. For AI we are now seeing really advanced models, from computer vision, to understanding natural language, to the ability to speak in conversational terms. In terms of IOT and 5G, there are billions of devices that are sensing and inferring information. And now we have the ability to act, make decisions in various industries, and finally all of the processing capabilities that we have today, at the data center, and in the cloud, as well as at the edge with the GPUs as well as advanced networking that's available, we can now make sense all of this data to help industrial transformation. >> Yeah, Kevin, you know it's interesting when you look at some of these waves of technology and we say, "Okay, there's a lot of new pieces here." You talk about 5G, it's the next generation but architecturally some of these things remind us of the past. So when I look at some of these architectures, I think about, what we've done for high performance computing for a long time, obviously, you know, Mellanox, where you came from through NVIDIA's acquisition, strong play in that environment. So, maybe give us a little bit compare, contrast, what's the same, and what's different about this highly distributed, edge compute AI, IOT environment and what's the same with what we were doing with HPC in the past. >> Yeah, so we've--Mellanox has now been a part of Nvidia for a little over a month and it's great to be part of that. We were both focused on accelerated computing and high performance computing. And to do that, what it means is the scale and the type of problems that we're trying to solve are just simply too large to fit into a single computer. So if that's the case, then you connect a lot of computers. And Jensen talked about this recently at the GTC keynote where he said that the new unit computing, it's really the data center. So it's no longer the box that sits on your desk or even in Iraq, it's the entire data center because that's the scale of the types of problems that we're solving. And so the notion of scale up and scale out, the network becomes really, really critical. And we're doing high-performance networking for a long time. 
When you move to the edge, instead of having, a single data center with 10,000 computers, you have 10,000 data centers, each of which as a small number of servers that is processing all of that information that's coming in. But in a sense, the problems are very, very similar, whether you're at the edge or you're doing massive HPC, scientific computing or cloud computing. And so we're excited to be part of bringing together the AI and the networking because they are really optimizing at the data center scale across the entire stack. >> All right, so it's interesting. You mentioned, Nvidia CEO, Jensen. I believe if I saw right in there, he actually could, wrote a term which I had not run across, it was the data processing unit or DPU in that, data center, as you talked about. Help us wrap our heads around this a little bit. I know my CPU, when I think about GPUs, I obviously think of Nvidia. TPUs, in the cloud and everything we're doing. So, what is DPUs? Is this just some new AI thing or, is this kind of a new architectural model? >> Yeah. I think what Jensen highlighted is that there's three key elements of this accelerated disaggregated infrastructure that the data center has becoming. And so that's the CPU, which is doing traditional single threaded workloads but for all of the accelerated workloads, you need the GPU. And that does massive parallelism deals with massive amounts of data, but to get that data into the GPU and also into the CPU, you need really an intelligent data processing because the scale and scope of GPUs and CPUs today, these are not single core entities. These are hundreds or even thousands of cores in a big system. And you need to steer the traffic exactly to the right place. You need to do it securely. You need to do it virtualized. You need to do it with containers and to do all of that, you need a programmable data processing unit. So we have something called our BlueField, which combines our latest, greatest, 100 gig and 200 gig network connectivity with Arm processors and a whole bunch of accelerators for security, for virtualization, for storage. And all of those things then feed these giant parallel engines which are the GPU. And of course the CPU, which is really the workload at the application layer for non-accelerated outs. >> Great, so Paresh, Kevin talked about, needing similar types of services, wherever the data is. I was wondering if you could really help expand for us a little bit, the implications of it AI at the edge. >> Sure, yeah, so AI is basically not just one workload. 
AI is many different types of models and AI also means training as well as inferences, which are very different workloads or AI printing, for example, we are seeing the models growing exponentially, think of any AI model, like a brain of a computer or like a brain, solving a particular use case a for simple models like computer vision, we have models that are smaller, bugs have computer vision but advanced models like natural language processing, they require larger brains or larger models, so on one hand we are seeing the size of the AI models increasing tremendously and in order to train these models, you need to look at computing at the scale of data center, many processors, many different servers working together to train a single model, on the other hand because of these AI models, they are so accurate today from understanding languages to speaking languages, to providing the right recommendations whether it's for products or for content that you may want to consume or advertisements and so on. These models are so effective and efficient that they are being powered by AI today. These applications are being powered by AI and each application requires a small amount of acceleration, so you need the ability to scale out or, and support many different applications. So with our newly launched MPR architecture, just couple of weeks to go that Jensen announced, in the virtual keynote for the first time, we are now able to provide both, scale up and scale out both training data analytics as well as imprints on the single architecture and that's very exciting. >> Yeah, so look at that. The other thing that's interesting is you're talking about at the edge and scale out versus scale up, the networking is critical for both of those. And there's a lot of different workloads. And as Paresh was describing, you've got different workloads that require different amounts of GPU or storage or networking. And so part of that vision of this data center as the computer is that, the DPU lets you scale independently, everything. So you can compose, you desegregate into DPUs and storage and CPUs, and then you compose exactly the computer that you need on the fly container, right, to solve the problem that you're solving right now. So these new way of programming is programming the entire data center at once and you'll go grab all of it and it'll run for a few hundred milliseconds even and then it'll come back down and recompose itself onsite. And to do that, you need this very highly efficient networking infrastructure. And the good news is we're here at HPE Discover. We've got a great partner with HPE. You know, they have our M series switches that uses the Mellanox hundred gig and now even 200 and 400 gig ethernet switches, we have all of our adapters and they have great platforms. The Apollo platform for example, is break for HPC and they have other great platforms that we're looking at with the new telco that we're doing or 5G and accelerating that. >> Yeah, and on the edge computing side, there's the edge line set of products which are very interesting, the other sort of aspect that I wanted to touch upon, is the whole software stack that's needed for the edge. So edge is different in the sense that it's not centrally managed, the edge computing devices are distributed remote locations. And so managing the workflow of running and updating software on it is important and needs to be done in a very secure manner. 
The second thing that's, that's very different again, for the edges, these devices are going to require connectivity. As Kevin was pointing out, the importance of networking so we also announced, a couple of weeks ago at our GTC, our EGX product that combines the Mellanox NIC and our GPUs into a single a processor, Mellanox NIC provides a fast connectivity, security, as well as the encryption and decryption capabilities, GPUs provide acceleration to run the advanced DI models, that are required for applications at the edge. >> Okay, and if I understood that, right. So, you've got these throughout the HPE the product line, HPE's got long history of making, flexible configurations, I remember when they first came out with a Blade server it was, different form factors, different connectivity options, they pushed heavily into composable infrastructure. So it sounds like this is just a kind of extending, you know, what HP has been doing for a couple of decades. >> Yeah, I think HP is a great partner there and these new platforms, the EGX, for example that was just announced, a great workload there is a 5G telco. So we'll be working with our friends at HPE to take that to market as well. And, you know, really, there's a lot of different workloads and they've got a great portfolio of products across the spectrum from regular servers. And 1U, 2U, and then all the way up to their big Apollo platform. >> Well I'm glad you brought up telco, I'm curious, are there any specific, applications or workloads that, where the low hanging fruit or the kind of the first targets that you use for AI acceleration? >> Yeah, so you know, the 5G workload is just awesome. We're introduced with the EGX, a new platform called Ariel which is a programming framework and there were lots of partners there that were part of that, including, folks like Ericsson. And the idea there is that you have a software defined hardware accelerated radio area network, so a cloud RAM and it really has all of the right attributes of the cloud and what's nice there is now you can change on the fly, the algorithms that you're using for the baseband codex without having to go climb a radio tower and change the actual physical infrastructure. So that's a critical part. Our role in that, on the networking side, we introduced the technology that's part of EGX then are connected, It's like the DX adapter, it's called 5T for 5G. And one of the things that happens is you need this time triggered transport or a telco technology. That's the 5T's for 5G. And the reason is because you're doing distributed baseband unit, distributed radio processing and the timing between each of those server nodes needs to be super precise, 20 nanosecond. It's something that simply can't be done in software. And so we did that in hardware. So instead of having an expensive FPGA, I try to synchronize all of these boxes together. We put it into our NIC and now we put that into industry standard servers HP has some fantastic servers. And then with the EGX platform, with that we can build, really scale out software to client cloud RAM. >> Awesome, Paresh, anything else on the application side you'd like to add in just about what Kevin spoke about. >> Oh yeah, so from application perspective, every industry has applications that touch on edge. If you take a look at the retail, for example, there is, you know, all the way from supply chain to inventory management, to keeping the right stock units in the shelves, making sure there is a there is no slippage or shrinkage. 
So to telecom, to healthcare, we are re-looking at constantly monitoring patients and taking actions for the best outcomes to manufacturing. We are looking to automate production detecting failures much early on in the production cycle and so on every industry has different applications but they all use AI. They can all leverage the computing capabilities and high-speed networking at the edge to transform their business processes. >> All right, well, it's interesting almost every time we've talked about AI, networking has come up. So, you know, Kevin, I think that probably ease up a little bit why, Nvidia, spent around $7 billion for the acquisition of Mellanox and not only was it the Mellanox acquisition, Cumulus Networks, very known in the network space for software defined really, operating system for networking but give us strategically, does this change the direction of Nvidia, how should we be thinking about Nvidia in the overall network? >> Yeah, I think the way to think about it is going back to that data center as the computer. And if you're thinking about the data center as computer then networking becomes the back plane, if you will of that data center computer and having a high performance network is really critical. And Mellanox has been a leader in that for 20 years now with our InfiniBand and our Ethernet product. But beyond that, you need a programmatic interface because one of the things that's really important in the cloud is that everything is software defined and it's containerized now and there is no better company in the world then Cumulus, really the pioneer and building Cumulus clinics, taking the Linux operating system and running that on multiple homes. So not just hardware from Mellanox but hardware from other people as well. And so that whole notion of an open networking platform more committed to, you need to support that and now you have a programmatic interface that you can drop containers on top of, Cumulus has been the leader in the Linux FRR, it's Free Range Routing, which is the core routing algorithm. And that really is at the heart of other open source network operating systems like Sonic and DENT so we see a lot of synergy here, all the analytics that Cumulus is bringing to bear with NetQ. So it's really great that they're going to be part here of the Nvidia team. >> Excellent, well thank you both much. Want to give you the final word, what should they do, HPE customers in their ecosystem know about the Nvidia and HPE partnership? >> Yeah, so I'll start you know, I think HPE has been a longtime partner and a customer of ours. If you have accelerated workloads, you need to connect those together. The HPE server portfolio is an ideal place. We can combine some of the work we're doing with our new amp years and existing GPUs and then also to connect those together with the M series, which is their internet switches that are based on our spectrum switch platforms and then all of the HPC related activities on InfiniBand, they're a great partner there. And so all of that, pulling it together, and now as at the edge, as edge becomes more and more important, security becomes more and more important and you have to go to this zero trust model, if you plug in a camera that's somebody has at the edge, even if it's on a car, you can't trust it. So everything has to become, validated authenticated, all the data needs to be encrypted. And so they're going to be a great partner because they've been a leader and building the most secure platforms in the world. 
>> Yeah and on the data center, server, portfolio side, we really work very closely with HP on various different lines of products and really fantastic servers from the Apollo line of a scale up servers to synergy and ProLiant line, as well as the Edgeline for the edge and on the super computing side with the pre side of things. So we really work to the fullest spectram of solutions with HP. We also work on the software side, wehere a lot of these servers, are also certified to run a full stack under a program that we call NGC-Ready so customers get phenomenal value right off the bat, they're guaranteed, to have accelerated workloads work well when they choose these servers. >> Awesome, well, thank you both for giving us the updates, lots happening, obviously in the AI space. Appreciate all the updates. >> Thanks Stu, great to talk to you, stay well. >> Thanks Stu, take care. >> All right, stay with us for lots more from HPE Discover Virtual Experience 2020. I'm Stu Miniman and thank you for watching theCUBE. (bright upbeat music)
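To make the edge software stack discussion concrete, one common way each edge application gets its small amount of acceleration is by scheduling containers that request a GPU from Kubernetes. The sketch below uses the Kubernetes Python client for that request; the image name and namespace are placeholders, and it assumes the cluster already runs the NVIDIA device plugin rather than any EGX-specific API.

```python
from kubernetes import client, config

def launch_inference_pod():
    config.load_kube_config()  # or load_incluster_config() when run inside the cluster

    container = client.V1Container(
        name="edge-inference",
        image="registry.example.com/edge/inference:latest",  # placeholder image
        resources=client.V1ResourceRequirements(
            # The device plugin exposes GPUs as an extended resource, so one
            # GPU is requested the same way CPU or memory would be.
            limits={"nvidia.com/gpu": "1"}
        ),
    )
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="edge-inference", labels={"app": "edge-inference"}),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="edge", body=pod)

if __name__ == "__main__":
    launch_inference_pod()
```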
Craig Hibbert, Vcinity | CUBE Conversation, March 2020
>> Narrator: From the SiliconANGLE Media office in Boston, Massachusetts, it's theCUBE. Now, here's your host, Dave Vellante. >> Hello everyone, and welcome to this special presentation. We're going to introduce you to a new kind of company. First, you might recall we've been reporting extensively on multi-cloud and the need to create consistent experiences across clouds at high performance. Now, a key to that outcome is the ability to leave data in place, where it belongs, not moving it around, and bringing a cloud-like experience to that data. We've talked about Kubernetes as a multi-cloud enabler, but it's an insufficient condition for success. Latency matters, in fact it's critical, and the ability to access data at high speeds, wherever that data lives, will, we believe, be a fundamental tenet of multi-cloud. Now, today I want to introduce you to a company called Vcinity, V-C-I-N-I-T-Y. The simplest way to think of this company is they turn wide area networks into a global LAN. And with me to talk about this is Craig Hibbert. He's the VP at Vcinity. Craig, good to see you again. >> Thanks a lot, Dave, good to be back. >> So when I first heard about this company, I said, wow, no, it can't, that's breaking the laws of physics. So first of all, tell me a little bit of background about the company. >> Sure, yeah, absolutely. So about two decades ago this company was formerly known as Bay Microsystems. They were asked to come up with a solution specific for the United States military, and there were a couple of people involved in that tender. Fortunately for us, Bay Microsystems prevailed, and they've had their solution in place with the US military for well over a decade, approaching two decades. So that is the foundation, that is the infrastructure of where we originated. >> So did I get it right, kind of, in terms of what you do? Can you add some color to that? >> Yeah, yeah, as much as I can, right, based on who the main consumer is. So we do some very creative things, where we take the benefits of TCP/IP, which is the retransmit, the ability to ensure the data arrives there in one piece, but we take away all the bad things with it, things like dropping packets, typically on lossy networks. And most people are accustomed to Fibre Channel networks, which of course are lossless, right? And so what we've done is take the beauty of TCP/IP but remove the hindrances to it, and that's how we get it to function at the same speeds as a LAN, over a WAN. >> But there's got to be more to it than that. I mean, it just sounds like magic, right? So you're able to leave data in place and access it at very low latency, very high speeds. So, you know, what's the secret sauce behind that? Is it, you know, architecture, patents, I mean? >> Yeah, absolutely. So we have over 30 unique patents that contribute to that. We're not just doing those things that I talked about before, it's a lot more. We're actually shortcutting the typical OSI stack, the moving through those layers, and using RDMA. A lot of companies use that today, obviously InfiniBand uses it in between the nodes, Dell uses it, HP does, it's a very ubiquitous technology, but typically it has a very short span. It's designed for low latency, it has a 21-foot limitation. There's certain things you can do to get around that now. So what we did in our earlier iterations is extend that, so you could go across the world, by utilizing that inside a proprietary sort of Layer 2 tunneling protocol that allows you to reinstate those calls that happened on the local side and bring them up on the other side of the world. >> So presumably that sets up for RoCE? >> It does, yeah, and RoCE v2, absolutely. So we use that, we use RDMA over Converged Ethernet. We can do some magical things, where we can go in InfiniBand and potentially come out RoCE at the other end. There's a lot of really good things that we do. Obviously InfiniBand's expensive; Converged Ethernet is a lot more feasible and a lot easier to adopt. >> OK, let me make sure I understand this. So you think InfiniBand, you're thinking, you know, in a data center, you know, proximate, short, synchronous distances. Are you saying that you can extend that? >> We can, but-- >> Not extending InfiniBand, but you're saying you can translate it into Ethernet? >> Yeah, yeah, we translate it. We have some proprietary mechanisms, obviously, that we hold the patents on, but in essence that's exactly what we're doing. In the earlier years we would take InfiniBand and extend that to wherever it needed to be, over any distance, and now we do it with Converged Ethernet at InfiniBand-like speeds. >> Yeah, so obviously, you've got that--you can't get around physics. >> Oh, I mean, for instance, between our Maryland office and our San Jose office it's a 60 millisecond RTT. We can't get beyond that, we can't cheat physics, but what we can do is deliver sometimes a 20x payload inside that same RTT. So in essence you could argue that we beat the speed of light by delivering a higher payload. >> So what's the trade-off? I mean, there's got to be something here. >> Yeah, so today it's not ideal for every single situation. If you were to do a transactional OLTP database from one side of the world to the other, it would not be great for that. >> Something like files? >> Yeah, so what we actually do, I mean, some great examples we have is seismic data. We have some companies that are doing seismic exploration, and it used to take a lot of time to bring that data back to shore, copy it to a disk array, and then, you know, copy it to multiple disk arrays across the world so people can analyze it. In that particular use case, we bring that data back, we can even access it via satellite directly from the boats that are doing the surveys, and then we can have multiple people around the world looking at that sample live. We do a demonstration for our customers that shows that. So that's one great example of time to market and getting ahead of your competition. >> What's the file system underneath? >> So we have a choice of different file systems. It's a parallel file system. We chose Spectrum Scale, it's a very ubiquitous file system, it's well known, there is no other file system that has the hours of runtime that that has. We obfuscate the complexities from the customers, we do all of the tuning, so it's a custom solution, and they don't see it. But we do have some of the hyperscalers that want to use Lustre and Gluster and BeeGFS and things, and we can accommodate those. So you have a choice, but the preferred one is GPFS, it's a custom one we have. If somebody wants to use another one, absolutely, we have done that and can certainly have dialogues around it. >> Can you talk about how this is different from competitors? I think of, like, guys doing WAN acceleration. >> Sure, sure, yeah. So WAN acceleration, regardless of who you are today, it's predicated upon caching, substantial caching, and some of the problems with that are, obviously, once you turn on encryption, that compression and those deduplication or data reduction technologies are hampered in that caching. Based on who our primary customer was, we're handed encrypted data from them, we encrypt it as well, so we have double layers of encrypted data, and that does not affect our performance. So massive underlying technological differences that allow you to adapt to the modern world with encrypted data. >> So we've been talking about, I said in the intro, a lot about multi-cloud. Can you tell us where do you fit in? But first of all, how do you see that evolving? >> Sure. >> And where do you guys fit in? >> So I actually read your article before we had a dialogue last week, and it was a good article talking about the complexities around multi-cloud, and I think, you know, you look at Google, it's got some refactoring involved in it. They're all great approaches. We think the best way to deal with multi-cloud today is to hold your data yourself and bring those services that you want to it, and before we came along, you couldn't do that. So think now of a movie studio. We have a company in California that needs people working on video editing across the world, and typically they would proliferate multiple copies out to storage in India and China and Australia, and not only is that costly, but it's incredibly time-consuming, and in one of those instances it opened up security holes, and the movies were getting hacked and stolen, and of course that's billions of dollars worth of damage to any movie company. So by having one set of security tenets in your physical place, you can now bring anybody you want to consume that data, bring them all together, be it GCP, AWS, Azure for the compute, and you maintain your data. And that segues well into things like GDPR and things like that, where the data isn't moving, so you're not affected by those rules and regulations. The data stays in one place. We think it's a huge advantage. >> So has that helped you get some business, I mean, the fact that you don't have to move data and you can keep it in place? Can you give us an example? >> Yeah, absolutely it has. I mean, if you think of companies like pharmaceutical companies that have a lot of data to process, whether it's electron microscopy data, nano tissue samples, they need heavy iron to do that, we're talking Crays. So we can facilitate the ability to rent out supercomputers, and the security people at the pharmas are happy to do that because the data's not leaving the four walls; we present the data and run it live, because we're getting LAN speeds, right, we're giving you LAN-speed performance over the WAN, so it's possible. We've actually done it for them to do that. The Cray folks make money by renting, the pharmas are happy because they can't afford Crays, and it's a great way to accelerate time to market. In that case they're making drugs specific for your genome, specific for your body tissue, so the efficacy of the drugs is greatly improved as well. >> Well, as you know, we know the storage business. Primary storage right now is, I've said, it's a knife fight, and the cloud is eating away at it. Flash was injected and gave people a lot of headroom, and they're not buying spindles for performance anymore, but data protection and backup and data management is really taking off. Do you guys fit in there? Are there use cases for you there? When you think of companies like Cohesity and Rubrik and many others, the cloud seems to be a tailwind for them. Is it a tailwind for you? >> I think so, and I think you just brought up a great point. If you look at, and again, another one of your articles, I'm giving you some thanks here, the article you wrote I thought was excellent about how data has changed. It's not so much about the primary data now, it's about the backup data, and what Rubrik and Cohesity especially have done is bring value to that data. They've elevated it up the stack for analytics and AI and made it available to DevOps, and that's brilliant, but today they can only do that within the four walls of that company. What Vcinity can do for those companies is come along and make that data available anywhere in the world at any time. So if they've got different countries that they're trying to sell into, that may have different backup types or different data, they can access this and model the data and see how it's relevant to their specific industry, right? As we say, our zeros and ones are different than your zeros and ones. So it's a massive expansion: take that richness that they've created and extrapolate that globally, and that's what Vcinity brings to the table. >> You know, in the days of big data we used to look at high performance computing, as an example, going more into commercial markets. That's clearly happened, but mainstream is still VMware. Is there a VMware play for you guys, or opportunity? >> Great question, great question. In Q1 of this year, so January, end of January 2020 -- in the intro we talked about how we were born on ASICs, which are incredibly expensive and limited, you get one go at it, and then we moved to FPGAs. We actually wrote a lot of libraries that took the FPGAs into a VMware instance, and so what we're doing now with our customers is, when we go in and present, they say there's no way you can do this, and we show them the demo. When we actually leave, they can log in, download two VMware instances, put one in, in this case, on the west coast, or with one of my customers we now have one on the east coast, one in London, download the VM, and see the improvement that we can get over their dedicated lines or even the internet by using the VM. In fact, we did that in a test with AWS last week and got a 90 percent improvement just using the VM. >> So when you are talking to customers, what's the, you know, what's the situation that you're looking for, the problem that comes up where you say, boom, that's Vcinity? Maybe you could scope that out for us. >> So I think a lot of that is people looking to use multi-cloud, right, that aren't sure which way they want to go, how they want to do it, and also companies that can't move the data. There's a lot of companies that either went to the cloud and came back, or cannot go to the cloud because of the sensitivity of the data. And also things like the seismic exploration, right, there is no cloud solution that makes that expedient enough to consume it as it's being developed. And so anybody that needs movie editing, large file transfer, DR, you know, if you're moving a lot of files from one location to another. We can't get involved in storage replication, but if it's a file share we can do that, and one of the great things we do is, if you have CIFS or NFS shares today, we can consume those shares, with the Spectrum Scale, the GPFS, under the covers, and make those appear anywhere else in the world, and we do that through our proprietary technology, of course. So now remote offices can collapse a lot of the infrastructure they have and consume the resources from the main data center, because we can reach right back at LAN speeds; they just become an extension of the LAN, no different than me plugging the laptop into an Ethernet port. >> You pay a penalty on first byte? >> We do, but it's almost transparent, because of the way TCP/IP works, very chatty. >> Yeah, it is. >> So we drop all that, and that's a great question. An analogy we use in-house is, you turn on a garden hose and it takes a few seconds for that garden hose to fill, but with us that water stream is constant and it's constantly outputting water. With TCP/IP it will stop, start, stop, start, stop, start, and if you have to start doing retransmits, which is a regular occurrence with TCP/IP, then that entire capacity of that garden hose gets dropped and then refilled, and this is where our advantage is: the ability to keep that pipe full and keep serving data. >> You know, what you just described makes people really think twice about multi-cloud. Essentially they want to put the right workload in the right place and kind of leave it there, and essentially, it's like the old minicomputer days, they're creating, you know, silos. You're helping sort of bridge those. >> We are, and that is the plot. And so, you know, we have B2B, we have B2C. I mean, if you sit and think about the possibilities, it could end up on every one of these, right? This software, you know, do we tackle every wireless point, or an app, or do we put Vcinity on that to take the regular TCP/IP and send the communication, you know, through our proprietary network, our proprietary configuration? So there's a lot of things that we can do. We can affect everybody, and that is the goal. >> So do I buy hardware from you, or software, or both? >> That's another great question. So if you are in a data center, in the analogy I just gave before about being a big data center, you would use a piece of hardware that's got accelerants in it, and then the remote office could use a smaller piece of hardware or just the VM. With the movie company example I gave you earlier, India and Australia are editing live files on the west coast of the United States of America just using the VM. So it depends; when we come in, we look at your needs and we don't oversell you, we try and sell you the correct solution, and that typically is a combination of some hardware in the main data center and some software at the others. >> So I've said, you know, multi-cloud in many ways creates more problems today than it solves. You guys are really in there attacking that. Multi-cloud is a reality, it's happening. You know, I've said historically it's been a symptom of multi-vendor, but now it's becoming increasingly a strategy, and I think, frankly, companies like yours are critical in the ecosystem to really, you know, drive that transformation for organizations. So congratulations. >> Thank you, thank you, we hope so. >> And I'm sure we'll be seeing more of you in the future. Excellent, well, thanks for coming in, Craig, and we'll talk to you soon. Thank you for watching, everybody. This is Dave Vellante for theCUBE, and we'll see you next time.
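Craig's garden-hose analogy maps onto the bandwidth-delay product: on a long round-trip time, a sender that waits on a small window leaves most of the pipe empty. The back-of-the-envelope Python below uses the 60 millisecond RTT mentioned above with an assumed 10 Gb/s link and a classic 64 KB window; the figures are illustrative, not Vcinity measurements.

```python
# Rough bandwidth-delay product arithmetic (illustrative numbers only)
link_gbps = 10            # assumed WAN link speed
rtt_s = 0.060             # 60 ms round trip, e.g. Maryland <-> San Jose
window_bytes = 64 * 1024  # classic 64 KB TCP window without scaling

bdp_bytes = (link_gbps * 1e9 / 8) * rtt_s
window_limited_bps = window_bytes * 8 / rtt_s   # at most one window per RTT

print(f"Pipe capacity (BDP):        {bdp_bytes / 1e6:.1f} MB in flight")
print(f"Window-limited throughput:  {window_limited_bps / 1e6:.1f} Mb/s "
      f"of a {link_gbps * 1000:.0f} Mb/s link")
print(f"Utilization:                {window_limited_bps / (link_gbps * 1e9):.1%}")
# Keeping the 'hose' full means keeping roughly BDP bytes outstanding at all
# times, which is what RDMA-style transports and large windows aim to do.
```

With these assumptions the pipe can hold about 75 MB in flight, while a 64 KB-per-RTT sender delivers well under one percent of the link, which is the gap the "keep the hose full" approach is trying to close.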
Eric Herzog, IBM Storage | CUBE Conversation December 2019
(funky music) >> Hello and welcome to theCUBE Studios in Palo Alto, California for another CUBE conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host Peter Burris. Well, as I sit here in our CUBE studios, 2020's fast approaching, and every year as we turn the corner on a new year, we bring in some of our leading thought leaders to ask them what they see the coming year holding in the particular technology domain in which they work. And this one is no different. We've got a great CUBE guest, a frequent CUBE guest, Eric Herzog, the CMO and VP of Global Channels, IBM Storage, and Eric's here to talk about storage in 2020. Eric? >> Peter, thank you. Love being here at theCUBE. Great solutions. You guys do a great job on educating everyone in the marketplace. >> Well, thanks very much. But let's start really quickly, quick update on IBM Storage. >> Well, been a very good year for us. Lots of innovation. We've brought out a new Storwize family in the entry space. Brought out some great solutions for big data and AI solutions with our Elastic Storage System 3000. Support for backup in container environments. We've had persistent storage for containers, but now we can back it up with our award-winning Spectrum Protect and Protect Plus. We've got a great set of solutions for the hybrid multicloud world for big data and AI and the things you need to get cyber resiliency across your enterprise in your storage estate. >> All right, so let's talk about how folks are going to apply those technologies. You've heard me say this a lot. The difference between business and digital business is the role that data plays in a digital business. So let's start with data and work our way down into some of the trends. >> Okay. >> How are, in your conversations with customers, 'cause you talk to a lot of customers, is that notion of data as an asset starting to take hold? >> Most of our clients, whether it be big, medium, or small, and it doesn't matter where they are in the world, realize that data is their most valuable asset. Their customer database, their product databases, what they do for service and support. It doesn't matter what the industry is. Retail, manufacturing. Obviously we support a number of other IT players in the industry that leverage IBM technologies across the board, but they really know that data is the thing that they need to grow, they need to nurture, and they always need to make sure that data's protected or they could be out of business. >> All right, so let's now, starting with that point, in the tech industry, storage has always kind of been the thing you did after you did your server, after you did your network. But there's evidence that as data starts taking more center stage, more enterprises are starting to think more about the data services they need, and that points more directly to storage hardware, storage software. Let's start with that notion of the ascension of storage within the enterprise. >> So with data as their most valuable asset, what that means is storage is the critical foundation. As you know, if the storage makes a mistake, that data's gone. >> Right. >> If you have a malware or ransomware attack, guess what? Storage can help you recover. In fact, we even got some technology in our Spectrum Protect product that can detect anomalous activity and help the backup admin or the storage admins realize they're having a ransomware or malware attack, and then they could take the right corrective action. 
So storage is that foundation across all their applications, workloads, and use cases that optimizes it, and with data as the end result of those applications, workloads, and use cases, if the storage has a problem, the data has a problem. >> So let's talk about what you see as in that foundation some of the storage services we're going to be talking most about in 2020. >> Eric: So I think one of the big things is-- >> Oh, I'm sorry, data services that we're going to be talking most about in 2020. >> So I think one of the big things is the critical nature of the storage to help protect their data. People when they think of cyber security and resiliency think about keeping the bad guy out, and since it's not an issue of if, it's when, chasing the bad guy down. But I've talked to CIOs and other executives. Sometimes they get the bad guy right away. Other times it takes them weeks. So if you don't have storage with the right cyber resiliency, whether that be data at rest encryption, encrypting data when you send it out transparently to your hybrid multicloud environment, whether malware and ransomware detection, things like air gap, whether it be air gap to tape or air gap to cloud. If you don't think about that as part of your overall security strategy, you're going to leave yourself vulnerable, and that data could be compromised and stolen. So I can almost say that in 2020, we're going to talk more about how the relationship between security and data and storage is going to evolve, almost to the point where we're actually going to start thinking about how security can be, it becomes almost a feature or an attribute of a storage or a data object. Have I got that right? >> Yeah, I mean, think of it as storage infused with cyber resiliency so that when it does happen, the storage helps you be protected until you get the bad guy and track him down. And until you do, you want that storage to resist all attacks. You need that storage to be encrypted so they can't steal it. So that's a thing, when you look at an overarching security strategy, yes, you want to keep the bad guy out. Yes, you want to track the bad guy down. But when they get in, you'd better make sure that what's there is bolted to the wall. You know, it's the jewelry in the floor safe underneath the carpet. They don't even know it's there. So those are the types of things you need to rely on, and your storage can do almost all of that for you once the bad guy's there till you get him. >> So the second thing I want to talk about along this vein is we've talked about the difference between hardware and software, software-defined storage, but still it ends up looking like a silo for most of the players out there. And I've talked to a number of CIOs who say, you know, buying a lot of these software-defined storage systems is just like buying not a piece of hardware, but a piece of software as a separate thing to manage. At what point in time do you think we're going to start talking about a set of technologies that are capable of spanning multiple vendors and delivering a more broad, generalized, but nonetheless high function, highly secure storage infrastructure that brings with it software-defined, cloud-like capabilities. >> So what we see is the capability of A, transparently traversing from on-prem to your hybrid multicloud seamlessly. They can't, it can't be hard to do. It's got to happen very easily. 
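The anomalous-activity detection Herzog mentions boils down to noticing when a client's backup churn suddenly looks nothing like its own history, which is what mass encryption by ransomware tends to produce. The sketch below illustrates that general idea only; it is not Spectrum Protect's actual algorithm, and the z-score threshold and sample volumes are made-up numbers.

```python
# Minimal sketch of anomaly detection on backup activity: flag a client whose
# daily changed-data volume is a statistical outlier against its own history.
# Illustration of the concept only, not IBM Spectrum Protect's algorithm.
from statistics import mean, stdev

def backup_churn_is_anomalous(history_gb, today_gb, z_threshold=4.0):
    """Return True if today's changed-data volume is far outside history."""
    if len(history_gb) < 7:                 # need some history to compare against
        return False
    mu, sigma = mean(history_gb), stdev(history_gb)
    if sigma == 0:
        return today_gb > 2 * mu            # flat history: flag any big jump
    return (today_gb - mu) / sigma > z_threshold

if __name__ == "__main__":
    nightly_gb = [42, 40, 45, 39, 44, 41, 43]          # normal nightly churn
    print(backup_churn_is_anomalous(nightly_gb, 44))    # False, normal night
    print(backup_churn_is_anomalous(nightly_gb, 600))   # True, suspicious spike
```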
The cloud is a target, and by the way, most mid-size enterprise and up don't use one cloud, they use many, so you've got to be able to traverse those many, move data back and forth transparently. Second thing we see coming this year is taking the overcomplexity of multiple storage platforms coupled with hybrid cloud and merging them across. So you could have an entry system, mid-range system, a high-end system, traversing the cloud with a single API, a single data management platform, performance and price points that vary depending on your application workload and use case. Obviously you use entry storage for certain things, high-end storage for other things. But if you could have one way to manage all that data, and by the way, for certain solutions, we've got this with one of our products called Spectrum Virtualize. We support enterprise-class data service including moving the data out to cloud not only on IBM storage, but over 450 other arrays which are not IBM-logoed. Now, that's taking that seamlessness of entry, mid-range, on-prem enterprise, traversing it to the cloud, doing it not only for IBM storage, but doing it for our competitors, quite honestly. >> Now, once you have that flexibility, now it introduces a lot of conversations about how to match workloads to the right data technologies. How do you see workloads evolving, some of these data-first workloads, AI, ML, and how is that going to drive storage decisions in the next year, year and a half, do you think? >> Well, again, as we talked about already, storage is that critical foundation for all of your data needs. So depending on the data need, you've got multiple price points that we've talked about traversing out to the cloud. The second thing we see is there's different parameters that you can leverage. For example, AI, big data, and analytic workloads are very dependent on bandwidth. So if you can take a scalable infrastructure that scales to exabytes of capacity, can scale to terabytes per second of bandwidth, then that means across a giant global namespace, for example, we've got with our Spectrum Scale solutions and our Elastic Storage System 3000 the capability of racking and stacking two rack U at a time, growing the capacity seamlessly, growing the performance seamlessly, providing that high-performance bandwidth you need for AI, analytic, and big data workloads. And by the way, guess what, you could traverse it out to the cloud when you need to archive it. So looking at AI as a major force in the coming, not just next year, but in the coming years to go, it's here to stay, and the characteristics that IBM sees that we've had in our Spectrum Scale products, we've had for years that have really come out of the supercomputing and the high-performance computing space, those are the similar characteristics to AI workloads, machine workloads, to the big data workloads and analytics. So we've got the right solution. In fact, the two largest supercomputers on this planet have almost an exabyte of IBM storage focused on AI, analytics, and big data. So that's what we see traversing everywhere. And by the way, we also see these AI workloads moving from just the big enterprise guys down into small shops, as well. So that's another trend you're going to see. The easier you make that storage foundation underneath your AI workloads, the more easy it is for the big company, the mid-size company, the small company all to get into AI and get the value. 
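A quick back-of-the-envelope on the bandwidth point: with a scale-out file system built from 2U building blocks, sizing comes down to whichever constraint, bandwidth or capacity, needs more nodes. The per-node figures in the sketch below are hypothetical placeholders, not published Elastic Storage System 3000 specifications.

```python
# Rough sizing of a scale-out cluster built from 2U building blocks.
# Per-node numbers are assumptions for illustration only.
import math

NODE_BW_GBPS = 40        # assumed usable bandwidth per 2U node, GB/s
NODE_CAP_TB  = 350       # assumed usable capacity per 2U node, TB

def nodes_needed(target_bw_gbps, target_cap_pb):
    by_bandwidth = math.ceil(target_bw_gbps / NODE_BW_GBPS)
    by_capacity  = math.ceil(target_cap_pb * 1000 / NODE_CAP_TB)
    return max(by_bandwidth, by_capacity)   # whichever constraint dominates

if __name__ == "__main__":
    # e.g. an AI training estate that wants 1 TB/s of read bandwidth and 10 PB
    n = nodes_needed(target_bw_gbps=1000, target_cap_pb=10)
    print(f"{n} building blocks -> {n * 2}U of rack space")
```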
The small companies have to compete with the big guys, so they need something, too, and we can provide that starting with a little simple two rack U unit and scaling up into exabyte-class capabilities. >> So all these new workloads and the simplicity of how you can apply them nonetheless is still driving questions about how the storage hierarchies evolved. Now, this notion of the storage hierarchy's been around for, what, 40, 50 years, or something like that. >> Eric: Right. >> You know, tape and this and, but there's some new entrants here and there are some reasons why some of the old entrants are still going to be around. So I want to talk about two. How do you see tape evolving? Is that, is there still need for that? Let's start there. >> So we see tape as actually very valuable. We've had a real strong uptick the last couple years in tape consumption, and not just in the enterprise accounts. In fact, several of the largest cloud providers use IBM tape solutions. So when you need to provide incredible amounts of data, you need to provide primary, secondary, and I'd say archive workloads, and you're looking at petabytes and petabytes and petabytes and exabytes and exabytes and exabytes and zetabytes and zetabytes, you've got to have a low-cost platform, and tape provides still by far the lowest cost platform. So tape is here to stay as one of those key media choices to help you keep your costs down yet easily go out to the cloud or easily pull data back. >> So tape still is a reasonable, in fact, a necessary entrant in that overall storage hierarchy. One of the new ones that we're starting to hear more about is storage-class memory, the idea of filling in that performance gap between external devices and memory itself so that we can have a persistent store that can service all the new kinds of parallelism that we're introducing into these systems. How do you see storage-class memory playing out in the next couple years? >> Well, we already publicly announced in 2019 that in 2020, in the first half, we'd be shipping storage-class memory. It would not only working some coming systems that we're going to be announcing in the first half of the year, but they would also work on some of our older products such as the FlashSystem 9100 family, the Storwize V7000 gen three will be able to use storage-class memory, as well. So it is a way to also leverage AI-based tiering. So in the old days, flash would tier to disk. You've created a hybrid array. With storage-class memory, it'll be a different type of hybrid array in the future, storage-class memory actually tiering to flash. Now, obviously the storage-class memory is incredibly fast and flash is incredibly fast compared to disk, but it's all relative. In the old days, a hybrid array was faster than an all hard drive array, and that was flash and disk. Now you're going to see hybrid arrays that'll be storage-class memory and with our easy tier function, which is part of our Spectrum Virtualize software, we use AI-based tiering to automatically move the data back and forth when it's hot and when it's cool. Now, obviously flash is still fast, but if flash is that secondary medium in a configuration like that, it's going to be incredibly fast, but it's still going to be lower cost. The other thing in the early years that storage-class memory will be an expensive option from all vendors. It will, of course, over time get cheap, just the way flash did. >> Sure. >> Flash was way more expensive than hard drives. 
Over time it, you know, now it's basically the same price as what were the old 15,000 RPM hard drives, which have basically gone away. Storage-class over several years will do that, of course, as well, and by the way, it's very traditional in storage, as you, and I've been around so long and I've worked at hard drive companies in the old days. I remember when the fast hard drive was a 5400 RPM drive, then a 7200 RPM drive, then a 10,000 RPM drive. And if you think about it in the hard drive world, there was almost always two to three different spin speeds at different price points. You can do the same thing now with storage-class memory as your fastest tier, and now a still incredibly fast tier with flash. So it'll allow you to do that. And that will grow over time. It's going to be slow to start, but it'll continue to grow. We're there at IBM already publicly announcing. We'll have products in the first half of 2020 that will support storage-class memory. >> All right, so let's hit flash, because there's always been this concern about are we going to have enough flash capacity? You know, is enough going to, enough product going to come online, but also this notion that, you know, since everybody's getting flash from the same place, the flash, there's not going to be a lot of innovation. There's not going to be a lot of differentiation in the flash drives. Now, how do you see that playing out? Is there still room for innovation on the actual drive itself or the actual module itself? >> So when you look at flash, that's what IBM has funded on. We have focused on taking raw flash and creating our own flash modules. Yes, we can use industry standard solid state disks if you want to, but our flash core modules, which have been out since our FlashSystem product line, which is many years old. We just announced a new set in 2018 in the middle of the year that delivered in a four-node cluster up to 15 million IOPS with under 100 microseconds of latency by creating our own custom flash. At the same time when we launched that product, the FlashSystem 9100, we were able to launch it with NVME technology built right in. So we were one of the first players to ship NVME in a storage subsystem. By the way, we're end-to-end, so you can go fiber channel of fabric, InfiniBand over fabric, or ethernet over fabric to NVME all the way on the back side at the media level. But not only do we get that performance and that latency, we've also been able to put up to two petabytes in only two rack U. Two petabytes in two rack U. So incredibly rack density. So those are the things you can do by innovating in a flash environment. So flash can continue to have innovation, and in fact, you should watch for some of the things we're going to be announcing in the first half of 2020 around our flash core modules and our FlashSystem technology. >> Well, I look forward to that conversation. But before you go here, I got one more question for you. >> Sure. >> Look, I've known you for a long time. You spend as much time with customers as anybody in this world. Every CIO I talk to says, "I want to talk to the guy who brings me "or the gal who brings me the great idea." You know, "I want those new ideas." When Eric Herzog walks into their office, what's the good idea that you're bringing them, especially as it pertains to storage for the next year? >> So, actually, it's really a couple things. One, it's all about hybrid and multicloud. You need to seamlessly move data back and forth. It's got to be easy to do. 
Entry platform, mid-range, high-end, out to the cloud, back and forth, and you don't want to spend a lot of time doing it and you want it to be fully automated. >> So storage doesn't create any barriers. >> Storage is that foundation that goes on and off-prem and it supports multiple cloud vendors. >> Got it. >> Second thing is what we already talked about, which is because data is your most valuable asset, if you don't have cyber-resiliency on the storage side, you are leaving yourself exposed. Clearly big data and AI, and the other thing that's been a hot topic, which is related, by the way, to hybrid multiclouds, is the rise of the container space. For primary, for secondary, how do you integrate with Red Hat? What do you do to support containers in a Kubernetes environment? That's a critical thing. And we see the world in 2020 being trifold. You're still going to have applications that are bare metal, right on the server. You're going to have tons of applications that are virtualized, VMware, Hyper-V, KVM, OVM, all the virtualization layers. But you're going to start seeing the rise of the container admin. Containers are not just going to be the purview of the devops guy. We have customers that talk about doing 10,000, 20,000, 30,000 containers, just like they did when they first started going into the VM worlds, and now that they're going to do that, you're going to see customers that have bare metal, virtual machines, and containers, and guess what? They may start having to have container admins that focus on the administration of containers because when you start doing 30, 40, 50,000, you can't have the devops guy manage that 'cause you're deploying it all over the place. So we see containers. This is the year that containers starts to go really big-time. And we're there already with our Red Hat support, what we do in Kubernetes environments. We provide primary storage support for persistency containers, and we also, by the way, have the capability of backing that up. So we see containers really taking off in how it relates to your storage environment, which, by the way, often ties to how you configure hybrid multicloud configs. >> Excellent. Eric Herzog, CMO and vice president of partner strategies for IBM Storage. Once again, thanks for being on theCUBE. >> Thank you. >> And thanks for joining us for another CUBE conversation. I'm Peter Burris. See you next time. (funky music)
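One mechanism from that conversation worth making concrete is heat-based tiering between storage-class memory and flash: recently hot extents get promoted to the small, fast tier and cold extents demoted. The sketch below shows the general idea only; it is not the actual Easy Tier algorithm, and the extent counters and tier size are invented.

```python
# Simplified heat-based placement between a small fast tier (SCM) and a larger
# flash tier. Sketch of the general idea, not IBM Easy Tier's real logic.

def place_extents(extent_io_counts, scm_slots):
    """Given {extent_id: recent_io_count}, keep the hottest extents on SCM
    and put everything else on flash."""
    ranked = sorted(extent_io_counts, key=extent_io_counts.get, reverse=True)
    return {ext: ("scm" if i < scm_slots else "flash")
            for i, ext in enumerate(ranked)}

if __name__ == "__main__":
    counts = {"e1": 9000, "e2": 120, "e3": 4800, "e4": 30, "e5": 2500}
    print(place_extents(counts, scm_slots=2))
    # {'e1': 'scm', 'e3': 'scm', 'e5': 'flash', 'e2': 'flash', 'e4': 'flash'}
```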
Stanley Zaffos, Infinidat | CUBEConversation, October 2019
from our studios in the heart of Silicon Valley Palo Alto California this is a cute conversation hi and welcome to the cube Studios for another cube conversation where we go in-depth with thought leaders driving innovation across the tech industry I'm your host Peter Burris if there's one thing we know about cloud it's that it's going to drive new data and a lot of it and that places a lot of load on storage technologies who have to be able to capture persist and ultimately deliver that data to new classes of applications in support of whatever the digital business is trying to do so how is the whole storage industry and the relationship between data and storage going to evolve I can't think of a better person to have that conversation with in stanley's a phos senior vice president product marketing infinite dad Stan welcome to the cube thank you for it's my pleasure to be here and I'm flattered with that introduction well hold on look you and I have known each other for a long time we have been walking into user presentations and you've been walking out until recently though you were generally regarded as the thought leader when it came to user side concerns about storage what is that problem that users are fundamentally focused on as they think about their data data management and storage requirements fundamental problems and this afflicts all classes of users whether in a financial institution at university government small business medium-sized businesses is that they're coping with the number of primal forces that don't change and the first is that the environment is becoming ever more competitive and with the environment being ever more competitive that means that they're always under budget constraints they're usually suffering from skill shortages especially now when we see so many new technologies and the realization that we can coax value out of the information that we capture and store creating new demands elsewhere within the IT organization so what we see historically is that uses understand that there you have an insatiable demand for capacity they have finite budgets they have limited skills and they realize that recovering from a loss of data integrity is a far more painful process than recovering from an application blowing up or a networking issue and they got to do it faster and they have to do it faster so what we see in some ways is in effect the perfect storm and this is part of the reason that we've seen a number of the technical evolutions that we've witnessed over the past decade or two decades or however long we'd like to admit we've been tracking this industry occurring and growing in importance what we've also seen is that many of the technologies that are useful in helping to deliver usable availability to the application are in some ways becoming more commoditized so when we look across these industries some of the things that we're looking for is cost efficiency we're looking at increasing levels of automation we're looking of increases in data mobility with the ultimate objective being of course to allow data to reside where it naturally belongs and we're trying to deliver these new capabilities at scale in infrastructures that were built with storage arrays that would design for a terabyte world instead of a petabyte world and it won't be too long before we start talking about exabytes as we're already seeing so to be able to satisfy new scale problems with traditional and well understood issues is there are three basic types of storage companies that are 
targeting this problem the first of the established storage companies the incumbents the incumbents and the incumbents I really don't envy them because they have to maintain backwards compatibility which limits their ability to innovate at the same time they're competing against privately held newer companies that aren't constrained by the need for backwards compatibility and therefore able to take better advantage of the technology improvements that we're seeing to live it and when I say technology improvements not just in hardware but also in terms of software also in terms of management and government and governing philosophies so beginning with the point that all companies large small have some basic problems that are similar what we then see is there are three types of storage companies addressing them one of the in established and common vendors the other and they've gotten a lot of press or the companies that realize that flash media very media that delivers one to two orders of magnitude improvements in terms of performance in terms of bandwidth in terms of environmental x' that they could create storage solutions that address real pain points within a data center within an organization but at a very high price point and then it was the third approach and this is the approach that infinite I chose to take and that is to define the customer problem to find the customer market and then create an architecture which is underpinned by brilliant software to solve these problems in a way that is both cost-effective and extensible and of course meeting all of the critical capabilities that users are looking for so we've got the situation where we've got the incumbents who have install bases and are trying to bring their customers forward but right I have to do so within the constraints of past technology choices we've got the new folks who are basically technology first and saying jump to a new innovation curve and we've got other companies that are trying to bring the best of the technology to the best of the customer reality and marry it and you're asserting that's what infinite at ease and then it's precisely what we've done so let's talk about why did you then come to infinite at what is it about infinite act that gets you excited well one of the things that got well your number of things that got me excited about it so the first is that when I look at this and I approach these things as an engineer who's steeped in aerospace and weapon systems design so you look at the problem you superimpose capabilities there and then you blow it up and then if well we do blow it up but we blow it up using economics we blow it up using superior post-sale support effectiveness we blow it up with a fundamentally different approach to how we give our install base access to new capabilities so we're established storage companies and to some extent media based storage companies of forcing upgrades to avoid architectural obsolescence that is to gain access to new features and functions that can improve their staff productivity or deliver new capabilities to support new applications and workloads we're not forcing a cadence of infrastructure refreshes to gain access to that so if you take a look at our history our past behavior we allow today we're allowing current software to run on n minus 2 generation hardware so that now when you're doing a refresh on your hardware you're doing a refresh on the hardware because you've outgrown it because it's so old that it's moved past its useful service life which 
hasn't happened to us yet because that's usually on the order of about eight years and sometimes longer if it's kept in a clean data center and we have a steady cadence of product announcements and we understood some underlying economics so whether I talk to banking institutions colleges manufacturing companies telcos service providers everybody's in general agreement that roughly two-thirds of the data that they have online and accessible is stale data meaning that it hasn't been accessed in 60 to 90 days and then when I take a look at industry forecasts in terms of dollar per terabyte pricing for HD DS for disk drives and I look at dollar per terabyte forecast for flash technologies there's an order of magnitude difference in meaning 10x and even if you want to be a pessimist call it only 5x what you see is that we have a built-in advantage for storing 60% of the data that's already up and spinning and there are those questions of whether or not the availability of flash is going to come under pressure over the next few years as because we're not expanding another fabs out there they're generating flash so let me come back right it's kind of core points out there so we have quality yeah the right now you guys are trying to bring the economics of HDD to the challenges are faster more reliable more scalable data delivery right so that you can think about not only persisting your data from transactional applications but also delivering that data to the new uses new requirements new applications new business needs so you've made you know infinite out has made some choices about how to bring technology together that are some somewhat that are unique first thing is the team that did this tell us a little bit about the team and then let's talk about some of those torches so one of the draws for me personally is that we have a development team that has had the unique possibly the unique experience of having done three not one not two but three clean sheet designs of storage arrays now if you believe that practice makes perfect and you're starting off with very bright people that experience before they designed a storage array when we look at the InfiniBand when we look at in Finnegan what we see is the benefit of three clean sheet designs and what does that design look like what is it how did you guys bring these different senses of technology together to improve the notion of it all right so what we looked at we looked at trends instead of being married to a technology or married to an architecture we were we define the users problem we understood that they have an insatiable need for data we can argue whether they're growing at fifteen percent 30 percent or 100 percent per year but data growth is insatiable stale data being a constant megive n' and of course now with digital business initiatives and moving the infrastructure to the edge where we could capture ever more data if anything the amount of stale data that was storing is likely to increase so we've all seen survey after survey that 80% of all the data created is unstructured data meaning we're collecting it we know that may be a value at some point but we're not quite sure when so this is not data that you want to store in the most expensive media that we know how to manufacture or sell right not happening so we have a built-in economic advantage for this at least 60% of the data that users want to keep online we understand that if you implement an archiving solution that archive data still has to be stored somewhere and for practical 
purposes that's either disk or tape and we're not here to talk about the fact that I can take tape and store in a bunker for years but if I want to recover something if I have to answer a problem I want it on disk so the economic gap the price Delta between an archive storage solution per se and our approach is much narrower because we're using a common technology and when Seagate or West and digital a Toshiba cell and HDD they're not asking you where you're putting it they're saying you want this capacity this rpm this mean time between face its this is how much it's going to cost so when we take a look at a lot of the innovation and go to market models what they really are or revenue protection schemes for the existing established vendors and for the emerging companies the difference is there are in the problems that they're solving am i creating a backup restore solution the backup and restore is always a high impact pain point am i creating a backup restore solution am i building a system for primary storage a my targeting virtualized environments my targeting VDI now our install base the bulk of our install base I'm not sure we actually we should share percentages but it's well north of 50 and if you take a look at some virtualized estimates probably 80% of workloads today are virtualized we understood that to satisfy this environment and to have a built-in advantage that's memorable after the marketing presentations are done in other words treating these things as black boxes so if we take a look at my high-level description of an infinite box array installed at a customer site consistent sub-millisecond response times and we're able to do that because we service over 80% of all iOS out of DRAM which is probably about four orders of magnitude faster than NAND flash and then we have a large read cache to increase our cache hit ratio even further and when I say large we're not talking about single digits of terabytes we're talking about 20 plus terabytes and that can grow as necessary so that when we're done we're achieving cache hit ratios that are typically in excess of 90% now if I'm servicing iOS out of cache do I really care what's on the back end the answer is no but what I do care about for certain analytics applications is I want lots of bandwidth and I want and if I have workloads with high right content I don't want to be spending a lot of time paying my raid right penalty so what we've done is to take the obvious solution and coalesce rights so that instead of doing partial stripe rights we're always doing full stripe rights so we have double bit protection on data stored on HD DS which means that the world is likely to come to an end before we lose this slight exaggeration I think we're expecting the world to come to an end in 14 billion years yeah yeah let's do so so if I'm wrong get back to me in a Bay and it's a little bit less than that but it doesn't matter yeah okay high on that all so we've got a so we've got a built in economic advantage we've got a built in performance advantage because when I'm servicing most iOS out of DRAM which is for does magnitude faster than NAND flash I've got a lot of room to do a lot of very clever things in terms of metadata and still be faster so and you got a team that's done it before and we've got a team that's done it before and experimented because remember this is a team that has experience with scale-up architectures as in symmetric s-- they have experience with scale-out architectures which is XIV which was very disruptive to 
the market well so was it symmetric spec and now of course we've got this third bite at the Apple with infinite at where they also understood that the rate of microprocessor performance improvement was going up a lot faster than than our ability to transfer data on and off of HD DS or SSDs so what they realized is that they could change the ratio they can have a much lower microprocessor or controller to back-end storage ratio and still be able to deliver this tremendous performance and now if you have fewer parts and you're not affecting the ID MTBF by driving more iOS through I've lowered my overall cost of goods so now I've got an advantage in back-end media I have a bag I have an advantage in terms of the number of controllers I need to deliver sub sillas eken response time I have an advantage in terms of delivering usable availability so I'm now in a position to be able to unashamedly compete on price unashamedly compete on performance unashamedly compete on a better post sale support experience because remember if there's less stuff they had a break we're taking less calls and because of the way we're organized our support generally goes to what other vendors might think of it's third level support because of a guided answer answers the phone from us doesn't solve the problem he's calling development so if you take a look at gotten apear insights we're off the scale in terms of having great reviews and when you have I think it's 99% I may be off by a percent ninety eight to a hundred percent of our customers saying they'd recommend our kit to their to their peers that's a pretty positive endorsement yeah so let me let me break in and and kind of wrap up a little bit let me make this quick observation because the other thing that you guys have done is you've demonstrated that you're not bound to a single technology so smart people with a great architecture that's capable of utilizing any technology to serve a customer problem at a price point that reflects the value of the problem that's being solved right and in fact we it's very insightful observation because when you recognize that we've built a multimedia integrated architecture that makes our that makes very easy for us to include storage class memory and because of the way we've done our drivers we're also going to be nvme over if ready when that starts to gain traction as well excellent Stanley Zappos senior vice president product management Infini debt thanks very much for being in the cube we'll have you back oh it's my pleasure there's been a blast and once again I want to thank you for joining us for another cube conversation on Peterborough's see you next time [Music]
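The cache-hit-ratio argument above is easy to check with a weighted-average latency calculation. The sketch below uses representative orders of magnitude for DRAM, flash, and HDD service times rather than measured figures for any specific array, and the hit ratios are illustrative.

```python
# Worked example of why a very high cache-hit ratio dominates perceived
# latency. Latency values are rough orders of magnitude, not measurements.

def effective_latency_us(hit_dram, hit_flash, lat_dram_us=0.2,
                         lat_flash_us=100.0, lat_hdd_us=4000.0):
    """Weighted average service time given DRAM and flash hit ratios;
    whatever misses both tiers is served from HDD."""
    miss = 1.0 - hit_dram - hit_flash
    return hit_dram * lat_dram_us + hit_flash * lat_flash_us + miss * lat_hdd_us

if __name__ == "__main__":
    # roughly 80% of I/Os from DRAM plus a large flash read cache behind it
    print(f"{effective_latency_us(0.80, 0.15):.0f} microseconds average")
    # versus a modest cache that catches far less of the working set
    print(f"{effective_latency_us(0.35, 0.25):.0f} microseconds average")
```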
Breaking Analysis: Spending Data Shows Cloud Disrupting the Analytic Database Market
from the silicon angle media office in Boston Massachusetts it's the queue now here's your host David on tape hi everybody welcome to this special cube in size powered by ET our enterprise Technology Research our partner who's got this database to solve the spending data and what we're gonna do is a braking analysis on the analytic database market we're seeing that cloud and cloud players are disrupting that marketplace and that marketplace really traditionally has been known as the enterprise data warehouse market so Alex if you wouldn't mind bringing up the first slide I want to talk about some of the trends in the traditional EDW market I almost don't like to use that term anymore because it's sort of a pejorative but let's look at it's a very large market it's about twenty billion dollars today growing it you know high single digits low double digits it's expected to be in the 30 to 35 billion dollar size by mid next decade now historically this is dominated by teradata who started this market really back in the 1980s with the first appliance the first converged appliance or coal with Exadata you know IBM I'll talk about IBM a little bit they bought a company called mateesah back in the day and they've basically this month just basically killed the t's and killed the brand Microsoft has entered the fray and so it's it's been a fairly large market but I say it's failed to really live up to the promises that we heard about in the late 90s early parts of the 2000 namely that you were going to be able to get a 360 degree view of your data and you're gonna have this flexible easy access to the data you know the reality is data warehouses were really expensive they were slow you had to go through a few experts to to get data it took a long time I'll tell you I've done a lot of research on this space and when you talked to the the data warehouse practitioners they would tell you we always had to chase the chips anytime Intel would come out with a new chip we forced it in there because we just didn't have the performance to really run the analytics as we need to it's took so long one practitioner described it as a snake swallowing a basketball so you've got all those data which is the sort of metaphor for the basketball just really practitioners had a hard time standing up infrastructure and what happened as a spate of new players came into the marketplace these these MPP players trying to disrupt the market you had Vertica who was eventually purchased by HP and then they sold them to Micro Focus greenplum was buy bought by EMC and really you know company is de-emphasized greenplum Netezza 1.7 billion dollar acquisition by IBM IBM just this month month killed the brand they're kind of you know refactoring everything par Excel was interesting was it was a company based on an open-source platform that Amazon AWS did a one-time license with and created a redshift it ever actually put a lot of innovation redshift this is really doing well well show you some data on that we've also at the time saw a major shift toward unstructured data and read much much greater emphasis on analytics it coincided with Hadoop which also disrupted the market economics I often joked it the ROI of a dupe was reduction on investment and so you saw all these data lakes being built and of course they turned into the data swamps and you had dozens of companies come into the database space which used to be rather boring but Mike Amazon with dynamodb s AP with HANA data stacks Redis Mongo you know snowflake is another one 
that I'm going to talk about in detail today so you're starting to see the blurring of lines between relational and non relational and what was was what once thought of is no sequel became not only sequel sequel became the killer app for Hadoop and so at any rate you saw this new class of data stores emerging and snowflake was one of the more interesting and and I want to share some of that data with you some of the spending intentions so over the last several weeks and months we've shared spending intentions from ETR enterprise technology research they're a company that that the manages of the spending data and has a panel of about 4,500 end-users they go out and do spending in tension surveys periodically so Alex if you bring up this survey data I want to show you this so this is spending intentions and and what it shows is that the public cloud vendors in snowflake who really is a database as a service offering so cloud like are really leading the pack here so the sector that I'm showing is the enterprise data warehouse and I've added in the the analytics business intelligence and Big Data section so what this chart shows is the vendor on the left-hand side and then this bar chart has colors the the red is we're leaving the platform the gray is our spending will be flat so this is from the July survey expect to expectations for the second half of 2019 so gray is flat the the dark green is increase and the lime green is we are a new customer coming on to the platform so if you take the the greens and subtract out the red and there's two Reds the dark red is leaving the lighter red is spending less so if you subtract the Reds from the greens you get what's called a net score so the higher the net score the better so you can see here the net score of snowflake is 81% so that very very high you can also see AWS in Microsoft a very high and Google so the cloud vendors of which I would consider a snowflake at cloud vendor like at the cloud model all kicking butt now look at Oracle look at the the incumbents Oracle IBM and Tara data Oracle and IBM are in the single digits for a net score and the Terra data is in a negative 10% so that's obviously not a good sign for those guys so you're seeing share gains from the cloud company snowflake AWS Microsoft and Google at the expense of certainly of teradata but likely IBM and Oracle Oracle's little for animal they got Exadata and they're putting a lot of investments in there maybe talk about that a little bit more now you see on the right hand side this black says shared accounts so the N in this survey this July survey that ETR did is a thousand sixty eight so of a thousand sixty eight customers each er is asking them okay what's your spending going to be on enterprise data warehouse and analytics big data platforms and you can see the number of accounts out of that thousand sixty eight that are being cited so snowflake only had 52 and I'll show you some other data from from past surveys AWS 319 Microsoft the big you know whale here trillion dollar valuation 851 going down the line you see Oracle a number you know very large number and in Tara data and IBM pretty large as well certainly enough to get statistically valid results so takeaway here is snowflake you know very very strong and the other cloud vendors the hyper scale is AWS Microsoft and Google and their data stores doing very well in the marketplace and challenging the incumbents now the next slide that I want to show you is a time series for selected suppliers that can only show five on 
this chart but it's the spending intentions again in that EDW and analytics bi big data segment and it shows the spending intentions from January 17 survey all the way through July 19 so you can see the the period the periods that ETR takes this the snapshots and again the latest July survey is over a thousand n the other ones are very very large too so you can see here at the very top snowflake is that yellow line and they just showed up in the January 19 a survey and so you're seeing now actually you go back one yeah January 19 survey and then you see them in July you see the net score is the July next net score that I'm showing that's 35 that's the number of accounts out of the corpus of data that snowflake had in the survey back in January and now it's up to 52 you can see they lead the packet just in terms of the spending intention in terms of mentions AWS and Microsoft also up there very strong you see big gap down to Oracle and Terra data I didn't show I BM didn't show Google Google actually would be quite high to just around where Microsoft is but you can see the pressure that the cloud is placing on the incumbents so what are the incumbents going to do about it well certainly you're gonna see you know in the case of Oracle spending a lot of money trying to maybe rethink the the architecture refactor the architecture Oracle open worlds coming up shortly I'm sure you're gonna see a lot of new announcements around Exadata they're putting a lot of wood behind the the exadata arrow so you know we'll keep in touch with that and stay tuned but you can see again the big takeaways here is that cloud guys are really disrupting the traditional edw marketplace alright let's talk a little bit about snowflakes so I'm gonna highlight those guys and maybe give a little bit of inside baseball here but what you need to know about snowflakes so I've put some some points here just some quick points on the slide Alex if you want to bring that up very fast-growing cloud and SAS based data warehousing player growing that couple hundred percent annually their annual recurring revenue very high these guys are getting ready to do an IPO talk about that a little bit they were founded in 2012 and it kind of came out of stealth and hiding in 2014 after bringing Bob Moog Leon from Microsoft as the CEO it was really the background on these guys is they're three engineers from Oracle will probably bored out of their mind like you know what we got this great idea why should we give it to Oracle let's go pop out and start a company and that NIN's and as such they started a snowflake they really are disrupting the incumbents they've raised over 900 million dollars in venture and they've got almost a four billion dollar valuation last May they brought on Frank salute Minh and this is really a pivot point I think for the company and they're getting ready to do an IPO so and so let's talk a little bit about that in a moment but before we do that I want to bring up just this really simple picture of Alex if you if you'd bring this this slide up this block diagram it's like a kindergarten so that you know people like you know I can even understand it but basically the innovation around the snowflake architecture was that they they separated their claim is that they separated the storage from the compute and they've got this other layer called cloud services so let me talk about that for a minute snowflake fundamentally rethought the architecture of the data warehouse to really try to take advantage of the cloud so 
traditionally enterprise data warehouses are static you've got infrastructure that kind of dictates what you can do with the data warehouse and you got to predict you know your peak needs and you bring in a bunch of storage and compute and you say okay here's the infrastructure and this is what I got it's static if your workload grows or some new compliance regulation comes out or some new data set has to be analyzed well this is what you got you you got your infrastructure and yeah you can add to it in chunks of compute and storage together or you can forklift out and put in new infrastructure or you can chase more chips as I said it's that snake swallowing a basketball was not pretty so very static situation and you have to over provision whereas the cloud is all about you know pay buy the drink and it's about elasticity and on demand resources you got cheap storage and cheap compute and you can just pay for it as you use it so the innovation from snowflake was to separate the compute from storage so that you could independently scale those and decoupling those in a way that allowed you to sort of tune the knobs oh I need more compute dial it up I need more storage dial it up or dial it down and pay for only what you need now another nuance here is traditionally the computing and data warehousing happens on one cluster so you got contention for the resources of that cluster what snowflake does is you can spin up a warehouse on the fly you can size it up you can size it down based on the needs of the workload so that workload is what dictates the infrastructure also in snowflakes architecture you can access the same data from many many different houses so you got again that three layers that I'm showing you the storage the compute and the cloud services so let me go through some examples so you can really better understand this so you've got storage data you got customer data you got you know order data you got log files you might have parts data you know what's an inventory kind of thing and you want to build warehouses based on that data you might have marketing a warehouse you might have a sales warehouse you might have a finance warehouse maybe there's a supply chain warehouse so again by separating the compute from that sort of virtualized compute from the from the storage layer you can access any data leave the data where it is and I'll talk about this in more and bring the compute to the data so this is what in part the cloud layer does they've got security and governance they got data warehouse management in that cloud layer and and resource optimization but the key in in my opinion is this metadata management I think that's part of snowflakes secret sauce is the ability to leave data where it is and have the smarts and the algorithms to really efficiently bring the compute to the data so that you're not moving data around if you think about how traditional data warehouses work you put all the data into a central location so you can you know operate on it well that data movement takes a long long time it's very very complicated so that's part of the secret sauce is knowing what data lives where and efficiently bringing that compute to the data this dramatically improves performance it's a game changer and it's much much less expensive now when I come back to Frank's Luqman this is somebody that I've is a career that I've followed I've known had him on the cube of a number of times I first met Frank Sloot when he was at data domain he took that company took it public and then sold 
Originally NetApp made a bid for the company. EMC's Joe Tucci, in a defensive play, said no, we're not going to let NetApp have it. There was a little auction, and he ended up selling the company for, I think, two and a half billion dollars. Slootman came in, helped clean up the data protection business of EMC, and then left, did a stint as a VC, and then took over ServiceNow. When Slootman took over ServiceNow, and a lot of people know this, ServiceNow is the shiny toy on Wall Street today, but ServiceNow was a mess when Slootman took it over. It was about a $100-120 million company; he and his team took it to $1.2 billion and dramatically increased the valuation. One of the ways they did that was by thinking about the TAM and expanding that TAM; that's part of a CEO's job, TAM expansion. Slootman is also a great operational guy, and he brought in an amazing team to do that. I'll talk a little bit about that team effect. Well, he just brought in Mike Scarpelli, who was the CFO of ServiceNow, to run finance for Snowflake, so you've seen that playbook emerge. It'll be interesting: Beth White was the CMO at Data Domain and the CMO at ServiceNow and helped take that company up; she's an amazing resource. She's young, but she's kind of in retirement, doing some advisory roles. I wonder if Slootman will bring her back. I wonder if Dan Magee, who was ServiceNow's operational guru, will come out of retirement. How about Dave Schneider, who runs the sales team at ServiceNow? Will he be lured over? We'll see. The kinds of things that Slootman looks for, just in my view of observing his playbook over the years: he looks for a great product, he looks for a big market, he looks for disruption, and he looks for off-the-chart ROI, so his sales teams can go in and really make a strong business case to disrupt the existing legacy players. So one of the things I said that Slootman looks for is a large market, so let's look at this market. This is the thing that people missed around ServiceNow, and to the credit of Pat, myself, and David, we saw the TAM potential of ServiceNow to be many, many tens of billions. Gartner, when ServiceNow first came out, said, hey, helpdesk, it's a small market, a couple billion dollars. We saw the potential to transform not only IT operations but to go beyond helpdesk, change management, et cetera, IT Service Management, into lines of business, and we wrote a piece on Wikibon back then showing the potential TAM. We think something similar could happen here. So the market today, let's call it $20 billion growing to $30 billion: big, first of all, but there are a lot of players in here. One of the things that we see Snowflake potentially being able to do with its architecture and its vision is to bring enterprise search to the marketplace. 80% of the data that's out there today sits behind firewalls; it's not searchable by Google. What if you could unlock that data, access it and query it anytime, anywhere, and put the power in the hands of the line-of-business users to do that? Think Google search for enterprises, but with provenance and security and governance and compliance, and the ability to run analytics for line-of-business users. Think of it as citizen data analytics. We think that TAM could be $70-plus billion. So just think about that in terms of how this company, Snowflake, might go to market by the time they do their IPO.
They could be, you know, a three, four, five hundred billion dollar company, so we'll see; we'll keep an eye on that. Now, because the market's so big, this is not like ITSM, the market that ServiceNow was going after. They crushed BMC; HP was there but really not paying attention to it; IBM had all these old legacy products that weren't designed for the cloud, and so ServiceNow was able to really crush that market, caught everybody by surprise, and just really blew it out. There's a similar dynamic here, in that these guys are disrupting the legacy players with a cloud-like model, but at the same time, so is Amazon with Redshift, and so is Microsoft with its analytics platform. Teradata is trying to figure it out; they've got the inertia of a large install base, but it's a big on-prem install base. I think they struggle a little bit, but their advantage is they've got customers locked in. Oracle with Exadata is very interesting. Oracle has burned the boats and gone cloud first, and Oracle, mark my words, is re-architecting everything for the cloud. Now, you can say, oh, Oracle, they're old school, they're old guard, and that's fine, but one of the things about Oracle and Larry Ellison is they spend money on R&D; they're a very, very heavy investor in R&D. And I think you can see it in Exadata, which has actually been a very successful product. They will re-architect Exadata, believe you me, to bring compute to the data. They understand you can't just move all this data around; InfiniBand is not going to solve their problem in terms of moving data around their architecture. So watch Oracle. You've got other competitors like Google, who shows up well in the ETR survey, with BigQuery and Bigtable, and you've got a lot of other players here: guys like DataStax are in there, you've got Amazon with DynamoDB, you've got Couchbase, you've got all kinds of database players that are sort of blurring the lines, as I said, between SQL and NoSQL. But the real takeaway here from the ETR data is that cloud, again, is winning; it's driving the discussion, and the spending discussion, within IT. Watch this company Snowflake. They're going to do an IPO, I guarantee it; we'll see if they get in before the market turns down. But we've seen this play by Frank Slootman and his team before, and the spending data shows that this company is hot. You see them all over Silicon Valley, and you're seeing them show up in the spending data. So we'll keep an eye on this. It's an exciting market; the database market used to be kind of boring and now it's red-hot. So there you have it, folks. Thanks for listening. This is Dave Vellante with CUBE Insights; we'll see you next time.
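As a concrete illustration of the "dial it up, dial it down" elasticity Dave describes, here is a minimal sketch using the Snowflake Python connector. The account, credentials and warehouse name are hypothetical placeholders, and the statements follow Snowflake's warehouse DDL in broad strokes; treat this as a sketch of the pattern rather than production code.

```python
# A minimal sketch of independently scaling compute while leaving storage alone,
# assuming the Snowflake Python connector. Account, credentials and warehouse
# names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",        # hypothetical account identifier
    user="analyst",
    password="********",
)
cur = conn.cursor()

# Spin up an isolated virtual warehouse for one team; this is compute only,
# the storage layer stays where it is and is shared with every other warehouse.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS finance_wh
      WAREHOUSE_SIZE = 'XSMALL'
      AUTO_SUSPEND = 60
      AUTO_RESUME = TRUE
""")

# Month-end crunch: dial the compute up without moving any data.
cur.execute("ALTER WAREHOUSE finance_wh SET WAREHOUSE_SIZE = 'XLARGE'")

# ... run the heavy queries against the same shared data ...

# Dial it back down (and stop paying) when the burst is over.
cur.execute("ALTER WAREHOUSE finance_wh SET WAREHOUSE_SIZE = 'XSMALL'")
cur.execute("ALTER WAREHOUSE finance_wh SUSPEND")
conn.close()
```

Because each virtual warehouse is sized independently of the storage layer, separate marketing, sales and finance warehouses can query the same tables at the same time without contending for a single cluster, which is the architectural point Dave is making.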
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
January 19 | DATE | 0.99+ |
Dave Schneider | PERSON | 0.99+ |
2014 | DATE | 0.99+ |
2012 | DATE | 0.99+ |
Frank Slootman | PERSON | 0.99+ |
Mike Scarpelli | PERSON | 0.99+ |
Dan Magee | PERSON | 0.99+ |
Frank Slootman | PERSON | 0.99+ |
January 17 | DATE | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
July 19 | DATE | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
ServiceNow | ORGANIZATION | 0.99+ |
81% | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
1.2 billion | QUANTITY | 0.99+ |
July | DATE | 0.99+ |
30 | QUANTITY | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
52 | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
Larry Ellison | PERSON | 0.99+ |
360 degree | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
Joe Tucci | PERSON | 0.99+ |
20 billion | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
January | DATE | 0.99+ |
Pat | PERSON | 0.99+ |
Hadoop | TITLE | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Excel | TITLE | 0.99+ |
10% | QUANTITY | 0.99+ |
70 plus billion dollars | QUANTITY | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
35 | QUANTITY | 0.99+ |
first slide | QUANTITY | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
about 4,500 end-users | QUANTITY | 0.99+ |
over 900 million dollars | QUANTITY | 0.99+ |
Boston Massachusetts | LOCATION | 0.99+ |
two and a half billion dollars | QUANTITY | 0.98+ |
Teradata | ORGANIZATION | 0.98+ |
first appliance | QUANTITY | 0.98+ |
Mike | PERSON | 0.98+ |
dozens of companies | QUANTITY | 0.98+ |
over a thousand | QUANTITY | 0.98+ |
Tim | PERSON | 0.98+ |
Dheeraj Pandey, Nutanix | CUBEConversation, September 2019
(funky music) >> Announcer: From our studios in the heart of Silicon Valley, Palo Alto, California, this is a CUBE Conversation. >> Everyone, welcome to this special CUBE Conversation here in Palo Alto, California with CUBE Studios. I'm John Furrier, your host of this CUBE Conversation with Dheeraj Pandey, CEO of Nutanix. CUBE alumni, very special part of our community. Great to see you again, thanks for coming in. We're previewing your big show coming up, Nutanix NEXT in Europe. Thanks for joining me. >> It's an honor. >> It's always great to get you. I saw your interview on Bloomberg with Emily Chang. Kind of short interview, but still, you're putting the message out there. You've been talking software. We covered your show here in North America. Clearly moving to the subscription model, and I want to get into that conversation. I think there's some notable things to talk about now that we're in this cloud 2.0 era, as we're calling it, kind of a goof on web 2.0. But cloud 2.0 is a whole shift happening, and you've been on it for a while. But you got the event coming up in Europe, Nutanix NEXT. What's the focus? Give a quick plug for that event. Let's talk about that. >> Yeah, in fact, the reiteration of the message is a key part of any of our user conferences. We have 14,000 customers around the world now, across 150 countries. We've done almost more than $5 billion worth of just software business in the last six, seven years of selling. It's a billion six run rate. There's a lot going on in the business, but we need to take a step back and in our user conference talk about the vision. So what's the vision of Nutanix? And the best part is that it hasn't changed. It's basically one of those timeless things that hopefully will withstand the test of time in the future as well. Make computing invisible anywhere. People scratch their heads. What does computing mean? What does invisible mean? What does anywhere mean? And that's where we'll actually go to these user conferences, talk about what is computing for us. Is it just infrastructure? Is it infrastructure and platform? Now that we're getting into desktop delivery, is it also about business users and applications? The same thing about invisible, what's invisible? For us, it's always been a special word. It's a very esoteric word. If you think about the B2B world, it doesn't talk about the word invisible a lot. But for us it's a very profound word. It's about autonomous software. It's about continuous, virtues of continuous delivery, continuous consumption, continuous mobility. That's how you make things invisible. And subscription is a big part of that continuous delivery message and continuous consumption message. >> So the event is October 9th, around the first week of October. You got some time there, but getting geared up for that. I wanted to ask you what you've learned from the North America conference and going into the European conference. It's ultimately the same message, same vision, with a tweak, you got some time under your belt since then. The subscription model business, which you were talking in your Bloomberg interview, is in play. It is not a new thing. It's been in operation for a while. Could you talk about that specifically? Because I think most people would say, hey, hardware to software, hard to do. Software subscription, hard to maintain and grow. Where are you on that transition? Explain and clarify your mix of business, hardware, software. Where are you in the progress of that transformation? 
>> Well, you know, I have been a big student of history, and I can't think of a company that's gone from hardware to software and software subscription in such a short span. Actually, I don't know of any company. If you know of one, please let me know. But why? The why of subscription is to be frictionless. Hybrid is impossible without having the same kind of consumption model, both on-prem and off-prem. And if we didn't go through that, we would be hypocritical as a company to talk about cloud and hybrid itself. The next 10 years for this company is about hybrid, and doing it as if private and public are one in the same is basically the essence of Nutanix's architecture. >> Well, I can think of some hardware-software dynamics that, again, might not match your criteria, but some might say Apple. Is it a software or hardware company? Hardware drives the ecosystem, they commoditize it. Peloton bicycle is a bike, but it's mainly a software business and in-person business. So there's different models. Oracle has hardware, they have software. It doesn't always relate to the enterprise. What's the argument to say, hey, why don't you just create your own box and kick ass with that box, or is it just different dynamics? What's that? >> Well, there's a tension in the system. People want to buy experiences as opposed to buying things. They don't want to integrate things, like, oh, I need to actually now get a hardware vendor to behave as a software vendor when it comes to support issues and such. And at the same time, you want to be flexible and portable. How do you really work with the customer with their relationships that they have with their hardware vendors? So the word anywhere in our vision is exactly that. It's like, okay, we can work on multiple servers, multiple hypervisors, and multiple clouds. At the end of the day, the customer experience is king. And that's one thing that the last 10 years has taught us, John, if anything, is don't sell things to people. You know, Kubernetes is a thing. Cloud is a thing. Can you really go sell experiences? The biggest lesson in the last year for us has been integrate better. Not just with partners, but also within your own products. And now if you can do that well, customers will buy from you. >> I think you just kind of clarified where I was thinking out loud, because if you think about Apple, the hardware is part of the experience. So they have to have it. >> Mm-hmm. >> You don't have to have the hardware to create those experiences. Is that right? >> Absolutely, which is why it's now 2% of our business, and yet we are saying that we take the burden of responsibility of supporting it, integrating with it. One of the biggest issues with cloud is operations. What is operations? It's day two patching. How do you do day two patching? Intel is coming up with microcode upgrades every quarter now because of security reasons. If we are not doing an awesome job of one-click upgrade of firmware and microcode and BIOS, we don't belong in the hybrid cloud world. I think that's the level of mundaneness that we've gotten to with our software that makes us such a high NPS company with our customers. >> I want to just drill in on the notion of a thing versus experience. You mentioned Kubernetes is a thing. I would say Hadoop was a thing. But Hadoop was a great example. It was hard to do. Kubernetes, jury's still out. People love them. Kubernetes, we'll see how that goes. If it can be abstracted away, it's not a thing anymore. We'll see. 
But Hadoop was a great example. Unbelievable technology direction, big data, all the goodness of object storage and unstructured data. We knew that. Just hard to work with. Setting up clusters, managing clusters. And it ended up being the death of the sector, in my opinion. What is an experience? Define what does that mean. Is it frictionless only? Is there a trust equation? Just unpack your vision on what that means. A thing, which could be a box with software on it, and experience, which is something different. >> Yeah, I mean, now you start to unpeel the word experience. It's really about being frictionless, trusted, and invisible. If you can really do these things well, around the word, define frictionless. Well, it has to be consumer-grade. It has to be web scalable, 'cause customers are looking for the Amazon architecture inside, and aren't just going and renting it from Amazon, but also saying, can I get the same experience inside? So you've got to make it web scale. You've got to make it consumer-grade. Because our operators and users, talk about Hadoop, I mean, they struggled with the experience of Hadoop itself because it was a thing, it was a technology, as opposed to being something that was consumer-grade itself. And then finally, security. Trust is very important. We must secure always on resilient. The word resilience is very important. In fact, that's one of the things we'll actually talk about at our conference, is resilience. What does it mean, not just for Nutanix stock, to be where it is today from where it was six months ago. And that's what I'm most proud of, is you go through these transitions, you actually talk about resilience of software, resilience of systems, resilience of customer support, and resilience of companies. >> So you mentioned hybrid cloud. We were talking before we came on camera about hybrid cloud. But software's a two-way relationship. Talk about what you mean by that, and then I want to ask you a follow-up question of where hardware may or may be an opportunity or a problem in that construct. >> Yeah, I mean, look, in the world of hybrid, what's really important is delivering an experience that's really without silos. Ideally, on-prem infrastructure is an availability zone. How do you make it look like an availability zone that can stand up shoulder-to-shoulder with a public cloud availability zone? That's where you sell an experience. That's how you talk about a management plane where you can actually have a single pane of glass that really delivers a cloud experience both ways. >> You're kind of a contrarian. I always love interviewing you because you seem to be on the next wave before any realizes it. Right now everyone's trying to go on-premise and you're moving from on-premise to the cloud. Not you guys moving, but your whole vision is. You've been there, done that on premises. Now you've got to be where the customers are, which is where they need to be, which is the cloud. I heard you say that. It's interesting, you're going the other way, right? >> Mm-hmm. But you could look at the infrastructure and say, hey, there's a lot of hardware inside these clouds that have a lot of hardware-specific features like hardware assist that software or network latency might not be able to deliver. Is that a missed opportunity for you guys, or does your software leverage these trends? And even on premises, there's hardware offload-like features coming. How do you reconcile that? 
Because I would just argue inside of the company, say, hey, Dheeraj, let's not go all in on software. We can maximize this new technology, this thing, for our software. How do you-- >> Look, I think if you look at our features, like security, the way we use TPM, which is a piece of assist that you get from Intel's motherboards for doing key encryption management. What does it mean to really do encryption at scale using Intel's vectored instructions? How do you do RDMA? How do you look at InfiniBand? How do you look at Optane drives? We've been really good at that lowest level, but making sure that it's actually selling a solution that can then go drive SAP HANA and Oracle databases and GPU for graphics and desktops. So as a company, we don't talk about those things because they are the how of the business. You don't talk about the how. You'd rather talk about the why and the what, actually. >> So from a business strategy standpoint, I just want to get this clear because there's downfalls for getting into the hardware business. You know them. Inventory, all these hardware cycles are moving fast. You mentioned Intel shipping microcode for security reasons. So you're basically saying you'd rather optimize for decoupling hardware from the software and ride the innovation of the hardware guys, like Nvidia and Intel and others. >> Absolutely, and do it faster than anybody else, but more integrated than anybody else. You know, all together now is kind of our message for .NEXT. How do you bring it all together? Because the world is struggling with things, and that's the opportunity for Nutanix. >> Well, I would say making compute invisible is a great tagline. I would add storage and networking to that too. >> Yeah, computing, by the way. >> Computing. >> I said computing. >> Okay, computing. >> 'Cause computing is compute storage networking. Computing is infrastructure, platform, and apps. It's a very clever word, and it's a very profound word as well. >> Well, let's just throw Kubernetes in there too and move up the stack, because ultimately, we're writing a lot of stories on covering this editorially, is that the world's flipped upside down. It used to be the infrastructure. We're calling this cloud 2.0, like I said earlier. The world used to be the infrastructure enabled what the apps could do, and they were limited to the resources they had. Now the apps are in charge. They're dictating terms below the software line, if you want to call it the app line. So the apps are in charge now. Whoever can serve up the best infrastructure capability, which changes the entire computing industry because now the suppliers who can deliver that elastic or flexible capacity or resource, wins. >> Absolutely. >> And that's ultimately a complete shift. >> You know, I tell people, John, about the strategy of Nutanix because we have some apps now. Frame is an app for us. Beam is an app. Calm is an app. These are apps, they're drawn on the platform, which is the core platform of Nutanix, the core hyper-convergence innovation that we did. If you go back to the '90s, who was to say that Windows really fueled Office or Office fueled Windows? They had to work in conjunction, because without one, there would be no, the other, actually. So without Office there would be no Windows. Without Windows there would be no Office. How platforms and apps work with each other synergistically is at the core of delivering that experience. >> I want to add just you're a student of history. 
As an entrepreneur, you've been there through the many waves and you also invest a lot, and I want to ask you this question. It used to be that platforms was the holy grail. You'd go to a VC and say, hey, I'm building a platform. Big time investment. An entrepreneur will come back: I got a tool. You're a feature. You're a feature, not a platform. Platforms was the elite engineering position to come in to look for the big money. How would you define platforms now? Because with cloud, if apps are in charge, and there's potential features that are coming around the corner that no one's yet invented, what is this platform 2.0 world look like if you were coming out of grad school or you were a young engineer or a young entrepreneur? How do you think about that right now? >> Well, the biggest thing is around extensibility and openness. You know, we were talking about openness before, but the idea of APIs, where API is the new graphically why, because the developer is the builder. And how do you really go sell to them and still deliver a great experience? And not just from the point of view of, well, I've given you the best APIs, but the best SDKs. What does it mean to give them a development kit that gets them up and running in no time? And maybe even a graphical Kickstarter. We're working with our partners a lot, where it's not just about delivering APIs or raw APIs because they're not as consumable, but to deliver SDKs and to deliver graphical structural kits to them so that they can be up and running, building applications in two months rather than two years. I think that's at the core of what our platform is. >> And data and having an operating system thinking seems to be another common pattern. Understand the subsystems of data. Running and assembling things together. >> I think what is Nutanix, I mean, if people ask me what is Nutanix, I start with data. Data is the core of the company. We've done data for virtualization. We're now doing data for applications with Nutanix Files. We have object store data. We are doing Era, which is database as a service. Without data, we'd be dead as a company. That's how important it is. Now, how do you meld that with design and delivery is basically where the three Ds come together: data-- >> I wrote a blog post. Dave Vellante always laughs when I bring this up because he always references it too. In 2007 I said, data is the new development kit. 'Cause back then, development kits existed. SDKs, software development kits. MSDN was Microsoft's thing. You remember those glory days, Dheeraj, I know. But the thesis was, if data does actually come in, it's actually an input into the software. This is what I think you guys are doing that is clever that's not well understood, is data is an input, like a software library almost. A module, but it's dynamic and it's always changing. And writing software for that is a nouveau kind of thing. This is new. >> Yeah, I know, and delivered to the developer, because right now data and hardware data is sitting in silos which are mainframe-like systems. How do you deliver it where they can spin it up on their own? Making sure that we democratize data is the biggest challenge in most companies. >> We're in a new era, I think you just pointed that out, and we talk about it at CUBE all the time. We don't really talk about up-front. It used to be UI was the thing, user interface, ease of use. 
I think now the new table stake feature in all companies is if you can't show value instantly in any solution that has a thing or things in it, then it's pretty much not going to happen. I mean, this is the new expectation that becomes the experience for-- >> Yeah, I mean, millennials are the new developers, and they need to actually see instant gratification, many of these-- >> Well, cost too. I don't want to spend a million dollars to find out it didn't work. I want to maybe spend something variable. >> And look, agility, the cliched word, and I don't want to talk about agility per se, but at the end of the day it's all about, can we provide that experience where you don't have to really learn something over 18 months and provide it in the next three hours. >> Great conversation here with Dheeraj Pandey, CEO of Nutanix, about his vision. I always loved your software vision. You guys have smart engineers there. Let's talk about your company. I think a lot of people at your conference and your community and others want to know, is how you're doing and how the company's doing. Because I think you guys are in the midst of a major transition we talked about earlier, hardware to software, software to subscription, recurring revenue. I mean, it's pretty much a disruptive enabler for you guys at one level as an opportunity. It's changing how you do accounting. It's having product management. Your customers are going to consume it differently. It's been a big challenge. And stock's taken a little bit of a hit, but you're kind of playing the long game. Talk about the growth strategy as you guys go forward. This has been a struggle. There's been some personnel changes in the company. What's going on? Give us the straight scoop. >> Yeah, in fact the biggest thing is about the transformation for this coming decade. And there's fundamental things that need to change for the world of cloud. Otherwise, you're basically just talking the word rather than walking the walk itself. So this last quarter I was very pleased to announce that we finally showed the first strong point of this whole transformation. There's a really good data point coming out that the company is growing back again. We beat street estimates on pretty much every metric. Billings, revenue, gross margin. And we also guided above street estimates for billings, revenue, and gross margin, and I think that's probably one of the biggest things I'm proud of in the last six, nine months of this subscription transition. We're also telling the street about how to look at us from software and support billings point of view as opposed to looking at overall billings and revenue. If you take a step back into the company, I talk about this in our earnings call, 'til three years ago, we were a commercial company, also doing federal and some international. And the last three years we proved to ourselves and to the community that we can do enterprise, you know, high-end customers, upmarket, and also do a very good job of international. Now, the next three years is really about saying, can we do both enterprise and commercial together? All together now, which is also our, coincidentally, our .NEXT message, is the proof that we actually have to go and show that we can do federal, enterprise, and commercial to really build a very large business from it. >> Well, federal's got certification levels. We know that's different depending upon which agency you're talking to. Commercial, a little bit different ball game. 
SaaS becomes important, cloud becomes important. The big trend is on-premise hardware. Outposts for AWS, Azure Stack for Microsoft. How do you fit into that? Because you, again, you said you're both ways. >> Mm-hmm. >> So are you worried about that? Is that a headwind, tailwind for you? What's the impact for this now fashionable on-premises shift? Which I think is just a temporary thing as cloud continues to grow. But I still argue with Michael Dell about this. I think cloud is going to be a bigger TAM. Even though there's a huge total addressable market on enterprise, that's like saying there's a great TAM for horses and buggies when cars are coming out. It's different world between public cloud and on-premises. How does that impact Nutanix, this on-premise-- >> Well, remember I said about the word anywhere in our vision? Make computing invisible anywhere? With software you can actually reduce the tension between public and private. It's not this or that. It's this and that. Our software running on Outpost is a reality. It's not like we're saying, Outpost is one thing and Nutanix is another. And that's the value of software. It's so fungible, it's so portable, that you don't have to take sides between-- >> Are you guys at ISV inside Amazon Marketplace? >> No, but again, it's still a thing. Marketplace is still not where it should be, and it's hard to search and discover things from there. So we are saying, let's do it right. Remember, we were not the first hyper-convergence company. Right? We were probably the ninth one, like the way Google was as a search engine, actually. But we did it right, because the experience mattered. You know that search box that did everything? That's what Nutanix's overall experience is today. We will do the public cloud right with our software so that we can use the customer's credits with Amazon-- >> But you're still selling direct. And your partners. >> Well, everything is coming through partners, so at the end of the day we have to do an even better job of that, like what we're doing at HPE now. I think being able to go and find that common ground with partners is what commercial is all about. Commercial is a lot about distribution. As a company, we've done a really good job of enterprise and federal. But doing it with partners-- >> What are the biggest impact areas for your business and business model, elements with software transition that you're scaling up on the subscription side? What are the biggest areas? >> Well, one is just communication, 'cause obviously a lot is changing. At a private company, things change, nobody cares. The board just needs to know about it. But at a public company, we have investors in the public market. And many of them are in the nosebleeding section, actually, of this arena. So really, you're sitting in the arena, being the man in the arena, or the woman in the arena. How do you really take this message to the bleachers section is probably the biggest one, actually. >> Well, I think one of the things I've always speculated on, you look at the growth of, just pick some stocks that we all know. VMware, Microsoft. You look at the demarcation point where, right when the stock was low to high was the shift to cloud and software. With VMware, it was they had a failing strategy and they kill it and they do a deal with Amazon. Game has changed, now they're all in the software-defined data center. Microsoft, Satya Nadella comes in, boom, they're in cloud. Real commitment. 
And with Microsoft specifically, that was a real management commitment. They were committed to software. They were committed to the cloud business model, and took whatever medicine they needed to take. >> That's it. That's it, you take short-term pain for long-term gain, and look, anything that becomes large over time, to me it's all about long-term greed, and I use this word a lot. I want all our employees and our customers and our investors to really think about the word. There's greed, but it's long-term greed, and that's how most companies have become large over time. So I think for us to have done this right, to say, look, we are set for the next 10 years, was very important. >> It's interesting. Everyone wants to be like Jeff Bezos. Everyone wants to be like you guys now, because long-term greed or long-term thinking is the new fashion. It's the new standard and tack. >> Yeah, I mean, look the CEOs, the top 200 CEOs, came out and talked about, are we taking good care of main street, or are we just focused on this hamster wheel of three months reporting to Wall Street alone? And I think consensus is emerging that you got to take care of main street. You and I were talking about, that I look at investors as customers, and I look at customers as investors. Which is really kind of a contrarian way of thinking about it. >> It's interesting. We live in the world, we've seen many waves. I think the wave we're on now from an entrepreneurial and venture creation standpoint, whether you're public or private, is the long game is the new 3D chess. It's where the masters are playing their best game. You look at the results of the best companies. I just bought the book about Uber from Mike Isaac from the New York Times. Short-term thinking, win at all costs, that's not the 3D chess game that's going on with entrepreneurs these days. All the investment thesis is stay long-term. And certainly now, with this perceived bubble popping, or this downturn that may or may not happen, long-term game is more important than ever. Your thoughts on it? >> I think the word authenticity has never been more important, not just in the Valley, but around the world, actually. What you're seeing with all this Me Too movement and a lot of skeletons in the cupboard out there, I think at the end of the day, the word authentic cannot be artificially created. It has to come from within. What you talk about, Satya... I look at Shantanu Narayen, the Adobe CEO, and they're authentic CEOs. I mean, I look at Dara now, at Uber, he's talking about bringing authenticity to Uber. I think there's no shortcuts to success in this world. >> I think Adobe's a great example. What they've done has been amazing. I know you're on the board there, so congratulations. Final word, I'll let you get your plug in for the event and your customer base. Talk to your customers and investors out there that might watch this. From your state of mind, what's the state of the union for Nutanix? Speak directly to your customers and investors right now. >> Well, the tagline for .NEXT Copenhagen is all together now. We're bringing clouds together. We're bringing app infrastructure and data together. I think it's a really large opportunity for us to go sell an experience to our customers, rather than selling things. All these buzzwords that come up in technology, as a company, we've done a really good job of integrating them, and the next decade is about integrating the public cloud and the private cloud. And I look at investors and customers alike. 
I talk about long-term greed with them. Providing an experience to them is the core of our journey. >> Thanks for your insight, Dheeraj. This was a CUBE Conversation here in Palo Alto. I'm John Furrier, thanks for watching. (funky music)
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Bezos | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Europe | LOCATION | 0.99+ |
Shantanu Narayen | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dheeraj Pandey | PERSON | 0.99+ |
Emily Chang | PERSON | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Adobe | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Uber | ORGANIZATION | 0.99+ |
Mike Isaac | PERSON | 0.99+ |
Apple | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Satya | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Satya Nadella | PERSON | 0.99+ |
North America | LOCATION | 0.99+ |
John Furrier | PERSON | 0.99+ |
Michael Dell | PERSON | 0.99+ |
September 2019 | DATE | 0.99+ |
Dheeraj | PERSON | 0.99+ |
October 9th | DATE | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
2007 | DATE | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
2% | QUANTITY | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
14,000 customers | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
Office | TITLE | 0.99+ |
one-click | QUANTITY | 0.99+ |
three months | QUANTITY | 0.99+ |
two years | QUANTITY | 0.99+ |
ORGANIZATION | 0.99+ | |
CUBE | ORGANIZATION | 0.99+ |
two months | QUANTITY | 0.99+ |
Windows | TITLE | 0.99+ |
Dara | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
six months ago | DATE | 0.99+ |
Kickstarter | ORGANIZATION | 0.99+ |
ninth | QUANTITY | 0.99+ |
CUBE Studios | ORGANIZATION | 0.99+ |
Beam | TITLE | 0.99+ |
three years ago | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
last quarter | DATE | 0.98+ |
last year | DATE | 0.98+ |
today | DATE | 0.98+ |
next decade | DATE | 0.98+ |
both ways | QUANTITY | 0.98+ |
seven years | QUANTITY | 0.98+ |
more than $5 billion | QUANTITY | 0.97+ |
Bloomberg | ORGANIZATION | 0.97+ |
one | QUANTITY | 0.97+ |
SAP HANA | TITLE | 0.96+ |
Outpost | ORGANIZATION | 0.96+ |
DDN Crowdchat | October 11, 2018
(uptempo orchestral music) >> Hi, I'm Peter Burris and welcome to another Wikibon theCUBE special feature. A special digital community event on the relationship between AI, infrastructure and business value. Now it's sponsored by DDN with participation from NVIDIA, and over the course of the next hour, we're going to reveal something about this special and evolving relationship between sometimes tried and true storage technologies and the emerging potential of AI as we try to achieve these new business outcomes. So to do that we're going to start off with a series of conversations with some thought leaders from DDN and from NVIDIA, and at the end, we're going to go into a crowd chat, and this is going to be your opportunity to engage these experts directly. Ask your questions, share your stories, find out what your peers are thinking and how they're achieving their AI objectives. That's at the very end, but to start, let's begin the conversation with Kurt Kuckein, who is a senior director of marketing at DDN. >> Thanks Peter, happy to be here. >> So tell us a little bit about DDN at the start. >> So DDN is a storage company that's been around for 20 years. We've got a legacy in high performance computing, and that's where we see a lot of similarities with this new AI workload. DDN is well known in that HPC community. If you look at the top 100 supercomputers in the world, we're attached to 75% of them. And so we have a fundamental understanding of that type of scalable need; that's where we're focused. We're focused on performance requirements. We're focused on scalability requirements, which can mean multiple things. It can mean the scaling of performance. It can mean the scaling of capacity, and we're very flexible. >> Well let me stop you and say, so you've got a lot of customers in the high performance world. And a lot of those customers are at the vanguard of moving to some of these new AI workloads. What are customers saying? With this significant engagement that you have with the best and the brightest out there, what are they saying about this transition to AI? >> Well I think it's fascinating that we have a bifurcated customer base here, where we have those traditionalists who probably have been looking at AI for over 40 years, and they've been exploring this idea and they've gone through the peaks and troughs in the promise of AI, and then contraction because CPUs weren't powerful enough. Now we've got this emergence of GPUs in the supercomputing world. And if you look at how the supercomputing world has expanded in the last few years, it is through investment in GPUs. And then we've got an entirely different segment, which is a much more commercial segment, and they may be newly invested in this AI arena. They don't have the legacy of 30, 40 years of research behind them, and they are trying to figure out exactly what do I do here. A lot of companies are coming to us: hey, I have an AI initiative. Well, what's behind it? We don't know yet, but we've got to have something. And they're trying to understand where this infrastructure is going to come from. >> So there's a general availability of AI technologies, and obviously flash has been a big part of that. Very high speed networks within data centers. Virtualization certainly helps as well. That opens up the possibility of bringing these algorithms, some of which have been around for a long time but required very specialized, bespoke configurations of hardware, to the enterprise. That still begs the question. 
There are some differences between high performance computing workloads and AI workloads. Let's start with some of the similarities, and then let's explore some of the differences. >> So the biggest similarity, I think, is that it's an intractably hard IO problem. At least from the storage perspective, it requires a lot of high throughput, depending on what those IO characteristics are. It can be very small file, IOP-intensive type workflows, but it needs the ability of the entire infrastructure to deliver all of that seamlessly from end to end. >> So really high performance throughput, so that you can get to the data you need and keep this computing element saturated. >> Keeping the GPU saturated is really the key. That's where the huge investment is. >> So how do AI and HPC workloads differ? >> How they are fundamentally different is that AI workloads often operate on a smaller scale in terms of the amount of capacity, at least today's AI workloads, right? As soon as a project encounters success, our forecast is that those things will take off and you'll want to apply those algorithms against bigger and bigger data sets. But today we encounter things like 10 terabyte data sets, 50 terabyte data sets, and a lot of customers are focused only on that. But what happens when you're successful? How do you scale your current infrastructure to petabytes and multi-petabytes when you'll need it in the future? >> So when I think of HPC, I think of often very, very big batch jobs, very, very large complex datasets. When I think about AI, like image processing or voice processing or whatever else it might be, I think of a lot of small files, randomly accessed, that require nonetheless some very complex processing that you don't want to have to restart all the time, and a degree of data pushing that's required to make sure you can keep that processing fed. Have I got that right? >> You've got that right. Now, one misconception is on the HPC side: that whole random small file thing has come in in the last five, 10 years, and it's something DDN has been working on quite a bit. Our legacy was in high performance throughput workloads, but the workloads have evolved so much on the HPC side as well, and as you posited at the beginning, so much of it has become AI and deep learning research. >> Right, so they look a lot more alike. >> They do look a lot more alike. >> So if we think about the evolving relationship now between some of these new data-first workloads, the AI-oriented, change-the-way-the-business-operates type of stuff, what do you anticipate is going to be the future of the relationship between AI and storage? >> Well, what we foresee really is that the explosion in AI needs and AI capability is going to mimic what we already see, and really drive what we see, on the storage side. We've been showing that graph for years and years of just everything going up and to the right, but as AI starts working on itself and improving itself, as the collection mechanisms keep getting better and more sophisticated and have increased resolutions, whether you're talking about cameras or acquisition in the life sciences, capabilities just keep getting better and better and the resolutions get better and better. It's more and more data, right, and you want to be able to expose a wide variety of data to these algorithms. That's how they're going to learn faster. And so what we see is that the data-centric part of the infrastructure is going to need to scale, even if you're starting today with a small workload. 
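As a rough sketch of the small-file, random-access pattern Kurt describes, and of what keeping a GPU saturated asks of the data path, here is a minimal PyTorch-style loader. The directory path, file format and worker counts are assumptions for illustration, not tuned recommendations, and it presumes the files share a uniform shape so they can be batched.

```python
# Minimal sketch: many small files, read in parallel so the GPU never starves.
# The directory path, file format and sizes are assumptions for illustration.
from pathlib import Path
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class SmallFileDataset(Dataset):
    """Each sample is one small .npy file, accessed in random order."""
    def __init__(self, root):
        self.files = sorted(Path(root).glob("*.npy"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        arr = np.load(self.files[idx])          # one small, random read
        return torch.from_numpy(arr).float()    # assumes uniform array shapes

dataset = SmallFileDataset("/data/train")        # hypothetical mount point
loader = DataLoader(
    dataset,
    batch_size=256,
    shuffle=True,          # random access pattern, not sequential streaming
    num_workers=8,         # parallel readers hide per-file latency
    pin_memory=True,       # faster host-to-GPU copies
    prefetch_factor=4,     # keep batches queued ahead of the GPU
)

device = "cuda" if torch.cuda.is_available() else "cpu"
for batch in loader:
    batch = batch.to(device, non_blocking=True)
    # ... forward/backward pass here; the storage side's job is to make
    # sure this loop is never waiting on IO ...
```

The storage system's job in this picture is simply to make sure the parallel readers never stall, which is the "keep the GPU saturated" requirement in code form.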
>> Kurt, thank you very much, great conversation. How does this turn into value for users? Well, let's take a look at some use cases that come out of these technologies. >> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that provides enablement and acceleration for a wide variety of AI and DL use cases at any scale. The platform provides tremendous flexibility and supports a wide variety of workflows and data types. Already today, customers in industry, academia and government all around the globe are leveraging DDN A3I with NVIDIA DGX-1 for their AI and DL efforts. In this first example use case, DDN A3I enables a life sciences research laboratory to accelerate a microscopy capture and analysis pipeline. On the top half of the slide is the legacy pipeline, which displays low resolution results from a microscope with a three minute delay. On the bottom half of the slide is the accelerated pipeline, where DDN A3I with NVIDIA DGX-1 delivers results in real time, 200 times faster and with much higher resolution than the legacy pipeline. This use case demonstrates how a single unit deployment of the solution can enable researchers to achieve better science and the fastest time to results without the need to build out complex IT infrastructure. The white paper for this example use case is available on the DDN website. In the second example use case, DDN A3I with NVIDIA DGX-1 enables an autonomous vehicle development program. The process begins in the field, where an experimental vehicle generates a wide range of telemetry that's captured on a mobile deployment of the solution. The vehicle data is used to train capabilities locally in the field, which are transmitted to the experimental vehicle. Vehicle data from the fleet is captured to a central location, where a large DDN A3I with NVIDIA DGX-1 solution is used to train more advanced capabilities, which are transferred back to experimental vehicles in the field. The central facility also uses the large data sets in the repository to train experimental vehicles and simulate environments to further advance the AV program. This use case demonstrates the scalability, flexibility and edge-to-data-center capability of the solution. DDN A3I with NVIDIA DGX-1 brings together industry leading compute, storage and network technologies in a fully integrated and optimized package that makes it easy for customers in all industries around the world to pursue breakthrough business innovation using AI and DL. >> Ultimately, this industry is driven by what users must do, the outcomes they try to seek. But it's always made easier and faster when you've got great partnerships working on some of these hard technologies together. Let's hear how DDN and NVIDIA are working together to try to deliver new classes of technology capable of making these AI workloads scream. Specifically, we've got Kurt Kuckein coming back. He's a senior director of marketing for DDN, and Darrin Johnson, who is global director of technical marketing for NVIDIA in the enterprise and deep learning. Today, we're going to be talking about what infrastructure can do to accelerate AI. And specifically we're going to use a relationship, a burgeoning relationship between DDN and NVIDIA, to describe what we can do to accelerate AI workloads by using higher performance, smarter and more focused infrastructure for computing. Now to have this conversation, we've got two great guests here. 
We've got Kurt Kuckein, who is the senior director of marketing at DDN, and also Darrin Johnson, who's the global director of technical marketing for enterprise at NVIDIA. Kurt, Darrin, welcome to theCUBE. >> Thank you very much. >> So let's get going on this, 'cause this is a very, very important topic, and I think it all starts with this notion that there is a relationship that you guys put forward. Kurt, why don't you describe it. >> Sure, well, so what we're announcing today is DDN's A3I architecture powered by NVIDIA. So it is a full rack-level solution, a reference architecture that's been fully integrated and fully tested to deliver an AI infrastructure very simply, very completely. >> So if we think about why this is important: AI workloads clearly put special stress on the underlying technology. Darrin, talk to us a little bit about the nature of these workloads and why, in particular, things like GPUs and other technologies are so important to make them go fast. >> Absolutely, and as you probably know, AI is all about the data. Whether you're doing medical imaging, whether you're doing natural language processing, whatever it is, it's all driven by the data. The more data that you have, the better results that you get, but to drive that data into the GPUs, you need greater IO, and that's why we're here today to talk about DDN and the partnership of how to bring that IO to the GPUs on our DGX platforms. >> So if we think about what you describe: a lot of small files, often randomly distributed, with nonetheless very high-profile jobs that just can't stop midstream and start over. >> Absolutely, and if you think about the history of high performance computing, which is very similar to AI, really IO is just that. Lots of files. You have to get it there. Low latency, high throughput. And that's why DDN's nearly 20 years of experience working in that exact same domain is perfect, because you get the parallel file system, which gives you that throughput, gives you that low latency, and just helps drive the GPU. >> So you mentioned HPC and 20 years of experience. Now, it used to be with HPC you'd have a scientist with a bunch of graduate students setting up some of these big, honking machines, but now we're moving into the commercial domain. You don't have graduate students running around. You have very low-cost, high-quality people, a lot of administrators, nonetheless quick people but with a lot to learn. So how does this relationship actually start making, or bringing, AI within reach of the commercial world? Kurt, why don't you-- >> Yeah, that's exactly where this reference architecture comes in. So a customer doesn't need to start from scratch. They have a design now that allows them to quickly implement AI. It's something that's really easily deployable. We've fully integrated the solution. DDN has made changes to our parallel file system appliance to integrate directly with the DGX-1 environment. That makes it even easier to deploy from there and extract the maximum performance out of this without having to run around and tune a bunch of knobs, change a bunch of settings. It's really going to work out of the box. >> And NVIDIA has done more than the DGX-1. It's more than hardware. You've done a lot of optimization of different AI toolkits, et cetera, so talk a little bit about that, Darrin. >> Talking about the example of researchers in the past with HPC: what we have today are data scientists. A data scientist understands PyTorch, they understand TensorFlow, they understand the frameworks. 
They don't want to understand the underlying file system, networking, RDMA, InfiniBand, any of that. They just want to be able to come in, run their TensorFlow, get the data, get the results, and just keep turning that, whether it's a single GPU or 90 DGXs or as many DGXs as you want. So this solution helps bring that to customers much more easily, so those data scientists don't have to be system administrators. >> So roughly it's the architecture that makes things easier, but it's more than just for some of these commercial things. It's also the overall ecosystem. New applications fire up, application developers come in. How is this going to impact the aggregate ecosystem that's growing up around the need to deliver AI-related outcomes? >> Well, I think one point that Darrin was getting to there, and one of the big effects, is also that these ecosystems reach a point where they're going to need to scale. That's somewhere DDN has tons of experience. So many customers are starting off with smaller datasets. They still need the performance, and a parallel file system in that case is going to deliver that performance. But then also as they grow, going from one GPU to 90 DGXs is going to be an incredible amount of both performance scalability that they're going to need from their IO, as well as probably capacity scalability. And that's another thing that we've made easy with A3I: being able to scale that environment seamlessly within a single namespace, so that people don't have to deal with, again, a lot of tuning and turning of knobs to make this stuff work really well and drive those outcomes that they need as they're successful. In the end, it is the application that's most important to both of us, right? It's not the infrastructure. It's making the discoveries faster. It's processing information out in the field faster. It's doing analysis of the MRI faster. Helping the doctors, helping anybody who is using this to really make faster decisions, better decisions. >> Exactly. >> And just to add to that, in the automotive industry you have datasets that are 50 to 500 petabytes, and you need access to all that data, all the time, because you're constantly training and retraining to create better models, to create better autonomous vehicles, and you need the performance to do that. DDN helps bring that to bear, and this reference architecture simplifies it, so you get the value add of NVIDIA GPUs plus its ecosystem software plus DDN. It's a match made in heaven. >> Kurt, Darrin, thank you very much. Great conversation. To learn more about what they're talking about, let's take a look at a video created by DDN to explain the product and the offering. >> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that enables and accelerates end-to-end data pipelines for AI and DL workloads of any scale. It is designed to provide extreme amounts of performance and capacity, backed by a jointly engineered and validated architecture. Compute is the first component of the solution. The DGX-1 delivers over one petaflop of DL training performance, leveraging eight NVIDIA Tesla V100 GPUs in a 3RU appliance. The GPUs are configured in a hybrid cube mesh topology using the NVIDIA NVLink interconnect. DGX-1 delivers linearly predictable application performance and is powered by the NVIDIA DGX software stack. DDN A3I solutions can scale from single to multiple DGX-1s. Storage is the second component of the solution. 
The DDN AI200 is an all-NVMe parallel file storage appliance that's optimized for performance. The AI200 is specifically engineered to keep GPU computing resources fully utilized. The AI200 ensures maximum application productivity while easily managing data operations. It's offered in three capacity options in a compact 2U chassis. The AI200 appliance can deliver up to 20 gigabytes a second of throughput and 350,000 IOPS. The DDN A3I architecture can scale up and out seamlessly over multiple appliances. The third component of the solution is a high-performance, low-latency, RDMA-capable network. Both EDR InfiniBand and 100 gigabit ethernet options are available. This provides flexibility, ensuring seamless scaling and easy integration of the solution within any IT infrastructure. DDN A3I solutions with NVIDIA DGX-1 bring together industry-leading compute, storage and network technologies in a fully integrated and optimized package that's easy to deploy and manage. It's backed by deep expertise and enables customers to focus on what really matters: extracting the most value from their data with unprecedented accuracy and velocity. >> Always great to hear about the product. Let's hear the analyst's perspective. Now I'm joined by Dave Vellante, my colleague here at Wikibon and co-CEO of SiliconANGLE. Dave, welcome to theCUBE. Dave, a lot of conversations about AI. What is it about today that is making AI so important to so many businesses? >> Well, I think it's three things, Peter. The first is the data. We've been on this decade-long Hadoop bandwagon, and what that did is really focus organizations on putting data at the center of their business, and now they're trying to figure out, okay, how do we get more value out of that? So the second piece of that is that technology is now becoming available. AI of course has been around forever, but the infrastructure to support it, GPUs, the processing power, flash storage, deep learning frameworks like TensorFlow, has really started to come to the marketplace. So the technology is now available to act on that data, and I think the third is people are trying to get digital right. This is all about digital transformation. Digital meets data. We talk about that all the time, and every corner office is trying to figure out what their digital strategy should be. So they're trying to remain competitive, and they see automation, artificial intelligence, and machine intelligence applied to that data as a linchpin of their competitiveness. >> So a lot of people talk about the notion of data as a source of value, and in some quarters the presumption is that it's all going to the cloud. Is that accurate? >> Oh yes, it's funny that you say that, because as you know we've done a lot of work on this, and I think the thing organizations have realized in the last 10 years is that the idea of bringing five megabytes of compute to a petabyte of data is far more valuable. And as a result the pendulum is really swinging in many different directions. One being the edge: data is going to stay there, and certainly the cloud is a major force. And most of the data still today lives on premises, and that's where most of the data is likely going to stay. And so, no, all the data is not going to go into the cloud. >> It's not going to the central cloud? >> That's right, the central public cloud. You can redefine the boundaries of the cloud, and the key is you want to bring that cloud-like experience to the data.
We've talked about that a lot in the Wikibon and Cube communities, and that's all about the simplification and cloud business models. >> So that suggests pretty strongly that there is going to continue to be a relationship between choices about hardware infrastructure on premises and the success at making some of these advanced, complex workloads run and scream, and really drive some of those innovative business capabilities. As you think about that, what is it about AI technologies, or AI algorithms and applications, that has an impact on storage decisions? >> Well, the characteristics of the workloads: oftentimes it's going to be largely unstructured data, and that's going to be small files. There's going to be a lot of those small files, and they're going to be randomly distributed, and as a result, that's going to change the way in which people design systems to accommodate those workloads. There's going to be a lot more bandwidth. There's going to be a lot more parallelism in those systems in order to accommodate and keep those CPUs busy. And we're going to talk about it, but the workload characteristics are changing, so the fundamental infrastructure has to change as well. >> And so our goal ultimately is to ensure that we keep these new high-performing GPUs saturated by flowing data to them without a lot of spiky performance throughout the entire subsystem. Have we got that right? >> Yeah, I think that's right, and that's what I was talking about with parallelism, that's what you want to do. You want to be able to load up that processor, especially these alternative processors like GPUs, and make sure that they stay busy. The other thing is, when there's a problem, you don't want to have to restart the job. So you want to have real-time error recovery, if you will. And that's been crucial in the high performance world for a long, long time, because these jobs, as you know, take a long, long time. To the extent that you don't have to restart a job from ground zero, you can save a lot of money. >> Yeah, especially as you said, as we start to integrate some of these AI applications with some of the operational applications that are actually recording the results of the work that's being performed, or the prediction that's being made, or the recommendation that's been offered. So I think ultimately, if we start thinking about this crucial role that AI workloads are going to have in business, and that storage is going to have in AI, moving more processing closer to data, et cetera, that suggests that there are going to be some changes in the offerings from the storage industry. What is your thinking about how the storage industry is going to evolve over time? >> Well, there's certainly a lot of hardware stuff that's going on. We always talk about software-defined, but the hardware stuff matters. Obviously flash storage changed the game from spinning mechanical disk, and that's part of this. Also, as I said before, we're seeing a lot more parallelism; high bandwidth is critical. A lot of the discussion that we're having in our community is the affinity between HPC, high performance computing, and big data, and I think that was pretty clear, and now that's evolving to AI. So the internal network, things like InfiniBand, are pretty important. NVIDIA is coming onto the scene. So those are some of the things that we see. I think the other one is file systems. NFS tends to deal really well with unstructured data and data that is sequential. When you have all the-- >> Streaming.
>> Exactly, and you have all this random nature we just described, and you have the need for parallelism. You really need to rethink file systems. File systems are again a linchpin of getting the most out of these AI workloads, and the other thing is, if we talk about the cloud model, you've got to make this stuff simple. If we're going to bring AI and machine intelligence workloads to the enterprise, it's got to be manageable by enterprise admins. You're not going to be able to have a scientist deploy this stuff, so it's got to be simple, or cloud-like. >> Fantastic. Dave Vellante, Wikibon. Thanks very much for being on theCUBE. >> My pleasure. >> We've had the analyst's perspective. Now let's take a look at some real numbers. Not a lot of companies have delivered a rich set of benchmarks relating AI, storage and business outcomes. DDN has. Let's look at a video that they prepared describing the benchmarks associated with these new products. >> DDN A3I with NVIDIA DGX-1 is a fully integrated and optimized technology solution that provides massive acceleration for AI and DL applications. DDN has engaged in extensive performance and interoperability testing programs in close collaboration with expert technology partners and customers. Performance testing has been conducted with synthetic throughput and IOPS workloads. The results demonstrate that the DDN A3I parallel architecture delivers over 100,000 IOPS and over 10 gigabytes per second of throughput to a single DGX-1 application container. Testing with multiple containers demonstrates linear scaling up to full saturation of the DGX-1's IO capabilities. These results show concurrent IO activity from four containers with an aggregate delivered performance of 40 gigabytes per second. The DDN A3I parallel architecture delivers true application acceleration. Extensive interoperability and performance testing has been completed with a dozen popular DL frameworks on DGX-1. The results show that with the DDN A3I parallel architecture, DL applications consistently achieve a higher training throughput and faster completion times. In this example, Caffe achieves almost eight times higher training throughput on DDN A3I, and it completes over five times faster than when using a legacy file-sharing architecture and protocol. Comprehensive tests and results are fully documented in the DDN A3I solutions guide available from the DDN website. This test illustrates the DGX-1 GPU utilization and read activity from the AI200 parallel storage appliance during a TensorFlow training iteration. The green line shows that the DGX-1 GPUs achieve maximum utilization throughout the test. The red line shows the AI200 delivering a steady stream of data to the application during the training process. In the graph below, we show the same test using a legacy file-sharing architecture and protocol. The green line shows that the DGX-1 never achieves full GPU utilization, and that the legacy file-sharing architecture and protocol fails to sustain consistent IO performance. These results show that with DDN A3I, this DL application on the DGX-1 achieves maximum GPU productivity and completes twice as fast. This test and its results are also documented in the DDN A3I solutions guide available from the DDN website. DDN A3I solutions with NVIDIA DGX-1 bring together industry-leading compute, storage and network technologies in a fully integrated and optimized package that enables widely used DL frameworks to run faster, better and more reliably.
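The GPU-utilization results described above ultimately come down to whether the training job's input pipeline can keep data flowing from storage as fast as the GPUs consume it. As a rough, generic illustration only, not DDN's or NVIDIA's actual benchmark code, and with the mount path and TFRecord layout assumed for the example, a TensorFlow input pipeline that overlaps storage reads with GPU compute might look like this:

```python
# Minimal sketch of a tf.data pipeline that keeps GPUs fed by reading many
# files in parallel and prefetching the next batch while the current one
# trains. The data directory and record layout are assumptions.
import tensorflow as tf

DATA_DIR = "/mnt/a3i/train"  # assumed mount point of the shared parallel file system

def parse_example(serialized):
    # Assumed TFRecord layout: one JPEG image plus an integer label.
    features = tf.io.parse_single_example(
        serialized,
        {"image": tf.io.FixedLenFeature([], tf.string),
         "label": tf.io.FixedLenFeature([], tf.int64)})
    image = tf.io.decode_jpeg(features["image"], channels=3)
    image = tf.image.resize(image, [224, 224]) / 255.0
    return image, features["label"]

dataset = (
    tf.data.Dataset.list_files(DATA_DIR + "/*.tfrecord", shuffle=True)
    # Interleave reads across many files so small, random IO is overlapped.
    .interleave(tf.data.TFRecordDataset,
                cycle_length=16,
                num_parallel_calls=tf.data.AUTOTUNE)
    .map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    .shuffle(10_000)
    .batch(256)
    # Prefetch so the next batch is ready while the GPU computes.
    .prefetch(tf.data.AUTOTUNE)
)
```

The parallel interleaved reads and prefetching are what let a fast parallel file system translate into the sustained GPU utilization the benchmark graphs describe; with a slower back end, the same pipeline stalls waiting on IO.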
>> You know, it's great to see real benchmarking data, because this is a very important domain and there is not a lot of benchmarking information out there around some of the other products that are available, but let's try to turn that benchmarking information into business outcomes. And to do that we've got Kurt Kuckein back from DDN. Kurt, welcome back. Let's talk a bit about how these high-value outcomes that businesses seek with AI are going to be achieved as a consequence of this new performance, faster capabilities, et cetera. >> So there are a couple of considerations. The first consideration, I think, is just the selection of AI infrastructure itself. Right, we have customers telling us constantly that they don't know where to start. Now they have readily available reference architectures that tell them, hey, here's something you can implement, get it installed quickly, and you're up and running your AI from day one. >> So the decision process for what to get is reduced. >> Exactly. >> Okay. >> Number two is, you're unlocking all ends of the investment with something like this, right. You're maximizing the performance on the GPU side, you're maximizing the performance on the ingest side for the storage. You're maximizing the throughput of the entire system. So you're really gaining the most out of your investment there. And not just gaining the most out of your investment, but truly accelerating the application, and that's the end goal, right, that we're looking for with customers. Plenty of people can deliver fast storage, but if it doesn't impact the application and deliver faster results, cut run times down, then what are you really gaining from having fast storage? And so that's where we're focused. We're focused on application acceleration. >> So simpler architecture, faster implementation based on that, integrated capabilities, ultimately all resulting in better application performance. >> Better application performance, and in the end something that's more reliable as well. >> Kurt Kuckein, thanks so much for being on theCUBE again. So that ends our prepared remarks. We've heard a lot of great stuff about the relationship between AI, infrastructure, especially storage, and business outcomes, but here's your opportunity to go into the crowd chat and ask your questions, get your answers, share your stories, engage your peers and some of the experts that we've been talking with about this evolving relationship between these key technologies and what it's going to mean for business. So I'm Peter Burris. Thank you very much for listening. Let's step into the crowd chat and really engage and get those key issues addressed.
IBM Flash System 9100 Digital Launch
(bright music) >> Hi, I'm Peter Burris, and welcome to another special digital community event, brought to you by theCUBE and Wikibon. We've got a great session planned for the next hour or so. Specifically, we're gonna talk about the journey to the data-driven multi-cloud. Sponsored by IBM, with a lot of great thought leadership content from IBM guests. Now, what we'll do is, we'll introduce some of these topics, we'll have these conversations, and at the end, this is gonna be an opportunity for you to participate, as a community, in a crowd chat, so that you can ask questions, voice your opinions, hear what others have to say about this crucial issue. Now why is this so important? Well, Wikibon believes very strongly that one of the seminal features of the transition to digital business, driving new classes of AI applications, et cetera, is the ability to use flash-based storage systems and related software to do a better job of delivering data to more complex, richer applications, faster, and that's catalyzing a lot of the transformation that we're talking about. So let me introduce our first guest. Eric Herzog is the CMO and VP Worldwide Storage Channels at IBM. Eric, thanks for coming on theCUBE. >> Great, well thank you, Peter. We love coming to theCUBE, and most importantly, it's what you guys can do to help educate all the end-users and the resellers that sell to them, and that's very, very valuable, and we've had good feedback from clients and partners that, hey, we heard you guys on theCUBE, and very interesting, so I really appreciate all the work you guys do. >> Oh, thank you very much. We've got a lot of great things to talk about today. First, I want to start off and kick off the proceedings for the next hour or so by addressing the most important issue here. Data-driven. Now Wikibon believes that digital transformation means something: it's the process by which a business treats data as an asset, and re-institutionalizes its work and changes the way it engages with customers, et cetera. But this notion of data-driven is especially important because it elevates the role that storage is gonna play within an organization. Sometimes I think maybe we shouldn't even call it storage. Talk to us a little bit about data-driven and how that concept is driving some of the innovation that's represented in this and future IBM products. >> Sure. So I think the first thing, it is all about the data, and it doesn't matter whether you're a small company, like Herzog's Bar and Grill, or the largest Fortune 500 in the world. The bottom line is, your most valuable asset is your data, whether that's customer data, supply chain data, partner data that comes to you, that you use, services data, the data you guys sell, right? You're an analysis firm, so you've got data, and you use that data to create your analysis, and then you use that as a product. So, data is the most critical asset. At the same time, data always goes onto storage. So if that foundation of storage is not resilient, is not available, is not performant, then either A, it's totally unavailable, right, you can't get to the customer data, or B, there's a problem with the data, okay, so you're doing supply chain, and if the storage corrupts the data, then guess what? You can't send out the T-shirts to the right retail location, or have it available online if you're an online retailer. >> Or you sent 200,000 instead of 20, and you get stuck with the bill. >> Right, exactly.
So data is that incredible asset, and then underneath, think of storage as the foundation of a building. Data is your building, okay, and all the various aspects of that data, customer data, your data, internal data, everything you're doing, that's the building. If the foundation of the building isn't rock solid, the building falls down, whether your building is big or small, and that's what storage does, and then storage can also optimize the building above it. So think of it as more than just the foundation; it's the foundation, if you will, that almost acts like a tree, with things that come up from the bottom to support that beautiful image above, and storage can help you out. For example, metadata. Metadata, which is data about data, could be used by analytics packages, and guess what? The metadata about the data can be exposed by the storage. So that's why data-driven is so important from an end-user perspective, and why storage is that foundation underneath a data-driven enterprise. >> Now we've seen a lot of folks talk about how cloud is the centerpiece of thinking about infrastructure. You're suggesting that data is the centerpiece of infrastructure, and cloud is gonna be an implementation decision: where do I put the workloads, the costs, all the other elements associated with it. But it suggests ultimately that data is not gonna end up in one place. We have to think about data as being where it needs to be to perform the work. That suggests multi-cloud, multi-premise. Talk to us a little bit about the role that storage and multi-cloud play together. >> So let's take multi-cloud first and peel that away. So multi-cloud, we see a couple of different things. So first of all, certain companies don't want to use a public cloud, whether it's a security issue, or the fact that some people have found out that public cloud providers, no matter who the vendor is, are sort of a razor and razor blades model. It's very cheap to put the storage out there, but if you want certain SLAs, guess what? The cloud vendors charge more. If you move data around a lot, in and out as you were describing, because it's really that valuable, guess what? The cloud provider charges you for that ingress and egress. So it's almost the razor and the razor blades. So A, there's a cost factor in public only. B, you've got people that have security issues. C, what we've seen is, in many cases, hybrid. So certain datasets go out to the cloud and other datasets stay on the premises. So you've got that aspect of multi, which is public, private or hybrid. The second aspect, which is very common in bigger companies that are either divisionalized or large geographically, is literally the usage, in a hybrid or a public cloud environment, of multiple cloud vendors. So for example, in several countries the data has to physically stay within the confines of that country. So if you're a big enterprise and you've got offices in 200 different, well not 200, but 100 different countries, and 20 of 'em you have to keep in that country by law, if your cloud provider doesn't have a data center there, you need to use a different cloud provider. So you've got that. And you also have, I would argue, that the cloud is not new anymore. The internet is the original cloud. So it's really old. >> Cloud in many respects is the programming model, or the mature programming model, for internet-based applications. >> I'd agree with that. So what that means is, as it gets more mature, from the mid-sized company up, all of a sudden procurement's involved.
So think about the way networking, storage and servers, and sometimes even software, were bought. The IT guy, the CIO, the line of business might specify, I want to use this, but then it goes to procurement. In the mid-sized to big company it's like, great, are we getting three bids on that? So we've also seen that happen, particularly with larger enterprises where, well, you were using IBM Cloud, that's great, but are you getting a quote from Microsoft or Amazon, right? So those are the two aspects we see in multi-cloud, and by the way, that can be a very complex situation dealing with big companies. So the key thing that we do at IBM is make sure that whichever model you take, public, private or hybrid, or multiple public clouds, or multiple public cloud providers used in a hybrid configuration, we can support that. So, things like our transparent cloud tiering; we've also recently created some solution blueprints for multi-clouds. These things allow you to deploy simply and easily. Storage has to be viewed as transparent to a cloud. You've gotta be able to move the data back and forth, whether that be backing the data up, or archiving the data, or secondary data usage, or whatever that may be. And so storage really has gotta be multi-cloud, and we've been doing those solutions already. In fact, on the software side of the IBM storage portfolio, we have hundreds of cloud providers, mid, big and small, that use our storage software to offer backup as a service or storage as a service, and we're again the software foundation underneath what an end-user would buy as a service from those cloud providers. >> So I want to pick up on a word you used, simplicity. So, you and I are old infrastructure hacks, and for many years I used to tell my management, infrastructure must do no harm. That's the best way to think about infrastructure. Simplicity is the new value proposition; complexity remains the killer. Talk to us a little bit about the role that simplicity in packaging, service delivery and everything else plays in shaping the way you guys at IBM think about what products, what systems, and when. >> So I think there's a couple of things. First of all, it's all about the right tool for the right job. So you don't want to over-sell, and sell a big, giant, high-end all-flash array, for example, to a small company. They're not gonna buy that. So we have created a portfolio, of which our FlashSystem 9100 is our newest product, but we've got a whole set of portfolios from the entry space to the mid-range to the high end. We also have stuff that's tuned for applications, so for example, our Elastic Storage Server, which comes in an all-flash configuration, is ideal for big data analytics workloads. Our DS8000 family of flash is ideal for mainframe attach, and in fact close to 65% of all mainframe-attached storage is from IBM. But you have the right tool for the right job, so that's item number one. The second thing you want to do is make it easier and easier to use, whether that be configuring the physical entity itself, so how do you cable it, how do you rack and stack it, and making sure that it easily integrates into whatever else they're putting together in their data center, be it a cloud data center or a traditional on-premises data center, it doesn't matter. The third thing is all about the software. So how do you have software that makes the array easier and easier to use, and is heavily automated based on AI.
So the old automation way, and we've both been in that era, was you set policies. Policy-based management, when it came out 10 years ago, was a transformational event. Now it's all about using AI in your infrastructure. Not only does your storage need to be right to enable AI at the server workload level, but, we're saying, we've actually deployed AI inside of our storage, making it easier for the storage manager or the IT manager, and in some cases even the app owner, to configure the storage 'cause it's automated. >> Going back to that notion that the storage knows something about the metadata, too. >> Right, exactly, exactly. So the last thing is our multi-cloud blueprints. So in those cases, what we've done is create these multi-cloud blueprints. For example, disaster recovery and business continuity using a public cloud. Or secondary data use in a public cloud. How do you go ahead and take a snapshot, a replica or a backup, and use it for dev-ops or test or analytics? And by the way, our Spectrum Copy Data Management software allows you to do that, but you need a blueprint so that it's easy for the end user, or for those end users who buy through our partners; our partners then have this recipe book, these blueprints. You put them together, use the software that happens to come embedded in our new FlashSystem 9100, and then they use that to create all these various different recipes. Almost, I hate to say it, like a baker would do. They use some base ingredients in baking, but you can make cookies, candies, all kinds of stuff; a donut is essentially a baked good that's fried. So all these things use the same base ingredients, and the software that comes with the FlashSystem 9100 is those base ingredients, reformulated in different ways to give all these multi-cloud blueprints. >> And we've gotta learn more about vegetables so we can talk about salad in that metaphor, (Eric laughing) you and I. Eric, once again. >> Great, thank you. >> Thank you so much for joining us here on theCUBE. >> Great, thank you. >> Alright, so let's hear this come to life in the form of a product video from IBM on the FlashSystem 9100. >> Some things change so quickly, it's impossible to track with the naked eye. The speed of change in your business can be just as sudden, and requires the ability to rapidly analyze the details of your data. The new IBM FlashSystem 9100 accelerates your ability to obtain real-time value from that information and rapidly evolve to a multi-cloud infrastructure, fueled by NVMe technology, in one powerful platform. IBM FlashSystem 9100 combines the performance of IBM FlashCore technology, the efficiency of IBM Spectrum Virtualize, and the IBM software solutions to speed your multi-cloud deployments, reduce overall costs, plan for performance and capacity, and simplify support, using cloud-based IBM Storage Insights to provide AI-powered predictive analytics, and simplify data protection with a storage solution that's flexible, modern, and agile. It's time to re-think your data infrastructure. (upbeat music) >> Great to hear about the IBM FlashSystem 9100, but let's get some more details. To help us with that, we've got Bina Hallman, who's the Vice President of Offering Management at IBM Storage. Bina, welcome to theCUBE. >> Well, thanks for having me. It's an exciting event; we're looking forward to it. >> So Bina, I want to build on some of the stuff that we talked to Eric about. Eric did a good job of articulating the overall customer challenge.
As IBM conceives how it's going to approach customers and help them solve these challenges, let's talk about some of the core values that IBM brings to bear. What would you say, say three, are the three things that IBM really focuses on as it thinks about its core values to approach these challenges? >> Sure, sure. It's really around helping the client, providing a simple one-stop-shopping approach, ensuring that we're doing all the right things to bring the capabilities together so that clients don't have to take different component technologies and put them together themselves. They can focus on providing business value. And it's really around delivering the economic benefits around CapEx and OpEx, delivering a set of capabilities that help them move on their journey to a data-driven multi-cloud. Make it easier and make it simpler. >> So, making sure that it's one place they can go where they can get the solution. But IBM has a long history of engineering. Are you doing anything special in terms of pre-testing, pre-packaging some of these things to make it easier? >> Yeah, over the years we have worked with many of our clients around the world, helping them achieve their vision and their strategy around multi-cloud, and in that journey and that set of experiences, we've identified some key solutions that really do make it easier. And so we're leveraging the breadth of IBM, the power of IBM, making those investments to deliver a set of solutions that are pre-tested and supported at the solution level. We're really focusing on delivering and underpinning the solutions with blueprints: step-by-step documentation, and, as clients deploy these solutions and run into challenges, having IBM support to assist. Really bringing it all together. This notion of a multi-cloud architecture is around delivering modern infrastructure capabilities, NVMe acceleration, but also some of our really core differentiation that we deliver through FlashCore data reduction capabilities, along with things like modern data protection. That segment is changing, and we really want to enable clients, their IT, and their line of business, to free them up to focus on business value versus putting these components together. So it's really around taking those complex things and making them easier for clients. Get improved RPO and RTO, get improved performance, get improved costs, but also flexibility and agility, which are very critical. >> That sounds like, therefore, I mean, the history of storage has been trade-offs: this can only go that fast, and that tape can only go that fast, but now when we start thinking about flash and NVMe, the trade-offs are not as acute as they used to be. Are IBM's engineering chops capable of showing how you can in fact have almost all of this at one time? >> Oh, absolutely. The breadth and the capabilities in our R&D and our research capabilities, plus the experiences and engagements that I talked about, putting all of that together to deliver some key solutions and capabilities. Like, look, everybody needs backup and archive. Backup to recover your data in case a disaster occurs, archive for long-term retention. That data management, data protection segment is going through a transformation. New emerging capabilities, new ways to do backup.
And what we're doing is pulling all of that together with things that we introduced, for example, our Spectrum Protect Plus in the fourth quarter, along with this FS 9100 and the cloud capabilities, to deliver a solution around data protection and data reuse, so that you have a modern backup approach for both virtual and physical environments that is really based on things like snapshots and mountable copies. So you're not using that traditional approach of recovering your copy from a backup by bringing it back. Instead, all you're doing is mounting one of those copies and instantly getting your application back up and running for operational recovery. >> So to summarize some of those values: one-stop, pre-tested, advanced technologies, smartly engineered. You guys did something interesting on July 10th. Why don't you talk about how those values, and that understanding of the problem, manifested themselves in the exciting set of new products that you guys introduced on July 10th. >> Absolutely. On July 10th we not only introduced our flagship FlashSystem, the FS 9100, which delivers some amazing client value around the economic benefits of CapEx and OpEx reduction, but also seamless data mobility, data reuse, and security, all the things that are important for a client on their cloud journey. In addition to that, we infused that offering with AI-based predictive analytics, and of course that performance and NVMe acceleration is really key. But in addition to doing that, we've also introduced some very exciting solutions. Really three key solutions. One is around data protection and data reuse, to enable clients to get that agility, and the second is around business continuity and data reuse, to be able to really reduce the expense of having business continuity in today's environment. It's a high-risk environment, disruptions are inevitable, but being prepared to mitigate some of those risks and having operational continuity is important, by doing things like leveraging the public cloud for your DR capabilities. That's very important, so we introduced a solution around that. And the third is around private cloud: taking your IBM storage, your FS 9100, along with the heterogeneous environment you have, and making it cloud-ready. Getting the cloud efficiencies. Making it to where you can use those environments to create things like cloud-native applications that are portable from on-prem into the cloud. So those are some of the key ways that we brought this together to really deliver on client value. >> So could you give us just one quick use case of your clients that are applying these technologies to solve their problems? >> Yeah, so let me use the first one that I talked about, the data protection and data reuse. So, to be able to take your on-premise environment, really apply an abstraction layer, set up catalogs, set up SLAs and access control, but then be able to step away and manage that storage in an API-based way. We have a lot of clients that are doing that and then taking that, making the snapshots, and using those copies for things like disaster recovery or secondary use cases like analytics and dev-ops.
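To make that snapshot-and-reuse pattern a little more concrete, here is a purely illustrative sketch of an API-driven flow. The endpoint, paths, and field names below are invented for illustration; they are not the actual API of IBM Spectrum Copy Data Management or Spectrum Protect Plus, and should be read only as a sketch of the general pattern of taking a copy and mounting it for dev/test instead of restoring it.

```python
# Hypothetical example only: base URL, endpoints and payload fields are made up
# to illustrate "snapshot production data, then mount a copy for dev/test".
import requests

BASE = "https://copy-data-manager.example.com/api/v1"  # assumed management endpoint
AUTH = {"Authorization": "Bearer <token>"}              # assumed bearer-token auth

# 1. Trigger an application-consistent snapshot of a production volume.
snap = requests.post(f"{BASE}/volumes/prod-db-01/snapshots",
                     headers=AUTH,
                     json={"consistency": "application", "retention_days": 7})
snap.raise_for_status()
snapshot_id = snap.json()["id"]

# 2. Mount that snapshot to a dev/test host as a writable clone,
#    rather than copying the data back the traditional way.
mount = requests.post(f"{BASE}/snapshots/{snapshot_id}/mounts",
                      headers=AUTH,
                      json={"target_host": "devtest-42.example.com",
                            "mode": "writable-clone"})
mount.raise_for_status()
print("Dev/test copy available at:", mount.json()["mount_path"])
```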
You know, dev-ops is a really important use case, and our clients are really leveraging some of these capabilities for it, because you want to make sure that, as application developers are developing their applications, they're working with the latest data, and that the testing they're doing is meaningful in finding the maximum number of defects, so you get the highest quality of code coming out of them, and being able to do that in a self-service-driven way so that they're not having to slow down their innovation. We have clients leveraging our capabilities for those kinds of use cases. >> It's great to hear about the FlashSystem 9100, but let's hear what customers have to say about it. Not too long ago, IBM convened a customer panel to discuss many aspects of this announcement. So let's hear what some of the customers had to say about the FlashSystem 9100. >> Now Owen, you've used just about every flash system that IBM has made. Tell us, what excites you about this announcement of our new FlashSystem 9100? >> Well, let's start with the hardware. The fact that they took the big modules from the older systems and collapsed that down to a two-and-a-half inch form-factor NVMe drive is mind-blowing. And to do it with the full-speed compression as well. When the compression was first announced for the last FlashSystem 900, I didn't think it was possible. We tested it, and I was proven wrong. (laughing) It's entirely possible. And to do that on a small form-factor NVMe drive is just astounding. Now to layer on the full software stack, get all those features and the possibilities for your business, and what we can do to leverage those systems and technologies, and take the snapshots and the replication and the insights into what our system's doing, it is really mind-blowing what's coming out today, and I cannot wait to just kick those tires. There's more. So with that real-world compression ratio that we can validate on the new 900, and it's the same in this new system, which is astounding, we can get more, and just the amount of storage you get in this really small footprint. Like, two rack units is nothing. Half our servers are two rack units, which is absolutely astounding, to get that much data in such a very small package; like, 460 terabytes is phenomenal, with all these features. The full solution is amazing, but what else can we do with it? And especially, as they've said, if it's for a comparable price to what we've bought before, and we're getting the full solution with the software, the hardware, the extremely small form factor, what else can you do? What workloads can you pull forward? So where our backup systems weren't on the super-fast storage like our production systems are, now we can pull those forward and they can give the same performance as production to run the back-end of the company, which I can't wait to test. >> It's great to hear from customers, the centerpiece of the Wikibon community. But let's also get the analyst's perspective. Let's hear from Eric Burgener, who's the Research Vice President for Storage at IDC. >> Thanks very much Peter, good to be back. >> So we've heard a lot from a number of folks today about some of the changes that are happening in the industry, and I want to amplify some things and get the analyst's perspective. So Wikibon, as a fellow analyst firm, believes pretty strongly that the emergence of flash-based storage systems is one of the catalyst technologies that's driving a lot of the changes, if only because old storage technologies are focused on persisting data. Disk: slow, but at least it was there. Flash systems allow a bit flip; they allow you to think about delivering data to anywhere in your organization, to different applications, without a lot of complexity. But it's gotta be more than that. What else is crucial to making sure that these systems in fact are enabling the types of applications that customers are trying to deliver today? >> Yeah, so actually there's an emerging technology that provides the perfect answer to that, which is NVMe. If you look at most of the all-flash systems that have shipped so far, they've been based around SCSI. SCSI was a protocol designed for hard disk drives, not flash, even though you can use it with flash. NVMe is specifically designed for flash, and that's really gonna open up the ability to get the full value of the performance, the capacity utilization, and the efficiencies that all-flash arrays can bring to the market. And in this era of big data, more than ever, we need to unlock that performance capability. >> So as we think about big data and AI, that's gonna have a significant impact overall on the market and how a lot of different vendors are jockeying for position. When IDC looks at the impact of flash, NVMe, and the reemergence of some traditional big vendors, how do you think the market landscape's gonna be changing over the next few years? >> Yeah, the way this market has developed, really the NVMe-based all-flash arrays are gonna be a carve-out from the primary storage market, which is SCSI-based AFAs today. So we're gonna see that start to grow over time; it's just emerging. We had startups begin to ship NVMe-based arrays back in 2016. This year we've actually got several of the majors who've got products based around their flagship platforms that are optimized for NVMe. So very quickly we're gonna move to a situation where we've got a number of options from both startups and major players available, with the NVMe technology at the core. >> And as you think about NVMe at the core, it also means that we can do more with software, closer to the data. So that's gotta be another feature of how the market's gonna evolve over the next couple of years, wouldn't you say? >> Yeah, absolutely. A lot of the data services that generate latencies, like in-line data reduction, encryption and that type of thing, we can run those with less impact on the application side when we have much more performant storage on the back-end. But I have to mention one other thing. To really get all that NVMe performance all the way to the application side, you've gotta have an NVMe over Fabrics connection. So it's not enough to just have NVMe in the back-end array; you need that RDMA connection to the hosts, and that's what NVMe over Fabrics provides for you. >> Great, so that's what's happening on the technology-product-vendor side, but ultimately the goal here is to enable enterprises to do something different. So what's gonna be the impact on the enterprise over the next few years? >> Yeah, so we believe that SCSI clearly will get replaced in the primary storage space by NVMe over time. In fact, we've predicted that by 2021, we think that over 50% of all the external, primary storage revenue will be generated by these end-to-end NVMe-based systems. So we see that transition happening over the course of the next two to three years. Probably by the end of this year, we'll have NVMe-based offerings, with NVMe over Fabrics front ends, available from six of the established storage providers, as well as a number of smaller startups. >> We've come a long way from the brown, spinning stuff, haven't we? >> (laughing) Absolutely. >> Alright, Eric Burgener, thank you very much. IDC Research Vice President, great once again to have you in theCUBE. >> Thanks Peter. >> Always great to get the analyst's perspective, but let's get back to the customer perspective. Again, from that same panel that we saw before, here are some highlights of what customers had to say about IBM's Spectrum family of software. (upbeat music) We love hearing those customer highlights, but let's get into some of the overall storage trends, and to do that we've asked Eric Herzog and Bina Hallman back to theCUBE. Eric, Bina, thanks again for coming back. So, what I want to do now is talk a little bit about some trends within the storage world and what the next few years are gonna mean, but Eric, I want to start with you. I was recently at IBM Think, and Ginni Rometty talked about the idea of putting smart to work. Now, I can tell you, that means something to me because of the whole notion of how data gets used, how work gets institutionalized around your data. What does storage do in that context, to put smart to work? >> Well, I think there's a couple of things. First, we've gotta realize that it's not about storage, it's about the data and the information that happens to sit on the storage. So you have to have storage that's always available, always resilient, is incredibly fast, and, as I said earlier, transparently moves things in and out of the cloud, automatically, so that the user doesn't have to do it. The second thing that's critical is the integration of AI, artificial intelligence. Both into the storage solution itself, in what the storage does, how you do it, and how it plays with the data, but also, if you're gonna do AI on a broad scale; for example, we're working with a customer right now and their AI configuration is 100 petabytes, leveraging our storage underneath the hood of that big, giant AI analytics workload. So that's why you have to think of it both in the storage, to make the storage better and more productive with the data and the information that it has, but then also as the undercurrent for any AI solution that anyone wants to employ, big, medium or small. >> So Bina, I want to pick up on that, because there are some advanced technologies that are being exploited within storage right now to achieve what Eric's talking about, but there's gonna be a lot more. And there's gonna be more intensive application utilization of some of those technologies. What are some of the technologies that are becoming increasingly important, from a storage standpoint, that people have to think about as they try to achieve their digital transformation objectives? >> That's right, I mean Peter, in addition to some of the basics around making sure your infrastructure is enabled to handle the SLAs and the level of performance that's required by these AI workloads, when you think about what Eric said, this data's gonna reside on-premise, it's gonna be behind a firewall, potentially in the cloud, or in multiple public clouds. How do you manage that data? How do you get visibility to that data? And then be able to leverage that data for your analytics. And so data management is going to be very important, but also being able to understand what that data contains, to run the analytics, and to do things like tagging the metadata and then doing some specialized analytics around that, is going to be very important. The fabric to move that data, data portability from on-prem into the cloud, and back and forth, bidirectionally, is gonna be very important as you look into the future. >> And obviously things like IoT are gonna mean bigger and more available data. So a lot of technologies, in the big picture, are gonna become more closely associated with storage. I like to say that, at some point in time, we've gotta stop thinking about calling stuff storage, because it's gonna be so central to the fabric of how data works within a business. But Eric, I want to come back to you and say, those are some of the big-picture technologies, but what are some of the little-picture technologies that nonetheless are really central to being able to build up this vision over the course of the next few years? >> Well, a couple of things. One is the move to NVMe, so we've integrated NVMe into our FlashSystem 9100. We have fabric support; we already announced back in February, actually, fabric support for NVMe over an InfiniBand infrastructure with our FlashSystem 900, and we're extending that to all of the other interconnects from a fabric perspective for NVMe, whether that be ethernet or whether that be Fibre Channel, and we put NVMe in the system. We also have integrated our custom flash modules; our FlashCore technology allows us to take raw flash and create, if you will, a custom SSD. Why does that matter? We can get better resiliency, and we can get incredibly better performance, which is very tied in to your application workloads and use cases, especially in a data-driven multi-cloud environment. It's critical that the flash is incredibly fast, and it really matters. And resiliency: what do you do if you try to move it to the cloud and you lose your data? If you don't have that resiliency and availability, that's a big issue. I think the third thing is what I call the cloud-ification of software. All of IBM's storage software is cloud-ified. We can move things simultaneously into the cloud. It's all automated. We can move data around all over the place, not only our data, not only to our boxes; we can actually move other people's arrays' data around for them, and we can do it with our storage software. So it's really critical to have this cloud-ification. It's really cool to have this new technology, NVMe, from an end-to-end perspective, for the fabric and then inside the system, to get the right resiliency, the right availability, the right performance for your applications, workloads and use cases, and you've gotta make sure that everything is cloud-ified, portable and mobile, and we've done that with the solutions that are wrapped into our FlashSystem 9100 that we launched a couple of weeks ago. >> So you are both thought leaders in the storage industry. I think that's very clear, on the whole notion of storage technology, and you work with a lot of customers, you see a lot of use cases. So I want to ask you one quick question, to close here. And that is, if there was one thing that you would tell a storage leader, a CIO, or someone who thinks about storage in a broad way, one mindset change that they have to make to start this journey and get it going so that it's gonna be successful, what would that one mindset change be? Bina, what do you think? >> You know, I think it's really around, there's a lot of capabilities out there. It's really around simplifying your environment and making sure that, as you're deploying these new solutions or new capabilities, you've really got a partnership with a vendor that's gonna help you make it easier. Take those complex tasks, make them easier, deliver those step-by-step instructions and documentation, and be right there when you need their assistance. So I think that's gonna be really important. >> So look at it from a portfolio perspective, where best of breed is still important, but it's gotta work together because it leverages itself. >> It's gotta work together, absolutely. >> Eric, what would you say? >> Well, I think the key thing is, people think storage is storage. All storage is not the same, and one of the central tenets at IBM Storage is to make sure that we're integrated with the cloud. We can move data around transparently, easily, simply; Bina pointed out the simplicity. If you can't support the cloud, then you're really just a storage box, and that's not what IBM does. Over 40% of what we sell is actually storage software, and all that software works with all of our competitors' gear. And in fact our Spectrum Virtualize for Public Cloud, for example, can simultaneously have datasets sitting in a cloud instantiation and sitting on premises, and then we can use our copy data management to take advantage of that secondary copy. That's all because we're so cloud-ified from a software perspective. So all storage is not the same, and you can't think of storage as, I need the cheapest storage. It's gotta be, how does it drive business value for my oceans of data? That's what matters most, and by the way, we're very cost-effective anyway, especially because of our custom flash modules, which allow us to have a real price advantage. >> You ain't doing business at a level of 100 petabytes if you're not cost-effective. >> Right, so those are the things that we see as really critical: storage is not storage. Storage is about data and information. >> So let me summarize your point then, if I can, really quickly. That, in other words, we have to think about storage as the first step to great data management. >> Absolutely, absolutely Peter. >> Eric, Bina, great conversation. >> Thank you. >> So we've heard a lot of great thought leadership comments on the data-driven journey with multi-cloud, and some great product announcements. But now, let's do the crowd chat. This is your opportunity to participate in these proceedings. It's the centerpiece of the digital community event. What questions do you have? What comments do you have? What answers might you provide to your peers? This is an opportunity for all of us collectively to engage and have those crucial conversations that are gonna allow you to, from a storage perspective, drive business value in your digital business transformations. So, let's get straight to the crowd chat. (bright music)
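One concrete point from the analyst segment above is worth illustrating: to get NVMe performance all the way to the application, the host needs an NVMe over Fabrics connection with RDMA to the back-end array. As a rough sketch only of what that host-side attach step can look like on Linux using the standard nvme-cli tool, where the address, port and subsystem NQN are made-up placeholders rather than values from any vendor's documentation:

```python
# Illustrative sketch: attach an NVMe over Fabrics (RDMA) namespace on a Linux
# host by driving nvme-cli. The target address and NQN below are placeholders.
import subprocess

TARGET_ADDR = "192.0.2.10"                                # assumed RDMA-capable target IP
TARGET_PORT = "4420"                                      # conventional NVMe-oF service port
SUBSYS_NQN = "nqn.2018-07.com.example:flash-array"        # hypothetical subsystem NQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["modprobe", "nvme-rdma"])                            # load the RDMA transport
run(["nvme", "discover", "-t", "rdma",
     "-a", TARGET_ADDR, "-s", TARGET_PORT])               # list subsystems on the target
run(["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT])               # attach the namespace
run(["nvme", "list"])                                     # it now appears as /dev/nvmeXnY
```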
SUMMARY :
the journey to the data-driven multi-cloud. and the resellers that sell to them, and changes the way it engages with customers, et cetera. and if the storage corrupts the data, then guess what? and you get stuck with the bill. and have that beautiful image, and storage can help you out. is the centerpiece of infrastructure, the data has to physically stay Cloud in many respects is the programming model, already and in fact, but honestly for the software side is again, shaping the way you guys, IBM, think about from the entry space to the mid range to the high end. Going back to that notion that the storage so that it's easy for the end user, (Eric laughing) you and I. Thank you so much in the form of a product video from IBM and requires the ability to rapidly analyze the details Great to hear about the IBM FlashSystem 9100 It's an exciting even, we're looking forward to it. that IBM brings to bear. so that clients don't have to pre-packaging some of these things to make it easier? and in that journey and those set of experiences, and that tape can only go that fast and the research capabilities, also our experiences and the understanding of the problem, manifested so fast. Making it to where you can use it for environments and making sure that the testing they're doing It's great to hear about the FlashSystem 9100 Tell us, what excites you about this announcement and it's the same in this new system, which is astounding, The centerpiece of the Wikibon community. and get the analyst's perspective. that provides the perfect answer to that, and the reemergence of some traditional big vendors, really the NVMe-based all-flash arrays over the next couple of years, wouldn't you say? So it's not enough to just have NVMe in the back-end array over the next few years? over the course of the next two to three years. great once again to have you in theCUBE. and to do that we've asked Eric Herzog so that the user doesn't have to do it. from a storage standpoint, that people have to think about and be able to run the analytics because it's gonna be so central to the fabric One is the move to NVMe, so we've integrated NVMe and the whole notion of storage technology, and be right there when you need their assistance. So look at it from a portfolio perspective, It's gotta work together, and by the way, we're very cost-effective anyway, You ain't doing business at a level of 100 petabytes that we see as really critical, as the first step to great data management. on the data-driven journey with multi-cloud
Eric Herzog, IBM Storage Systems | Cisco Live US 2018
>> Live from Orlando, Florida, it's theCUBE, covering Cisco Live 2018. Brought to you by Cisco, NetApp, and theCUBE's ecosystem partners. >> Hello, everyone. Welcome back to theCUBE's live coverage here in Orlando, Florida for Cisco Live 2018. I'm John Furrier with Stu Miniman. Our next guest, Eric Herzog, Chief Marketing Officer and Vice President Global Channel Sales for IBM Storage. CUBE alum, great to see you. Thanks for comin' by. >> Great, we always love comin' and talkin' to theCUBE. >> Love havin' you on. Get the insight, and you get down and dirty in the storage. But I gotta, before we get into the storage impact, the cloud, and all the great performance requirements, and software you guys are building, news is that the CEO of Cisco swung by your booth? >> Yes, Chuck did come by today and asked how-- Chuck Robbins came by today, asked how we're doin'. IBM has a very broad relationship with Cisco, beyond just the storage division. The storage division, the IOT division, the collaboration group. Security's doin' a lot of stuff with them. IBM is one of Cisco's largest resellers through the GTS and GBS teams. So, he came by to see how we're doin', and gave him a little plug about the VersaStack, and how it's better than any other converged solutions, but talked about all of IBM, and the strong IBM Cisco relationship. >> I mean, it's not a new relationship. Expand on what you guys are doin'. How does that intersect with the vision that he put on stage yesterday with the keynote? He laid out, and said publicly, and put the stake in the ground, pretty firmly, "This is the old way." Put an architecture, a firewall, a classic enterprise network diagram. >> Right, right. >> And said, "That's the old way," and put in a big circle, with all these different kinda capabilities with the cloud. It's a software defined world. Clearly Cisco moving up the stack, while maintaining the networking shops. >> Right. >> Networking and storage, always the linchpin of cloud and enterprise computing. What's the connection? Share the touch points. >> Sure, well I think the key thing is everyone's gotta realize that whether you're in a private cloud, a hybrid cloud, or a public cloud configuration, storage is that rock solid foundation. If you don't have a good foundation, the building will fall right over, and it's great that you've got cloud with its flexibility, its ability to transform, the ability to modernize, move data around, but if what's underneath doesn't work, the whole thing topples over, and storage is a crucial element to that. Now, what we've done at IBM is we have made all of our solutions on the storage side, VersaStack, our all-flash arrays, all of our software defined storage, our modern data protection, everything is what we'll say is cloudified. Okay, it's designed for multiple cloud scenarios, whether it be private, hybrid, or public, or, as you've probably seen, in some of the enterprise accounts, they actually use multiple public cloud providers. Whether it be from a price issue, or legal issues, because they're all over the world, and we're supporting that with all our solutions. And, our VersaStack, specifically, just had a CVD done with Cisco, Cisco Validated Design, with IBM Cloud Private on a VersaStack. >> Talk about the scale piece, because this becomes the key differentiator. We've talked about on theCUBE, many of the times with you around, some of the performance you guys have, and the numbers are pretty good. You might wanna do a quick review on that.
I'm not lookin' for speeds and feeds. Really, Eric, I'd like to get your reaction, and view, and vision, on how the scale piece is kicking, 'cause clients want scale optionality. They're gonna have a lot of stuff on premise. They have cloud goin' on, multi cloud on the horizon, but they gotta scale. The numbers are off the charts. You're seein' all these security threats. I mean, it's massive. How are you guys addressing the scale question with storage? >> So, we've got a couple things. So first of all, the storage itself is easily scalable. For example, on our A9000 all-flash array, you just put a new one in, it automatically grows, you don't have to do anything, okay? With our transparent cloud tiering, you can set it up, whether it be our Spectrum Scale software, whether it be our Spectrum Virtualize software, or whether it be on our all-flash arrays, that you could automatically just move data to whatever your cloud target may be. Whether that be something with an object store, whether that be a block store, and it's all automated. So, the key thing here on scalability is transparency, ease of use, and automation. They wanna automatically join new capacity, wanna automatically move data from cloud to cloud, automatically move data from on premise to cloud, automatically move data from on premise to on premise, and IBM's storage solutions, from a software perspective, are all designed with that data mobility in mind, and that transportability, both on premise, and out to any cloud infrastructure they have. >> What should Cisco customers know about IBM storage, if you get to talk to them directly? We're here at Cisco Live. We've talked many times about what you guys got goin' on with the software. Love the software systems approach. You know we dig that. But a Cisco deployment, they've been blocking and tackling in the enterprise for years, clouds there. What's the pitch? What's the value proposition to Cisco clients? >> So, I think the key thing for us talkin' to a Cisco client is the deep level of integration we have. And, in this case, not just the storage division, but other things. So, for example, a lot of their collaboration stuff uses underpinning software from IBM, and IBM also uses some software from Cisco inside our collaboration package. In our storage package, the fact that we put together the VersaStack with all these Cisco Validated Designs means that the customer, whether it be a cloud product, for example, on the VersaStack, about 20 of our public references are all small and medium cloud providers that wheel in the VersaStack, connect 'em, and it automatically grows simply and easily. So, in that case, you're looking at a cloud provider customer of Cisco, right? When you're looking at an enterprise customer of Cisco, man, the key thing is the level of integration that we have, and how we work together across the board, and the fact that we have all these Cisco Validated Designs for object storage, for file storage, for block storage, for IBM Cloud Private. All these things mean they know that it's gonna work, right outta the box, and whether they deploy it themselves, whether they use one of our resellers, one of our channel partners, or whether they use IBM services or Cisco services. Bottom line, it works right out of the box, easy to go, and they're up and running quickly. >> So, Eric, you talked a bunch about VersaStack, and you've been involved with Cisco and their UCS since the early days when they came up, and helped drive, really, this wave of converged infrastructure.
>> Right. >> One of the biggest changes I've seen in the last couple years, is when you talk to customers, this is really their private cloud platform that they're building. When it first got rolled out, it was virtualization. We kinda added a little bit of management there. What, give us your viewpoint as to kinda high-level, why's this still such an important space, what are the reasons that customers are rolling this out, and how that fits into their overall cloud story? >> Well, I think you hit it, Stu, right on the head. First of all, it's easy to put in and deploy, k? That is a big check box. You're done, ready to go. Second thing that's important is be able to move data around easily, k? In an automated fashion like I said earlier, whether that be to a public cloud if they're gonna tier out. If I'm a private cloud, I got multiple data centers. I'm moving data around all the time. So, the physical infrastructure and data center A is a replica, or a DR center, for data center B, and vice versa. So, you gotta be able to move all this stuff around quickly easy. Part of the reason you're seeing converge infrastructure is it's the wave of what's hit in the server world. Instead of racking and stacking individual servers, and individual pieces of storage, you've got a pre-packed VersaStack. You've got Cisco networking, Cisco server, VMware, all of our storage, our storage software, including the ability to go out to a cloud, or with our ICP IBM Private Cloud, to create a private cloud. And so, that's why you're seeing this move towards converge. Yes, there's some hyperconverged out there in the market, too, but I think the big issue, in certain workloads, hyperconverged is the right way to go. In other workloads, especially if you're creating a giant private cloud, or if you're a cloud provider, that's not the way to go because the real difference is with hyperconverged you cannot scale compute and storage independently, you scale them together, So, if you need more storage, you scale compute, even if you don't need it. With regular converge, you scale them independently, and if you need more storage, you get more storage. If you need more compute-- If you need both, you get both. And that's a big advantage. You wanna keep the capex and opex down as you create this infrastructure for cloud. 'Member, part of the whole idea of cloud are a couple things. A, it's supposed to be agile. B, it's supposed to be super flexible. C, of course, is the modern nomenclature, but D is reduce capex and opex. And you wanna make sure that you can do that simply and easily, and VersaStack, and our relationship with Cisco, even if you're not using a VersaStack config, allows us to do that for the end user. >> And somethin' we're seeing is it's really the first step for customers. I need to quote, as you said, modernize the platform, and then I can really start looking at modernizing my applications on top of that. >> Right. Well, I think, today, it's all about how do you create the new app? What are you doin' with containers? So, for example, all of our arrays, and all of our arrays that go into a VersaStack, have free persistent storage support for any containerize environ, for dockers and kubernetes, and we don't charge for that. You just get it for free. So, when you buy those solutions, you know that as you move to the container world, and I would argue virtualization is still here to stay, but that doesn't mean that containers aren't gonna overtake it. 
And if I was the CEO of a couple different virtualization companies, I'd be thinkin' about buyin' a container company 'cause that'll be the next wave of the future, and you'll say-- >> Don't fear kubernetes. >> Yeah, all of that. >> Yeah, Eric Herzog's flying over to Dockercon, make a big announcement, I think, so. (laughing) >> Evaluation gonna drop a little bit. I gotta ask you a question. I mean, obviously, we watch the trends that David Floy and our team, NVMe is big topic. What is the NVMe leadership plan for you, on the product side, for you? Can you take a minute to share your vision for what that is gonna be? >> Sure, well we've already publicly announced. We've been shipping an NVMe over fabric solution leveraging InfiniBand since February of this year, and we demoed it, actually, in December at the AI Conference in New York City. So, we've had a fabric solution for NVMe already since December, and then shipping in February. The other thing we're doing is we publicly announced that we'd be supporting the other NVMe over fabric protocols, both fabric channel and ethernet by the end of the year. We publicly already announced that. We also announced that we would have an end to end strategy. In this case, you would be talking about NVMe on the fabric side going out to the switching and the host infrastructure, but also NVMe in a storage sub-system, and we already publicly announced that we'd be doing that this year. >> And how's the progress on that plan? You feel good about it? >> We're getting there. I can't comment yet, but just stay tuned on July 1st, and see what happens. >> So, talk about the Spectrum NAS, and other announcements that you have. What's goin' on? What are the big news? What's happening? >> Well, I think that, yeah, the big thing for us has been all about software. As you know, for the analysts that track the numbers, we are, and ended up in 2017, as tied as the number one storage software company in the world, independent of our system's business. So, one of the key powers there is that our software works with everyone's gear, whether it be a white box through a distributor or reseller, whether it be our direct competitors. Spectrum Protect, which is a, one of the best enterprise backup packages. We backup everybody's gear, our gear, NetApp's gear, HP's gear, Pure's gear, Hitachi's gear, the old Dell stuff, it doesn't matter to us, we backup everything. So, one of the powers that IBM has, from a software perspective, is always being able to support not only our own gear, but supporting all of our competitors as well. And the whole white box market, with things that our partners may put together through the distributors. >> I know somethin' might be obvious to you, but just take me through the benefits to the customer. What's the impact to the customer? Obviously, supporting everything, it sounds like you guys have done that with software, so you're agnostic on hardware. >> Right. >> So, is it a single pane of glass? What's the benefit to the customer with that software capability? >> Yeah, I feel there's a couple things. So, first of all, the same software that we sell as standalone software, we also sell on our arrays. So if you're in a hybrid configuration, and you're using our Flashsystem V9000 in our Storwize family, that software also works with an EMC, or NetApp box. So, one license, one way to do everything, one set of training, which in a small shop is not that important, but in a big shop, you don't have to manage three licenses, right? 
You don't have to get trained up on three different ways to do things, and you don't have to, by the way, document, which all the big companies would do. So it dramatically simplifies their life from an opex perspective. Makes it easier for them to run their business. >> Eric, we'd love to get your opinion on just how's Cisco doin' out there? It's a big sprawling company. I looked at the opening keynote, the large infrastructure business doing very well in the data center, but they've got collaboration, they do video, they're moving out in the cloud. Wanna see your thoughts as to how are they doing, and still making sure they take care of core networking, while still expanding and going through their own transformation, that they're talkin' very public about. How do we measure Cisco as a software company? >> Well, we see some very good signs there. I mean, we partner with 'em all the time, as I mentioned, for example, in both the security group and our collaboration group, and I'm not talkin' storage now, just IBM in general, we leverage software from them, and they leverage software from us. We deliver joint solutions through our partners, or through each of the two service organizations, but we also have products where we incorporate their software into ours, and they incorporate software in us. So, from our perspective, we've already been doing it beyond their level, now, of expanding into a much greater software play. For us, it's been a strong play for us already because of the joint work we've been doing now for several years on software that they've been selling in the more traditional world, and now pushing out into the broader areas, like cloud, for example. >> Awesome work. Eric, thanks for coming on. I gotta ask you one final, personal, question. >> Sure. >> You got the white shirt on, you usually have a Hawaiian shirt on. >> Well, because Chuck Robbins came by the booth, as we talked about earlier today, felt that I shouldn't have my IBM Hawaiian shirt on, however, now that I've met Chuck, next time, at next Cisco Live, I'll have my IBM Hawaiian shirt on versus my IBM traditional shirt. >> Chuck's a cool guy. Thanks for comin' on. As always, great commentary. You know your stuff. >> Great, thank you. >> Great to have the slicing and dicing, the IBM storage situation, as well as the overall industry landscape. At Cisco Live, we're breakin' it down, here on theCUBE in Orlando. Second day of three days of coverage. I'm John Furrier, Stu Miniman, stay with us for more live coverage after this break.
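As an aside on the container persistence Herzog mentions for Docker and Kubernetes: from the application side that support is ultimately consumed the way any Kubernetes storage is, through a storage class and a persistent volume claim. The sketch below is illustrative only, written with the Kubernetes Python client; the storage class name is a made-up placeholder, not an IBM-documented value.

```python
# Minimal sketch: request a persistent volume from whatever block storage class
# the cluster exposes. The class name below is a hypothetical placeholder.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="block-gold",  # hypothetical storage class name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```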
Roy Kim, Pure Storage | CUBE Conversation
(upbeat music) >> Hi, I'm Peter Burris, and welcome once again to another Cube Conversation from our studios here in beautiful Palo Alto, California. Today, we've got a really special guest. We're going to be talking about AI and some of the new technologies that are making that even more valuable to business. And we're speaking with Roy Kim, who's the lead for AI solutions at Pure Storage. Roy, welcome to theCUBE. >> Thank you for having me, very excited. >> Well, so let's start by just, how does one get to be a lead for AI solutions? Tell us a little bit about that. >> Well, first of all, there aren't that many AI anything in the world today. But I did spend eight years at Nvidia, helping build out their AI practice. I'm fairly new to Storage, I'm about 11 months into Pure Storage, so, that's how you get into it, you cut your teeth on real stuff, and start at Nvidia. >> Let's talk about some real stuff, I have a thesis, I (mumbles) it by you and see what you think about it. The thesis that I have: Wikibon has been at the vanguard of talking about the role that flash is going to play, flash memory, flash storage systems, are going to play in changes in the technology industry. We were one of the first to really talk about it. And well, we believe, I believe, very strongly that if you take a look at all the changes that are happening today with AI and the commercialization of AI and even big data and some other things that are happening, a lot of that can be traced back directly to the transition from memory, which had very very long lag times, millisecond speed lag times, to flash, which is microsecond speed. And, when you go to microsecond, you can just do so much more with data, and it just seems as though that transition from disk to flash has kind of catalyzed a lot of this change, would you agree with that? >> Yeah, that transition from disk to flash was the fundamental transition within the storage industry. So the fundamental thing is that data is now fueling this whole AI revolution, and I would argue that the big data revolution with Hadoop Spark and all that is really the essence underneath it is to use data get insight. And so, disks were really fundamentally designed to store data and not to deliver data. If you think about it, the way that it's designed, it's really just to store as much data as possible. Flash is the other way around, it's to deliver data as fast as possible. That transition is fundamentally the reason why this is happening today. >> Well, it's good to be right. (laughs) >> Yeah, you are definitely right. >> So, the second observation I would make is that we're seeing, and it makes perfect sense, a move to start, or trend to start, move more processing closer to the data, especially, as you said, on flash systems that are capable of delivering data so much faster. Is that also starting to happen, in you experience? >> That's right. So this idea that you take a lot of this data and move it to compute as fast as possible-- >> Peter: Or move the compute even closer to the data. >> And the reason for that, and AI really exposes that as much as possible because AI is this idea that you have these really powerful processors that need as much data as quickly as possible to turn that around into neural networks that give you insight. 
That actually leads to what I'll be talking about, but the thing that we built, this thing called AIRI, this idea that you pull compute, and storage, and networking all into this compact design so there is no bottleneck, that data lives close to compute, and delivers that fastest performance for your neural network training. >> Let's talk about that a little bit. If we combine your background at Nvidia, the fact that you're currently at Pure, the role that flash plays in delivering data faster, the need for that faster delivery in AI applications, and now the possibility of moving GPUs and related types of technology even closer to the data. You guys have created a partnership with Nvidia, what exactly, tell us a little bit more about AIRI. >> Right, so, this week we announced AIRI. AIRI is the industry's first AI complete platform for enterprises. >> Peter: AI Ready-- >> AI Ready Infrastructure for enterprises, that's where AIRI comes from. It really brought Nvidia and Pure together because we saw a lot of these trends within customers that are really cutting their teeth in building an infrastructure, and it was hard. There's a lot of intricate details that go into building AI infrastructure. And, we have lots of mutual customers at Nvidia, and we found is that there some best practices that we can pull into a single solution, whether it's hardware and software, so that the rest of the enterprises can just get up and running quickly. And that is represented in AIRI. >> We know it's hard because if it was easy it would've been done a long time ago. So tell us a little bit about, specifically about the types of technologies that are embedded within AIRI. How does it work? >> So, if you think about what's required to build deep learning and AI practice, you start from data scientists, and you go into frameworks like TensorFlow and PyTorch, you may have heard of them, then you go into the tools and then GPUs, InfiniBand typically is networking of choice, and then flash, right? >> So these are all the components, all these parts that you have access to. >> That's right, that's right. And so enterprises today, they have to build all of this together by hand to get their data centers ready for AI. What AIRI represents everything but data scientists, so start from the tools like TensorFlow all the way down to flash, all built and tuned into a single solution so that all, really, enterprises need to do is give it to a data scientist and to get up and running. >> So, we've done a fair amount of research on this at Wikibon, and we discovered that one of the reasons why big data and AI-related projects have not been as successful as they might have been, is precisely because so much time was spent trying to understand the underlying technologies in the infrastructure required to process it. And, even though it was often to procure this stuff, it took a long time to integrate, a long time to test, a long time to master before you could bring application orientations to bear on the problems. What you're saying is you're slicing all that off so that folks that are trying to do artificial intelligence related workloads can have a much better time-to-value. Have I got that right? >> That's right. So, think about, just within that stack, everything I just talked about InfiniBand. Enterprises are like, "What is InfiniBand?" GPU, a lot of people know what GPU is, but enterprises will say that they've never deployed GPUs. 
Think about TensorFlow or PyTorch, these are tools that are necessary to data scientists, but enterprises are like, "Oh, my goodness, what is that?" So, all of this is really foreign to enterprises, and they're spending months and months trying to figure out what it is, and how to deploy it, how to design it, and-- >> How to make it work together. >> How to make it work together. And so, what Nvidia and Pure decided to do is take all the learnings that we had from these pioneers, trailblazers within the enterprise industry, bring all those best practices into a single solution, so that enterprises don't have to worry about InfiniBand, or ethernet, or GPUs, or scale out flash, or TensorFlow. It just works. >> So, it sounds like it's a solution that's specifically designed and delivered to increase the productivity of data scientists as they try to do data science. So, tell us a little bit about some of those impacts. What kinds of early insights about more productivity with data science are you starting to see as a consequence of this approach. >> Yeah, you know, you'll be surprised that most data scientists doing AI today, when they kick off a job, it takes a month to finish. So think about that. When someone, I'm a data scientist, I come in on Monday, early February, I kick off a job, I go on vacation for four weeks, I come back and it's still running. >> What do you mean by "kicking off a job?" >> It means I start this workload that helps train neural nets, right? It requires GPUs to start computing, and the TensorFlow to work, and the data to get it consumed. >> You're talking about, it takes weeks to run a job that does relatively simple things in a data science sense, like train a model. >> Train a model, takes a month. And so, the scary thing about that is you really have 12 tries a year to get it right. Just imagine that. And that's not something that we want enterprises to suffer through. And so, what AIRI does, it cuts what used to take a month down to a week. Now, that's amazing, if you think about it. What used to, they only had 12 tries in a year, now they have 48 tries in a year. Transformative, right? The way that that worked is we, in AIRI, if you look at it there's actually four servers with FlashBlade. We figured out a way to have that job run across all four servers to give you 4X the throughput. Think that that's easy to do, but it actually is not. >> So you parallelized it. >> We parallelized it. >> And that is not necessarily easy to do. These are often not particularly simple jobs. >> But, that's why no one's doing it today. >> But, if you think about it, going back to your point, it's like the individual who takes performance-enhancement drugs so they can get one more workout than the competition and that lets them hit another 10, 15 home runs which leads to millions of extra dollars. You're kind of saying something similar. You used to be able to get only 12 workouts a year, now you can do 48 workouts, which business is going to be stronger and more successful as a result. >> That's a great analogy. Another way to look at it is, a typical data scientist probably makes about half a million dollars a year. What if you get 4X the productivity out of that person? So, you get the return of two million dollars in return, out of that $500,000 investment you make. That's another way of saying performance-enhancing drug for that data scientist. >> But I honestly think it's even more than that. 
Because, there's a lot of other support staff that are today, doing a lot of the data science grunt work, let's call it. Lining up the pipelines, building the, testing pipelines, making sure that they run, testing sources, testing sinks. And, this is reducing the need for infrastructure types of tasks. So, you're getting more productivity out of the data scientitists, but you're also getting more productivity out of all the people who heretofore were, you were spending on doing this type of stuff, when all they were doing was just taking care of the infrastructure. >> Yeah. >> Is that right? >> That's exactly right. We have a customer in the UK, one of the world's largest hedge fund companies that's publicly traded. And, what they told us is that, with FlashBlade, and not necessarily an AIRI customer at this time, but they're actually doing AI with FlashBlade today at Pure, from Pure. What they said is, with FlashBlade they actually got two engineers that were full time taking care of infrastructure, now they're doing data science. Right? To your point, that they don't have to worry about infrastructure anymore, because the simplicity of what we bring from Pure. And so now they're working on models to help them make more money. >> So the half a million dollars a year that you were spending on a data scientist and a couple of administrators, that you were getting two million dollars worth, that you're now getting two million dollars return, you can now take those administrators and have them start doing more data science, without necessarily paying them more. It's a little secret. But you're now getting four, five, six million dollars in return as a consequence of this system. >> That's right. >> As we think about where AIRI is now, and you think about where it's going to go, give us a sense of, kind of, how this presages new approaches to thinking about problem solving as it relates to AI and other types of things. >> One of the beauty about AI is that it's always evolving. What used to be what they call CNNs as the most popular model, now is GANs, which-- >> CNN stands for? >> Convolution Neural Nets. Typically used for image processing. Now, people are using things like Generative Adversarial Networks, which is putting two networks against each other to-- >> See which one works and is more productive. >> And so, that happened in a matter of a couple of years. AI's always changing, always evolving, always getting better and so it really gives us an opportunity to think about how does AIRI evolve to keep up and bring the best, state of the art technology to the data scientist. There's actually boundless opportunities to-- >> Well, even if you talk about GANs, or Generative Adversarial Networks, the basic algorithms have been in place for 15, 20, maybe even longer, 30 years. But, the technology wouldn't allow it to work. And so, really what we're talking about is a combination of deep understanding of how some of these algorithms work, that's been around for a long time, and the practical ability to get business value out of them. And that's kind of why this is such an exploding thing, because there's been so much knowledge about how this stuff, or what this stuff could do, that now we can actually apply it to some of these complex business problems. >> That's exactly right. I tell people that the promise of big data has been around for a long time. People have been talking about big data for 10, 20 years. AI is really the first killer application of big data. 
Hadoop's been around for a really long time, but we know that people have struggled with Hadoop. Spark has been great but what AI does is it really taps into the big data platform and translates that into insight. And whatever the data is. Video, text, all kinds of data can, you can use AI on. That really is the reason why there's a lot of excitement around AI. It really is the first killer application for big data. >> I would say it's even more than that. It's an application, but it's also, we think there's a bifurcation, we think that we're seeing an increased convergence inside the infrastructure, which is offering up greater specialization in AI. So, AI as an application, but it also will be the combination of tooling, especially for data scientists, will be the new platform by which you build these new classes of applications. You won't even know you're using AI, you'll just build an application that has those capabilities, right? >> Right, that's right, I mean I think it's as technical as that or as simple as when you use your iPhone and you're talking to Siri, you don't know that you're talking to AI, it's just part of your daily life. >> Or, looking at having it recognize your face. I mean, that is processing, the algorithms have been in place for a long time, but it was only recently that we had the hardware that was capable of doing it. And Pure Storage is now bringing a lot of that to the enterprise through this relationship with Nvidia. >> That's right, so AIRI does represent all the best of AI infrastructure from all our customers, we pulled it into what AIRI is, and we're both really excited to give it to all our customers. >> So, I guess it's a good time to be the lead for AI solutions at Pure Storage, huh? >> (laughs) That's right. There's a ton of work, but a lot of excitement. You know, this is really the first time a storage company was spotlighted and became, and went on the grand stage of AI. There's always been Nvidia, there's always been Google, Facebook, and Hyperscalers, but when was the last time a storage company was highlighted on the grand stage of AI? >> Don't think it will be the last time, though. >> You know, it's to your point that this transition from disk to flash is that big transition in industry. And fate has it that Pure Storage has the best flash-based solution for deep learning. >> So, I got one more question for you. So, we've got a number of people that are watching the video, watching us talk, a lot of them very interested in AI, trying to do AI, you've got a fair amount of experience. What are the most interesting problems that you think we should be focusing on with AI? >> Wow, that's a good one. Well, there's so many-- >> Other than using storage better. >> (laughs) Yeah, I think there's so many applications just think about customer experience, just one of the most frustrating things for a lot of people is when they dial in and they have to go through five different prompts to get to the right person. That area alone could use a lot of intelligence in the system. I think, by the time they actually speak to a real live person, they're just frustrated and the customer experience is poor. So, that's one area I know that there's a lot of research in how does AI enhance that experience. In fact, one of our customers is Global Response, and they are a call center services company as well as an off-shoring company, and they're doing exactly that. They're using AI to understand the sentiment of the caller, and give a better experience. 
>> All that's predicated on the ability to do the delivery. So, I'd like to see AI be used to sell AI. (Roy laughs) Alright, so Roy Kim, who's the lead of AI solutions at Pure Storage. Roy, thank you very much for being on theCUBE and talking with us about AIRI and the evolving relationship between hardware, specifically storage, and new classes of business solutions powered by AI. >> Thank you for inviting me. >> And again, I'm Peter Burris, and once again, you've been watching theCUBE, talk to you soon. (upbeat music)
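For readers who want to ground Kim's point about spreading one training job across AIRI's four servers, here is a rough sketch of what that can look like with TensorFlow's built-in multi-worker strategy. It is illustrative only, not Pure or Nvidia reference code: the TF_CONFIG layout, host names, and the synthetic dataset are assumptions, and a real deployment would stream training records from the shared flash tier rather than generating them in memory.

```python
# Rough sketch, assuming TensorFlow is installed. Each of the four servers runs
# this same script; TF_CONFIG (set outside the script) tells each worker who its
# peers are and which index it holds, for example:
#   {"cluster": {"worker": ["node1:12345", "node2:12345",
#                           "node3:12345", "node4:12345"]},
#    "task": {"type": "worker", "index": 0}}
import tensorflow as tf

strategy = tf.distribute.MultiWorkerMirroredStrategy()

def make_dataset():
    # Synthetic stand-in; in practice every worker reads the same shared,
    # flash-backed dataset so no single node becomes the bottleneck.
    images = tf.random.uniform([512, 64, 64, 3])
    labels = tf.random.uniform([512], maxval=10, dtype=tf.int32)
    return tf.data.Dataset.from_tensor_slices((images, labels)).batch(64)

with strategy.scope():
    # Model variables created inside the scope are mirrored across workers.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )

# Gradients are synchronized across all workers each step, which is what turns
# four servers into one larger, faster training job.
model.fit(make_dataset(), epochs=2)
```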
Eric Herzog, IBM | IBM Think 2018
>> Announcer: Live from Las Vegas, it's theCUBE. Covering IBM Think 2018. (upbeat music) Brought to you by IBM. >> Welcome back to IBM Think 2018 everybody. My name is Dave Vellante and I'm with my co-host Peter Burris. You're watching theCUBE, the leader in live tech coverage. This is day three of our wall to wall coverage of IBM Think. The inaugural Think conference. Good friend Eric Herzog is here. He runs marketing for IBM storage. They're kicking butt. You've been in three years, making a difference, looking great, new Hawaiian shirt. (laughter) Welcome back my friend. >> Thank you, thank you. >> Good to see you. >> Always love being on theCUBE. >> So this is crazy. I mean, I miss Edge, I loved that show, but you know, one stop shopping. >> Well, a couple things. One when you look at other shows in the tech industry, they tend to be for the whole company so we had a lot of small shows and that was great and it allowed focus, but the one thing it didn't do is every division, including storage, we have all kinds of IBM customers who are not IBM storage customers. So this allows us to do some cross pollination and go and talk to those IBM customers who are not IBM storage customers which we can always do at a third party show like a VM World or Oracle World, but you know those guys tend to have a show that's focused on every division they have. So it could be a real advantage for IBM to do it that way, give us more mass. And it also, you know, helps us spend more on third party shows to go after a whole bunch of new prospects and new clients in other venues. >> You, you've attracted some good storage DNA. Yourself and some others, Ed Walsh was on yesterday. He said Joe Tucci made a comment years ago Somebody asked him what's your biggest fear. If IBM wakes up and figures it out in storage. Looks like you guys are figuring it out. >> Whipping it up and figuring it out. >> Four quarters of consistent growth, you know redefining your portfolio towards software defined. One of the things we've talked about a lot, and I know you brought this was the discipline around you know communicating, getting products to market, faster cycles, because people buy products and solutions right? So you guys have really done a good job there, but what's your perspective on how you guys have been winning in the last year or so? >> Well I think there's a couple of things. One is pure accident, okay. Which is not just us, is one of the leaders in the industry, where I used to work and Ed used to work has clearly stubbed its toe and has lost its way and that has benefited not only IBM but actually even some of our other competitors have grown at the expense of, you know, EMC. And they're not doing as well as they used to do and they've been cutting head count and you know, there's a big difference in the engineering spend of what EMC does versus what Michael Dell likes to spend on engineering. We have been continuing to invest. Sales resources, marketing resources, tech support resources in the field, technical resources from a development perspective. The other thing we did as Ed came back was rationalize the portfolio. Make sure that you don't have 27 products that overlap, you have one. And maybe it has a slight overlap with the product next to it, but you don't have to have three things that do the same thing and quite honestly, IBM, before I showed up, we did have that. So that's benefited us and then I think the third thing is we've gone to a solution-oriented focus. 
So can we talk about, as nerdy as tracks per sector and TPI and BPI and, I mean all the way down to the hard drive or to the flash layer? Sure we can. You know what, have you ever... You guys have been doing this forever. Ever met a CIO who was a storage guy? >> No, no. CIOs don't care about storage. >> Exactly, so you've got to... >> We've had quite a couple of ex-CIOs who were storage guys. (laughter) >> So you've really got to talk about applications, workloads, and use cases. How you solve the business problems. We've created a whole set of sales tools that we call the conversations available to the IBM sales team and our business partners which is how to talk to a CIO, how to talk to a line of business owner, how to talk to the VP of software development in a global enterprise who doesn't care at all, and also to get people to understand that it's not... Storage is a critical foundation for cloud, for AI, for other workloads, but if you talk latency right off the top, especially with a CIO or the senior executive, it's like what are you talking about? What you have to say is we can make your cloud sing, we can make your cloud never go down. We can make sure that the response time on the web browser is in a second. Whereas you know Google did that test about if you click and it takes more than two and a half seconds, they go away. Well even if that's your own private cloud, guess what they do the same thing. So you've got to be able to show them how the storage enables cloud and AI and other workloads. >> Let's talk about that for a second. Because I was having a thought here. It's maybe my only interesting thought here at Think, being pretty much overwhelmed. But the thought that I had was if you think about all the things that IBM is talking about, block chain, analytics, cloud, go on down the list, none of them would have been possible if we were still working at 10, 20, 30 milliseconds of wait time on a disc head. The fundamental change that made all of this possible is the move from disc to flash. >> Eric: Right. >> Storage is the fundamental change in this industry that has made all of this possible. What do you think about that? >> So I would agree with that. There is no doubt and that's part of the reason I had said storage is a critical foundation for cloud or AI workloads. Whether you're talking not just pure performance but availability and reliability. So we have a public reference Medicat. They deliver healthcare services as a service, so it's a software as a service model. Well guess what? They provide patient records into hospitals and clinics that tend to be focused at the university level like the University of California Health Center for the students. Well guess what? If not only does it need to be fast, if it's not available then you can't get the healthcare records can you? So, and while it's a cloud model, you have to be able to have that availability characteristic, reliability. So storage is, again, that critical foundation. If you build a building in a major city and the foundation isn't very good, the building falls over. And storage is that critical foundation for any cloud, any AI, and even for the older workloads like an SAP Hana or a Oracle workload, right? If, again if the storage is not resilient, oh well you can't access the shipping database or the payroll database or the accounts receivable database cause the storage is down and then obviously if it's not fast, it takes forever to get Dave Vellante's bill, right. And that's a waste of time. 
>> So it's plumbing, but the plumbing's getting more intelligent isn't it? >> Well that's the other thing we've done is we are automating everything. We are imbuing our software, and we announced this, that our range are going to be having an intelligent infrastructure software plane if you will that is going to help do diagnostics. For example, in one of the coming releases, if a customer allows access, if a power supply is going bad, we will tell them it's going bad and it'll automatically send a PO to IBM with a serial number, the address, and say please send me a new power supply before the power supply actually fails. But it also means they don't have to stock a power supply on their shelf which means they have a higher cost of cap ex. And for a big shop there's a bunch of power supplies, a bunch of flash modules, maybe hard drives if they're still dinosauric in how they behave. And they have those things and they buy them from us and our competitors. So imbuing it with intelligence, automating everything we can automate. So automatically tiering data, moving data around from tier to tier, moving it out to the cloud, what we do with the reuse of backup sets. Instead of doing it the old way of back up. And I know you've got Sam Warner coming on later today and he'll talk about modern data protection, how that is revolutionizing what dev ops and other guys can do with their, essentially, what we would've called in the old days back up data. >> Let's talk about your spectrum launch. Spectrum NAS, give us some plugs for that. What's the update there? >> So we announced on the 20th of February a whole set of changes regarding the Spectrum family. We have things around Spectrum PROTECT, with GDPR, Spectrum PROTECT Plus as a service as well as some additional granularity features and I know Sam Warner's going to come on later today. Spectrum NAS software defined network attached storage. Okay, we're not going to sell any infrastructure with it. We have for big data analytics our Spectrum scale, but think of Spectrum NAS as traditional network attached storage workloads. Home directories. Things like that. Small file service where Spectrum scale has one of our public references, and they were here actually at Edge a couple of years ago, one of the largest banks in the world, their entire fraud detection system is based on Spectrum scale. That's not what you would use Spectrum NAS for. So, and it's often common as you know in the file world to have sort of a traditional file system and then a big one that does big data, analytics and AI and is very focused on that and so that's what we've done. Spectrum NAS is a software only, software defined, rounds out our block, now gives a traditional file. We had scale out file already and IBM cloud object storage is also software defined. >> Well how about the get put world. What's happening there? I mean we've been waiting for it to explode. >> Ah so the get put world is all about NVME. NVME, new storage protocol as you know it's been scuzzy forever. Scuzzy and/or SATA. And it's been that way for years and years and years and years, but now you've got flash. As Peter pointed out spinning disc is really slow. Flash is really fast and the protocol of Scuzzy was not keeping up with the performance so NVME is coming out. We announced an NVME over InfiniBand Fabric solution. We announced that we will be adding a fiber channel. NVME fabric based and also in ethernet. 
Those will come and one of the key things we're doing is our hardware, our infrastructure's all ready to go so all you have to do is a non-disruptive software upgrade and for anyone who's bought today, it'll be free. So you can start off with fiber channel or ethernet fabrics today or InfiniBand fabric now that we can ship, but on the ethernet and fiber channel side, they buy the array today and then later this year in the second half software upgrade and then they'll have NVME over fiber channel or NVME over ethernet. >> Explain why NVME and NVME over fabric is so important generally but in particular for this sort of new class of applications that's emerging. >> Well the key thing with the new class of applications is they're incredibly performance and latency sensitive. So we're trying to do real artificial intelligence and they're trying to, for example, I just did a presentation and one of our partners, Mark III has created a manufacturing system using AI and Watson. So you use cameras all over, which has been common, but it actually will learn. So it'll tell you whether cans are bad. Another one of our customers is in the healthcare space and they're working on a genomic process for breast cancer along with radiology and they've collected over 20 million radiological samples of breast cancer analysis. So guess what, how are you going to sort through that? Are you or I could sort through 20 million images? Well guess what, AI can do that, narrow it down, and say whether it's this type of breast cancer or that type of breast cancer. And then the doctor can decide what to do about it. And that's all empowered by AI and that requires incredible performance which is what NVME delivers. Again, that underlying foundation of AI, in this case going from flash with Scuzzy, flash to NVME, increasing the power that AI can deliver because of its storage foundation. >> But even those are human time transactions. What about when we start taking the output of that AI and put it directly into operational transactions that have to run like a bat out of hell. >> Which is where NVME will come in as well. You cannot have the performance that we've had these last almost 30 years with Scuzzy and even slower when you talk about SATA. That's just not going to cut it with flash. And by the way, you know there's going to be things beyond flash that will be faster than flash. So flash two, flash three, it's just the way it was with the hard drive world, right? It was 2400 RPM then 36 then 54 then 72 then 10k then 15/5. >> More size, more speed, lower energy. >> Which is what NVME will help you do and you can do it as a fabric infrastructure or you can do it in the array itself. You dual in box and out of box connectivity with NVME increasing the performance within your array and increasing the performance outside of the array as you go out to your host and out into your switching infrastructure. >> So I'm loving Think. It's too many people to count, I've been joking all week. 30,000 40,000. We're still tallying up. I'm going to miss Edge for sure. I'm going to miss the updates in the you know, late spring. But so let's get 'em now. What can we expect? What are you trying to accomplish in the next six to nine months? What should we be looking for without giving any confidential information. >> Well we've already publicly announced that we'll be fleshing out NVME across the board. >> Dave: Right. >> So we already publicly announced that. That will be a big to-do. 
The other thing we're looking at is continuing to imbue what we do with additional solution sets. So that's something we have a wide set of software. For example, we publicly announced this week that the Versa stack, all flash array will be available with IBM cloud private with a CYSCO validated design in May. So again, in this case taking a very powerful system, the Versa Stack all flash, which already delivers ROI and TCO, but still is if you will a box. Now that box is a converge box with compute with switching with all flash array and with a virtual environment. But now we're putting, again as a bundle, IBM cloud private on there. So you'll see more and more of those types of solutions both with the rest of IBM but also from third parties. So if that offers the right solution set to cut capx/opx, automate processes, and again, for the cloud workloads, AI workloads and any workloads, storage is that foundation. The critical foundation. So we will make sure that we'll have solutions wrapped around that throughout the rest of this year. >> So it's great to see the performance in the storage division. Great people. We're under counting it. We're not even counting all the cloud storage that goes and counts somewhere else. You guys are doing a great job. You know, best of luck and really keep it up Eric, thanks very much for coming back on theCUBE. >> Great thank you very much. >> We really appreciate it. >> Thanks again Peter. >> Alright keep it right there everybody we'll be back with our next segment right after this short break. You're watching theCUBE live from Think 2018. (upbeat music)
Al Martin, IBM | IBM Think 2018
>> Narrator: Live from Las Vegas, it's theCUBE covering IBM Think 2018. Brought to you by IBM. >> Welcome back to IBM Think 2018. This is theCUBE, the leader in live tech coverage, and my name is Dave Vellante, and we've been covering IBM Think, this is our second day. IBM's inaugural conference, we'll be here for three days, wall-to-wall coverage. Al Martin is here, he's the IBM VP of Hybrid Data Management, client success, I'm going to get that in there because it's such an important part of the title. Al, welcome to theCUBE, thanks for coming on. >> Thank you, pleasure. >> We'll start with hybrid data management, what do you mean by hybrid data management, what is that? >> Well I think, it starts with data, and they call it information technology not data technology for a reason, meaning I have the pleasure or the burden, one of the two, in terms of being able to set up what we call the AI ladder. Meaning you start with data, you push it up the stack, push value up the stack, that being analytics, ML, AI, and data today is a challenge, I mean it's a huge problem. It doesn't matter what size client you are, it's a challenge for you, and so it's unstructured, it's structured, it can be in the cloud, it can be on-prem. So when we say hybrid, it's across- The challenge that I have is across all those different form factors. We've got to make data simple and accessible all across all those form factors, that's hybrid. >> It's a tall job, tall order. >> Pretty much all jobs. >> Okay, how do you do it? >> How do I do it. Well, very carefully. We develop technologies that do just that. What we do is, via a common analytics engine first and foremost. We use an engine where, no matter what the form factor is, say when I'm in an appliance, I can query the appliance, and then if I want to take that workload outside that appliance and put it against my own hardware, I can take that database out and still query, do the same analytics, I can put that in the cloud, do the same query and analytics, no different. So, the way we do it is we don't care whether it's structured or unstructured, we don't care whether it's NoSQL or SQL, we'll do both, we'll do analytic processing, we'll do operational processing and we try to do it within the same footprint, that's essentially how we do it. >> Okay, so what I like about this is your challenge is every customer, I mean every company (mumbles). >> That's the challenge. >> What's the conversation like when you walk into a client or a prospect, what are the words they're using to describe their problems, help us understand that. >> That is a great question, because it is very difficult to get those words out very often. A lot of clients are struggling with where they are on what I call the maturity curve. So, to that point, what I typically do is start with a conceptual maturity curve, and if you can imagine a graph going from left to right, it's a hockey stick of value relative to maturity, and so we figure out where our client is on that maturity curve. By example, imagine four quadrants. On the left-most quadrant is operations, that's your ERP systems, your billing systems. If they're there, the opportunity is cost-optimization, because the deal is operational systems don't typically do well with analytics. 
So if they're looking at analytics then they'll move to the next quadrant and do data warehousing, then the opportunities tend to be data lakes, you might want to get into Hadoop, and then once you graduate from there you go into self-service analytics, that'd be like the third quadrant, and then you're thinking about Spark as a common analytics engine, you're thinking about IoT, and then you start getting into machine learning, and by the time you hit the fourth quadrant, that is where new models begin and you're really driving machine learning and driving the progress to AI. When I look at that model, those four quadrants I just walked you through, I'm pushing as much as I can to both the developer and the business, and giving them the empowerment, and when you do that then governance comes into play, data science comes into play, new personas come into play. So it's quite a challenge, but I find where the client is on that graph and figure out where they want to be, current state, desired state, and then we draw up a plan to get them there. >> So let's talk about those, sort of. That is I guess the maturity model, right? We started with core systems, ERP, transaction systems, you started to build data warehouses, data marts, they were largely bespoke systems, it was sort of an asynchronous data move, you had to build big complicated cubes. Still do, still doing that. >> Still do. Still doing it in many cases. >> And they're driving decision support, but it got really expensive, and a lot of times it was like a snake swallowing a basketball to make a change. Okay, so then along comes Hadoop thrown into a data lake like you say, it's got a reduction of investment, but then you got to get value out of it. Now you're talking about self-service analytics, Spark comes into play, simplifies things a little bit and now you get ML, more automation. My question is, as you proceed, as customers proceed down that journey, is there a hybrid data management architecture that has to be put in place so that these aren't separate bespoke pieces that I leave behind, but they all come together in an enterprise data model? >> Here's the way I would explain that, in making the complex as simple as possible. We figure out where they are, and then there's essentially five different key elements that we key on. One is hybrid data management, that's what I'm responsible for, and by example, the database we use supports HTAP, which means it'll do both analytical, or warehousing, and transactional processing at the same time, by example. When you're looking at unified governance, that would be number two. The best way to describe unified governance is it does for data what libraries do for books, same concept. And then the third one is, when you're pushing that closer to the developer, that's when you get into data science and the models start building upon themselves and that's where the magic happens. Those are the three, but there's two more. Under data science, I usually call out machine learning, because machine learning is very important. I mean that enables that path to AI that everybody talks about, the bridge to AI. And then finally I think a key to any client strategy is open source. Most people don't know that IBM is one of the largest contributors to open source, like Apache Spark by example. We believe in open source because it increases the pace to market, so if you have those five different strategies, that's how you'll be successful. 
Within my organization you can have an appliance for hybrid data management, you can have an HTAP database, we have one-click data movement, all those things go into that to make up that complete solution. >> HTAP, by the way, is hybrid transactional and analytical processing. >> That's exactly right. >> You see those worlds come together, I remember the z13 announcement a couple of years ago, you guys made a big deal out of that, and so that's actually happening, is that right? >> That is absolutely happening, yes. >> So that involves what, actually doing the analytics in the transaction system, is that right, in the database of the transaction? >> I mean it depends on workloads, there's a lot of depending factors, but yeah, that's the- >> As opposed to what, putting it in some kind of InfiniBand pipe into my data warehouse. >> Well you talked about it earlier, where previously you have to create completely separate data marts, you have to transition and use ETL to go from an operational store or a transactional store, to an analytical store, completely separate. Trying to do both those in the same database is our objective, that's HTAP. >> Excellent. Now you're also running the global elite program. >> I am. >> What is that all about? >> Well, let me back up for a second and tell you how we got here. I am running the global elite program but it started out just as a sheer campaign of driving personalization for our clients, pretty simple right? We have got the technology now to really personalize our experience with our clients, using ML and some of the same technologies that I talked about. By example, we use ML and Watson both internally and externally with clients, in other words, internally we make recommendations to our analysts, externally you can use a bot and ask them the questions. We're pushing all our content out, essentially free-of-charge, opening it up, we have a very aggressive push to push that content out, and we're driving direct to expert. So that's just standard now for us, that's the basic, but then we've taken that further because we want to treat each client relative to their needs and profile, so what we've done is, for the platform offerings that we have, we just came up with a new offering called Enhanced Support. So what that does is it's front-of-the-line service. Consider it your airline priority service, so it's front-of-the-line, it's faster response time targets, and it also provides some consulting, and then on top of that, we've got what's called a premium tier, and that premium tier does everything I've already described, but then it adds a named contact, and experts, to work directly with you with one foot within IBM, and one foot within whatever client, in that expertise required. So I give you all that, global elite is at the top of that. These are our partners that are innovating with us, that are rewarding us with their business but they're innovating with us, they're serving as references, and together we're partnering and transforming together whether it's retail, insurance, or otherwise. So those are a small set of our global elite clients, and I encourage any clients that are listening out there, if they feel like, hey I want to partner directly with IBM, I want to push the envelope, references are in my future, I'm in. >> What are some examples that you can share with us? >> What we've done, we tend to have a motto with the global elites that we never say no, and I'm still waiting, I haven't said no yet, but we'll see if that ever comes. 
Well we never say no, and what we've done by example as an evolution of the global elite program is think conferences like this, a lot of times you can only send so many people. So what we've done is we've taken a mini conference, and we call it Analytics University, and we've taken that directly to clients, and we'll do a day or two and do this conference in a miniature scale focused on the areas and the content that they prefer. The other thing we've done is then a lot of times when we do that, we'll find interests and visions that they have that they have not been able to really get into a road map or progress. So then we'll bring them into the lab and we'll do design thinking sessions, and then we'll work together. And in terms of doing the design thinking sessions, what we essentially, ultimately accomplish is one independent road map between two different companies, because they help set our road map, we help influence theirs, and all of a sudden they've got a strategy to the future, and it's organically aligned with ours. >> Excellent. Alright Al, let's put the bumper sticker on IBM Think 2018, it's only day two here but what's your takeaway from the conference. Trucks are pulling away, what's the bumper sticker say. >> The bumper sticker says, make data simple. >> There you go. >> That's where my head's at, make data simple. I got a podcast out there that's called Make Data Simple. I'd encourage everybody to listen to it, we get into all these different technologies, but I think we make data simple with a- The wider the breadth we get data we can drive value up the stack. >> So, Make Data Simple podcast, right? >> It's actually under Analytics Insights in iTunes. >> Analytics insights under iTunes. >> That's all me. >> Alright, beautiful. Yeah, Make Data Simple podcast, Google that and you'll find it. Al, thanks very much for coming to CUBE. >> Alright, thank you. >> Pleasure having you. Alright, keep it right there everybody, we'll be back, right after this short break.
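The HTAP idea Al describes, serving the operational write path and the analytical query path from the same store instead of ETL-ing into a separate mart, can be shown with a deliberately tiny sketch. SQLite is used here only as a stand-in engine and the schema is invented for the example; this is not Db2 or any IBM product, just the shape of the workload.

```python
# Toy illustration of the HTAP idea: one store takes transactional writes
# and answers analytical queries, with no ETL hop in between.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")

# "Transactional" side: operational inserts as orders arrive.
orders = [("EMEA", 120.00), ("AMER", 75.50), ("EMEA", 310.25), ("APAC", 42.00)]
db.executemany("INSERT INTO orders (region, amount) VALUES (?, ?)", orders)
db.commit()

# "Analytical" side: an aggregate over the very same table, immediately,
# rather than after a nightly ETL into a separate warehouse or mart.
for region, total in db.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"):
    print(region, round(total, 2))
```

In a real HTAP engine the hard part is keeping analytical scans from disturbing transactional latency, which is what the columnar and in-memory machinery underneath is for; the sketch only shows why skipping the ETL hop matters on the maturity curve discussed above.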
IBM’s 20 February 2018 Storage Announcements with Eric Herzog
(fast orchestral music) >> Hi, I'm Peter Burris, and welcome to another Wikibon CUBE Conversation. Today I'm joined by Eric Herzog, who's the CMO and Vice President of Channels in IBM's Storage Group. Welcome, Eric. >> Peter, thank you very much. Really appreciate spending time with theCube. >> Absolutely, it's always great to have you here, Eric. And you know, it's interesting. When you come in, it's kind of, let's focus on storage, cause that's what you do, but it's kind of interesting overall, the degree to which storage and business is now becoming more than just a thing that you have to have, but part of your overall business strategy increasingly because of the role that digital business is playing. Well, earlier today IBM made some pretty consequential announcements about how you intend to help customers draw those two together closely. Why don't you take us through 'em? >> So, first thing I think, with the digital business, it's all about data. And the digital business is driven by data. Data always ends up on storage and is always managed by storage software, so while it may be underneath the hood if you will, it is the critical engine underneath that entire car. If you don't have the right engine or transmission, which you could argue storage and storage software is, then you can't have a truly digital business. >> True, so tell us, what did IBM do? >> So what we do is we announced a number of technologies today, some of which were enhancing, some of which were brand new. So for example, a lot of it was around our Spectrum storage software family. We introduced a new software-defined storage for NAS, Spectrum NAS. We introduced enhancements to our IBM cloud object storage offering, also to our Spectrum Virtualize, several enhancements to our modern data protection suite, which is Spectrum Protect and Spectrum Tech Plus were enhanced. And lastly, from an infrastructure perspective, we announced a first real product around an NVMe storage solution over an InfiniBand fabric, and what we're going to do for rest year-round NVMe and how that impacts storage systems. Which are of course, a critical component in your digital data business. >> You also announced some new terms and conditions, or new ways of conceiving how you can get access to the storage, capacity storage plans you want. Why don't ya give us a little bit of inside on that. >> So one of the things we've done is we've already created, a couple years ago the Spectrum storage suite which has a whole raft of different products, file software, block software, back-up, archive software. So we added the Spectrum Protect Plus offering into that suite. We also had a back-up only suite which focuses just on modern data protection. We've put it in there and in both cases, it's at no additional fee. So if you buy the suite, you get Spectrum Protect Plus. If you buy the back-up only suite, so you're more focused on back-up only, again at no extra charge to the end user. The other thing we've done is we announced in Q4, a storage utility model. So think that you can buy storage, the way you buy your power bill or your water bill or your gas bill. So it can go up and it can go down. We bill you quarterly. We added our IBM cloud object storage on premises solution to that set of products. We had an earlier set of products built around flash we announced in Q4 of last year. Now we've added object storage as a way to consume in basically a utility offering model. 
>> So we talk a lot at Wikibon about the need for what we call the true private client approach which is basically the idea that you want the cloud experience wherever your data requires. And it sounds like IBM is actually starting to accelerate the process by which it introduces many of these features, especially in the storage unit. You've bought in more stuff underneath the spectrum family. You're starting to introduce some of those new highly innovative technologies like NVMe over Fabric and you've also introduced an honest utility model that allows people to have or to treat their storage capacity more like that cloud experience. Have I got that right? >> Absolutely. And we've done one other things too. For example, as you know, from a cloud perspective everyone is moving to containers, right? Our Spectrum Connect product offers free support for dockers and kubernetes. So if you're going to create a private cloud, and you're going to do that on your own, of even hybrid cloud where you're, you know, sluffing some of it into your public cloud provider. Bottom line is that dockers support, that container support is what you need to create the true private cloud experience that Wikibon has been talking about for the last year and half now. >> Well, let's talk about the kubernetes and dockers and the notion of containers as a dissociative storage. I want to take it in two directions. First off, tell us a little bit about how it works kind of dissolver oriented terms and then, let's talk about what that's going to mean to the ecosystem and how people are going to think about buying storage going forward. So why don't we start with how does this capability work? >> Sure. So the key thing we've done with the Spectrum Connect product is provide persistent storage capability to a container environment. As you know, containers just like VM's in the past can come up and come down very frequently especially if you're in a dev-ops environment. The whole point is they can spin them up quickly and take them down quickly. The problem is they don't allow for persistent storage. So our Spectrum container product allows for the capability of doing persistent storage connected to a containerized environment. >> So they way this would would work is you'd still have a server, you'd still have machine with some compute that would be responsible for spinning the containers up and down. But you'd have a storage feature that would make sure that that storage associated with that container would persist. >> Correct. >> Therefore you could continue to do the container up and down in the server while at the same time persisting the storage over an extended period of time. >> Right. So what that means is any of our customers who have our Spectrum Accelerate software defined storage for block, our Spectrum Virtualized software defined storage for block, and the associated family of arrays that ship with that software embedded. Remember, for us, our software defined storage can be sold stand alone as just a piece of software or embedded in our arrays, which for example, at Spectrum Virtualized means there's hundreds and hundreds of thousands of our software defined storage between the software only version and the array version. So for people who have those arrays, the container support is absolutely free. 
So if you've already bought the product and you're on our maintenance support, you just download the Spectrum Connect, boom you're off to the races, you deploy your containers for your private cloud environment and you've got it right there. If you're a brand new customer, you're going to buy let's say for example next week, you buy it next week. You get the Spectrum Virtualized, let's say for example on our Storewize V7000 F all-flash array cause that software comes with it. And you could go download Spectrum Connect at no fee cause you just type in you're a customer, put in your serial number, boom! They can just download it. And we don't charge anything for that. >> And now your storage guys and your developer guys are working a little bit more closely together as opposed to being at each others' throats. >> And saying what happened to the storage? >> There you go. >> Oh wait. I thought that was going to be... well no, it's not persistent. And in this case, it's persistent. They can take it up. They can take it down. They can do whatever they want. And that container product is free so the IT guy doesn't go, "Oh now I got to pay more money cause he doesn't." And then the guys on the dev-ops side and on the deployment application side are saying oh okay now I don't have to worry about that as an issue anymore. The IT guys took care of that for me. So you get everybody working together. You get the persistent storage that is not, you know, comes when you get a container environment. You get the exact opposite that is not persistent. And now we've offered that. And again it's a no charge for the users so it's easy to deploy. Easy to use and there's no fee. >> And so Eric, the reason I ask questions is because it's the compounding of these little annoyances that make it difficult for companies to accelerate their entree into digital business. And how they engage their customers differently and so this is one of those examples where as you said, data is the asset that distinguishes a digital business from a regular business competitor. What types of changes is this going to mean to the way the business thinks, the way the business buys, the way the business perceives storage? >> So I think the first thing is they need to realize that in a digital business, data is the oil. It is the gold, it is the silver, it's the diamonds. It is the number one entity. >> It's the value. >> It is the value of your digital business. So, you have to realize that the underlying infrastructure if it goes down, guess what? Your digital business is no longer up and running. So from that perspective, you need to have your underlying foundation from a storage perspective. In this case, think of Storage System the highly, highly available, highly, highly reliable and it needs to be incredibly fast because now you're doing everything from a digital business. And so everything is pounding on your server and storage infrastructure. Not that it wasn't a traditional data center but if certain things need to be slow, it's okay. But now that you've gone true private cloud with a full digital business, it can't be slow. It has to be resilient and it has to be always available. And those are things we've built in to both our storage software lair, the Spectrum family and to all of our storage arrays. The Storewize family, our DS family, our Flash System family. All are highly redundant, highly available and they're all flash. >> And let me add two more things to that. 
Cause I think it's pertinent to the direction that IBM is taking here, because data is not exactly like oil or not exactly like diamonds, in the sense that, oil and diamonds still follow the laws of scarcity. The value of data increases, and I know you've made this point, as you use it more. >> Right. >> So on the one hand, the storage has to provide the flexibility that developers can go after the same data at different times and in different ways. But still have that data be persistent and related to that obviously is that you want to ensure that you're able to drive that through-put through the system as aggressively as possible without creating a whole bunch of administrative headaches. So if we pivot for a second to NVMe, what does that mean to introduce things like NVMe to those five things we just talked about? Especially you know, the performance and the flexibility of having multiple applications and groups being able to go at the same data, perhaps do some snapshots and copies? >> So, couple things. From a software perspective that sits on top of all of our products, we've taken the approach of modern data protection. It's not let's just do an incremental back-up like in the old days. So what we do today is we have basically incessant snapshotting which is a full boat copy. What you can do is you can check those out with our Spectrum Content data manager which we didn't announce anything new on that, but we announced it last year. And with that, you can have unending snapshots. The dev-ops guys can grab a real piece of software, a real piece of data. So when they're doing their development, they're not using a faux set. And that faux set often can introduce more bugs. It doesn't get up as quickly. >> And so now you got more data, so you take the snapshot. By the way, it's self service. They can check it out themselves. Now when you look at it from the IT guy's perspective, guess what? There's a log of who's got what. So if there was a security issue, they can say, oh Eric Herzog, you're the one that had that. It looks like that leaked out from you. Even if it was inadvertent, the point is the dev-op guys can go in and grab from this new modern data production paradigm that we have. At the same time, the IT guys can at least track what is going on, so it's interesting. Then from a NVMe perspective, the key thing that NVMe has is A, all of the existing infrastructures, InfiniBand, Fabric, Fibre Channel Fabric, and Ethernet Fabric will be supported. Okay, over time, we're announcing today an InfiniBand Fabric solution, but all of the arrays that you buy today, if you for example bought a flash system V9000 and you wanted to do NVMe over Ethernet later in the year, software upgrade only. You buy the hardware now, you're done, okay? Our A9000 flash systems, Fibre Channel Connect, you buy they Fibre Channel now, you just upgrade the software a little bit later. So the key things within a NVMe configuration is A, the box is already highly resistant, highly available. Okay, they resist failures. They're easy to fix if there is a hardware failure for example, failed power supply. You know it's going to happen, okay? The smart business has an extra power supply sitting on the shelf. He pulls it out, he swaps it then sends it back to IBM. And when it's under warranty, boom, we take care of it. Okay? So that's the resiliency and the availability aspect from a physical perspective. But with NVMe, you get a better performance, which means that the arrays can handle more workloads. 
So as you go to a truly digital business built around the private cloud that Wikibon has been talking about now for 18 months, as you go to that model, you want to get more and apps pounding on the same storage, if you will. And with an NVMe Fabric solution, NVMe over time in the sub system itself, all that gives you more apps can work on the same set of storage. Now, do I have enough capacity, which is a separate topic. But as far as can the array handle the workload with NVMe from a Fabric perspective and NVMe in a storage sub system? You can handle additional workloads on the same physical infrastructure which saves you time, saves you money and gives you the performance for all workloads. Not just for a few niche workloads and all the other ones have to be slow. >> So Eric, you're out spending a lot of time with customers. Tell us a little about how they see their environments changing as a consequence of these and other related announcements. Are developers going to be looking at storage more as a potential source of value? How are administrators dealing with this? And give us some examples if you would. >> Sure, sure. So I think the key thing is with things like our content data manager. As we've got customers right now and they're able to check it out to all the test step guides which they couldn't do before. They're getting work done faster with real data. So the amount of bugs that come up with internal developers just like commercial developers like IBM or any other software company, the Microsofts, the Oracles, everybody has bugs. Well, guess what? In house developers got the same bugs. But, we help reduce that bug count. We make it easier for them to fix. Cause we're working on a real data set and not a fake data set, right? The IT guys love it because the dev-op guys don't say can you spin this up, spin this down? They do it on their own, right? Which accelerates them in doing their work. And the IT guys aren't bothered for it. That one concern on security, guess what? You got that long saying who's got what. >> Right, right. >> Burris has this. Herzog has that. >> That's a big deal because the IT guys ultimately, if something leaks out or there's a security issue, they get the call from the Chief Legal Officer, not the dev-ops guy. So this way, everybody is happy. The dev-op guys are happy. The IT guys are happy. The IT guys can focus on spinning up and spinning down for the dev guys. You can build it all yourself. Our copy data management and all of our storage softwares are API driven. Rest API's, integration with all of the object storage interfaces including S3. So it's easier and easier for the IT guy to make the dev-ops guys happy and give the dev-op guys self service, which, as you know, self service is one of the key attributes of the private cloud that Wikibon keeps talking about is self service. So we can give more through the software side. >> So I have one more question Eric. As we think about kind of where this announcement is, most important to businesses that are trying to affect that type of transformation we're talking about, is there one specific feature that is your conversation with customers, your conversations with the channels, since you're also very very close with the channel, that keeps popping to the top of the list of things to focus on as companies? As I said, try to figure out how to use data and assets differently? 
>> Well I think what the key thing from a storage guy perspective is one, interfacing with all the API's which we've done across our whole family, okay? Second thing is automation, automation, automation. The dev-ops guys like it. In a smaller shop, there may be only one IT guy who has to take care of their entire infrastructure. So the fact that our Spectrum Protect Plus for example can do VMware hyper V back-up except it can be done by the VMware hyper V guy or a general IT guy not a storage guy or a back-up admin. In the enterprise, sure there's a back-up admin in the big enterprises, but if you're at Herzog's Bar and Grill there is no back-up admin. So that ease of use, that simplicity, that integration with common API's and automating as much as possible is critical as people go to the digital business based on private clouds. >> Excellent. Eric Herzog, CMO, Vice-President of Channels at IBM storage group, talking about a number of things that were announced today as businesses try to marry their storage capability and their digital business strategy more closely together. Thanks for being here. >> Great, thank you very much. >> Once again, I'm Peter Burris. This has been a Wikibon CUBE Conversation with Eric Herzog of IBM. (fast orchestral music)
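To make the persistent-storage-for-containers point from earlier in the conversation concrete, this is roughly what the request looks like from the developer's side in Kubernetes: a PersistentVolumeClaim that outlives any individual container. It is a generic Kubernetes sketch, not IBM documentation, and the storage class name is a hypothetical placeholder for whatever class the array's provisioner actually registers.

```python
# A minimal PersistentVolumeClaim expressed as a Python dict, so it can be
# templated and dumped as JSON; `kubectl apply -f` accepts JSON directly.
# The claim, and the volume bound to it, persist across container restarts.
import json

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "devops-dataset"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "100Gi"}},
        # Hypothetical class name, standing in for whatever block-storage
        # class the provisioner exposes on this cluster.
        "storageClassName": "block-flash",
    },
}

print(json.dumps(pvc, indent=2))
```

A pod then mounts the claim by name; when that pod is torn down and respun, as happens constantly in a dev-ops flow, the data behind the claim is still there, which is the persistence the conversation is describing.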
Peter Burris, Wikibon | Action Item Quick Take: NVMe over Fabrics, Feb 2018
(gentle electronic music) >> Hi, I'm Peter Burris. Welcome to another Wikibon Action Item Quick Take. A lot of new technology throughout the entire stack, including still Inside Systems. One in particular's pretty important, tell us about it. >> Thank you, NVMe over Fabric is what I'm going to talk about. And my take on this is that it's going to be very real in 2018. It's going to support all the protocols, it'll support iSCSI, it'll support Fibre Channel, InfiniBand and Ethernet. So it's going to affect all storage. The incremental costs are low, very low. The performance of it is absolutely outstanding and fantastic, and there'll be huge savings, potential huge savings on things, for example, like core licensing. So the savings within storage and the savings across the system will be large. My view is it should become the design standard in 2018 for storage. So the Action Item here is to assume that you are going to be implementing NVMe over Fabrics over the next 18 months as part of all storage purchases and ensure that all the NICs and the software etc will support it. So the key question to ask of any vendor is 'What is your committed NVMe rollout in 2018 and the start of 2019?' >> David Floyer, thank you very much. Once again, the idea here is NVMe becoming not just a technology standard, but now becoming ready for prime time in a commercial way. This has been a Wikibon Action Item Quick Take. Thanks for watching. (gentle electronic music)
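One way to see where the core-licensing savings mentioned above would come from: if the host CPU spends fewer microseconds per I/O with NVMe over Fabrics than with a software iSCSI stack, the same workload needs fewer cores, and per-core-licensed software on that host gets cheaper. The per-I/O costs below are assumptions for illustration only, not measurements of any stack.

```python
# Rough model: host-CPU cost per I/O and what it implies for core counts.
# Every number is an assumed placeholder, not a benchmark result.

target_iops = 500_000          # workload the host must sustain
core_busy_budget = 0.7         # keep each core at most 70% busy on the I/O path

stacks = {                     # assumed CPU-seconds consumed per I/O
    "software iSCSI stack": 20e-6,
    "NVMe over Fabrics":     5e-6,
}

for name, cpu_s_per_io in stacks.items():
    cores = target_iops * cpu_s_per_io / core_busy_budget
    print(f"{name:>22}: ~{cores:4.1f} cores consumed by storage I/O")
```

If the database or hypervisor on that host is licensed per core, the difference between those two lines is where the savings show up; the real figures depend entirely on NIC offloads and the stack in use, which is why this is only a sketch.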
Eric Herzog, IBM | Cisco Live EU 2018
>> Announcer: Live from Barcelona, Spain it's theCUBE covering Cisco Live 2018. Brought to you by Cisco, Veeam, and theCUBE's ecosystem partners. >> Hello everyone and welcome back. This is theCUBE live here in Barcelona for Cisco Live Europe. I'm John Furrier, the co-host of theCUBE, with Stu Miniman analyst at Wikibon, covering networking storage and all infrastructure cloud. Stu Miniman, Stu. Our next guest is Eric Herzog, who's the Chief Marketing Officer at IBM Storage Systems. Eric, CUBE alumni, he's been on so many times I can't even count. You get the special VIP badge. We're here breaking down all the top stories at Cisco Live in Europe, kicking off 2018. Although it's the European show, not the big show, certainly kicking off the year with a lot of new concepts that aren't necessarily new, but they're innovative. Eric, welcome to theCUBE again. >> Well, thank you. We always love participating in theCUBE. IBM is a strong supporter of theCUBE and all the things you do for us, so thank you very much for having us again. >> A lot of great thought leadership from IBM, really appreciate you guys' support over the years. But now we're in a sea change. IBM had their first quarter of great results, and that will be well-reported on SiliconANGLE, but the sea change is happening. You've been living this generation, you've seen couple cycles in the past. Cisco putting forth a vision of the future, which is pretty right on. They were right on Internet of Things ten years ago, they had it all right, but they're a networking company that's transformed up the stack over the years. Now on the front lines of no perimeter, okay, more security challenges, cloud big whales with no networking and storage. You're in the middle of it. Break it down. Why is Cisco Live so important now than ever before? >> Well, for us it's very important because one, we have a strategic relationship with Cisco, the Storage Division does a product with Cisco called the VersaStack, converged infrastructure, and in fact one of our key constituents for the VersaStack are MSPs and CSPs, which is a key constituent of Cisco, especially with their emphasis on the cloud. Second thing for us is IBM storage has gone heavily cloud. So going heavily cloud with our software, in addition to what we do with our solutions as a foundation for CSPs and MSPs. Just what we've integrated into our software-defined storage for cloud makes Cisco Live an ideal venue for us, and Cisco an ideal partner. >> So I've got to ask you, we've had conversations on theCUBE before, they're all on youtube.com/siliconangle, just search Eric Herzog, you'll find them. But I want to recycle this one point and get your comments and reaction here in Barcelona. You guys have transformed with software at IBM big-time with storage. Okay, you're positioned well for the cloud. What's the most important thing that companies have to do, like IBM and Cisco, to play an innovator role in the cloud game as we have software at the center of the value proposition? >> Well I think the key thing is, when you look at cloud infrastructure, first of all, the cloud's got to run on something. So you need some sort of structural, infrastructure foundation. Servers, networking, and compute. So at IBM and with Cisco, we're positioning ourselves as the ideal rock-solid foundation for the cloud building, if you will. So that's item number one. 
Item number two, our software in particular can survive, not only on premises, but can bridge and go from on-premise to a public cloud, creating a hybrid infrastructure, and that allows us to also run cloud instantiation. Several of our products are available from IBM Cloud Division, Amazon offers some of the IBM storage software, over three hundred cloud service providers, smaller ones, offer IBM Spectrum Protect as a back-up service. So we've already morphed into storage software, either A, bridging the cloud in a hybrid config, or being used by cloud providers as some of their storage offerings for end-users and businesses. >> Eric, wanted to get to, one of the partnership areas that you've talked about with Cisco is VersaStack. We've talked with you a number of times about converged infrastructure, that partnership, Cisco UCS taking all the virtualization. The buzz in the market, there's a lot of discussion, oh it's hyper-converged, it's cloud. Why is converged infrastructure still relevant today? >> Well, when you look at the analysts that track the numbers, you can see that the overall converged market is growing and hyper-converged is viewed as a subset. When you look at those numbers, this year close to 17 billion US, about 75% of it is still standard converged versus hyper-converged. One of the other differences, it's the right tool for the right job. So customers need to go in eyes open. So when you do a hyper-converged infrastructure, by the way IBM offers a hyper-converged infrastructure currently with Nutanix, so we actually have both, the Nutanix partnership offering hyper-converged and a partnership with Cisco on standard converged. It's really, how do you size the right tool for the right job? And one of the negatives of hyper-converged, very easy to deploy, that's great, but one of the negatives is every time you need more storage, you have to add more server. Every time you need more server, you add more storage. With this traditional converged infrastructure, you can add servers only, or networking only, or storage only. So I think when you're in certain configurations, workloads, and applications, hyper-converged is the right solution, IBM's got a solution. In other situations, particularly as your middle-sized and bigger apps, regular converged is better 'cause you can basically parse and size up or down compute, networking, and the storage independent of each other, whereas in hyper-converged you have to do it at the same time. And that's a negative where you're either over-buying your storage when you don't need it, or you're over-buying your compute when you don't need it. With standard converged, you don't have that issue. You buy what you need when you need it. But I think most big companies, for sure, have certain workloads that are best with hyper-converged, and we've got that, and other workloads that are best with converged, and we have that as well. >> Okay, the other big growth area in storage for the last bunch of years has been flash. IBM's got a strong position in all-flash arrays. What's new there, how are some of the technologies changing? Any impact on the network that we should be really understanding at this show? >> Sure, so couple things. So first of all, we just brought out some very high-density all-flash arrays in Q4. We can put 220 terabytes in two rack U, which is a building block that we use in several different of our all-flash configurations, including our all-flash VersaStack. 
The other thing we do is we embed software-defined storage on our, software-defined storage actually on our physical all-flash arrays. Most companies don't do that, so they've got an all-flash offering and if they have a software-defined offering it's actually a different piece of software. For us it's the same, so it's easier to deploy, it's easier to train, it's easier to license, it's easier for a reseller to sell if you happen to be using a reseller. And the other thing is it's battle-hardened, because it's not only standalone software, but it's actually on the arrays as well. So from a test infrastructure quality issue, versus other vendors that have certain software that goes on their all-flash array, and then a different set of software for all software-defined. It doesn't make logical sense when you can cover it with one thing. So that's an important difference for us, and a big innovator. I think the last thing you're going to see that does impact networking is the rise of NVMe over fabrics. IBM did a statement of direction last May outlining what we're doing. We did a public demonstration of an InfiniBand fabric at the AI summit in New York in December, and we will be having an announcement around NVMe fabrics on the 20th of February. So stay tuned to hear us then. We'll be launching some more NVMe with fabric infrastructure at that time. >> Eric, I just, people that have been watching, there's been a lot of discussion about NVMe for a number of years, and NVMe over fabric more recently. How big a deal is this for the industry? You've seen many of these waves. Is this transformational or is it, you know, every storage company I talk to is working on this, so how's it going to be differentiated? What should users be looking to be able to, who do they partner with, how do they choose that solution, and when's it going to be ready? >> So first of all, I view it as an evolution, okay. If you take storage in general, arrays, you know we used to do punch cards. I'm old enough I remember using punch cards at the University of California. Then, it all went to tape. And if you look at old Schwarzenegger movies from the 80s, I love Schwarzenegger spy movies, what's there? IBM systems with big IBM tape, and not for back-up, for primary storage. Then in the late-80s, early-90s, IBM and a few other vendors came out with hard drive-based arrays that got hooked up to mainframes and then obviously into minis and to the rise of the LAN. Those have given away to all-flash arrays. From a connectivity perspective, you've had SCSI, you had ultra SCSI, you had ultra fast SCSI, ultra fast wide SCSI. Then you had fiber channel. So now as an infrastructure both in an array, as a connectivity between storage and the CPUs used in an array system, will be NVMe, and then you're going to have NVMe running over fabrics. So I view this as an evolution, right? >> John: What's the driver, performance or flexibility? >> A little bit of both. So from the in-box perspective, inside of an array solution, the major chip manufacturers are putting NVMe to increase the speed from storage going into the CPUs. So that will benefit the performance to the end-user for applications, workloads, and use cases. Then what they've done is Intel has pushed, with all the industry, IBM's a member of the NVMe consortium as well, has pushed using the NVMe protocol over fabrics, which also gives some added performance over fabric networks as well. 
So you've got it, but again I view this again as evolution, because punch cards, tape was faster, hard drive arrays were faster than tape, then flash arrays are faster, now you're going to have NVMe in the flash array, and also NVMe over fabric with connecting all-flash array. >> So I have to ask you the real question that's on everyone's mind that's out there, because storage is one of those areas that you never see it stopping. There's always venture back start-ups, you see new hot start-ups coming out of the woodwork, and there's been some failures lately and some blame NVMe's innovation to kind of killing some start-ups, I won't name names. But the real issue is the lines that were once blurred are now forming, and there's the wrong side of history and the right side of history. So I've got to ask you, what's going to be the right side of history in the storage architecture that people need to get onto to win in the future? >> So, there's a couple key points. One, all storage infrastructure and storage software needs to interface with cloud infrastructure. Got to be hybrid, if you have a software play like we do, where the software, such as our Spectrum Scale or our Spectrum Protect or Spectrum Protect Plus, can exist as a cloud service through a service rider, that's where you want to be. You don't want to have just a standard array and that's all you sell. So you want to have an array business, you want to make sure that's highly performant, you want to make sure that's the position, and the infrastructure underneath clouds, which means not only very fast, but also incredibly resilient. And that includes both cloud configs and AI. If you're going to do real-time AI, if you're going to do dark trading on Wall Street using AI instead of human beings, A, if the storage isn't really fast you're going to miss a 10 million dollar, hundred million dollar transaction. Second thing, if it's not resilient and always available, you're really in trouble. And god forbid when they bring AI to healthcare, and I mean AI in the operating room, boy if that storage fails when I'm on the table, wow. That's not going to be good. So those are the things you got to integrate with in the future. AI and cloud, whether it's software-defined in the array space, or if you're like IBM in both markets. >> John: Performance and resilient. >> Performance and resiliency is critical. >> All right, so Eric I have a non-storage question for you. >> Eric: Absolutely. >> So you've got the CMO hat for a division of IBM. You've been CMO of a start-up, you've been in this industry for a while. What's the changing role of the CMO in today's digital world? >> So I think the key thing is digital is a critical method of the overall marketing mix. And everything needs to reinforce everything. So let's take an example. One of the large storage websites and magazines recently announced that IBM is a finalist for four product-of-the-year awards. Two for all-flash arrays and two for software-defined storage. So guess what we've done? We've amplified it over LinkedIn, over IBM Facebook, through our Twitter handle, we leverage that. We use it at trade shows. So digital is A, the first foray, right? People look on your website and look at what you're doing socially before they even decide, should I really call them up, or should I really go to their booth a trade show? >> So discovery and learning is happening online. >> Discovery and learning, but even progression. 
We just, I just happened to tweet and LinkedIn this morning, Clarinet, a large European cloud MSP and CSP, just selected IBM all-flash arrays, IBM Spectrum Protect, and IBM Spectrum Virtualize for their cloud infrastructure. And obviously their target, they sell to end-users and companies, right? But the key thing is we tweeted it, we linked it in, we're going to use it here at the show, we're going to use it in PR efforts. So digital is a critical element of the marketing mix, it's not a fad. It also can be a lead dog. So if you're going to a trade show, you should tweet about it and link it in, just the way you guys do. We all knew you were coming to this show, we know you're going to IBM Think, we know you're going to VMworld and Oracle, all these great shows. How do we find out? We follow you on social media and in the digital market space, so it's critical. >> And video, video a big role in - >> Video is critical. We use your videos all the time, obviously. I always tweet them and link them in once they're posted. >> Clip and stick is the new buzzword. Clip 'em and stick 'em. Our new clipper tool, you've seen that. >> (laughs) Yes, I have. So it's really critical, though, that, you can, and remember, I'm like one of the oldest guys in the storage business, I'm 60 years old, I've been doing this 32 years, seven start-ups, EMC, IBM twice, Maxtor, Seagate, so I've done big and small. This is a sea change transformation in marketing. The key thing is you have to make it not stand on its own, integrate everything. PR, analyst relations, digital in everything you do, digital with shows and how you integrate the whole buyer's journey, and put it together. And people are using digital more and more, in fact I saw a survey from a biz school, 75% of people are looking at you digitally before they ever even call you up or call one of your resellers, if you use the channel, to talk about your products. That's a sea change. >> You guys do a great job with content marketing, hats off to you guys. All right, final question for you, take a minute to just quickly explain the relationship that IBM has with Cisco and the importance of it, specifically what you guys are doing with them, how you guys go to market to customers, and what's the impact to the customer. >> So, first of all, we have a very broad relationship with Cisco. Obviously I'm the CMO of the Storage Division, so I focus on storage, but several other divisions of IBM have powerful relationships. The IoT group, the Collaboration group. Cisco's one of our valued partners. We don't have networking products, so our Global Technology Services Division is one of the largest resellers of Cisco in the world, whether it be networking, servers, converged, what-have-you, so it's a strong, powerful relationship. From an end-user perspective, the importance is they know that the two companies are working together hand-in-glove. Sometimes you have two companies where you buy solutions from A and B, and A and B don't even talk to each other, and yes they both go to the PlugFest or the Compatibility Lab, but they don't really work together, and their technology doesn't work together. IBM and Cisco have gone well beyond that to make sure that we work closely together in all of the divisions, including the storage division, with our Cisco Validated Designs. 
And then lastly, whether it's delivered through the direct sales model or through the valued business partners that IBM and Cisco share, it's critical the end-users know, and the partners know, they're getting something that works together and doesn't just have the 'it works' option. It's tightly honed and finely integrated, whether it be storage or the IoT Division or the Collaboration Division, and Cisco is a heavy proponent of the IBM Security Division. >> Product teams work together? >> Yeah, all the product teams work together, trade APIs back and forth, not just doing a compatibility test, which everybody does, but we go well beyond that with IBM and Cisco together. >> And it's a key relationship for you guys? >> Key relationship for the Storage Division, as well as for many of the other divisions of IBM, it's a critical relationship with Cisco. >> All right, Eric Herzog, Chief Marketing Officer for the Storage Systems group at IBM. It's theCUBE live coverage in Barcelona, I'm John Furrier with Stu Miniman, back with more from Barcelona Cisco Live Europe after this short break. (upbeat techno music)
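As a sidebar to the storage-evolution point Herzog makes at the top of this segment, tape to disk arrays to flash to NVMe and NVMe over fabric, the short sketch below lines up rough, order-of-magnitude access latencies for those tiers. The figures are generic industry ballparks chosen purely for illustration, not IBM benchmarks or numbers quoted in this interview.

```python
# Rough, order-of-magnitude random-access latencies for the storage tiers
# named in the interview. Ballpark figures for illustration only.
TYPICAL_LATENCY_SECONDS = {
    "tape (robotic mount + seek)": 30.0,
    "hard drive array": 5e-3,       # a few milliseconds per random read
    "SAS/SATA flash array": 200e-6, # a couple hundred microseconds
    "NVMe flash array": 80e-6,      # tens of microseconds
    "NVMe over fabric": 100e-6,     # local NVMe plus a small network hop
}

def print_speedups(latencies: dict) -> None:
    """Show each tier's latency and its speedup relative to the slowest tier."""
    slowest = max(latencies.values())
    for name, seconds in sorted(latencies.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name:28s} ~{seconds * 1e6:>12,.0f} us  ({slowest / seconds:>9,.0f}x vs. tape)")

if __name__ == "__main__":
    print_speedups(TYPICAL_LATENCY_SECONDS)
```

The exact numbers matter less than the spread: each generation Herzog lists cuts latency by one to several orders of magnitude, which is why he frames NVMe and NVMe over fabric as the next step in the same evolution.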
Susan Bobholz, Intel | Super Computing 2017
>> [Announcer] From Denver, Colorado, it's the Cube covering Super Computing 17, brought to you by Intel. (techno music) >> Welcome back, everybody, Jeff Frick with the Cube. We are at Super Computing 2017 here in Denver, Colorado. 12,000 people talking about big iron, heavy lifting, stars, future mapping the brain, all kinds of big applications. We're here, first time ever for the Cube, great to be here. We're excited for our next guest. She's Susan Bobholz, she's the Fabric Alliance Manager for Omni-Path at Intel. Susan, welcome. >> Thank you. >> So what is Omni-Path, for those that don't know? >> Omni-Path is Intel's high performance fabric. What it does is it allows you to connect systems and make big, huge supercomputers. >> Okay, so for the royal three-headed horsemen of compute, store, and networking, you're really into data center networking, connecting the compute and the store. >> Exactly, correct, yes. >> Okay. How long has this product been around? >> We started shipping 18 months ago. >> Oh, so pretty new? >> Very new. >> Great, okay, and the target market, I'm guessing, has something to do with high performance computing. >> (laughing) Yes, our target market is high performance computing, but we're also seeing a lot of deployments in artificial intelligence now. >> Okay, and so what's different? Why did Intel feel compelled that they needed to come out with a new connectivity solution? >> We were getting people telling us they were concerned that the existing solutions were becoming too expensive and weren't going to scale into the future, so they said, Intel, can you do something about it, so we did. We made a couple of strategic acquisitions, we combined that with some of our own IP and came up with Omni-Path. Omni-Path is very much a proprietary protocol, but we use all the same software interfaces as InfiniBand, so your software applications just run. >> Okay, so to the machines it looks like InfiniBand? >> Yes. >> Just plug and play and run. >> Very much so, it's very similar. >> Okay, what are some of the attributes that make it so special? >> The reason it's really going very well is the price performance benefits, so we have equal to or better performance than InfiniBand today, but our switch technology is also 48 ports versus InfiniBand's 36 ports. So that means you can build denser clusters in less space with fewer cables, lower power, total cost of ownership goes down, and that's why people are buying it. >> Really fits into the data center strategy that Intel's executing very aggressively right now. >> Fits very nicely, absolutely, yes, very much so. >> Okay, awesome, so what are your thoughts here at the show? Any announcements, anything that you've seen that's of interest? >> Oh yeah, so, a couple things. We've really had good luck on the Top 500 list. 60% of the servers running 100 gigabit fabrics on the Top 500 list are connected via Omni-Path. >> What percentage again? >> 60%. >> 60? >> Yes. >> You've only been at it for 18 months? >> Yes, exactly. >> Impressive. >> Very, very good. We've got systems in the Top 10 already. Some of the Top 10 systems in the world are using Omni-Path. >> Is it rip and replace, do you find, or are these new systems that people are putting in? >> Yeah, these are new systems. Usually when somebody's got a system they like and that runs, they don't want to touch it. >> Right.
They have the money, the budget, they want to put in something new, and that's when they look to Omni-Path. >> Okay, so what are you working on now, what's kind of next for Omni-Path? >> What's next for us is we are announcing a new, higher-density switch technology, so for your director-class switches, which are the really big ones, rather than having 768 ports, you can now go to 1,152, and that means, again, denser topologies, lower power, less cabling, and it reduces your total cost of ownership. >> Right, I think you just answered my question, but I'm going to ask you anyway. >> (laughs) Okay. >> We talked a little bit before we turned the camera on about AI and some of the really unique challenges of AI, and that was part of the motivation behind this product. So what are some of the special attributes of AI that really require this type of connectivity? >> It's very much what you see even with high performance computing. You need low latency, you need high bandwidth. It's the same technologies, and in fact, in a lot of cases, it's the same systems, or sometimes they can be running a software load that is HPC focused, and sometimes they're running a software load that is artificial intelligence focused. But they have the same exact needs. >> Okay. >> Do it fast, do it quick. >> Right, right, that's why I said you already answered the question. Higher density, more computing, more storing, faster. >> Exactly, right, exactly. >> And price performance. All right, good, so if we come back a year from now for Super Computing 2018, which I guess is in Dallas in November, they just announced. What are we going to be talking about, what are some of your priorities and the team's priorities as you look ahead to 2018? >> Oh, we're continuing to advance the Omni-Path technology with software and additional capabilities moving forward, so we're hoping to have some really cool announcements next year. >> All right, well, we'll look forward to it, and we'll see you in Dallas in a year. >> Thanks, Cube. >> All right, she's Susan, and I'm Jeff. You're watching the Cube from Super Computing 2017. Thanks for watching, see ya next time. (techno music)
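Susan's two port-count figures, 48-port edge switches versus InfiniBand's 36, and director chassis growing from 768 to 1,152 ports, translate directly into switch and chassis counts for a cluster of a given size. The sketch below is a back-of-the-envelope illustration of that arithmetic; the cluster sizes are hypothetical, the only inputs taken from the interview are the port counts, and standard two-level fat-tree math stands in for whatever topology a real deployment would use.

```python
import math

def two_level_fat_tree(radix: int, hosts: int) -> dict:
    """Rough leaf/spine switch counts for a non-blocking two-level fat-tree.

    With switch radix `radix`, half of each leaf's ports face hosts and half
    face spines, so the topology tops out at roughly radix * radix / 2 hosts.
    Illustration only; real designs also weigh oversubscription and resiliency.
    """
    down = radix // 2                          # host-facing ports per leaf switch
    leaves = math.ceil(hosts / down)           # leaf switches needed
    spines = math.ceil(leaves * down / radix)  # spine switches to terminate uplinks
    return {"radix": radix, "max_hosts": radix * down,
            "leaves": leaves, "spines": spines, "total": leaves + spines}

def directors_needed(hosts: int, ports_per_chassis: int) -> int:
    """Director chassis needed if every host plugs straight into a director."""
    return math.ceil(hosts / ports_per_chassis)

if __name__ == "__main__":
    # Edge-switch comparison: the 48-port radix quoted for Omni-Path versus a
    # 36-port radix, for a hypothetical 600-node cluster.
    for radix in (48, 36):
        print(two_level_fat_tree(radix, hosts=600))

    # Director-class comparison: 768 ports per chassis versus the announced
    # 1,152, for a hypothetical 2,000-node cluster.
    for ports in (768, 1152):
        print(f"{ports} ports/chassis -> {directors_needed(2000, ports)} directors for 2,000 hosts")
```

On those assumptions the 48-port radix covers the same 600 nodes with roughly a quarter fewer switches (and correspondingly fewer cables) than the 36-port radix, which is the density and total-cost-of-ownership point being made above.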