Matt Burr, Pure Storage & Rob Ober, NVIDIA | Pure Storage Accelerate 2018
>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE! Covering Pure Storage Accelerate 2018, brought to you by Pure Storage.
>> Welcome back to theCUBE's continuing coverage of Pure Storage Accelerate 2018. I'm Lisa Martin, sporting the clong, and apparently this symbol actually has a name, the clong. I learned that in the last half an hour. I know, who knew?
>> Really?
>> Yes! Is that a C or a K?
>> Is that a Prince orientation or, what is that?
>> Yes, I'm formerly known as.
>> Nice.
>> Who of course played at this venue, as did Roger Daltrey, and The Who.
>> And I might have been staff for one of those shows.
>> You could have been, yeah. Could I show you to your seat?
>> Maybe you're performing later. You might not even know this. We have a couple of guests joining us. We've got Matt Burr, the GM of FlashBlade, and Rob Ober, the Chief Platform Architect at NVIDIA. Guys, welcome to theCUBE.
>> Hi.
>> Thank you.
>> Dave: Thanks for coming on.
>> So, lots of excitement going on this morning. You guys announced Pure and NVIDIA just a couple of months ago, a partnership with AIRI. Talk to us about AIRI, what is it? How is it going to help organizations in any industry really democratize AI?
>> Well, AIRI, so AIRI is something that we announced, the AIRI Mini, today here at Accelerate 2018. AIRI was originally announced at the GTC, the GPU Technology Conference, for NVIDIA back in March. What it is is, it essentially brings NVIDIA's DGX servers, connected with either Arista or Cisco switches, down to the Pure Storage FlashBlade. So this is something that sits in less than half a rack in the data center, that replaces something that was probably 25 or 50 racks of compute and storage. So, I think Rob and I like to talk about it as kind of a great leap forward in terms of compute potential.
>> Absolutely, yeah. It's an AI supercomputer in a half rack.
>> So one of the things this morning, that we saw during the general session, that Charlie talked about, and I think Matt (mumbles) kind of a really brief history of the last 10 to 20 years in storage: why is modern external storage essential for AI?
>> Well, Rob, you want that one, or you want me to take it? Coming from the non-storage guy, maybe? (both laugh)
>> Go ahead.
>> So, when you look at the structure of GPUs, and servers in general, we're talking about massively parallel compute, right? We're now taking not just tens of thousands of cores but even more cores, and we're actually finding a path for them to communicate with storage that is also massively parallel. Storage has traditionally been something that's been kind of serial in nature. Legacy storage has always waited for the next operation to happen. You actually want to get things that are parallel, so that you can have parallel processing both at the compute tier and parallel processing at the storage tier. But you need to have big network bandwidth, which was what Charlie was alluding to, when Charlie said--
>> Lisa: You like his stool?
>> When Charlie was, one of his stools, or one of the legs of his stool, was talking about: 20 years ago, or 10 years ago, we were at 10 gig networks, and the emergence of 100 gig networks has really made the data flow possible.
>> So I wonder if we can unpack that. We talked a little bit to Rob Lee about this, the infrastructure for AI, and I wonder if we can go deeper. So take the three legs of the stool, and you can imagine this massively parallel compute-storage-networking grid, if you will. One of our guys calls it uni-grid, not crazy about the name, but this idea of alternative processing, which is your business, really spanning this scaled-out architecture, not trying to stuff as much function on a die as possible, really is taking hold. But how does that infrastructure for AI evolve from an architect's perspective?
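[As an aside, Matt's serial-versus-parallel storage point can be sketched in a few lines of Python. This is purely illustrative, not Pure or NVIDIA code: the "legacy" pattern services one read at a time and waits for each to finish, while the parallel pattern keeps many reads in flight at once, matching the parallelism of the compute consuming the data.]

```python
from concurrent.futures import ThreadPoolExecutor

# 64 fake storage blocks standing in for a data set on an array.
DATA = {i: bytes([i]) * 4 for i in range(64)}

def read_block(block_id):
    """Simulate fetching one block from storage."""
    return DATA[block_id]

def serial_read(block_ids):
    # Legacy pattern: each read waits for the previous one to complete.
    return [read_block(b) for b in block_ids]

def parallel_read(block_ids, workers=8):
    # Parallel pattern: many reads in flight simultaneously.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(read_block, block_ids))

# Both return the same data; the difference is how many requests
# the storage layer can service concurrently.
```

[With real storage latencies, the serial version's wall-clock time grows with the number of blocks, while the parallel version's grows roughly with blocks divided by in-flight requests, which is why parallel storage pairs naturally with massively parallel compute.]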
>> The overall infrastructure? I mean, it is incredibly data intensive. A typical training set is terabytes, in the extreme it's petabytes, for a single run, and you will typically go through that data set again and again and again in a training run, (mumbles) and so you have one massive set that needs to go to multiple compute engines. The reason it's multiple compute engines is people are discovering that as they scale up the infrastructure, you get pretty much linear improvements, and you get a time-to-solution benefit. Some of the large data centers will run a training run for literally a month, and if you start scaling it out, even on these incredibly powerful things, you can bring time to solution down; you can have meaningful results much more quickly.
>> And can you give us a sort of practical application of that?
>> Yeah, there's a large hedge fund based in the U.K. called Man AHL. They're a systems-based quantitative trading firm, and what that means is, humans really aren't doing a lot of the trading; machines are doing the vast majority, if not all, of the trading. What the humans are doing is they're essentially quantitative analysts. The number of simulations that they can run is directly correlated with the number of trades that their machines can make. And so the more simulations you can make, the more trades you can make. The shorter your simulation time is, the more simulations you can run. So we're talking about, in a sort of meta context, that concept applies to everything from retail and understanding, if you're a grocery store, what products are not on my shelves at a given time. In healthcare, discovering new forms of pathologies for cancer treatments. Financial services we touched on, but even broader, right down into manufacturing, right?
Looking at, what are my defect rates on my lines? And if it used to take me a week to understand the efficiency of my assembly line, if I can get that down to four hours and make adjustments in real time, that's more than just productivity, it's progress.
>> Okay so, I wonder if we can talk about how you guys see AI emerging in the marketplace. You just gave an example. We were talking earlier again to Rob Lee about, it seems today to be applied in narrow use cases, and maybe that's going to be the norm, whether it's autonomous vehicles or facial recognition, natural language processing. How do you guys see that playing out? Will it be this kind of ubiquitous horizontal layer, or do you think the adoption is going to remain along those sort of individual lines, if you will?
>> At the extreme, like when you really look out at the future, let me start by saying that my background is processor architecture. I've worked in computer science, where the whole thing is to understand problems and create the platforms for those things. What really excited me and motivated me about AI deep learning is that it is changing computer science. It's just turning it on its head. Instead of explicitly programming, it's now implicitly programming, based on the data you feed it. And this changes everything, and it can be applied to almost any use case. So I think that eventually it's going to be applied in almost any area that we use computing today.
>> Dave: So another way of asking that question is, how far can we take machine intelligence? And your answer is pretty far, pretty far.
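[Rob's "implicitly programming" point can be made concrete with a toy example. This is a hedged sketch with made-up data, not anything from the interview: rather than hand-coding the rule y = 2x + 1, the program recovers it from example pairs by gradient descent, which is the essence of learning from data instead of explicit programming.]

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Learn a slope and intercept from examples by gradient
    descent on mean squared error -- no rule is ever written down."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradients of mean((w*x + b - y)^2) with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]  # generated by the unstated rule y = 2x + 1
w, b = fit_line(xs, ys)
```

[Deep learning does the same thing at vastly larger scale, with millions of parameters instead of two, which is why the training sets and compute described earlier get so large.]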
So as a processor architect, obviously this is very memory intensive. I was at the Micron financial analyst meeting earlier this week, listening to what they were saying about these emerging memories. You've got DRAM, and obviously you have flash, people are excited about 3D XPoint, I heard somebody mention 3D XPoint on the stage today. What do you see there in terms of memory architectures and how they're evolving, and what do you need as a systems architect?
>> I need it all. (all talking at once) No, if I could build a GPU with more than a terabyte per second of bandwidth and more than a terabyte of capacity, I could use it today. I can't build that, I can't build that yet. But I need, it's a different stool: I need teraflops, I need memory bandwidth, and I need memory capacity. And really we just push to the limit. Different types of neural nets, different types of problems, will stress different things. They'll stress the capacity, the bandwidth, or the actual compute.
>> This makes the data warehousing problem seem trivial, but do you see, you know what I mean? Data warehousing was always a chase, chasing the chips, a snake swallowing a basketball I called it. But do you see a day that these problems are going to be solved architecturally? We talk about Moore's Law moderating, or is this going to be this perpetual race that we're never going to get to the end of?
>> So let me put things in perspective first. It's easy to forget that the big bang moment for AI and deep learning was the summer of 2012, so slightly less than six years ago. That's when AlexNet hit the scene and people went wow, this is a whole new approach, this is amazing. So a little less than six years in. It is a very young, it's a young area, it is in incredible growth; the state of the art is literally changing month by month right now. So it's going to continue on for a while, and we're just going to keep growing and evolving.
Maybe five years, maybe 10 years, things will stabilize, but it's an exciting time right now.
>> Very hard to predict, isn't it?
>> It is.
>> I mean, who would've thought that Alexa would be such a dominant factor in voice recognition, or that a bunch of cats on the internet would lead to facial recognition? I wonder if you guys can comment, right? I mean.
>> Strange beginnings. (all laughing)
>> But, and I wonder if I can ask you guys about the black box challenge. I've heard some companies talk about how we're going to white-box everything, make it open, but the black box problem, meaning if I have to describe, and we may have talked about this, how I know that it's a dog. I struggle to do that, but a machine can do that. I don't know how it does it, it probably can't tell me how it does it, but it knows, with a high degree of accuracy. Is that black box phenomenon a problem, or do we just have to get over it?
>> Up to you.
>> I don't think it's a problem. I know that mathematicians, who are friends, it drives them crazy, because they can't tell you why it's working. So it's an intellectual problem that people just need to get over. But it's the way our brains work, right? And our brains work pretty well. There are certain areas, I think, where for a while there will be certain laws in place where if you can't prove the exact algorithm, you can't use it, but by and large, I think the industry's going to get over it pretty fast.
>> I would totally agree, yeah.
>> You guys are optimists about the future. I mean, you're not up there talking about how jobs are going to go away, that's not something that you guys are worried about, and generally, we're not either. However, machine intelligence, AI, whatever you want to call it, it is very disruptive. There's no question about it. So I've got to ask you guys a few fun questions.
Do you think large retail stores are going to, I mean, nothing's in the extreme, but do you think they'll generally go away?
>> Do I think large retail stores will generally go away? When I think about retail, I think about grocery stores, and the things that are going to go away, I'd like to see standing in line go away. I would like my customer experience to get better. I don't believe that 10 years from now we're all going to live inside our houses and communicate over the internet and text, and half of that be with chat bots, I just don't believe that's going to happen. I think the Amazon effect has a long way to go. I just ordered a pool thermometer from Amazon the other day, right? I'm getting old, I ordered readers from Amazon the other day, right? So I kind of think it's that spur-of-the-moment item that you're going to buy. Because even in my own personal habits, like, I'm not buying shoes and returning them, and waiting, five to ten times, cycling, to get there. You still want that experience of going to the store. Where I think retail will improve is understanding that I'm on my way to their store, and improving the experience once I get there. So, I think you'll see, they need to see the Amazon effect that's going to happen, but what you'll see is technology being employed to reach a place where my end user experience improves such that I want to continue to go there.
>> Do you think owning your own vehicle, and driving your own vehicle, will be the exception, rather than the norm?
>> It pains me to say this, 'cause I love driving, but I think you're right. I mean, it's going to take a while, it's going to take a long time, but I think inevitably it's just too convenient. Things are too congested, and by freeing up autonomous cars, things that'll go park themselves, whatever, I think it's inevitable.
>> Will machines make better diagnoses than doctors?
>> Matt: Oh, I mean, that's not even a question. Absolutely.
>> They already do.
>> Do you think banks, traditional banks, will keep control of the payment systems?
>> That's a good one, I haven't thought about--
>> Yeah, I'm not sure that's an AI-related thing, maybe more of a blockchain thing, but, it's possible.
>> Blockchain and AI, kind of cousins.
>> Yeah, they are, they are actually.
>> I fear a world, though, where we actually end up like WALL-E in the movie, and everybody's on these floating chaise lounges.
>> Yeah, let's not go there.
>> Eating and drinking. No, but I'm just wondering, you talked about, Matt, the different types of industries that really converge here. Do you see maybe the consumer world, with our expectation that we can order anything on Amazon, from a thermometer to a pair of glasses to shoes, as driving other industries to kind of follow what we as consumers have come to expect?
>> Absolutely, no question. I mean, consumer drives everything, right? All-flash arrays were driven by, you have your phone there, right? The consumerization of that device was what drove Toshiba and all the other fab manufacturers to build more NAND flash, which is what commoditized NAND flash, which is what brought us faster systems. These things all build on each other, and from a consumer perspective, there are so many things that are inefficient in our world today, right? Like, let's just think about your last call center experience. If you're a normal human being--
>> I prefer not to, but okay.
>> Yeah, you said it, you prefer not to, right? My next comment was going to be, most people's call center experiences aren't that good.
But what if the call center technology had the ability to analyze your voice and understand your intonation and your inflection, and that call center employee was being given information to react to what you were saying on the call, such that they either immediately escalated that call without you asking, or they were sent down a decision path which brought you to a resolution that said: we know that 62% of the time, if we offer this person a free month of this, that person is going to go away a happy customer, and rate this call 10 out of 10. That is the type of thing that's going to improve with voice recognition, and all of the voice analysis, and all this.
>> And that really gets into how far we can take machine intelligence, the things that humans can do that machines can't, and that list changes every year. The gap gets narrower and narrower, and that's a great example.
>> And I think one of the things, going back to whether stores will continue being there or not, one of the biggest benefits of AI is recommendation, right? So you can consider it usurious maybe, or on the other hand it's great service, where something like an Amazon is able to say, I've learned about you, I've learned about what people are looking for, and you're asking for this, but I would suggest something else, and you look at that and you go, "Yeah, that's exactly what I'm looking for." I think that's really where, in the sales cycle, that's really where it gets up there.
>> Can machines stop fake news? That's what I want to know.
>> Probably.
>> Lisa: To be continued.
>> People are working on that.
>> They are. There's a lot, I mean--
>> That's a big use case.
>> It is not a solved problem, but there's a lot of energy going into that.
>> I'd take that before I take the floating WALL-E chaise lounges, right? Deal.
>> What if it was just for you?
What if it was just a floating chaise lounge, and it wasn't everybody? Then it would be alright, right?
>> Not for me. (both laughing)
>> Matt and Rob, thanks so much for stopping by and sharing some of your insights, and we hope you have a great rest of the day at the conference.
>> Great, thank you very much. Thanks for having us.
>> For Dave Vellante, I'm Lisa Martin. We're live at Pure Storage Accelerate 2018 at the Bill Graham Civic Auditorium. Stick around, we'll be right back after a break with our next guest. (electronic music)