
Dr. Eng Lim Goh, Senior Vice President & CTO, High Performance Computing & AI, HPE | HPE Discover 2021


 

(upbeat music) >> Welcome back to HPE Discover 2021, theCUBE's continuous virtual coverage of HPE's annual customer event. My name is Dave Vellante, and we're going to dive into the intersection of high-performance computing, data and AI with Dr. Eng Lim Goh, who's a Senior Vice President and CTO for AI at Hewlett Packard Enterprise. Dr. Goh, great to see you again. Welcome back to theCUBE. >> Hello, Dave, great to talk to you again. >> You might remember last year we talked a lot about swarm intelligence and how AI is evolving. Of course, you hosted the Day 2 keynote here at Discover. You talked about thriving in the age of insight and how to craft a data-centric strategy, and you addressed one of the biggest problems I think organizations face with data: data is plentiful, but insights are harder to come by. >> Yeah. >> And you really dug into some great examples in retail, banking, medicine, healthcare and media. But stepping back a little bit, zooming out on Discover '21, what do you make of the event so far and what are some of your big takeaways? >> Well, we started with the insightful statement, right? Data is everywhere, but we lack the insight. That's the main reason why Antonio, on day one, focused on the fact that we are now in the age of insight, and on how to thrive in this new age. What I then did in the Day 2 keynote, following Antonio, was talk about the challenges we need to overcome in order to thrive in this new age. >> So maybe we could talk a little bit about some of the things you took away. I'm specifically interested in the barriers to achieving insight. Customers are drowning in data. What do you hear from customers? What were your takeaways from the ones you talked about today? >> Very pertinent question, Dave. There are two challenges I spoke about that we need to overcome in order to thrive in this new age. The first one is the current challenge, and that current challenge, as stated, is the barriers to insight when we are awash with data. What are those barriers? In the Day 2 keynote I spoke about three main areas that we hear about from customers. The first barrier is that, with many of our customers, data is siloed. In a big corporation, you've got data siloed by sales, finance, engineering, manufacturing, supply chain and so on. And there's a major effort ongoing in many corporations to build a federation layer above all those silos, so that the applications built on top can have access to all the different silos of data and become more intelligent. So that was the first barrier we spoke about. The second barrier we see amongst our customers is that data is raw and dispersed when it is stored. It's tough to get at and tough to get value out of. In that case, I used the example of the May 6, 2010 event, where the stock market dropped a trillion dollars in a matter of minutes. Those who are financially attuned know about this incident, but it is not the only one.
There are many of them out there. And for that particular May 6 event, it took a long time to get insight. For months we had no insight as to what happened or why it happened. There were many other incidents like this, and the regulators were looking for that one rule that could mitigate many of them. One of our customers decided to take the hard road and go with the tough data, because data is raw and dispersed. They went into all the different feeds of financial transaction information, took the tough road, and analyzed that data. It took a long time to assemble, and they discovered that there was quote stuffing: people were sending in a lot of trades and then canceling them almost immediately, in order to manipulate the market. And why didn't we see it immediately? Well, the reason is that the processed reports everybody sees had a rule in them saying that trades of less than a hundred shares don't need to be reported. So what people did was send a lot of trades of less than a hundred shares, to fly under the radar while doing this manipulation. So that is the second barrier: data can be raw and dispersed, and sometimes you just have to take the hard road to get insight. And this is one great example. The last barrier has to do with the fact that sometimes, when you start a project to get answers and insight, you realize the data is all around you, but you don't seem to find the right data to get what you need. Here we have three quick examples of customers. One was a great example: they were trying to build a machine language translator between two languages. To do that, they needed hundreds of millions of word pairs, one language compared with the corresponding other. They asked, how am I going to get all these word pairs? Someone creative thought of a willing source, and a huge one: it was the United Nations. So sometimes you think you don't have the right data, but there might be another source, a willing one, that could give you that data. The second example is that sometimes you may just have to generate the data. An interesting one: we have an autonomous car customer that collects all this data from their cars, massive amounts of data from lots of sensors. But sometimes, even after collection, they don't have the data they need. For example, they may have collected data with the car in fine weather, driving on the highway in rain, and also in snow, but never had the opportunity to collect data on the car in hail, because that's a rare occurrence. So instead of waiting for a time when the car can drive in hail, they built a simulation from the data collected in snow and simulated hail. So these are some of the examples of customers working to overcome barriers. For barriers associated with data silos, they federated. For barriers associated with data that's tough to get at, they just took the hard road. And thirdly, sometimes you just have to be creative to get the right data you need. >> Wow! I tell you, I have about a hundred questions based on what you just said. (Dave chuckles) And that's a great example, the Flash Crash.
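(A rough sketch of the kind of "hard road" analysis described above: flagging symbols where most small-lot orders are canceled almost immediately. Everything here, the record layout, the 0.5-second cancel window and the 90% cancel-ratio threshold, is an illustrative assumption, not the customer's actual pipeline.)

    from collections import defaultdict

    def flag_quote_stuffing(orders, cancel_window=0.5, small_lot=100,
                            min_orders=1000, cancel_ratio=0.9):
        """Flag symbols where most sub-100-share orders are canceled within cancel_window seconds.

        orders: iterable of (symbol, shares, submit_time, cancel_time) tuples,
        with times in seconds and cancel_time = None if the order was never canceled.
        """
        stats = defaultdict(lambda: [0, 0])  # symbol -> [small-lot orders seen, fast cancels]
        for symbol, shares, submitted, canceled in orders:
            if shares >= small_lot:
                continue  # larger trades showed up in the reports; the pattern hid below the threshold
            stats[symbol][0] += 1
            if canceled is not None and canceled - submitted <= cancel_window:
                stats[symbol][1] += 1
        suspicious = []
        for symbol, (total, fast) in stats.items():
            if total >= min_orders and fast / total >= cancel_ratio:
                suspicious.append((symbol, total, fast / total))
        return sorted(suspicious, key=lambda row: -row[2])

Assembling and normalizing the raw feeds is the hard part alluded to above; the detection logic itself is comparatively simple.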
In fact, Michael Lewis wrote about this in his book, Flash Boys. Essentially it was high-frequency traders trying to front-run the market by sending in small block trades (Dave chuckles) to get orders front-ended. And they chalked it up to a glitch. Like you said, for months nobody really knew what it was. So technology got us into this problem. (Dave chuckles) I guess my question is, can technology help us get out of the problem? And is that maybe where AI fits in? >> Yes, yes. In fact, a lot of analytics work went into going back to the raw data, which is highly dispersed across different sources, and assembling it to see if you can find a material trend. You can see lots of trends. When humans look at things, we tend to see patterns in clouds, so sometimes you need to apply statistical analysis and math to be sure that what the model is seeing is real. That's one area. The second area is that there are times when you just need to go through that tough approach to find the answer. The issue that comes to mind is that humans put in the rules to decide what goes into a report that everybody sees. That was the case here, before the change in the rules. By the way, after the discovery, the authorities changed the rules, and now trades of any size have to be reported. >> Right. >> But before that, as I said earlier, trades under a hundred shares did not need to be reported. So sometimes you just have to understand that reports were designed by humans, and for understandable reasons; they probably had various reasons not to put everything in there, so that people could still read the report in a reasonable amount of time. But we need to understand that the rules behind the reports we read were put in by humans, and as such, there are times we just need to go back to the raw data. >> I want to ask you... >> Even though it's going to be tough, yeah. >> Yeah, I want to ask you a question about AI, since obviously it's in your title and it's something you know a lot about. I'm going to make a statement; you tell me if it's on point or off point. It seems that most of the AI going on in the enterprise is modeling, data science applied to troves of data. But there's also a lot of AI going on in consumer, whether it's fingerprint technology or facial recognition or natural language processing. So, a two-part question: will the consumer market, as it so often has, inform the enterprise? That's the first part. And then, will there be a shift from modeling, if you will, to more, you mentioned the autonomous vehicles, more AI inferencing in real time, especially at the Edge? Could you help us understand that better? >> Yeah, this is a great question. There are three stages, to simplify; it's probably more sophisticated than that, but let's simplify it to three stages of building an AI system that ultimately can make a prediction, or assist you in decision-making toward an outcome. You start with the data, massive amounts of data, and you have to decide what to feed the machine with. So you feed the machine with this massive chunk of data, and the machine starts to evolve a model based on all the data it's seeing.
It evolves to the point where, using a test set of data that you have kept aside separately and that you know the answers for, you test the model after you've trained it with all that data, to see whether its prediction accuracy is high enough. And once you are satisfied with it, you then deploy the model to make the decision, and that's the inference. So a lot of times, depending on what we are focusing on in data science, we are working hard on assembling the right data to feed the machine with; that's the data preparation and organization work. After that you build your models; you have to pick the right models for the decisions and predictions you need to make. You pick the right models and then you start feeding the data into them. Sometimes you pick one model and the prediction isn't that robust: it is good, but it is not consistent. What you do then is try another model, so sometimes you keep trying different models until you get the right kind, one that gives you robust decision-making and prediction. After that, if it's tested well, QA'd, you then take that model and deploy it at the Edge. And at the Edge you are essentially just taking new data, applying it to the model you have trained, and that model gives you a prediction or a decision. So it is these three stages. But more and more, and your question reminds me of this, people are thinking, as the Edge becomes more and more powerful: can you also do learning at the Edge? >> Right. >> That's the reason we spoke about swarm learning the last time, learning at the Edge as a swarm, because individually the devices may not have enough power to do so, but as a swarm, they may. >> Is that learning from the Edge or learning at the Edge? In other words... >> Yes. >> Yeah. You do understand my question. >> Yes. >> Yeah. (Dave chuckles) >> That's a great question. The quick answer is learning at the Edge, and also from the Edge, but the main goal is to learn at the Edge, so that you don't have to move the data the Edge sees back to the cloud or the core to do the learning. That's one of the main reasons you want to learn at the Edge: so that you don't have to send all that data back and assemble it from all the different Edge devices back at the cloud site to do the learning. You can keep the data at the Edge and learn at that point. >> And then maybe only selectively send some back. >> Yeah. >> The autonomous vehicle example you gave is great, because maybe they're only persisting data when there's inclement weather, or when a deer runs across the front. Maybe they do that, send that smaller data set back, and maybe that's where the modeling is done, but the rest can be done at the Edge. It's a new world that's coming. Let me ask you a question: is there a limit to what data should be collected and how it should be collected? >> That's a great question again. Today is full of these insightful questions. (Dr. Eng chuckles) That actually touches on the second challenge: how do we thrive in this new age of insight? The second challenge is our future challenge: what do we do for our future?
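(To make the swarm idea above a bit more concrete, here is a minimal toy sketch: each edge node refines a model on data that never leaves it, and only the model weights are exchanged and averaged. Real swarm learning coordinates peers without a central server and handles far more; the linear model, learning rate and round counts below are purely illustrative assumptions.)

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One edge node refines a linear model on its own local data."""
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)   # gradient of mean-squared error
            w -= lr * grad
        return w

    def swarm_round(global_weights, node_datasets):
        """Each node trains locally; only the weights are shared and averaged."""
        local = [local_update(global_weights, X, y) for X, y in node_datasets]
        return np.mean(local, axis=0)

    # Toy example: three 'edge nodes', each holding its own private data.
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    nodes = []
    for _ in range(3):
        X = rng.normal(size=(200, 2))
        y = X @ true_w + 0.1 * rng.normal(size=200)
        nodes.append((X, y))

    w = np.zeros(2)
    for _ in range(20):        # twenty swarm rounds
        w = swarm_round(w, nodes)
    print(w)                   # converges toward [2.0, -1.0] without pooling the raw data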
And the statement we make there is that we have to focus on collecting data strategically for the future of our enterprise. Within that, I talked about what to collect, when to organize it as you collect it, and where your data will be going forward as you collect it. So: what, when and where. On what data to collect, that was the question you asked, and it's a question that different industries have to ask themselves, because it will vary. Let me use your autonomous car example. We do have a customer collecting massive amounts of data; we're talking about 10 petabytes a day from a fleet of their cars. And these are not production autonomous cars; these are training cars, collecting data so they can train and eventually deploy commercial cars. As a fleet, these data collection cars gather 10 petabytes a day. And when they came to us to build a storage system to store all of that data, they realized they couldn't afford to store all of it. Now here comes the dilemma: after I've spent so much effort building all these cars and sensors and collecting data, I now have to decide what to delete. That's a dilemma. In working with them on this process of trimming down what they collected, I'm constantly reminded of the 60s and 70s, when we called a large part of our DNA junk DNA. >> Yeah. (Dave chuckles) >> Today we realize that a large part of what we called junk has valuable function. They are not genes, but they regulate the function of genes. So what was junk yesterday could be valuable today, and what's junk today could be valuable tomorrow. So there's this tension going on between deciding you can't afford to store everything you can get your hands on, and worrying, on the other hand, that you'll ignore the wrong ones. You can see this tension in our customers, and it depends on the industry. In healthcare they say, I have no choice, I want it all. One very insightful point, brought up by one healthcare provider, that really touched me was: of course we care a lot about the people we are caring for, but who cares for the people we are not caring for? How do we find them? >> Uh-huh. >> Right, and for that they don't just need the data they collect from their patients. They also need to reach out to outside data so they can figure out who they are not caring for. So they want it all. So I asked them, what do you do about funding if you want it all? They say they have no choice but to figure out a way to fund it, and perhaps monetization of what they have now is the way to fund that. Of course, they also come back to us, rightfully, to work out a way to help them build such a system. So that's healthcare. If you go to other industries like banking, they say they can afford to keep it all. >> Yeah. >> But they are regulated, same as healthcare, as to privacy and such. So there are many examples of different industries with different needs and different approaches to what they collect. But there is this constant tension: on one hand, you may decide you don't want to fund storing all that you could store.
But on the other hand, if you decide not to store some of it, maybe that data becomes highly valuable in the future. (Dr. Eng chuckles) You worry. >> Well, we can make some assumptions about the future, can't we? We know there's going to be a lot more data than we've ever seen before. We know, notwithstanding supply constraints in things like NAND, that the price of storage is going to continue to decline. We also know, and not a lot of people are really talking about this, that processing power, even if Moore's law is waning, is actually increasing when you combine CPUs, NPUs, GPUs, accelerators and so forth. So when you think about these use cases at the Edge, you're going to have much more processing power, cheaper storage and less expensive processing. As an AI practitioner, what can you do with that? >> Yeah, that's another insightful question that we touched on in our keynote, and it goes to the where: where will your data be? We have one estimate that says that by next year there will be 55 billion connected devices out there. 55 billion, right? What's the population of the world? On the order of 10 billion. But this is 55 billion. (Dave chuckles) And most of them can collect data. So what do you do? The amount of data that's going to come in is going to far exceed the drop in storage costs and the increase in compute power. >> Right. >> So what's the answer? Even with the drop in price and the increase in bandwidth, it will overwhelm 5G, given the 55 billion devices collecting. So the answer must be that there needs to be a balance, because you may not be able to afford to bring all of that data from the 55 billion devices back to a central core, or a bunch of central cores. Firstly, the bandwidth, even with 5G, will still be too expensive given the number of devices out there. And even with storage costs dropping, it will still be too expensive to try and store it all. So the answer must be, at least to mitigate this, to leave most of the data out there and only send back the pertinent pieces, as you said before. But then, if you did that, how are we going to do machine learning at the core and the cloud site if you don't have all the data? You want rich data to train with. Sometimes you want to mix the positive type data and the negative type data so you can train the machine in a more balanced way. So eventually, as we move forward with this huge number of devices at the Edge, the answer must be to do machine learning at the Edge. Today we don't even have the power: the Edge is typically characterized by lower energy capability and therefore lower compute power. But soon, even with low energy, they will be able to do more, with compute power improving in energy efficiency. So, learning at the Edge: today we do inference at the Edge. We take the data, train the model, deploy it, and do inference there. That's what we do today. But more and more, I believe, given the massive amount of data at the Edge, you have to start doing machine learning at the Edge.
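(Some rough arithmetic behind that tension, using the 10 petabytes a day and the 55 billion devices quoted above; the cost-per-terabyte figure and the per-device trickle are illustrative assumptions only.)

    PB = 10**15                                        # bytes in a petabyte (decimal)

    fleet_per_day = 10 * PB                            # the training fleet cited above
    fleet_per_year = fleet_per_day * 365
    print(fleet_per_year / 10**18)                     # ~3.65 exabytes a year from one fleet

    cost_per_tb_year = 20                              # assumed all-in $/TB/year, purely illustrative
    print(fleet_per_year / 10**12 * cost_per_tb_year)  # ~$73M a year just to keep one year of it

    devices = 55 * 10**9                               # connected-device estimate cited above
    per_device_per_day = 10**6                         # assume a tiny 1 MB per device per day
    print(devices * per_device_per_day / 10**18)       # still ~0.055 exabytes arriving every day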
And when you don't have enough power at a single device, you aggregate the compute power of multiple devices into a swarm and learn as a swarm, yeah. >> Oh, interesting. So now of course, if I were a fly on the wall in the HPE board meeting, I'd say, okay, HPE is a leading provider of compute. How do you take advantage of that? I know it's the future, but you must be thinking about participating in those markets. I know today you have Edgeline and other products, but it seems to me that it's not the general-purpose computing we've known in the past; it's a new type of specialized computing. How are you thinking about participating in that opportunity for customers? >> The world will have to have a balance. Today the more common mode is to collect the data from the Edge and train at some centralized location, or a number of centralized locations. Going forward, given the proliferation of Edge devices, we'll need a balance; we need both. We need capability at the cloud site, and it has to be hybrid. And then we need capability on the Edge side, where we need to build systems that on one hand are Edge-adapted, meaning environmentally adapted, because Edge devices are often deployed outside. They need to be packaging-adapted and also power-adapted, because typically many of these devices are battery powered. So you have to build systems that adapt to that. But at the same time, they must not be custom; that's my belief. They must use standard processors and standard operating systems so that they can run a rich set of applications. So yes, that's also the insight behind what Antonio announced in 2018: $4 billion invested over the following four years to strengthen our Edge portfolio. >> Uh-huh. >> Edge product lines. >> Right. >> Uh-huh, Edge solutions. >> Dr. Goh, I could go on for hours with you. You're just such a great guest. Let's close. What are you most excited about in the future of, certainly HPE, but the industry in general? >> I think the excitement is the customers: the diversity of customers and the diversity in the way they have approached different problems of data strategy. So the excitement is around data strategy. The statement made for us was profound: Antonio said we are in the age of insight, powered by data. That's the first line. The line that comes after that is that, as such, we are becoming more and more data-centric, with data as the currency. Now the next step is even more profound: we are going as far as saying that data should not be treated as cost anymore, but instead as an investment in a new asset class called data, with value on our balance sheet. This is a step change in thinking that is going to change the way we look at data and the way we value it. (Dr. Eng chuckles) This is the exciting thing, because for me, as a CTO for AI, a machine is only as intelligent as the data you feed it with. Data is the source of the machine learning to be intelligent. (Dr. Eng chuckles) So when people start to value data and say that it is an investment when we collect it, that is very positive for AI, because an AI system gets more intelligent when it has huge amounts of data and a diversity of data. >> Yeah.
>> So it would be great if the community values data. >> Well, you certainly see it in the valuations of many companies these days, and I think increasingly you see it on the income statement, with data products and people monetizing data services. And maybe eventually you'll see it on the balance sheet. I know Doug Laney, when he was at Gartner Group, wrote a book about this, and a lot of people are thinking about it. That's a big change, isn't it? >> Yeah, yeah. >> Dr. Goh... (Dave chuckles) >> The question is the process and methods of valuation, right? >> Yeah, right. >> But I believe we will get there. We need to get started, and then we'll get there, I believe. >> Dr. Goh, it's always my pleasure. >> And then AI will benefit greatly from it. >> Oh, yeah, no doubt. People will better understand how to align some of these technology investments. Dr. Goh, great to see you again. Thanks so much for coming back on theCUBE. It's been a real pleasure. >> Yes, a system is only as smart as the data you feed it with. (Dave chuckles) (Dr. Eng laughs) >> Excellent. We'll leave it there. Thank you for spending some time with us, and keep it right there for more great interviews from HPE Discover '21. This is Dave Vellante for theCUBE, the leader in enterprise tech coverage. We'll be right back. (upbeat music)

Published Date : Jun 8 2021


Day 2 Kick off | Pure Accelerate 2019


 

>> Announcer: From Austin, Texas, it's theCUBE, covering Pure Storage Accelerate 2019, brought to you by Pure Storage. >> Good morning from Austin, Texas. Lisa Martin with Dave Vellante at Pure Accelerate 2019. This is our second day. We just came from a very cool, interesting keynote. Dave, whenever there are astronauts, my inner NASA geek from the early 2000s just comes right back out. Leland Melvin was on. >> Amazing, right? >> With a phenomenal story. Talking about technology and the feeling of innovation, but also a great story of inspiration from a STEAM perspective, science, technology, engineering, arts, math. I loved that, and... >> Dave: And fun. >> Very fun. But also... >> One of the better talks I've ever seen. >> It really was. It had so many elements, and I think you didn't have to be a NASA fan or a space geek to appreciate all of the lessons that Leland Melvin learned along the way, and that he really is inspiring everybody in the audience to take note of. >> And incredibly accomplished, right? I mean scientist, MIT engineer, played in the NFL, went to space. He had some really fun stuff when they were, you know, messing around with gravity. >> Lisa: Yes. >> I never knew you could do that. He had like this water... >> Lisa: Water, yeah. >> Bubble. >> I'd never seen that before, and they were throwing M&M's inside (laughter) and he, you know, consumed it, choked on it, which was pretty funny. >> Yeah, well it was near and dear to me. I worked with NASA in my first job out of grad school. >> Dave: Really? >> I did, and managed biological payloads that flew on the space shuttle, and the mission that he talked about that didn't land, Columbia, that was the mission that I worked on. So when he talked about that countdown clock going positive, I was there on the runway for that. So for me, it just struck a chord. >> Dave: So this is of course the 50th anniversary of the moonwalk, and you know I have this thing about watches, kind of like what you have with shoes. (chuckles) >> Lisa: Hey, handbags. >> Is that not true? Oh, it's handbags for you? (laughing) >> Dave: I know, that was a terrible thing for me to say. >> That's okay. >> Dave: You have great shoes, so I just assumed. Not good to make assumptions. So I bought a moon watch this year, which was the watch that Neil Armstrong used, not the exact one but a similar one, right? >> Lisa: Yeah. >> And it actually has an acrylic face, because they were afraid that if it cracked in space you'd have glass all over the place. >> Lisa: Right. >> So that's a little nostalgia there. >> Well, one of the main things, too, as you look at the mission that President John F. Kennedy established in the '60s, getting a man into space within that 10-year period and that being accomplished, there's kind of a parallel with what Pure Storage has done in its first 10 years of tremendous innovation. This keynote, again Day 2, standing room only, at least about 3,000 people or so here. As James Governor, who keynoted after Leland this morning, said: software is eating the world, storage is eating the world; we have to have secure locations to store all this data so that we can extract maximum value from it. So, a nice parallel between the space program and Pure Storage. >> James is really good, isn't he? I mean, he had to follow Leland, and again it was one of the better talks I've ever heard, but James is very strong. He's funny, he's witty, he cuts to the chase. >> Lisa: Yes.
>> He always tells it like it is. Monkchips is very focused on developers, and they do a really good job there. One of the things he talked about was S3, and how Amazon uses this working-backwards methodology, which maybe a lot of people don't know about. What they do is write and rewrite and vet and rewrite the press release before they announce the product, even before they develop the product; they write the press release and then work backwards from there. So: this is the outcome that we are trying to achieve. It's a very disciplined process, and as he said, they may revise it hundreds and hundreds of times. And he put up Andy Jassy's quote from 2004, around S3. That actually surprised me. 2000... maybe I read it wrong. >> Lisa: No, it was 2004. >> Because S3 came out after EC2, which was 2006, so I don't know. Maybe I'm getting my dates wrong, or maybe James got his dates wrong, but who knows, maybe he got a copy of that from the internal working-backwards document. That could be what it was. But again, the point being, they envisioned this simple storage that developers didn't have to think about... >> Lisa: Right. >> That was virtually unlimited in capacity, highly available and, you know, dirt cheap, which is what people want. So he talked about that, and then he gave a little history of the Dell technology families. I tweeted out a funny little history of basically Pivotal, VMware, EMC and Dell: Dell was basically IPO 1984, and then today. There were a few things in between, I know, but he's got a great perspective on things, and I think it resonated with the audience. Then he talked a lot about Kubernetes, jokingly, tongue-in-cheek, how everybody thought Kubernetes was going to kill VMware. But his big takeaway was, look, you've got all these skills, core database skills, and I would even add to that understanding how storage works, and I always joke that if your career is based on managing LUNs you might want to rethink your career. But his point, which I liked, was: all those skills you've learned are valuable, but you now have to step up your game and learn new skills. You have to build on top of those skills. The history you have and the knowledge you've built up is very valuable, but it's not going to propel you to the next decade. So I thought that was a good takeaway, and it was an excellent talk. >> So, looking back at the conversations yesterday, the press releases that came out, the advancements of what Pure is doing with AWS, with Nvidia, with the AI Data Hub, for example, delivering more of their portfolio as a service to allow businesses, whether it's a law firm like we talked to yesterday, a utility, or Mercedes AMG Petronas Motorsport, to access data securely and incredibly quickly, and to recover it and restore it, is absolutely critical and can really be game-changing depending on the type of organization. I want to get your perspective on some of the things you heard anecdotally yesterday after we wrapped, in terms of the atmosphere, the vibe, the thoughts on Pure's next 10 years. >> Yeah, several things, just some commentary. It's always good at night, you go around and you get a lot of data; we sometimes call it metadata. I think one of the more interesting announcements to me was the block storage on AWS.
I don't necessarily think that this is going to be a huge product near term for Pure in terms of meaningful revenue, but I think it's interesting that they're embracing the trend of the cloud and are actually architecting cloud solutions using Amazon services, blending in their own software, not really supergluing it, but blending it in, for their customers to extend. Now, some of the nuances: I don't think they're going to have better write performance, I think they'll have better read performance, clearly they have better availability, and I think it's going to be a little bit more expensive. All these things are TBD; that's just my take based on what I've seen and on talking to some people. But to me the important thing is that Pure is embracing that cloud model. Historically, companies that are trying to defend an existing business retreat; they denigrate, they don't embrace. We know that Pure is going to make more money on-prem than it does in the cloud, at least I think so, and so it's to their advantage for companies to stay on-prem. But at the same time they understand that the trend is your friend, and they're embracing it. So that was one thing. The second thing I learned: I spent a lot of time with Charlie Giancarlo last night, as did you. He's a bit of a policy wonk in certain narrow areas. He shared with me some of the policy work he's done around IP protection, and not necessarily on the side that you would think. You would think, okay, IP protection, that's a good thing, but a lot of the laws that were being promoted for IP protection were there to help big companies essentially crush small companies, so he fought against that. He shared with me some things around net neutrality. You'd think you know which side of net neutrality he'd be on; not necessarily so. He had some really interesting perspectives on that. We also talked to, and I won't share the name of the company, a very large financial institution that's betting a lot on Pure, which was very interesting to me. This is one of those brand names everybody would know if you heard it. And their head of storage infrastructure was here at the show. Now, I know this individual, and this person doesn't go to a lot of shows. >> Maybe a couple a year. >> This person chose to come to this show because they're making an investment in Pure, in a fairly big way, and they spent a lot of time with Pure management, expressing their desires as part of an executive forum that Pure holds. They didn't really market it much or tell us too much about it, because it was a little private thing, but I happen to know this individual, and I learned several things. They like Pure a lot, they use it for a lot of their workloads, but they have a lot of other storage, and they can't necessarily get rid of that other storage for a lot of reasons: inertia, technical debt, good tickets at the baseball game, all kinds of politics going on there. I also asked specifically about some hybrid companies' products where the cost structure's a little bit better, and this gets me to FlashArray//C. We talked to Charlie Giancarlo about this, about how, as flash prices come down, it opens up new markets.
I got some other data yesterday and today: FlashArray//C is not going to be priced, we don't think, quite as well as hybrid arrays. It's closing the gap; it's between one dollar and a dollar and a quarter, up to a dollar and a half, per gigabyte, whereas with hybrid arrays you're seeing about half that, 70 cents a gigabyte, sometimes as low as 60 cents, sometimes higher, sometimes as high as a dollar, but the average is around 65 to 70 cents a gigabyte. So there's still a gap there; flash prices have to come down further. Another thing I learned, I'm going to just keep going... >> Lisa: Go ahead! >> The other thing I learned is that China is really building a lot of fab capacity in NAND to try to take out the thumb-drive marketplace, so they're going to go after the low end. So companies like Samsung and Toshiba (Toshiba just renamed that business, I can't remember the new name), Micron and the other NAND flash manufacturers are going to have to use their capacity to go after the enterprise, because China's fabs are going to crush the low end and bomb the low-end pricing. Somebody else told me about a third of flash consumption is in China now. So, interesting things going on there. So near term, FlashArray//C is not going to just crush spinning disk and hybrid; it's going to get closer, and it's going to slowly eat away at that, and as NAND prices come down it really could eat away at that more rapidly. I just learned some other stuff too, but I'll take a breath. (laughter)
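(For a sense of what that per-gigabyte gap means at scale, a quick back-of-envelope comparison using the rough street prices quoted above; the 500 TB capacity point is an arbitrary illustration.)

    capacity_gb = 500 * 1000            # a hypothetical 500 TB (decimal) of effective capacity

    flash_array_c_per_gb = 1.25         # midpoint of the ~$1.00-$1.50/GB estimate above
    hybrid_per_gb = 0.70                # typical hybrid-array figure cited above

    print(capacity_gb * flash_array_c_per_gb)                    # ~$625,000
    print(capacity_gb * hybrid_per_gb)                           # ~$350,000
    print(capacity_gb * (flash_array_c_per_gb - hybrid_per_gb))  # ~$275,000 gap at this size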
Digital business is all about how you use data and leveraging data in new ways to create new value to monetise or cut cost. And so being able to have access to that data and recover from any inaccess to that data in a split-second is crucial. So Pure can participate in that, now Pure's not alone You know, it's no coincidence that Veritas and Veeam and Cohesity and Rubrik they work with Pure, they work with HPE. They work with a lot of the big players and so but so Pure has to you know, has some work to do to win its fair share. Staying on backup for a moment, you know it's interesting to see, behind us, Veritas and Veeam have the biggest sort of presence here. Rubrik has a presence here. I'm sure Cohesity is here maybe someway, somehow but I haven't seen them >> I haven't either. >> Maybe they're not here. I'll have to check that up, but you know Veeam is actually doing very well particularly with lower ASPs we know that about Veeam. They've always come at it from the mid-market and SMB. Whereas Cohesity and Rubrik and Veritas traditionally are coming at it from a higher-end. Certainly Cohesity and Rubrik on higher ASPs. Veeam's doing very well with Pure. They're also doing very well with HPE which is interesting. Cohesity announced a deal with HPE recently I don't know, about six months ago somebody thought "Oh maybe Veeaam's on the outs." No, Veeam's doing very well with HPE. It's different parts of the organization. One works with the server group, one works with the storage group and both companies are actually doing quite well I actually think Veeam is ahead of the curve 'cause they've been working with HPE for quite some time and they're doing very well in the Pure base. By partnering with companies, Pure is able to enter that market much in the same way that NetApp did in the early days. They have a very tight relationship for example with Commvault. So, the other thing I was talking to Keith Townsend last night totally not secretor but he's talking about Outpost and how Amazon is going to be challenged to service Outpost Outpost is the on-prem Amazon stack, that VMware and Amazon announced that they're co-marketing. So who is going to service outpost? It's not going to be Amazon, that's not their game in professional service. It's going to have to be the ecosystem, the large SIs or the Vars the partners, VMware partners 'cause that's not Vmwares play either. So Keith Townsend's premise, I'd love to have him on The Cube to talk about this, is they're going to have trouble scaling Outpost because of that service issue. Believe it or not when we come to these conferences, we talk about other things than just, Pure. There's a lot of stuff going on. New Relic is happening this week. Oracle open world is going on this week. John Furrier just got back from AWS Bahrain, and of course we're here at Pure Accelerate. >> We are and this is our second day of two days of coverage. We've got Coz on next who I think has never been on The Cube. >> Dave: Not to my knowledge. >> We've got Kix on later. A great lineup, more customers Rob Lee is going to be on. So we're going to be digging more into Pure's Cloud strategy, the next ten years, how they're going to accelerate that and pack it into the next couple of years. >> I'll tell you one of the things I want to do, Lisa. I'll just call it out. An individual from Dell EMC wrote a blog ahead of Pure Accelerate I think it was last week, about four or five days ago and this individual called out like one, two, three, four.... 
five things that we should ask Pure so we should ask them, we should ask Coz we should ask Kix. There was criticism, of course they're biased. These guys they always fight. >> Lisa: Naturally. >> They have these internecine wars. >> Lisa: Yep. >> Sometimes I like to call them... no I won't say it. So scale out, question mark there we want to ask Coz about that and Kix. Pure uses proprietary flash modules. They do that because it allows them to do things that you can't do with off-the-shelf flash. I want to ask and challenge them that. I want to ask about their philosophy on tiering. They don't really believe in tiering, why not? I want to understand that better. They've made some acquisitions, Compuverde is one acquisition, it's a file system. What does that mean for flash play? >> Now we didn't hear anything about that yesterday, so that's a good point that we should dig into that. >> Yeah, so we'll bring that up. And then the Evergreen competitors hate Evergreen because Pure was first with it they caught everybody off guard. I said it yesterday, competitors hate Evergreen because competitors live off of maintenance and if you're not on their maintenance they just keep jacking up the maintenance prices and if you don't move to the new system, maintenance just keeps getting more and more and more and more expensive and so they force you, you're locked in. Force you to move. Pure introduced this different model. You pay for the CapEx up front and then, you know, after three years you get a controller swap. You know, so... >> To your point competitors hate it, customers love it. We heard a lot about that yesterday, we've got a couple more customers on our packed program today, Dave so let's get right to it! >> Great. >> Let's wrap up so we can get Coz on stage. >> Dave: Alright, awesome. >> Alright, for Dave Vellante. I'm Lisa Martin, you're watching The Cube from Pure Accelerate 2019, day two. Stick around 'Coz' John Colgrove, CTO, founder of Pure, will be on next. (upbeat music)

Published Date : Sep 18 2019


Kaustubh Das, Cisco & Laura Crone, Intel | Cisco Live US 2019


 

>> Live from San Diego, California, it's theCUBE, covering Cisco Live US 2019, brought to you by Cisco and its ecosystem partners. >> Welcome back. It's theCUBE here at Cisco Live San Diego 2019. I'm Stu Miniman, and my co-host is Dave Vellante. First, I want to welcome back Kaustubh Das, 'KD', who is Vice President of Product Management with Cisco Compute. We talked with him a lot about HyperFlex Anywhere in Barcelona. And I want to welcome to the program first-time guest Laura Crone, who's a Vice President of sales and marketing in NSG at Intel. Laura, thanks so much for joining us. All right, since KD has been on our program before, let's start with you. We've watched Cisco UCS and that compute platform since it rolled out about a decade ago, and Intel is always up on stage with Cisco talking about the latest enhancements. Everywhere I go this year, people are talking about Optane, and about how technologies like NVMe are baking into the environment, with storage-class memory coming. So let's start with Intel: what's happening in your world and your activities at Cisco Live? >> Great. I'm glad to hear you've heard a lot about Optane, because I have marketing in my organization. So Optane is the first new memory architecture in over 25 years, and it is different than NAND: you can write data to the silicon, it programs faster, and it has greater endurance. So when you think of Optane, it's fast like DRAM, but it's persistent, like 3D NAND. And it has an industry-leading combination of capabilities, such as high throughput, high endurance, high quality of service and low latency. And for a storage device, what could be better than fast performance and high consistency? >> Laura, as you say, it's been 25 years since the last move like this. I remember when I started working with Dave, it was, how do we get out of the horrible SCSI stack that we had lived on for decades? And finally, now it feels like we're coming through the clearing, and there's just going to be wave after wave of new technologies to get us high performance, low latency and the like. >> Yeah, and I think the other big part of that, which is part of Cisco's HyperFlex All-NVMe, is the NVMe standard. We've lived in a world of legacy SATA controllers, which created a lot of performance bottlenecks. Now that the industry is moving to NVMe, that opens things up even more. And so, as we were developing Optane, we knew we had to move the industry to a new protocol; otherwise that pairing was not going to be very successful. >> Alright, so KD, all-NVMe, tell us more. >> So we come here and we talk about all the cool innovations we do within the company, and then sometimes we come here and talk about all the cool innovation we do with our partners, our technology partners, and Intel is a fantastic technology partner. Obviously, being in the server business, you've got to partner with Intel, and we really go at it across the walls of the two organizations to bring this stuff to life. So Cisco HCI, HyperFlex, is one of the products we've talked about in the past. HyperFlex All-NVMe uses Intel's Optane technology, as well as Intel's 3D NAND all-NVMe devices, to power really the fastest workloads that customers want to put on this system.
So, you talked about 3D NAND and NVMe: pricing is getting to a point where it becomes that much more accessible to use these for powering databases and for workloads that require those latency characteristics and require those IOPS. That's what we've enabled with Cisco HyperFlex, collaborating with Intel's NVMe portfolio. >> I remember when I started in the business, somebody educated me on a pyramid: think of the pyramid as a storage hierarchy. And at the top of it was actually an Intel solid-state device, which back then was volatile, so you had to put backup power supplies on it. At any rate, with all this new memory architecture and flash storage coming, people have been saying, well, it's going to flatten that pyramid. But now, with Optane, you're seeing the reemergence of that pyramid. So help us understand where it fits, from a supplier standpoint, for an OEM, and for the ultimate customer. Because if I understand it, Optane is faster than NAND but it's going to be more expensive, and it's slower than DRAM but it's cheaper, right? So where does it fit? What are the use cases? Where does it fit in that hierarchy? >> Yeah. So if you think about the hierarchy, at the very top is DRAM, which is going to be your fastest, lowest-latency product. But right below that is Optane persistent memory, the DIMMs, and you get greater density, because that's one of the challenges with DRAM: it's not dense enough, nor affordable enough. So that creates a new tier in the storage hierarchy. Go below that and you have Optane SSDs, which bring even more density: we go up to 1.5 terabytes in an Optane SSD, and you now get performance for your storage and memory expansion. Then you have 3D NAND, and even below that you have 3D NAND QLC, which gives you cost-effective, high-density capacity. And then below that is the old-fashioned hard disk drive and magnetic media. So you start inserting all these tiers, which gives architects, in both hardware and software, an opportunity to rethink how they want to do storage. >> So the demand for this granularity is obviously coming from your direct buyers and your customers. What does it do for you and, specifically, your customers? >> Yeah. So the name of the game is performance, and the ability, in a world where things are not very predictable, to support anything that your end customers may throw at you if you're an IT department. That may mean an internal data scientist team, or a traditional architect of a traditional application. Now, what Intel and Cisco can do together is truly unique, because we control all parts of the stack, everything from the server itself to the storage devices to the distributed file system that sits on top of it. So, for example, in HyperFlex we're using Optane as a caching tier, and because we write the distributed file system, we can strike a balance between what we put in the caching tier and how we move data out to the non-caching 3D NAND tier. As Intel came out with their latest processors that support storage-class memory, we support that, and we can engineer this whole system end to end, so that we can deliver to customers the innovation Intel is bringing to the table in a way that's consumable by them. One more thing I'll throw out there:
One more thing I'll throw out there: technology is great, but it needs to be resilient, because IT departments will occasionally yank out the wrong wire, or they'll yank out the wrong drive. One of the things we worked on together with Intel is how we guard against this: how do we build in reliability, availability and serviceability, and how do we protect against accidental removal or accidental insertion? Some of those co-innovations have let us get to market a HyperFlex system that uses these technologies in a way that's really usable by the teams at our customers.
>> I'd love to double-click on that in the context of NVMe and what you guys were talking about. You mentioned the horrible storage stack; I think you called it the horrible SCSI stack. And Laura, you were talking about the cheap and deep now being spinning disk. My understanding is that you've got a lot of overhead in the traditional SCSI protocol, but nobody ever noticed because you had this mechanical device. Now, with flash storage, it all becomes exposed, and NVMe is like a bat phone, right? Okay, so correct me where I got that wrong, but maybe you could give us the perspective on why NVMe is important from your standpoint and how you guys are using it.
>> Yeah, I think NVMe is just a much faster protocol, and you're absolutely right. We have a graph that we show of the old world and how much overhead there is, all the way down to when you have Optane in a DIMM solution with essentially no overhead; an Optane SSD still has a tiny bit, but the graph shows all of that latency being removed when you deploy Optane. So NVMe gives you much greater bandwidth, the CPU is not bottlenecked, and you get greater CPU efficiency when you have a faster interface like that.
>> And HyperFlex is taking advantage of this how?
>> Yeah, let me give you a couple of examples. For anything performance, the first thing that comes to mind is databases. For those kinds of workloads, this system gets about 25% better performance. The next thing that comes to mind is that people really don't know what they're going to put on the system. Sometimes they put databases on it, sometimes mixed workloads. When we look at mixed workloads, we get about 65% or so better IOPS and 37% better latencies. So even in a mixed environment, where you may have databases, a web tier and other things, this thing is definitely resilient enough to handle the workload. It just opens up the spectrum of use cases.
>> One of the other questions I had is specific to Optane. DRAM has consumer applications, as does NAND flash. Does Optane have similar consumer applications that can achieve the volume so that prices can come down, not to free, but to continue to sort of drive the curves?
>> So when we look at the overall TAM, we see the TAM growing over time. I don't know exactly when it crosses over the volume or the bits of DRAM, but we absolutely see it growing over time. And as the technology ramps, it'll have cost-ramping curves as well.
>> It'll follow that curve. Okay, good.
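The earlier point about the SCSI-era stack, that its overhead was invisible behind a mechanical seek but dominates once the media is fast, can be checked with back-of-the-envelope arithmetic. The latency figures below are assumed, round orders of magnitude for illustration only, not measurements from the interview or from any specific product.

```python
# Back-of-the-envelope model: end-to-end read latency = stack overhead + media time.
# All numbers are assumed round values, in microseconds, purely for illustration.

media_us = {"spinning disk": 8000, "NAND flash SSD": 100, "Optane SSD": 10}
stack_us = {"legacy SCSI/SATA stack": 25, "NVMe stack": 5}

for media, m in media_us.items():
    for stack, s in stack_us.items():
        total = m + s
        print(f"{media:15s} + {stack:22s}: ~{total:>5} us "
              f"({100 * s / total:4.1f}% spent in the stack)")
```

On a spinning disk the stack is a rounding error; on an Optane-class device a legacy stack would consume most of the latency budget, which is why pairing Optane with the leaner NVMe protocol matters.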
>> So KD, give us a little bit of a broad view of HyperFlex here at the show. Can people play in any labs with the brand-new Optane pieces, and what other highlights do you and the team have this week?
>> Yeah, absolutely. In Barcelona we talked about HyperFlex 4.0, and that is live today. So on the show floor, people can look at HyperFlex at the edge combined with SD-WAN: how do you control and deploy thousands of edge locations from a centralized location, powered by Intersight, our cloud-based management tool? That whole experience is enabled now. At the other end of the spectrum is how we drive even more performance. We were always the performance leader; now we're comparing ourselves to ourselves, and we're about 35% better than our previous all-flash. With the innovation Intel is bringing to the table, some of the other pieces are actually use cases. There's a big hospital chain where my kids go to get treated and see the doctor. There are lots of medical use cases which require Epic, the medical software company, to power them, whether it's the end terminals or the back-end database. Epic Hyperspace and Epic Caché have now been validated on HyperFlex using the technology we just talked about, Optane and All NVMe, which gives that much more power. That means that when my doctor or the nurse pulls up the records, not only do the records show up fast, but all of those other high-performance-seeking applications also run that much more streamlined. So I would encourage people to look at the whole solution; we've got a tremendous set of demos out there, so go up there and check us out.
>> And there's a great white paper out on this, right, from ESG?
>> ESG is one of the companies that I've seen benchmarking HyperFlex.
>> So elaborate: did they do a lab report, or...?
>> What they do is they benchmark different hyperconverged infrastructure vendors. They did this the first time around and said, well, we can pack that many more VMs on a HyperFlex with rotating drives. Then they did it again and said, well, now that you've got all-flash, you've got the performance and the latency leadership. And then they did it again and said, well, hang on, you've kind of left the competition behind, so that's not going to make a pretty chart; we'll compare your All NVMe against your own all-flash. When you get that good, you compare against yourselves. We've been the performance leader, and ESG has been doing that validation.
>> With Optane, the next generation, added in?
>> And this is with a database workload.
>> Okay, so now you're bringing Optane into the latest report.
>> It has those measurements.
>> It measures Optane against your all-flash report, and then did you also measure across vendors?
>> So...
>> Where can I get this? Is it at some partner website, or...?
>> All of this is off the Cisco HyperFlex website on cisco.com, but ESG is the company for those that want to go directly to them about getting more.
>> I guess the final question for you: I think back to the early days of UCS, and it was the memory enhancements it had that allowed the densest virtualization in the industry back when it started. It sounds like we're just taking that to the next level with this next generation of solutions. What else would you call out about the relationship between Cisco and Intel?
>> So, Intel and Cisco have worked together for years on innovation around the CPU and the platform, and it's super exciting to be expanding our relationship to storage.
And I'm even more excited that the Cisco HyperFlex solution is endorsing Intel Optane and 3D NAND, and we're seeing great examples of real-world workloads where our end customers can benefit from this technology.
>> KD, Laura, thanks so much for the update, and congratulations on the progress that you've made so far. For Dave Vellante, I'm Stu Miniman, and we'll be back with more coverage here at Cisco Live 2019 in San Diego. Thanks for watching theCUBE. (upbeat music)

Published Date : Jun 10 2019

Scott Nelson & Doug Wong, Toshiba Memory America | CUBE Conversation, December 2018


 

>> (enchanted music)
>> Hi, I'm Peter Burris, and welcome to another CUBE Conversation from our awesome Palo Alto Studios. We've got a great conversation today. We're going to be talking about flash memory, other types of memory, classes of applications, and the future of how computing is going to be made more valuable to people and how it's going to affect us all. And to do that we've got Scott Nelson, who's the Senior Vice President and GM of the memory unit at Toshiba Memory America, and Doug Wong, who's a member of the technical staff, also at Toshiba Memory America. Gentlemen, welcome to theCUBE.
>> Thank you.
>> Here's where I want to start. When you think about where we are today in computing and digital devices, a lot of that has been made possible by new memory technologies, and let me explain what I mean. For a long time, storage was how we persisted data. We wrote transaction data and we kept it there so we could go back and review it if we wanted to. But something happened in the last dozen years or so, it happened before then but it's really taken off, where we're using semiconductor memory, which allows us to think about how we're going to deliver data to different classes of devices, both in the consumer and the enterprise. First off, what do you think about that, and what has Toshiba's association with these semiconductor memories been? Why don't we start with you.
>> So, I appreciate the observation, and I think that you're spot on. Roughly 35 years ago, Toshiba had the vision of a non-volatile storage device. We brought to market, we invented, NOR flash in 1984. And then later the market wanted something that was higher density, so we developed NAND flash technology, which was invented in 1987. So that was the genesis of this whole flash revolution that's been so disruptive to the industry as we see it today.
>> So, at the outset, it didn't start off in large data centers. It started off in kind of almost unassuming devices associated with particular classes of files. What were they?
>> So, it was a very disruptive technology. The first application for the flash technology was actually replacing audio tape in the phone answering machine. Then it evolved beyond that into replacing digital film, and kept going into replacing cassette tapes. If you look at today, it enabled the thin and light that we see with the portability of the notebooks and the laptops, and the mobility of content with our pictures, our videos and our music. And then today, the smartphone really wouldn't exist without the flash technology that gives us all of the high-density storage that we see.
>> So, this suggests a pretty expansive role for semiconductor memory. Give us a little sense of where the technology is today.
>> Well, the technology today is evolving. Originally, floating-gate flash was the primary type of flash that we created; it's called two-dimensional planar floating-gate flash. And that existed from the beginning all the way through to maybe 2015 or so, but it was not possible to really shrink flash any further to increase the density.
>> In the 2D form?
>> In the 2D form, exactly. So, we had to move to a 3D technology. Now, Toshiba presented the world's first research papers on 3D flash back in 2007, but at that time it was not necessary to actually use 3D technology. When it became difficult to increase the density of flash further, that's when we actually moved to production of our 3D flash memory, which we call BiCS FLASH.
And BiCS stands for Bit Cost Scalable flash; that's our trade name for our 3D memory.
>> So, we're now in 3D memory technology because we're creating more data and the applications are demanding more data, both for customer experience and for new classes of applications. When we think about those applications, Toshiba used to have to go to people and tell them how they could use this technology, and now you've got an enormous number of designers coming to you. Doug, what are some of the applications that you're anticipating hearing about that are driving the demand for these technologies?
>> Well, beyond the existing applications, such as personal information appliances like laptops and portables, and also data centers, which are actually a large part of our business as well, we also see emerging technologies becoming eventual large users of flash memory: things like autonomous vehicles, augmented or virtual reality, or even the emerging IoT infrastructure that's necessary to support all these portable devices. These are devices that currently aren't using large amounts of flash but are going to be in the future, especially as flash memory gets more dense and less expensive.
>> So there's an enormous range of applications on the horizon that is going to drive greater demand for flash, but there are some business challenges to achieving that demand. We've seen periodic challenges of supply and price volatility. Scott, when we think about Toshiba as a leader in sustaining a good flow of technology into these applications, what is Toshiba doing to continue to satisfy customer demand and sustain that leadership in this flash marketplace?
>> So, first off, as Doug mentioned, the floating-gate technology has reached the limit of its ability to scale in a meaningful way. The other part of that is the limitation on the die density, and the market demand for these applications is asking for higher-density, higher-performance, lower-latency types of devices. Because floating-gate has reached the end of its usefulness in terms of being able to scale, that brought about the 3D. The 3D gives us our higher density, and along with the performance, it enables these applications. So, from Toshiba's point of view, we are seeing that migration happening today: the floating-gate is migrating over to the 3D. That's not to say that floating-gate demand will go away; there are a lot of applications that require the lower density. But certainly for the higher density, where you need a die-level 256 or 512 gigabit, even up to a terabit of density, that's where the 3D comes into play. Second to that, it really comes down to capex. Obviously this requires a significant amount of capex, not only for the development but also in terms of capacity, and that, of course, is very important to our customers and to the industry as a whole for the assurance of supply.
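Scott's argument for 3D, that planar scaling stalled while die densities still needed to reach 256 gigabit, 512 gigabit and beyond, is ultimately multiplication: cells per layer, times stacked layers, times bits stored per cell. The sketch below uses invented cell-array sizes chosen only to show the scaling levers; it is not Toshiba's actual geometry.

```python
# Rough die-density model: billions of cells per layer x layers x bits per cell.
# The cell counts are hypothetical, chosen only to illustrate the arithmetic.
examples = [
    # (label, cells per layer in billions, layers, bits per cell)
    ("2D planar MLC",    64.0,  1, 2),   # ~128 Gbit
    ("3D TLC, 64-layer",  2.0, 64, 3),   # ~384 Gbit
    ("3D QLC, 96-layer",  2.0, 96, 4),   # ~768 Gbit
]

for label, cells_billion, layers, bits_per_cell in examples:
    gbit = cells_billion * layers * bits_per_cell   # billions of cells x bits = Gbit
    print(f"{label:18s}: ~{gbit:6.0f} Gbit per die")
```

Note that the 3D rows assume far fewer cells per layer (relaxed lateral lithography) yet land at much higher densities, which is the point of going vertical rather than continuing to shrink planar cells.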
>> So Toshiba's value to the marketplace is both in creating these new technologies and filling out a product line, but also in stepping up and establishing the capacity, through significant capital investments in a lot of places around the globe, to ensure that the supply is there for the future.
>> Exactly right. Toshiba is the most experienced flash vendor out there; we led the industry in terms of the floating-gate technology, and we are technology leaders as the industry migrates to 3D. And so, with that, we continue with significant capital investment to maintain our presence in the industry as a leader.
>> So, when we think about leadership, we think about leadership in consumer markets, because volume is crucial to sustaining these investments and generating returns, but I also want to spend just a second talking about the enterprise as well. What types of enterprise relationships do you guys envision, and what types of applications do you think are going to be made possible by the continued exploitation of flash in some of these big applications that we're building? Doug, what do you think?
>> Well, I think that new types of flash will be necessary for new, emerging applications such as AI or instant recognition of images. So, we are working on next-generation flash technology. Historically, flash was designed for lowest cost per bit; that's how flash began to take over the market for storage from hard drives. But there is a class of applications that does require very low latencies, in other words, faster performance. So we are working on a new flash technology that actually optimizes performance over cost, and that is actually a new change to the flash memory landscape. As you alluded to earlier, there's a lot of differentiation in flash now to address specific market segments, and that's what we are working on. Now, generically, these new non-volatile memory technologies are called storage class memories, and they include things like optimized flash or potentially phase-change memories or resistive memories. All these memories, even though they're slower than, say, the volatile memories such as DRAM and SRAM, are, number one, non-volatile, which means they can learn and they can store data for the future. So we believe that this class of memory is going to become more important in the future to address things like learning systems and AI.
>> Because you can't learn what you can't remember.
>> Exactly.
>> I heard somebody say that once. In fact, I've got to give credit: that came straight from Doug. So, if we think about looking forward, the challenges that we face ultimately are having the capital structure necessary to build these things, the right relationships with the designers necessary to provide guidance and suggestions about the new classes of applications, and the ability to consistently deliver into this, especially for some of these new applications as we look forward. Do you guys anticipate that there will be, in the next few years, particular moments or particular application forms that are going to kick some of the new designs, some of the new technologies, into higher gear? Is there something, autonomous vehicles or something else, that's just going to catalyze a whole new way of thinking about the role that memory plays in computing and in devices?
>> Well, I think that, building off of a lot of the applications that are utilizing NAND technology today, we're going to see the enterprise and the data center really starting to take off in adopting the value proposition of NAND. And as Doug mentioned, when we get into the autonomous vehicle, into AI, or into VR, a lot of applications to come will be utilizing the high density and low latency that flash offers for storage.
>> Excellent. Gentlemen, thanks very much for being on theCUBE. Great conversation about Toshiba's role in semiconductor memory, flash memory, and future leadership as well.
>> Thank you, Peter.
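Doug positions storage class memory between DRAM and conventional flash: slower than volatile memory, but non-volatile and far faster than NAND. One way to picture that is as an extra row in the tier table a system designer chooses from. The latency numbers below are rough, assumed orders of magnitude, and the selection helper is purely illustrative, not a vendor specification.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    read_latency_ns: float   # rough order of magnitude, assumed for illustration
    persistent: bool

TIERS = [
    Tier("SRAM cache",                     1, False),
    Tier("DRAM",                         100, False),
    Tier("Storage class memory",       1_000, True),   # e.g. optimized flash, PCM, ReRAM
    Tier("NAND flash SSD",           100_000, True),
    Tier("Hard disk drive",        8_000_000, True),
]

def cheapest_tier(max_latency_ns, needs_persistence):
    """Pick the slowest (and typically cheapest per bit) tier that still meets
    the latency budget and the persistence requirement."""
    candidates = [t for t in TIERS
                  if t.read_latency_ns <= max_latency_ns
                  and (t.persistent or not needs_persistence)]
    return max(candidates, key=lambda t: t.read_latency_ns) if candidates else None

# A learning system that must keep state across power loss but needs microsecond-class reads
print(cheapest_tier(max_latency_ns=2_000, needs_persistence=True).name)
# -> "Storage class memory"
```

The selection logic is trivial, but it captures why a non-volatile tier that is only modestly slower than DRAM opens up the learning-system use cases Doug mentions: it is the only row that satisfies both constraints at once.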
>> Scott Nelson is the Senior Vice President and GM of the memory unit at Toshiba Memory America. Doug Wong is a member of the technical staff at Toshiba Memory America. I'm Peter Burris. Thanks once again for watching theCUBE. (enchanted music)

Published Date : Jan 4 2019
