
Brian Kumagai & Scott Beekman, Toshiba Memory America | CUBE Conversation, December 2018


 

>> Hi, I'm Peter Burris. Welcome to another Cube Conversation from the Cube Studios in Palo Alto, California. In this conversation we're going to build upon some other recent conversations we've had, which explore this increasingly important relationship between semiconductor memory, or flash, and new classes of applications that are really making life easier and changing the way that human beings interact with each other, both in business as well as in consumer domains. And to explore these crucial issues, we've got two great guests. Brian Kumagai is the director of business development at Toshiba Memory America. Scott Beekman is the director of managed flash at Toshiba Memory America as well. Gentlemen, welcome to the Cube. So I'm going to give you my perspective, and I think this is pretty broadly held: generally, as a technology gets more broadly adopted, people get experience with it. And as designers, developers, and users gain experience with a technology, they start to apply their own creativity, and it starts to morph and change and pull and stretch the technology in a lot of different directions. And that leads to increased specialization. That's happening in the flash world. Have I got that right, Scott?

>> Yes. You know, the great thing about flash is just how ubiquitous it is and how widely it's used. If you think about any electronic device, it needs a brain, a processor, and it needs to remember what it's doing, which is memory. And memory is what we do. So we see it used in so many applications: smartphones, tablets, printers, laptops, streaming media devices. For example, eMMC memory is a low-power memory designed for devices like smartphones that aren't plugged in. And when you have 1.5 billion smartphones, that drives the technology, and then it migrates into all kinds of other applications as well. And then we see new technologies that come and replace it, like UFS, Universal Flash Storage, which is intended to be the high-performance replacement for eMMC. So now that's also migrating its way through smartphones and all these other applications.

>> So there's a lot of new applications that are requiring new classes of flash, but there's still a fair amount of applications that require traditional flash technology. These are not coming in and squashing old flash, or traditional flash, or other types of parts, but amplifying their use in specialized ways. Brian, tell us a bit about that.

>> So it's interesting that these days no one really talks about the original NAND flash that was developed back in 1987. That was based on single-level cell, or SLC, technology, which today still offers the highest-reliability and fastest-performing NAND device available in the market. Because of that, designers have found this type of memory to work well for storing boot code and some levels of operating-system code, and these are in a wide variety of devices, in both the consumer and industrial segments: anything from set-top boxes for streaming video, to printers, to AI speakers. Just a numerous breadth of products.

>> I've also got to believe a lot of IoT and industrial edge devices are going to feature a lot of these kinds of parts: maybe disconnected, maybe connected, but needing low power, very high speed, low cost, and high reliability.

>> That's correct. And because these particular devices are still offered in lower densities, they offer very cost-effective solutions for designers today.

>> Okay, well, let's start with one of the applications that is very, very popular in the press: automated driving and autonomous vehicles. There's autonomous vehicles, but there's autonomous robots more broadly. Let's start with autonomous vehicles, Scott. What types of flash-based technologies are ending up in cars, and why?

>> Okay. So we've seen a lot of changes within vehicles over the last few years: increasing storage requirements for infotainment systems, more sophisticated navigation, voice recognition, instrument clusters moving to digital displays, and then ADAS features, collision avoidance, things like that. All of that is driving more and more memory storage and faster-performing memory. In particular, what we've seen for automotive is that it's basically adopting the type of memory that you have in your smartphone. Smartphones have for a long time used eMMC memory, and that has made its way into automotive. And now, as smartphones have been transitioning to UFS (in fact, Toshiba was the first to introduce samples of UFS in early 2013, and you started to see it in smartphones in 2015), that's now migrating into automotive as well, because automotive needs to take advantage of the higher performance and the higher densities. And so Toshiba, we're supporting this growth within automotive as well.

>> But automotive is a market, and I think it's a great distinction you made, that's not just autonomous. Even when the human being is still driving, it's the class of services provided to that driver, from an entertainment, safety, and overall experience standpoint, that is driving very aggressively forward. That volume, and the ability to demonstrate what you can do in a car, is having significant implications for the other classes of applications that we see for some of these high-end parts. How is the experience that we're incorporating into an automotive application, or set of applications, starting to impact how others envision how their consumer products can be made better, a better experience, safer, etcetera, in other domains?

>> Well, yeah, we see all kinds of applications taking advantage of these technologies, even AR and VR, for example. Again, it all takes advantage of this idea of needing larger densities of storage at a lower cost, with low power and good performance, and all these applications are taking advantage of that, including automotive. And if you look at automotive, it's not just within the vehicle. Actually, it's projected that autonomous vehicles will need one, two, three terabytes of storage within the vehicle. But then all the data that's collected from cameras and sensors needs to be uploaded to the cloud, and all of that needs to be stored. So that's driving storage into data centers, because you basically need to learn from that data to improve the software. So all these things are driving more and more storage, both within the devices themselves, and a car is like a device, but also in the data centers as well.

>> So Brian, take us through some of the decisions that a designer has to go through to start to marry some of these different memory technologies together to create, whether it's an autonomous car or perhaps something a little bit more mundane, maybe a computing device. How does a designer think about how these fit together to serve the needs of the user and the application?

>> Um, I think these days a lot of new products require a lot of features and capabilities, so a lot of thought is going into the memory size itself. You know, the software guys are always wanting more storage to write more code, that sort of thing. So I think that's one step. Then they think about the size of the package, and then cost is always a factor as well. And the thing about Toshiba is that we do offer a broad product breadth, producing all types of NAND memory that will fit everyone's needs.

>> So give us some examples of what that product breadth looks like and how it maps to some of these application needs.

>> So, as mentioned, we offer the lower-density SLC NAND that starts at a one-gigabit density and maxes out at about a thirty-two-gigabit die. And as you get into multi-level-cell, triple-level-cell, or QLC-type devices, you're able to use memory where a single die can be up to 1.33 terabits. So there's such a huge range of memory devices available today.

>> And so if we think about where the memory devices are today and where applications are pulling us, what kind of stuff is on the horizon, Scott?

>> Well, one is just more and more storage for smartphones. We want more: 256 gigabytes, 512 gigabytes, one terabyte. And in particular for a lot of these mobile devices, UFS is really where things are going: continuing to advance that technology, continuing to increase its performance, continuing to increase the densities. And that enables a lot of applications that we can hardly envision at this point. We know autonomous vehicles are important; I'm really excited about that, because I'm going to need that when I'm ninety, you know, so I can drive anywhere I want to go. So we have some idea now, but there are things that we can't envision, and this technology enables that, and enables other people who can see how to take advantage of the faster performance, the greater densities, the lower cost per bit.

>> So let's think about general computing, especially some of these use cases we're talking about, where the customer experience is a function of how fast a device starts up, or how fast the service starts up, or how rich the service can be in terms of different classes of input, voice or visual or whatever else it might be. And think about these data centers, where the closed loop between the processing and the inferencing of some of these models affects what a transaction is going to do. We're talking about lower latency that's driving a lot of designers to think about how they can start moving certain classes of function closer to the memory, both from a security standpoint and from an error-correction standpoint. Talk to us a little bit about the direction that Toshiba imagines for the differentiation of future memories relative to memories today, relative to where they've been. What kinds of features and functions are being added to some of these parts to make them that much more robust in some of these applications?

>> As you mentioned, the robustness of the memory itself. I think that some current memory devices will allow you to actually identify the number of bits that are being corrected, and that kind of gives an indication of the integrity, or the reliability, of a particular block of memory. And as users are able to get early detection of this, they can do things to move the data around and then make their overall storage more reliable.

>> Scott?

>> Yeah. I mean, we continue to figure out how to cram more bits within a given space, moving from SLC to MLC to TLC and on to QLC. That's all enabling greater storage at lower cost. And then, as we talked about at the beginning, there's all kinds of differentiation in terms of flash products that are really tailored for certain things. Some are focused on really high performance and give up some power; others need a certain balance of that. For a mobile, handheld device, you give up some performance for less power. And so there's a whole spectrum. For some applications, endurance is incredibly important. So we have a full breadth of products that address all those particular needs.

>> So for the designer, it's just: whatever I need, I can come to you.

>> Yeah, that's right. Toshiba has the full breadth of products available.

>> All right, gentlemen, thank you very much for being on the Cube. Brian Kumagai, director of business development at Toshiba Memory America, and Scott Beekman, director of managed flash at Toshiba Memory America. Again, thanks very much for being on the Cube. >> Thank you. >> Thank you. >> And this closes this Cube Conversation. I'm Peter Burris. Until next time, thank you very much for watching.
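Brian's point about counting corrected bits describes, in effect, flash health monitoring: the controller tracks how many bits its error-correcting code had to fix on each read, and when a block's count approaches the ECC limit, the data is relocated and the block retired. A minimal sketch of that idea follows; the threshold values and structure are illustrative assumptions, not Toshiba specifications.

```python
# Sketch of NAND block-health monitoring driven by ECC corrected-bit counts.
# All threshold values are illustrative assumptions, not vendor specifications.

ECC_CORRECTABLE_BITS = 40   # assumed ECC strength per codeword
RELOCATE_THRESHOLD = 30     # act well before the correction limit is reached

def classify_block(corrected_bits):
    """Map a block's worst observed corrected-bit count to an action."""
    if corrected_bits >= ECC_CORRECTABLE_BITS:
        return "uncorrectable"   # beyond ECC strength: data is at risk
    if corrected_bits >= RELOCATE_THRESHOLD:
        return "relocate"        # early warning: move data, retire the block
    return "healthy"

def scrub(block_health):
    """Return IDs of blocks whose data should be moved elsewhere."""
    return [blk for blk, bits in block_health.items()
            if classify_block(bits) != "healthy"]

# Worst-case corrected-bit counts observed on four blocks during recent reads
observed = {0: 2, 1: 31, 2: 7, 3: 45}
print(scrub(observed))  # [1, 3]
```

In a real device this bookkeeping lives in the SSD or eMMC/UFS controller firmware; the host typically sees only whatever aggregate health statistics the device chooses to expose.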

Published Date : Jan 30 2019



Scott Nelson & Doug Wong, Toshiba Memory America | CUBE Conversation, December 2018


 

>> (upbeat music) >> Hi, I'm Peter Burris, and welcome to another CUBE Conversation from our Palo Alto studios. We've got a great conversation today. We're going to be talking about flash memory, other types of memory, classes of applications, and the future of how computing is going to be made more valuable to people and how it's going to affect us all. And to do that we've got Scott Nelson, who's the Senior Vice President and GM of the memory unit at Toshiba Memory America, and Doug Wong, who's a member of the technical staff, also at Toshiba Memory America. Gentlemen, welcome to the CUBE.

>> Thank you.

>> Here's where I want to start. When you think about where we are today in computing and digital devices, a lot of that has been made possible by new memory technologies, and let me explain what I mean. For a long time, storage was how we persisted data. We wrote transaction data and we kept it there so we could go back and review it if we wanted to. But something happened in the last dozen years or so (it happened before then, but it's really taken off) where we're using semiconductor memory, which allows us to think about how we're going to deliver data to different classes of devices, both the consumer and the enterprise. First off, what do you think about that, and what's Toshiba's association with these semiconductor memories been? Why don't we start with you.

>> So, I appreciate the observation, and I think that you're spot on. Roughly 35 years ago Toshiba had the vision of a non-volatile storage device. So we brought to market, we invented, NOR flash in 1984. And then later the market wanted something that was higher density, so we developed NAND flash technology, which was invented in 1987. That was kind of the genesis of this whole flash revolution that's really been disruptive to the industry as we see it today.

>> So it's interesting: it didn't start off in large data centers. It started off in kind of almost unassuming devices associated with particular classes of applications. What were they?

>> So, it was very disruptive technology. The first application for the flash technology was actually replacing audio tape in the phone answering machine. Then it evolved beyond that into replacing digital film. It kept going, replacing cassette tapes, and if you look at today, it enabled the thin and light that we see with the portability of the notebooks and the laptops, and the mobility of content with our pictures and our videos and our music. And then today, the smartphone: that wouldn't really exist without the flash technology that gives us all of the high-density storage that we see.

>> So this suggests a pretty expansive role for semiconductor-related memory. Give us a little sense of where the technology is today.

>> Well, the technology today is evolving. Originally, floating-gate flash was the primary type of flash that we created. It's called two-dimensional, planar, floating-gate flash, and that existed from the beginning all the way through maybe to 2015 or so. But it was not possible to really shrink flash any further to increase the density.

>> In the 2D form?

>> In the 2D form, exactly. So we had to move to a 3D technology. Now, Toshiba presented the world's first research papers on 3D flash back in 2007, but it was not necessary to actually use 3D technology at that time. When it became difficult to increase the density of flash further, that's when we actually moved to production of our 3D flash memory, which we call BiCS flash. BiCS stands for Bit Cost Scalable, and that's our trade name for our 3D memory.

>> So we're now in 3D memory technology because we're creating more data, and the applications are demanding more data, both for customer experience and for new classes of applications. And when we think about those applications: Toshiba used to have to go to people and tell them how they could use this technology, and now you've got an enormous number of designers coming to you. Doug, what are some of the applications that you're anticipating hearing about that are driving the demand for these technologies?

>> Well, beyond the existing applications, such as personal information appliances like laptops and portables, and also data centers, which is actually a large part of our business as well, we also see emerging technologies as becoming eventual large users of flash memory. Things like autonomous vehicles, or augmented or virtual reality, or even the emerging IoT infrastructure that's necessary to support all these portable devices. These are devices that currently aren't using large amounts of flash but are going to be in the future, especially as the flash memory gets more dense and less expensive.

>> So there's an enormous range of applications on the horizon going to drive greater demand for flash, but there are some business challenges to achieving that demand. We've seen periodic challenges of supply, price volatility. Scott, when we think about Toshiba as a leader in sustaining a good flow of technology into these applications, what is Toshiba doing to continue to satisfy customer demand and sustain that leadership in this flash marketplace?

>> So, first off, as Doug mentioned, the floating-gate technology has reached the limit of its ability to scale in a meaningful way. And the other part of that is the limitation on the die density: the market demand for these applications is asking for higher-density, higher-performance, lower-latency types of applications. And because floating-gate has reached the end of its usefulness in terms of being able to scale, that brought about the 3D. And the 3D, that gives us our higher density, and then, along with the performance, it enables these applications. So from Toshiba's point of view, we are seeing that migration happening today: the floating-gate is migrating over to the 3D. That's not to say that floating-gate demand will go away; there's a lot of applications that require the lower density. But certainly for the higher density, where you need at the die level 256 or 512 gigabits, even up to a terabit of data, that's where the 3D comes into play. Second to that, it really comes down to CapEx. Obviously that requires a significant amount of capital expenditure, not only on the development but also in terms of capacity. And that, of course, is very important to our customers and to the industry as a whole for the assurance of supply.

>> So Toshiba's value to the marketplace is both in creating these new technologies and filling out a product line, but also in stepping up and establishing the capacity, through significant capital investments in a lot of places around the globe, to ensure that the supply is there for the future.

>> Exactly right. You know, Toshiba is the most experienced flash vendor out there, and so we led the industry in terms of the floating-gate technology, and we are technology leaders as the industry's migrating into the 3D. And so, with that, we continue with a significant capital investment to maintain our presence in the industry as a leader.

>> So when we think about leadership, we think about leadership both in consumer markets, because volume is crucial to sustaining these investments and generating returns, but I also want to spend just a second talking about the enterprise as well. What types of enterprise relationships do you guys envision? And what types of applications do you think are going to be made possible by the continued exploitation of flash in some of these big applications that we're building? Doug, what do you think?

>> Well, I think that new types of flash will be necessary for new, emerging applications such as AI or instant recognition of images. So we are working on next-generation flash technology. Historically, flash was designed for lowest cost per bit; that's how flash began to take over the market for storage from hard drives. But there is a class of applications that do require very low latencies. In other words, they want faster performance. So we are working on a new flash technology that actually optimizes performance over cost, and that is actually a new change to the flash memory landscape. And as you alluded to earlier, there's a lot of differentiation in flash now to address specific market segments. So that's what we are working on, actually. Now, generically, these new non-volatile memory technologies are called storage class memories, and they include things like optimized flash, or potentially phase-change memories and resistive memories. All these memories, even though they're slower than, say, the volatile memories such as DRAM and SRAM, are, number one, non-volatile, which means they can learn, and they can store data for the future. So we believe that this class of memory is going to become more important in the future to address things like learning systems and AI.

>> Because you can't learn what you can't remember.

>> Exactly.

>> I heard somebody say that once. In fact, I've got to give credit: that came straight from Doug. So, if we think about looking forward, the challenges that we face are ultimately having the capital structure necessary to build these things, the right relationships with the designers necessary to provide guidance and suggestions about the new classes of applications, and the ability to consistently deliver on this, especially for some of these new applications as we look forward. Do you guys anticipate that there will be, in the next few years, particular moments or particular application forms that are going to kick, or further kick, some of the new designs, some of the new technologies, into higher gear? Is there something, autonomous vehicles or something else, that's just going to catalyze a whole new way of thinking about the role that memory plays in computing and in devices?

>> Well, I think, building off of a lot of the applications that are utilizing NAND technology, what we're going to see now is the enterprise, the data center, really starting to take off and adopt the value proposition of NAND. And as Doug mentioned, when we get into the autonomous vehicle, into AI, or into VR, a lot of applications to come will be utilizing the high density and low latency that flash offers for storage.

>> Excellent. Gentlemen, thanks very much for being on the CUBE. A great conversation about Toshiba's role in semiconductor memory, flash memory, and future leadership as well.

>> Thank you, Peter.

>> Scott Nelson is the Senior Vice President and GM of the memory unit at Toshiba Memory America. Doug Wong is a member of the technical staff at Toshiba Memory America. I'm Peter Burris. Thanks once again for watching the CUBE. (upbeat music)

Published Date : Jan 4 2019



Ravi Pendekanti, Dell EMC and Steve Fingerhut, Toshiba Memory America | Dell Technologies World 2018


 

>> Narrator: Live from Las Vegas, it's theCUBE, covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> Welcome back to the Sands! We continue here live on theCUBE, our coverage here of Dell Technologies World 2018. 14,000 attendees wrapping up day 3. We are live as I said with Stu Miniman. I'm John Walls, and it is now our pleasure to welcome to the set Steve Fingerhut, who is the SVP and GM of SSD and Cloud Software Business Units at Toshiba Memory Americas. Steve, good to see you, sir. >> Great to be here. >> And Ravi Pendekanti, who is the SVP of Server Solutions Product Management and Marketing at Dell. >> Thank you, John. >> Ravi, good to see you, sir. >> Same here, sir. >> Yeah, let's talk about, first off, show theme. Make it real, right? Digital transformation, but make it real. >> Ravi: Yup. >> So, what does it mean to the two of you? We've heard that theme over and over again, and what do you think that means to your customers as well? How do you make it real for them? >> First and foremost, I think the whole idea of new workloads come in play. People talk about machine learning and deep learning as you, I'm sure, are aware of. People talk about analytics. The fact is, each of us is collecting a lot more data than a year ago. Which is good for my friend Steve and others, and obviously, we like the fact that customers are looking at making more real-time, if not near real-time, analysis. And the whole notion of governmental agencies across the world trying to go into more of a digital world where if you look at a country like India, for example, I mean, they have a billion people who are looking at other cards where they didn't have a form of identification for each individuals. 
Now if they're gone through a new transformation phase where they want to ensure that every single one of them actually has a way of identification, and it's all done digitally with accounts and everything else that goes on, this is just some of the manifestations of the digital transformation we see, whether it is in your industries, pick your favorite one, whether it's financial sector, the manufacturing, health care, all the way to governmental agencies. I think each of them are looking at how do they look at providing rights out of services. Either for their customers or their communities at large, and, you know, we can't be more excited about what this provides an opportunity for us to go back and provide a way for them to communicate and do some cool takes. >> Steve? >> Yeah, Ravi, you mentioned the workloads that are driving the new campaign or that you're highlighting in the new campaign Make It Real, and, many of those workloads are, they're new architectures, and they were basically built from day one on SSDs, right? Counting on that performance, reliability, etc. And so obviously, that's what we're here to promote at the show. And you can see the new workloads, obviously anything Cloud very much counts on SSDs and Flash. And then as you get into machine learning, different types of artificial intelligence, those are certainly counting on the performance of SSDs. And keep nothing more real than actual products in hands so with Ravi's products and ours, we have a number of demo's, including the new AMD platforms that the Power Edge team is rolling out, running all of these new workloads on Toshiba SSDs. So it's a good way to make it real. >> Yeah, Steve, maybe bring us in a little bit kind of the state of storage, though. We have talked about SSDs, and we're now a decent way into it. Dell's announcement talking a lot about NVMe. Maybe give us the Toshiba viewpoint on memory and storage and some of those transitions we're going through. 
>> Right, well, I guess the secret's out that SSDs are a great addition. Right? You take pretty much any environment, and you add SSDs, and it will go faster. So it's pretty much the biggest bang for the buck in terms of incremental performance. So what that means is just tremendous growth. And the last couple years have been, really for the industry, about keeping up with that increased demand. So there are inherent efficiencies in the SSDs. We're trying to build as many as we can, and then obviously trying to help our customers use them in the most efficient ways possible. >> Yeah, I agree with Steve. I mean, it is an efficiency equation. The fact of the matter is, you really do need to provide customers with a better way of ensuring that timely information is made available. Again, it's information, and it has to be timely. Because if you really don't provide it at the time when our customers need it, there's really no advantage to having the right infrastructure, right? Or lack of it, for that matter. Case in point, if you look at what we just announced, Stu. Yesterday, we talked about the R840, for example, which is a 4-socket server. And we actually announced it with 44 NVMe drives, believe it or not. That's about two times more than the nearest competitor. That just gives you an idea of the amount of data that customers are consuming on the applications, obviously. And more importantly, when we were coming up with this notion, we felt that 12 was probably a good number. Maybe 24 was going to be a stretch. And the number of customers we have talked to, even in the last two days, I mean, it's been huge. We're hearing them saying, "Wow, we can't wait to go get this product in our hands." That really shows you that there is already a pretty big demand for these kinds of technologies to be brought in. >> Yeah, I like what you were saying there, Ravi, because I'd like both of you to help connect the dots for us a little bit.
'Cause when I think back to, okay, what speed disk did I have? Or was the flash piece in? This was something that was traditionally the server admin's call. Maybe there was some application person that came in. But you're talking about C-level discussions here. The trends that Jeff Clarke talked about in his keynote as to, you know, what the business is driving, things like AI/ML and some of those. Steve, how are the conversations changing to get this piece of the infrastructure up to more of a C-level discussion? >> Right, it certainly is part of the transformation that's been talked about several times this week. IT has moved from being a cost center to a revenue center, and that puts it on the CEO's radar much more squarely. If you're the CIO, CTO, or infrastructure leader, your goal is to try to deliver that agility, right? Don't stand in the way of revenue, while managing security and managing cost. And it's those dynamics and, you know, it's not a new conversation, but it's the public versus private versus hybrid question. What exactly should go where? Those are still top-of-mind for all the customers we're talking to. >> Actually, Steve hit on something else, if I may, which is security. And I can't tell you, Stu, a good 70% of the customers on average today do not finish a conversation in the 30-minute chunks we have had without talking about what is it you guys are going to do for security. And that's a huge increase from where we were just a year or two ago. And having said that, if you really had a longer conversation, security obviously is one of those fundamental pillars that everybody comes down to. Because everybody's worried about data, and the fact that there's leakage of information, if I may, pertaining to this. And more importantly, you know, making it real, if I may, to your point earlier on, John, as well. Which is, customers don't want to look at just the buzzwords.
They're now asking for proof points. Proof points on, "Hey, what does this really mean in terms of security?" For example, when we talk about, you know, secure erase, which is: how do you retire an old data server or a box without worrying about the bits and bytes being left on the disk drives? So we have come up with new technologies which enable all the drives to be wiped. It's made a lot easier, of course, by some of the stuff we do with Toshiba and some of their technologies as well. But my point, again, is that our C-level execs are coming in asking us for, not just the major themes, but they're actually more interested in finding out how and what it is we're doing to help with some of those major themes. And I think the number of requests we have had for some of the white papers we have come out with, Steve, has only grown now. >> Absolutely. >> Which, I don't think was happening in the past from the C-level execs. So it's absolutely a valid statement. >> Yeah, well, there were Senate hearings last year after some pretty famous data breaches, and you had senators grilling CEOs, and it was shocking. There was a senator who used the term "full disk encryption," taking a CEO to task for not using full disk encryption, and so I think that might help with getting on the C-level radar. That helps. >> That was good staff work there. >> Exactly, exactly. That was a good plant. >> Yeah, right. But to the point of security. Obviously with this exponential growth of data, unstructured data blowing up, all of a sudden you become a much riper target, if you will, and you've got a lot more to manage. And so with that, how much more at risk are people, and is that what's raising the awareness now in the C-suite? They realize that they're a much bigger target now than maybe when data wasn't as plentiful, you know, back in the old days, if you will. Is that part of this?
Or is that it? >> I believe that's a big part of it. And one of the other things that obviously goes with this is, if you really look at the disclosures that any of us have to go through, even for a simple credit card application. I don't know if you've ever seen those. As we were doing some of the analysis, we noticed that for a simple credit card application, the security and, you know, personal-information clauses have actually grown by about 120% in terms of the number of things they ask for. And making sure that the consumer is aware as well. Right? I don't think that happened before. And the fact of the matter is, I don't think there's a single day that we can go through any of the trade press without somebody coming out with a security breach, maybe, or a security feature, whether it's hardware or software. And for security encryption on devices or drives, I think there's a huge demand for that as well, right? >> Absolutely. And you talk about the data growth. It's obviously been phenomenal. In his keynote Monday, Michael Dell talked about the data growth from machine to machine, and it's going to make this look like a little bit of data. So like you said, with that risk, the exposure is much larger, and you have to keep that data secure. So as Ravi mentioned, we work closely with Dell. It's not an easy problem to solve, right? So there's a lot of engineering to make sure that you have that end-to-end security, and that's why we work on things like the instant system erase, right? So you can, with one button, erase the system in minutes, versus in the past, it might take hours and days. And do you really trust that it's gone? Those types of things, so I think that those are enabling much more robust security, and you basically have to make it easy, right? >> Letting people sleep at night. >> Exactly. >> That's what you're doing. >> It's interesting.
In the past, the only way you could do that was to write a series of 0's and 1's over the drive. And that would take, you know, hours at a time. That's how you would erase your data, right? I love when you talk about autonomous vehicles. There's a whole discussion there, which is kind of the edge computing Jeff, I think, mentioned on stage yesterday. You don't want latency to come in between making a deterministic turn, right? Or an object appears. You don't want to wait for the braking system to engage because some decision needs to be made in a remote center. Right? Which essentially means now you have data being collected and analyzed and acted upon at the edge. And there are things like that, and you've probably heard all the insurance companies are working on, you know, what kind of data can we collect, because crashes happen, right? How do you make sure that, you know, there are privacy laws in place and whatnot, who has access to it, plenty of stuff. >> John: Sure. >> Steve, want to get your viewpoint. We're not far from the end of the show. Why don't you give, in general, the partner viewpoint of Dell Technologies World, and specifically Toshiba's. I know there's the booth, there's the party, there's demos, there's labs, so a lot of activity your team's doing, for those that haven't been here. And, you know, Toshiba's worked with both legacy Dell and legacy EMC. Any commentary to close on that coming together? >> Right. I think last year, I used the Jordan/Pippen analogy, but it's only gotten better since then. So it's a great partnership. We're definitely growing strong together, and like you said, that doesn't happen overnight. That's years of hard work and trust that makes that a possibility. But I truly believe we're only getting started.
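The contrast Ravi draws above, hours spent overwriting a drive with 0's and 1's versus a wipe that finishes in minutes, is essentially the idea behind cryptographic erase: if data only ever reaches the media encrypted under a key the drive controller holds, "erasing" reduces to discarding that key. The transcript doesn't detail how Toshiba or Dell actually implement instant system erase; the sketch below is a hypothetical toy model, and the SHA-256 keystream stands in for real drive encryption purely for illustration.

```python
import hashlib
import secrets


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream derived from SHA-256 (illustration only, not real crypto).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]


class SelfEncryptingDrive:
    """Toy model of a self-encrypting drive: every block is stored
    encrypted under a media key that never leaves the controller."""

    def __init__(self):
        self._media_key = secrets.token_bytes(32)
        self._blocks = {}  # logical block address -> ciphertext

    def write(self, lba: int, data: bytes):
        ks = keystream(self._media_key, lba.to_bytes(8, "big"), len(data))
        self._blocks[lba] = bytes(a ^ b for a, b in zip(data, ks))

    def read(self, lba: int) -> bytes:
        ct = self._blocks[lba]
        ks = keystream(self._media_key, lba.to_bytes(8, "big"), len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

    def crypto_erase(self):
        # "Instant erase": discard the media key and generate a new one.
        # The old ciphertext is still physically present but unreadable,
        # so the wipe takes seconds instead of hours of overwriting.
        self._media_key = secrets.token_bytes(32)


drive = SelfEncryptingDrive()
drive.write(0, b"customer record")
assert drive.read(0) == b"customer record"
drive.crypto_erase()
assert drive.read(0) != b"customer record"
```

The design point is that erase time no longer scales with drive capacity, which is why a one-button system erase can finish in minutes even across dozens of drives.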
And you know, one of the goals we're working on together is how do we make these important capabilities, like security, more common, more accessible, lower cost, those types of things. So that's a major focus area for us going forward. But we definitely see this as just the beginning. >> Any key highlight from the show or activities that your team's been doing here that you'd like to leave us with? >> Sure. Yeah, we have a significant presence here. We have eight server demos running. I mentioned the AMD servers, multiple workloads across these new emerging workloads. And then the hands-on demo zone, where developers can actually use the systems and software they want to evaluate. They can use them in the cloud. Those are all being driven by Toshiba and, of course, are part of the Dell solution. Yeah, we're happy. Honored to be a big part of the show this year. >> Jordan/Pippen, I was thinking more like Curry/Durant. That's where I was going with that. >> Exactly. That might be a little more up-to-date, right? >> I'm good with Jordan. No, he wasn't bad. Pretty good pair, like you two are. Thanks for joining us both. We appreciate it, Ravi, Steve. >> Thank you. >> Thank you. >> Good seeing you here. Back with more as we continue our live coverage here on theCUBE at Dell Technologies World 2018, and we are in Las Vegas.

Published Date : May 2 2018


Joel Dedrick, Toshiba | CUBEConversation, February 2019


 

(upbeat music) >> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation. >> Hi, I'm Peter Burris, and welcome again to another Cube Conversation from our studios here in beautiful Palo Alto, California. With every Cube Conversation, we want to bring smart people together and talk about something that's relevant and pertinent to the industry. Now, today we are going to be talking about the emergence of new classes of cloud provider, who may not be the absolute biggest, but are nonetheless crucial in the overall ecosystem of how they're going to define new classes of cloud services for an expanding array of enterprise customers who need them. And to have that conversation, and to discuss some of the solutions that class of cloud service provider is going to require, we've got Joel Dedrick with us today. Joel is the Vice President and General Manager of Networked Storage Software, Toshiba Memory America. Joel, welcome to theCube. >> Thanks very much. >> So let's start by, who are you? >> My name's Joel Dedrick. I'm managing a new group at Toshiba Memory America, involved with building software that will help our customers create a cloud infrastructure that's much more like those of the Googles and Amazons of the world, but without the enormous teams that are required if you're building it all yourself. >> Now, Toshiba is normally associated with a lot of hardware. How does software play into this? >> Well, flash is changing rapidly, more rapidly than maybe the average guy on the street realizes, and one way to think about this is that inside an SSD there's a processor that is not too far short of the average Xeon in compute power, and it's busy. So there's a lot more work going on in there than you might think.
We're really bringing that up a level and doing that same sort of management across groups of SSDs, to provide a network storage service that's simple to use and simple to understand, but under the hood, we're pedaling pretty fast, just as we are today in the SSDs. >> So the problem that I articulated up front was the idea that as we see greater specialization in enterprise needs from the cloud, there are going to be greater numbers of different classes of cloud service provider. Whether that be SaaS, or whether that be by location, by different security requirements, whatever else it might be. What is the specific issue that this emerging class of cloud service provider faces as they try to deliver really high quality services to these new, more specialized end users? >> Well, let me first kind of define terms. I mean, cloud service provider can mean many things. In addition to someone who sells infrastructure as a service or platform as a service, we can also think about companies that deliver a service to consumers through their phone, and have a data center backing that, because of the special requirements of those applications. So we're serving that panoply of customers. They face a couple of issues that are a result of the trajectory of flash and storage of late. And one of those is that we as flash manufacturers have an innovator's dilemma, that's a term we use here in the valley that I think most people will know. Our products are too good; they're too big, they're too fast, they're too expensive, to be a good match to a single compute node. And so you want to share them. And so the game here is: can we find a way to share this really performant, million-IOPS drive across multiple computers without losing that performance? So that's sort of step one: how do we share this precious resource? Behind that is an even bigger one, that takes a little longer to explain.
And that is: how do we optimize the use of all the resources in the data center, in the same way that the Googles and Amazons do, by moving work around between machines in a very fluid and very rapid way? To do that, you have to have the storage visible from everywhere, and you have to be able to run any instance anywhere. That's a tall order, and we don't solve the whole problem, but we're a necessary step. And the step we provide is: we'll take the storage out of the individual compute nodes and serve it back to you over your network, but we won't lose the performance that you're used to having with it locally attached. >> Okay, so let's talk about the technical elements required to do this. Describe it from the SSD, from the flash node, up. I presume it's NVMe? >> Mm hm. So, NVMe. I'm not sure all of our listeners today really know how big a deal that is. There have been two block storage command sets, sets of fundamental commands that you give to a block storage device, in my professional lifetime. SCSI was invented in 1986, back when high performance storage was two hard drives attached to the ribbon cable in your PC. And it's lasted up until now; if you go to a random data center and take a random storage wire, it's still going to be transporting the SCSI command set. NVMe, what, came out in 2012? So, 25 years later, the first genuinely new command set. There's an alphabet soup of transports. The interfaces and formats that you can use to transport SCSI around would fill pages, and we would sort of tune them out, and we should. We're now embarking on that same journey again, except with a command set that's ideal for flash. We've given up on, or left behind, the need to be backward compatible with hard disks. And we said: let's build a command set and interface that's optimal for this new medium, and then let's transport that around.
NVMe over Fabrics is the first transport for the NVMe command set, and so what we're doing is building software that allows you to take a conventional x86 compute node with a lot of NVMe drives, wrap our software around it, and present it out to your compute infrastructure, making it look like locally attached SSDs, at the same performance as locally attached SSDs, which is the big trick, but now you get to share them optimally. We do a lot of optimal things inside the box, but they ultimately don't matter to customers. What customers see is: I get to have the exact size and performance of flash that I need at every node, for exactly the time I need it. >> So I'm a CTO at one of these emerging cloud companies. I know that I'm not going to be adding a million machines a year; maybe I'm only going to be adding 10,000, maybe 50,000 or 100,000. So I can't afford the engineering staff required to build my own soup-to-nuts set of software. >> You can't roll it all yourself. >> Okay, so, how does this fit into that? >> This is the assembly kit for the lowest layer of that. We take the problem of turning raw SSDs into a block storage service and solve it for you. We have a very sharp line there. We aren't trying to be a filer, and we're not trying to be EMC here. It's a very simple, but fast and rugged, storage service box. It interfaces to your provisioning system, to your orchestration system, to your telemetry systems, and no two of those are alike. So there's a fair amount of customization still involved, but we stand ready to do that. You can Tinkertoy this together yourself. >> Toshiba. >> Yeah, Toshiba does, yes. So, that's the problem we're solving: we're enabling the optimum use of flash, and maybe subtly, but more importantly in the end, we're allowing you to disaggregate it, so that you no longer have storage pinned to a compute node, and that enables a lot of other things that we've talked about in the past.
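The sharing problem Dedrick describes, one very large, very fast SSD carved into slices for many compute nodes and handed back to the pool when work moves, can be sketched as a toy allocator. Nothing here reflects Toshiba's actual software; the class, the node names, and the capacity/IOPS figures are all illustrative assumptions.

```python
class SharedFlashPool:
    """Toy allocator for one disaggregated SSD: carve its capacity and
    IOPS budget into per-node slices, and refuse to oversubscribe."""

    def __init__(self, capacity_gb: int, iops: int):
        self.free_gb = capacity_gb
        self.free_iops = iops
        self.slices = {}  # node name -> (gb, iops)

    def allocate(self, node: str, gb: int, iops: int) -> bool:
        if gb > self.free_gb or iops > self.free_iops:
            return False  # would oversubscribe the drive
        self.free_gb -= gb
        self.free_iops -= iops
        self.slices[node] = (gb, iops)
        return True

    def release(self, node: str):
        # When an instance moves or dies, its slice returns to the pool.
        gb, iops = self.slices.pop(node)
        self.free_gb += gb
        self.free_iops += iops


# One big drive shared by many nodes instead of stranded in one server.
pool = SharedFlashPool(capacity_gb=15_360, iops=1_000_000)
assert pool.allocate("node-1", gb=2_000, iops=200_000)
assert pool.allocate("node-2", gb=2_000, iops=200_000)
assert not pool.allocate("node-3", gb=20_000, iops=100_000)  # too big
pool.release("node-1")  # capacity comes back when the work moves
```

The real system obviously has to do this across a network at local-drive latency, which is the hard part the interview keeps returning to; the sketch only shows why sharing beats pinning a too-big, too-fast drive to a single compute node.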
>> Well, that's a big feature of the cloud operating model: the idea that any application can address any resource and any resource can address any application, and you don't end up with dramatic or significant barriers in the infrastructure in how you provision and operate those instances. >> Absolutely. The example that we see all the time, with the service providers that are providing some service through your phone, is that they all have a time-of-day rush, or a Christmas rush, some sort of peak to their workloads, and how do they handle the demand peaks? Well today, they buy enough compute hardware to handle the peak, and the rest of the year it sits idle. And this can be 300% pretty easily; you can imagine the traffic to a shopping site on Black Friday versus the rest of the year. If the customer gets frustrated and goes away, they don't come back. So you have data centers worth of machines doing nothing. And then over on the other side of the house you have the machine learning crew, who could use infinite compute resource, but they don't have a time demand; it just runs 24/7. And they can't get enough machines, and they're arguing for more budget, and yet we have hundreds of thousands of machines doing nothing. I mean, that's a pretty big piece of bait right there. >> Which is to say that the ML guys can't use the retail resources and the retail guys can't use the ML resources, and what we're trying to do is make it easier for both sides to utilize the resources that are available on both sides. >> Exactly so, exactly so, and one of the things that requires is that any given instance's storage can't be pinned to some compute node. Otherwise you can't move that instance. It has to be visible from anywhere. There are some other things that need to work in order to move instances around your data center under load, but this is a key one, and it's a tough one.
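The waste in the Black Friday example falls out of simple arithmetic. A quick sketch: the 300% peak figure comes from the conversation, while the size of the ML backlog is an assumption added purely for illustration.

```python
# A retail fleet sized for a peak that is ~300% of average load.
peak_load = 3.0   # peak demand, as a multiple of average load
avg_load = 1.0
fleet = peak_load  # machines bought to cover the peak

# Off-peak, most of that fleet sits idle.
idle_fraction = (fleet - avg_load) / fleet
assert round(idle_fraction, 2) == 0.67  # roughly two-thirds idle

# If ML batch work (no time demand, runs 24/7) can soak up the idle
# capacity -- possible once storage is disaggregated and instances can
# run anywhere -- utilization rises toward full:
ml_backlog = 2.0  # assumed units of ML work always waiting
utilization = min(fleet, avg_load + ml_backlog) / fleet
assert utilization == 1.0
```

That two-thirds-idle figure is the "data centers worth of machines doing nothing" in the conversation; the second half of the sketch is why unpinning storage from compute nodes turns that idle capacity into usable ML cycles.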
And it's one where solving it without ruining performance is the hard part. Network storage isn't a new thing; that's been goin' on for a long time. Network storage at the performance of a locally mounted NVMe drive is a tough trick. And that's the new thing here. >> But it's also a toolkit, so that what appears to be a locally mounted NVMe drive, even though it may be remote, can also be oriented into other classes of services. >> Yes. >> So how does this work with, for example, Kubernetes clusters: stateless, but still having storage that's really fast, still really high performing, very reliable, very secure. How do you foresee this technology supporting, and even catalyzing changes to, Kubernetes and that class of container workloads? >> Sure, so for one, we implement the interface to Kubernetes. And Kubernetes is a rapidly moving target. I love their approach. They have a very fast version clock; every month or two there's a new version. And their support attitude is: if you're not within the last version or two, don't call. You know, keep up. And that's sort of not the way the storage world has worked. So our commitment is to connect to that, and make that connection stay put as you follow a moving target. But then, where this is really going is the need for really rapid provisioning. In other words, it's not the model of the IT guy sitting at a keyboard, attaching a disk to a stack of machines that's running some application, and coming back in six months to see if it's still okay, as we move from containerized services to serverless kinds of ideas. In the serverless world, the average lifespan of an application is 20 seconds. So we'd better spool it up, load the code, get its state, run, and kill it pretty quickly, millions of times a minute. And so, you need to be light of foot to do that. So we've poured a lot of energy behind the scenes into making software that can handle that sort of a dynamic environment.
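The 20-second lifespan Dedrick quotes implies the storage layer must sustain enormous attach/detach churn. A back-of-the-envelope sketch follows; the concurrency figure is an assumed workload size for illustration, not a number from the interview.

```python
# Steady-state churn for short-lived serverless functions, each of which
# needs its storage attached at start and detached at exit.
lifespan_s = 20              # average function lifetime, from the interview
concurrent_functions = 10_000  # assumed steady-state concurrency

# In steady state, (concurrent / lifespan) functions start each second,
# and the same number exit each second.
starts_per_second = concurrent_functions / lifespan_s
ops_per_minute = 2 * starts_per_second * 60  # one attach + one detach each

assert starts_per_second == 500
assert ops_per_minute == 60_000
```

Sixty thousand control-plane storage operations a minute, from a fairly modest assumed fleet, is why the interview contrasts this with an IT admin attaching a disk by hand and checking back in six months: provisioning has to be automatic and sub-second to keep up.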
So how does this, the resource that allows you to present a distant NVMe drive as if it were mounted locally, how does that catalyze other classes of workloads? Or how does that catalyze new classes of workloads? You mentioned ML; are there other workloads that you see on the horizon that will turn into services from this new class of cloud provider? >> Well, I think one big one is the serverless notion. And to digress on that a little bit: we went from the classic enterprise, where the assignment of work to machines lasts for the life of the machine. That group of machines belongs to engineering, those are accounting machines, and so on. And no IT guy in his right mind would think of running engineering code on the accounting machine, or whatever. In the cloud we don't have a permanent assignment anymore. You rent a machine for a while, and then you give it back. But the user's still responsible for figuring out how many machines or VMs he needs, how much storage he needs, and doing the calculation and provisioning all of that. In the serverless world, the user gives up all of that and says: here's the set of calculations I want to do; trigger it when this happens, and you, Mr. Cloud Provider, figure out whether this needs to be sharded out 500 ways or 200 ways to meet my performance requirements. And as soon as these are done, turn 'em back off again, on a timescale of tenths of seconds. And so, what we're enabling is further movement in the direction of taking the responsibility for provisioning and scaling out of the user's hands and making it automatic. So we let users focus on what they want to do, not how to get it done. >> This really is not just an efficiency play, when you come right down to it. This is really changing the operating model, so new classes of work can be performed, so that the overall infrastructure becomes more effective and matches the business needs better. >> It's really both.
There's a tremendous efficiency gain, as we talked about with the ML versus the marketplace. But there are also things you just can't do without an infrastructure that works this way, and so there's an aspect of efficiency and an aspect of, man, this is just something we have to do to get to the next level of the cloud. >> Excellent. So do you anticipate this portends some changes to Toshiba's relationships with different classes of suppliers? >> I really don't. Toshiba Memory Corporation is a major supplier of both flash and SSDs to basically every class of storage customer, and that's not going to change. They are our best friends, and we're not out to compete with them. We're serving a really unmet need right now. We're serving a relatively small group of customers who are cloud first, cloud always. They want to operate in the cloud style. But they really can't, as you said earlier, invent it all soup to nuts with their own engineering; they need some pieces to come from outside. And we're just trying to fill that gap. That's the goal here. >> Got it. Joel Dedrick, Vice President and General Manager, Networked Storage Software, Toshiba Memory America. Thanks very much for being on theCube. >> My pleasure, thanks. >> Once again, this is Peter Burris. It's been another Cube Conversation; until next time.

Published Date : Feb 28 2019

