Joel Dedrick, Toshiba | CUBEConversation, February 2019
(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation. >> Hi, I'm Peter Burris, and welcome again, to another Cube Conversation from our studios here in beautiful Palo Alto, California. With every Cube Conversation, we want to bring smart people together, and talk about something that's relevant and pertinent to the industry. Now, today we are going to be talking about the emergence of new classes of cloud provider, who may not be the absolute biggest, but who are nonetheless crucial in the overall ecosystem, and how they're going to define new classes of cloud services for an expanding array of enterprise customers who need them. And to have that conversation, about some of the solutions that class of cloud service provider is going to require, we've got Joel Dedrick with us today. Joel is the Vice President and General Manager of Networks Storage Software, Toshiba Memory America. Joel, welcome to theCube. >> Thanks, very much. >> So let's start by, who are you? >> My name's Joel Dedrick, I'm managing a new group at Toshiba Memory America, involved with building software that will help our customers create a cloud infrastructure that's much more like those of the Googles and Amazons of the world, but without the enormous teams that are required if you're building it all yourself. >> Now, Toshiba is normally associated with a lot of hardware. How does software play into this? >> Well, flash is changing rapidly, more rapidly than maybe the average guy on the street realizes, and one way to think about this is that inside of an SSD there's a processor that is not too far short of the average Xeon in compute power, and it's busy. So there's a lot more work going on in there than you might think.
We're really bringing that up a level and doing that same sort of management across groups of SSDs to provide a network storage service that's simple to use and simple to understand, but under the hood, we're pedaling pretty fast, just as we are today in the SSDs. >> So the problem that I articulated up front was the idea that, as we get greater specialization in enterprise needs from the cloud, there's going to be greater numbers of different classes of cloud service provider. Whether that be SaaS, or whether that be by location, by different security requirements, whatever else it might be. What is the specific issue that this emerging class of cloud service provider faces as they try to deliver really high quality services to these new, more specialized end users? >> Well, let me first kind of define terms. I mean, cloud service provider can mean many things. In addition to someone who sells infrastructure as a service or platform as a service, we can also think about companies that deliver a service to consumers through their phone, and have a data center backing that, because of the special requirements of those applications. So we're serving that panoply of customers. They face a couple of issues that are a result of the trajectory of flash and storage of late. And one of those is that we as flash manufacturers have an innovator's dilemma, that's a term we use here in the valley that I think most people will know. Our products are too good, they're too big, they're too fast, they're too expensive, to be a good match to a single compute node. And so you want to share them. And so the game here is, can we find a way to share this really performant, you know, this million-IOPS dragon, across multiple computers without losing that performance? So that's sort of step one, is how do we share this precious resource. Behind that is an even bigger one, that takes a little longer to explain.
And that is, how do we optimize the use of all the resources in the data center in the same way that the Googles and Amazons do, by moving work around between machines in a very fluid and very rapid way. To do that, you have to have the storage visible from everywhere, and you have to be able to run any instance anywhere. That's a tall order, and we don't solve the whole problem, but we're a necessary step. And the step we provide is, we'll take the storage out of the individual compute nodes and serve it back to you over your network, but we won't lose the performance that you're used to from having it locally attached. >> Okay, so let's talk about the technical elements required to do this. Describe it from the SSD, from the flash node, up. I presume it's NVMe? >> Um hm, so, NVMe, I'm not sure if all of our listeners today really know how big a deal that is. There have been two block storage command sets, sets of fundamental commands that you give to a block storage device, in my professional lifetime. SCSI was invented in 1986, back when high performance storage was two hard drives attached to a ribbon cable in your PC. And it's lasted up until now, and it's still, if you go to a random data center and take a random storage wire, it's going to be transporting the SCSI command set. NVMe, what, came out in 2012? So 25 years later, the first genuinely new command set. There's an alphabet soup of transports. The interfaces and formats that you can use to transport SCSI around would fill pages, and we would sort of tune them out, and we should. We're now embarking on that same journey again, except with a command set that's ideal for flash. And we've sort of given up on, or left behind, the need to be backward compatible with hard discs. And we said, let's build a command set and interface that's optimum for this new medium, and then let's transport that around.
NVMe over Fabrics is the first transport for the NVMe command set, and so what we're doing is building software that allows you to take a conventional x86 compute node with a lot of NVMe drives, wrap our software around it, and present it out to your compute infrastructure, and make it look like locally attached SSDs, at the same performance as locally attached SSDs, which is the big trick, but now you get to share them optimally. We do a lot of optimal things inside the box, but they ultimately don't matter to customers. What customers see is, I get to have the exact size and performance of flash that I need at every node, for exactly the time I need it. >> So I'm a CTO at one of these emerging cloud companies. I know that I'm not going to be adding a million machines a year, maybe I'm only going to be adding 10,000, maybe I'm only adding 50,000, 100,000. So I can't afford the engineering staff required to build my own soup-to-nuts set of software. >> You can't roll it all yourself. >> Okay, so, how does this fit into that? >> This is the assembly kit for the lowest layer of that. We take the problem of turning raw SSDs into a block storage service and solve it for you. We have a very sharp line there. We aren't trying to be a filer, we're not trying to be EMC here. It's a very simple, but fast and rugged, storage service box. It interfaces to your provisioning system, to your orchestration system, to your telemetry systems, and no two of those are alike. So there's a fair amount of customization still involved, but we stand ready to do that. You can Tinker Toy this together yourself. >> Or Toshiba can. >> Yeah, Toshiba does, yes. So, that's the problem we're solving. We're enabling the optimum use of flash, and maybe subtly, but more importantly in the end, we're allowing you to disaggregate it, so that you no longer have storage pinned to a compute node, and that enables a lot of other things that we've talked about in the past.
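The end state Joel describes, exactly-sized volumes carved on demand out of one shared box of NVMe drives and returned to the pool when done, can be sketched as a simple allocator. The class and method names below are hypothetical, for illustration only; this is not Toshiba's actual software, just the provisioning idea in miniature.

```python
# Hypothetical sketch of disaggregated flash provisioning: a shared pool of
# NVMe capacity is carved into exactly-sized volumes for compute nodes on
# demand, and capacity returns to the pool when a volume is released.
class FlashPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb      # total raw capacity in the box
        self.allocated = {}                 # volume name -> size in GB

    def free_gb(self):
        return self.capacity_gb - sum(self.allocated.values())

    def provision(self, name, size_gb):
        """Carve out a volume of exactly the requested size."""
        if size_gb > self.free_gb():
            raise RuntimeError("pool exhausted")
        self.allocated[name] = size_gb
        return name

    def release(self, name):
        """Return a volume's capacity to the shared pool."""
        del self.allocated[name]

pool = FlashPool(capacity_gb=16_000)     # one shared 16 TB box
pool.provision("node-a/scratch", 500)    # each node gets exactly what it needs
pool.provision("node-b/db", 2_000)
print(pool.free_gb())                    # 13500
pool.release("node-a/scratch")           # no capacity is left stranded
print(pool.free_gb())                    # 14000
```

The point of the sketch is the contract, not the mechanics: the consumer asks for a size, gets exactly that size, and gives it back, while the performance question (making the remote volume feel locally attached) is handled entirely below this interface.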
>> Well, that's a big feature of the cloud operating model, the idea that any application can address any resource and any resource can address any application, and you don't end up with dramatic or significant barriers in the infrastructure as to how you provision those instances and operate those instances. >> Absolutely. The example that we see all the time, in the service providers that are providing some service through your phone, is they all have a time-of-day rush, or a Christmas rush, some sort of peaks to their workloads, and how do they handle the peaks, how do they handle the demand peaks? Well, today, they buy enough compute hardware to handle the peak, and the rest of the year it sits idle. And this can be 300% pretty easily, and you can imagine the traffic to a shopping site on Black Friday versus the rest of the year. If the customer gets frustrated and goes away, they don't come back. So you have data centers' worth of machines doing nothing. And then over on the other side of the house you have the machine learning crew, who could use infinite compute resource, but they don't have a time demand, it just runs 24/7. And they can't get enough machines, and they're arguing for more budget, and yet we have hundreds of thousands of machines doing nothing. I mean, that's a pretty big piece of bait right there. >> Which is to say that the ML guys can't use the retail guys' resources, and the retail guys can't use the ML's, and what we're trying to do is make it easier for both sides to be able to utilize the resources that are available on both sides. >> Exactly so, exactly so, and one of the things that requires is that any given instance's storage can't be pinned to some compute node. Otherwise you can't move that instance. It has to be visible from anywhere. There's some other things that need to work in order to move instances around your data center under load, but this is a key one, and it's a tough one.
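The stranded-capacity arithmetic here is worth making concrete. A small sketch, using Joel's 300% peak figure and an assumed baseline fleet of 1,000 machines; the baseline number is illustrative, not from the interview.

```python
# Illustrative arithmetic for a peak-provisioned fleet. A retail service
# sized for a 300% holiday peak leaves most machines idle off-peak; if
# instances can move (storage not pinned to nodes), batch work such as ML
# training can absorb that slack instead of asking for a separate budget.
baseline = 1_000                # machines needed for everyday retail load
peak_factor = 3.0               # "this can be 300% pretty easily"

retail_fleet = int(baseline * peak_factor)   # sized for Black Friday
idle_off_peak = retail_fleet - baseline      # idle most of the year

print(retail_fleet)             # 3000 machines purchased
print(idle_off_peak)            # 2000 of them stranded off-peak

# With disaggregated storage, those 2,000 machines are candidates for the
# ML crew's 24/7 work whenever the retail side is off-peak.
ml_machines_gained = idle_off_peak
```

Two-thirds of the fleet sitting idle is exactly the "data centers' worth of machines doing nothing" in the conversation; the sharing only becomes possible once the storage is visible from anywhere.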
And it's one that, to solve without ruining performance, is the hard part. Network storage isn't a new thing, that's been goin' on for a long time. Network storage at the performance of a locally mounted NVMe drive is a tough trick. And that's the new thing here. >> But it's also a tool kit, so that what appears to be a locally mounted NVMe drive, even though it may be remote, can also be oriented into other classes of services. >> Yes. >> So how does this, for example, I'm thinking of Kubernetes clusters, stateless, but still having storage that's really fast, still really high performin', very reliable, very secure. How do you foresee this technology supporting and even catalyzing changes to Kubernetes, that Docker class of container workloads? >> Sure, so for one, we implement the interface to Kubernetes. And Kubernetes is a rapidly moving target. I love their approach. They have a very fast version clock. Every month or two there's a new version. And their support attitude is, if you're not within the last version or two, don't call. You know, keep up. And that's sort of not the way the storage world has worked. So our commitment is to connect to that, and make that connection stay put, as you follow a moving target. But then, where this is really going is the need for really rapid provisioning. In other words, it's not the model of the IT guy sitting at a keyboard, attaching a disc to a stack of machines that's running some application, and coming back in six months to see if it's still okay. As we move from containerized services to serverless kinds of ideas, in the serverless world, the average lifespan of an application is 20 seconds. So we better spool it up, load the code, give it its state, run, and kill it pretty quickly, millions of times a minute. And so, you need to be light of foot to do that. So we've poured a lot of energy, behind the scenes, into making software that can handle that sort of a dynamic environment.
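At a 20-second average application lifespan, attach and release become part of the hot path rather than an IT task. A toy model of that spool-up, run, tear-down loop, with entirely hypothetical names, showing that pooled capacity is only held while a function actually runs:

```python
# Toy model of serverless storage churn: each invocation attaches a volume,
# runs for seconds, and releases it, so capacity is held only while code runs.
class TinyPool:
    def __init__(self, capacity_gb):
        self.free_gb = capacity_gb

    def attach(self, size_gb):
        assert size_gb <= self.free_gb, "pool exhausted"
        self.free_gb -= size_gb
        return size_gb

    def release(self, size_gb):
        self.free_gb += size_gb

def invoke(pool, event, size_gb=1):
    held = pool.attach(size_gb)     # spool it up: attach storage
    result = f"handled {event}"     # load the code, give it state, run (~20 s)
    pool.release(held)              # kill it quickly; capacity returns at once
    return result

pool = TinyPool(capacity_gb=100)
results = [invoke(pool, f"event-{i}") for i in range(1000)]  # heavy churn
print(len(results), pool.free_gb)   # 1000 100  (nothing leaked)
```

A thousand invocations fit through a 100 GB pool because nothing stays attached; that is the "light of foot" property, and it only works if attach and release cost milliseconds, not a ticket to the IT guy.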
>> So how does this, the resource that allows you to present a distant NVMe drive as if it were mounted locally, how does that catalyze other classes of workloads? Or how does that catalyze new classes of workloads? You mentioned ML, are there other workloads that you see on the horizon that will turn into services from this new class of cloud provider? >> Well, I think one big one is the serverless notion. And to digress on that a little bit. You know, we went from the classic enterprise, where the assignment of work to machines lasts for the life of the machine. That group of machines belongs to engineering, those are accounting machines, and so on. And no IT guy in his right mind would think of running engineering code on the accounting machine, or whatever. In the cloud we don't have a permanent assignment there anymore. You rent a machine for a while, and then you give it back. But the user's still responsible for figuring out how many machines or VMs he needs, how much storage he needs, and doing the calculation, and provisioning all of that. In the serverless world, the user gives up all of that, and says, here's the set of calculations I want to do, trigger it when this happens, and you, Mr. Cloud Provider, figure out does this need to be sharded out 500 ways or 200 ways to meet my performance requirements. And as soon as these are done, turn 'em back off again, on a timescale of tenths of seconds. And so, what we're enabling is the further movement in the direction of taking the responsibility for provisioning and scaling out of the user's hands and making it automatic. So we let users focus on what they want to do, not how to get it done. >> This really is not an efficiency play, when you come right down to it. This is really changing the operating model, so new classes of work can be performed, so that the overall computing infrastructure becomes more effective and matches the business needs better. >> It's really both.
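The provider-side decision Joel describes, shard out 200 ways or 500 ways from a stated performance requirement, reduces to a one-line sizing rule once the provider knows what one shard can deliver. The throughput numbers below are made up for illustration; only the 200/500 shard counts come from the conversation.

```python
import math

# Illustrative sizing rule: given the user's stated performance requirement
# and a measured per-shard throughput, the provider picks the shard count
# automatically instead of making the user provision machines.
def shards_needed(required_ops_per_sec, per_shard_ops_per_sec):
    return math.ceil(required_ops_per_sec / per_shard_ops_per_sec)

print(shards_needed(1_000_000, 5_000))   # 200 ways
print(shards_needed(2_500_000, 5_000))   # 500 ways
```

The calculation is trivial; the hard part, which the interview is about, is being able to act on the answer in tenths of seconds because storage attach is no longer the bottleneck.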
There's a tremendous efficiency gain, as we talked about with the ML versus the marketplace. But there are also things you just can't do without an infrastructure that works this way, and so there's an aspect of efficiency and an aspect of, man, this is just something we have to do to get to the next level of the cloud. >> Excellent. So do you anticipate that this portends some changes to Toshiba's relationships with different classes of suppliers? >> I really don't. Toshiba Memory Corporation is a major supplier of both flash and SSDs to basically every class of storage customer, and that's not going to change. They are our best friends, and we're not out to compete with them. We're serving a really unmet need right now. We're serving a relatively small group of customers who are cloud first, cloud always. They want to operate in the sort of cloud style. But they really can't, as you said earlier, they can't invent it all soup to nuts with their own engineering, they need some pieces to come from outside. And we're just trying to fill that gap. That's the goal here. >> Got it. Joel Dedrick, Vice President and General Manager, Networks Storage Software, Toshiba Memory America. Thanks very much for being on theCube. >> My pleasure, thanks. >> Once again, this is Peter Burris, and it's been another Cube Conversation, until next time.
Brian Kumagai & Scott Beekman, Toshiba Memory America | CUBE Conversation, December 2018
>> Welcome to another Cube Conversation from the Cube Studios in Palo Alto, California. In this conversation, we're going to build upon some other recent conversations we've had which explore this increasingly important relationship between semiconductor memory, or flash, and new classes of applications that are really making life easier and changing the way that human beings interact with each other, both in business as well as in consumer domains. And to explore these crucial issues, we've got two great guests. Brian Kumagai is the director of business development at Toshiba Memory America. Scott Beekman is the director of managed flash, at Toshiba Memory America as well. Gentlemen, welcome to the Cube. So I'm gonna give you my perspective, and I think this is pretty broadly held, generally, which is that as a technology gets more broadly adopted, people get experience with it. And as designers, developers, and users gain experience with a technology, they start to apply their own creativity, and it starts to morph and change and pull and stretch the technology in a lot of different directions. And that leads to increased specialization. That's happening in the flash world. Am I right there, Scott? >> Yes, you know, the great thing about flash is just how ubiquitous it is and how widely it's used. And if you think about any electronic device, it needs a brain, a processor, and it needs to remember what it's doing, memory, and memory is what we do. And so we see it used in, you know, so many applications, from smartphones, tablets, printers, laptops, you know, streaming media devices. And so, you know, that technology we see used, for example, like eMMC memory. It's a low power memory that's designed for, like, smartphones that aren't plugged in.
And, uh, and so when you see smartphones, one point five billion smartphones, it drives that technology, and then it migrates into all kinds of other applications as well. And then we see new technologies that come and replace that, like UFS, Universal Flash Storage. It's intended to be the high performance replacement for eMMC. And so now that's also migrating its way through smartphones and all these other applications. >> So there's a lot of new applications that are requiring new classes of flash. But there's still a fair amount of applications that require traditional flash technology. These aren't coming in and squashing old flash, or traditional flash, or other types of parts, but amplifying their use in specialized ways. Right, Brian? Talk about that. >> So it's interesting that these days no one really talks about the original NAND flash that was developed back in 1987, and that was based on a single bit per cell, or SLC, technology, which today still offers the highest reliability and fastest performing NAND device available in the market. And because of that, designers have found this type of memory to work well for storing boot code and some levels of operating system code. And these are in a wide variety of devices, both in consumer and industrial segments. Anything from set top boxes, connected streaming video, you've got your printers, your AI speakers, just a numerous breadth of products. >> I gotta also believe a lot of IoT, a lot of industrial edge devices, are going to feature a lot of these kinds of parts. They may be disconnected, maybe connected, but they need low power, very high speed, low cost, highly reliable. >> That's correct. And because these particular devices are still offered in lower densities, they do offer very cost effective solutions for designers today. >> Okay, well, let's start with one of the applications that is very, very popular in the press.
With automated driving, autonomous vehicles are in the works, and it's not just autonomous vehicles, there's autonomous robots more broadly. But let's start with autonomous vehicles, Scott. What types of flash based technologies are ending up in cars, and why? >> Okay, so we've seen a lot of changes within vehicles over the last few years. You know, increasing storage requirements for, like, infotainment systems, more sophisticated navigation, voice recognition, instrument clusters moving to digital displays, and then ADAS features, you know, collision avoidance, things like that. And all that's driving more and more memory storage and faster performance memory. And in particular, what we've seen for automotive is it's basically adopting the type of memory that you have in your smartphone. So smartphones have for a long time used eMMC memory, and that has made its way into automotive. And now, as smartphones have been transitioning to UFS, in fact, Toshiba was the first to introduce samples of UFS in early 2013, and then you started to see it in smartphones in 2015, well, that's now migrating into automotive as well, to take advantage of the higher performance, the higher densities. And so Toshiba, we're supporting, you know, this growth within automotive as well. >> But automotive is a market, and again, I think it's a great distinction you made, that's just not autonomous. Even when the human being is still driving, it's the class of services provided to that driver, both from an entertainment and a safety and overall experience standpoint, that is driving it very aggressively forward. That volume, and the ability to demonstrate what you can do in a car, is having significant implications on the other classes of applications that we think of for some of these high end parts.
How is the experience that we're incorporating into an automotive application, or set of applications, starting to impact how others envision how their consumer products can be made better, with a better experience, safer, etcetera, in other domains? >> Uh, well, yeah, I mean, we see that all kinds of applications are taking advantage of these technologies, like even AR and VR, for example. Again, it's all taking advantage of this idea of needing larger densities of storage at a lower cost, with low power and good performance, and all these applications are taking advantage of that, including automotive. And if you look at automotive, you know, it's not just within the vehicle. Actually, it's projected that autonomous vehicles will need, like, one, two, three terabytes of storage within the vehicle. But then all the data that's collected from cameras and sensors needs to be uploaded to the cloud, and all that needs to be stored. So that's driving storage to data centers, because you basically need to learn from that to improve the software. >> For the time being. >> Yeah, exactly. So all these things are driving more and more storage, both within the devices themselves, like a car is like a device, but also in the data centers as well. >> So Brian, take us through some of the decisions that a designer has to go through to start to marry some of these different memory technologies together to create, whether it's an autonomous car, or perhaps something a little bit more mundane, maybe a computing device. How does a designer think about how these fit together to serve the needs of the user and the application? >> Um, I think
You know, I think software guys are always wanting to have more storage, to write more code, that sort of thing. So I think that is one lt's step that they think about the size of the package and then cost is always a factor as well. So you know nothing about the Sheba's. We do offer a broad product breath that producing all types of I'm not about to memory that'll fit everyone's needs. >> So give us some examples of what that product looks like and how it maps to some of these animation needs. >> So we like unmentioned we offered the lower density SLC man that's thought that a one gigabit density and then it max about maximum thirty to get bit dying. And as you get into more multi level cell or triple level cell or cue Elsie type devices, you're been able to use memory that's up to a single diet could be upto one point three three terror bits. So there's such a huge range of memory devices available >> today. And so if we think about where the memories devices are today and we're applications or pulling us, what kind of stuff is on the horizon scarred? >> Well, one is just more and more storage for smartphones. We want more, you know, two fifty six gigabyte fight told Gigabyte, one terabyte and and in particular for a lot of these mobile devices. You know, like convention You f s is really where things were going and continuing to advance that technology continuing to increase their performance, continuing to increase the densities. And so, you know, and that enables a lot of applications that we actually a hardman vision at this point. And when we know autonomous vehicles are important, I'm really excited about that because I'm in need that when I'm ninety, you know can drive anywhere. I want everyone to go, but and then I I you know where I's going, so it's a lot of things. So you know, we have some idea now, but there's things that we can't envision, and this technology enables that and enables other people who can see how do I take advantage of that? 
The faster performance, the greater densities, the lower cost per bit. >> So if we think about, uh, general compute, especially some of these use cases we're talking about, where the customer experience is a function of how fast a device starts up, or how fast the service starts up, or how rich the service can be in terms of different classes of input, voice or visual or whatever else it might be. And we think about these data centers, where the closed loop between the processing and the training of some of these models, and how it affects what that transaction's going to do, demands lower latency. That's driving a lot of designers to think about how they can start moving certain classes of function closer to the memory, both from a security standpoint and from an error correction standpoint. Talk to us a little bit about the direction that Toshiba imagines, the differentiability of future memories relative to memories today, relative to where they've been. What kinds of features and functions are being added to some of these parts to make them that much more robust in some of these applications? >> I think, as you mentioned, the robustness of the memory itself. I think that actually some current memory devices will allow you to identify the number of bits that are being corrected, and that kind of gives an indication of the integrity or the reliability of a particular block of memory. And as users are able to get early detection of this, they can do things to move the data around, and then make their overall storage more reliable. >> Scott? >> Yeah. I mean, we continue to figure out how to cram more bits within a given space, you know, moving from SLC to MLC, to TLC, and now QLC. That's all enabling greater storage, lower cost, and then, as we just talked about from the beginning.
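Brian's early-detection idea, watching the corrected-bit counts a device reports and migrating data off blocks that are degrading before they fail, can be sketched as a simple threshold policy. The threshold value and telemetry format here are hypothetical, chosen for illustration, and are not from any real device's specification.

```python
# Hypothetical sketch of the early-warning scheme described above: the
# device reports how many bits ECC corrected per block; blocks whose
# corrected-bit count reaches a threshold get their data migrated while
# the block is still readable.
CORRECTED_BITS_THRESHOLD = 8   # illustrative limit, not a real device spec

def blocks_to_migrate(corrected_bits_by_block):
    """Return the blocks whose ECC activity signals declining integrity."""
    return [blk for blk, bits in corrected_bits_by_block.items()
            if bits >= CORRECTED_BITS_THRESHOLD]

telemetry = {"blk0": 0, "blk1": 3, "blk2": 9, "blk3": 12}
print(blocks_to_migrate(telemetry))   # ['blk2', 'blk3']
```

The design point is that correction counts are a leading indicator: data moves while every bit is still recoverable, rather than after an uncorrectable error, which is what makes the overall storage more reliable than any individual block.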
There's all kinds of differentiation in terms of flash products that are really tailored for certain things. Some are focused on really high performance and give up some power, and others need a certain balance of that. Where, you know, in a mobile device, a handheld device, you're not going to be plugged in, so you give up some performance for less power. And so there's a whole spectrum. For some, you know, endurance is incredibly important. So we have a full breadth of products that address all those particular needs. >> So as a designer, it's just, whatever I need, I can come to you. >> Yeah, that's right. Toshiba has the full breadth of products available. >> All right, gentlemen, thank you very much for being on the Cube. Brian Kumagai, director of business development at Toshiba Memory America, and Scott Beekman, director of managed flash, Toshiba Memory America. Again, thanks very much for being on the Cube. >> Thank you. >> Thank you. >> And this closes this Cube Conversation. I'm Peter Burris, until next time, thank you very much for watching.
Scott Nelson & Doug Wong, Toshiba Memory America | CUBE Conversation, December 2018
>> (enchanted music) >> Hi, I'm Peter Burris and welcome to another CUBE Conversation from our awesome Palo Alto Studios. We've got a great conversation today. We're going to be talking about flash memory, other types of memory, classes of applications, future of how computing is going to be made more valuable to people and how it's going to affect us all. And to do that we've got Scott Nelson who's the Senior Vice President and GM of the memory unit at Toshiba Memory America. And Doug Wong who's a member of the technical staff also at Toshiba Memory America. Gentlemen, welcome to theCUBE. >> Thank you. >> Here's where I want to start. That when you think about where we are today in computing and digital devices, etc., a lot of that has been made possible by new memory technologies, and let me explain what I mean. For a long time, storage was how we persisted data. We wrote transactions to data and we kept it there so we could go back and review it if we wanted to. But something happened in the last dozen years or so, it happened before then but it's really taken off, where we're using semi-conductor memory which allows us to think about how we're going to deliver data to different classes of devices, both the consumer and the enterprise. First off, what do you think about that and what's Toshiba's association with these semi-conductor memories been? Why don't we start with you.
It started off in kind of almost unassuming devices associated with particular classes of files. What were they? >> So, it was very disruptive technology. So the first application for the flash technology was actually replacing audio tape in the phone answering machine. And then it evolved beyond that into replacing digital film. Kept going, replacing cassette tapes, and then if you look at today it enabled the thin and light that we see with the portability of the notebooks and the laptops. The mobility of content with our pictures, and our videos and our music. And then today, the smart phone, that wouldn't really be without the flash technology that's necessary that gives us all of the high density storage that we see. >> So, this suggests a pretty expansive role of semiconductor-related memory. Give us a little sense of where is the technology today? >> Well, the technology today is evolving. So, originally floating-gate flash was the primary type of flash that we created. It's called two-dimensional planar floating-gate flash. And that existed from the beginning all the way through maybe to 2015 or so. But, it was not possible to really shrink flash any further to increase the density. >> In the 2D form? >> In the 2D form, exactly. So, we had to move to a 3D technology. Now Toshiba presented the world's first research papers on 3D flash back in 2007, but at that time it was not necessary to actually use 3D technology. When it became difficult to increase the density of flash further that's when we actually moved to production of our 3D flash memory which we call BiCS flash. And BiCS stands for bit column stacked flash and that's our trade name for our 3D memory. >> So, we're now in 3D memory technology because we're creating more data and the applications are demanding more data, both for customer experience and new classes of application.
So, when we think about those applications Toshiba used to have to go to people and tell them how they could use this technology, and now you've got an enormous number of designers coming to you. Doug, what are some of the applications that you're anticipating hearing about that's driving the demand for these technologies? >> Well, beyond the existing applications, such as personal information appliances like laptops and portables, and also in data centers which is actually a large part of our business as well. We also see emerging technologies as becoming eventual large users of flash memory. Things like autonomous vehicles or augmented or virtual reality. Or even the emerging IoT infrastructure that's necessary to support all these portable devices. So these are devices that currently aren't using large amounts of flash, but are going to be in the future. Especially as the flash memory gets more dense, and less expensive. >> So there's an enormous range of applications on the horizon. Going to drive greater demand for flash, but there's some business challenges of achieving that demand. We've seen periodic challenges of supply, price volatility. Scott, when we think about Toshiba as a leader in sustaining a kind of good flow of technology into these applications, what is Toshiba doing to continue to satisfy customer demand, sustain that leadership in this flash marketplace? >> So, first off, as Doug had mentioned, the floating-gate technology has reached the limit of its ability to scale in a meaningful way. And so the other part of that also is the limitation on the die density, so the market demand for these applications is asking for a higher density, higher performance, lower latency type of applications. And so because floating-gate has reached the end of its usefulness in terms of being able to scale, that brought about the 3D. And so the 3D, that gives us our higher density and then along with the performance it enables these applications.
So, from Toshiba's point, we are seeing that migration that is happening today. So, the floating-gate is migrating over to the 3D. It's not to say that floating-gate demand will go away. There's a lot of applications that require the lower density. But certainly the higher density, where you need at a die level 256, 512 gigabit, even up to a terabit of data, that's where the 3D comes into play. Second to that really goes into the CapEx. So, obviously that requires a significant amount of CapEx, not only on the development but also in terms of capacity. And that, of course, is very important to our customers and to the industry as a whole for the assurance of supply. >> So, we're looking-- so Toshiba's value to the marketplace is both in creating these new technologies, filling out a product line, but also stepping up and establishing the capacity through significant capital investments in a lot of places around the globe to ensure that the supply is there for the future. >> Exactly right. You know, Toshiba is the most experienced flash vendor out there, and so we led the industry in terms of the floating-gate technology and we are technology leaders as the industry's migrating into the 3D. And so, with that, we continue with a significant capital investment to maintain our presence in the industry as a leader. >> So, when we think about leadership, we think about leadership both in consumer markets, because volume is crucial to sustaining these investments, generating returns, but I also want to spend just a second talking about the enterprise as well. What types of enterprise relationships do you guys envision? And what types of applications do you think are going to be made possible by the continued exploitation of flash in some of these big applications that we're building? Doug, what do you think? >> Well, I think that new types of flash will be necessary for new, emerging applications such as AI or instant recognition of images.
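As a quick aside, the per-die densities Scott mentions above (256 gigabit, 512 gigabit, up to a terabit) translate into bytes as follows. This is purely illustrative back-of-the-envelope arithmetic, with a terabit taken here as 1024 gigabits:

```python
# Convert the per-die NAND densities quoted above from gigabits to
# gigabytes (8 bits per byte). Illustrative arithmetic only.

DENSITIES_GBIT = [256, 512, 1024]  # 1 Tbit taken as 1024 Gbit

densities_gb = {gbit: gbit / 8 for gbit in DENSITIES_GBIT}

for gbit, gb in densities_gb.items():
    print(f"{gbit} Gbit die = {gb:.0f} GB")
# 256 Gbit -> 32 GB, 512 Gbit -> 64 GB, 1024 Gbit -> 128 GB
```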
So, we are working on next generation flash technology. So, historically flash was designed for lowest cost per bit. So that's how flash began to take over the market for storage from hard drives. But there are a class of applications that do require very low latencies. In other words, they want faster performance. So we are working on a new flash technology that actually optimizes performance over cost. And that is actually a new change to the flash memory landscape. And as you alluded to earlier there's a lot of differentiation in flash now to address specific market segments. So that's what we are working on, actually. Now, generically, these new non-volatile memory technologies are called storage class memories. And they include things like optimized flash or potentially phase-change memories, resistive memories. But all these memories, even though they're slower than, say, the volatile memories such as DRAM and SRAM, they are, number one, non-volatile, which means they can learn and they can store data for the future. So we believe that this class of memory is going to become more important in the future to address things like learning systems and AI. >> Because you can't learn what you can't remember. >> Exactly. >> I heard somebody say that once. In fact, I've got to give credit. That came straight from Doug. So, if we think about looking forward, the challenges that we face ultimately is having the capital structure necessary to build these things. The right relationships with the designers necessary to provide guidance and suggestions about the new classes of applications, and the ability to consistently deliver into this. Especially for some of these new applications as we look forward. Do you guys anticipate that there will be in the next few years, particular moments or particular application forms that are going to just kick a lot of or further kick some of the new designs, some of the new technologies into higher gear?
Is there something, autonomous vehicles or something, that's just going to catalyze a whole new way of thinking about the role that memory plays in computing and in devices? >> Well, I think that building off of a lot of the applications that are utilizing NAND technology that we're going to see now, we have the enterprise, we have the data center that's really starting to take off to adopt the value proposition of NAND. And as Doug had mentioned, when we get into the autonomous vehicle, we get into AI or we get into VR, a lot of applications to come will be utilizing the high-density, low-latency that the flash offers for storage. >> Excellent. Gentlemen, thanks very much for being on the CUBE. Great conversation about Toshiba's role in semi-conductor memory, flash memory, and future leadership as well. >> Thank you, Peter. >> Scott Nelson is the Senior Vice President and GM of the memory unit at Toshiba Memory America. Doug Wong is a member of the technical staff at Toshiba Memory America. I'm Peter Burris. Thanks once again for watching the CUBE. (enchanted music)
Jeremy Werner, Toshiba | CUBEConversation, July 2018
(upbeat orchestral music) >> Hi, I'm Peter Burris and welcome to another CUBE Conversation from our wonderful Palo Alto Studios. Great conversation today with Jeremy Werner who is the vice president of SSD Marketing at Toshiba Memory. Jeremy, welcome to theCUBE. >> Thank you Peter, great to be here. >> You know Jeremy, one of the reasons why I find you being here so intriguing and interesting is there's a lot going on in the industry. We talk about new types of workloads: AI, cloud, deep learning, all these other things, all these technologies are-- all these applications and workloads are absolutely dependent on the idea that the infrastructure has to start focusing less on just persisting memory and focusing more on delivering memory-- delivering data to these very advanced applications. That's where flash comes in. Tell us a little bit about the role that flash has had in the industry.
How are you seeing as you talk to customers, as you talk to some of the big systems manufacturers and some of the hyperscalers. How are you hearing or what are they saying about how they are applying and will intend to apply flash in the market today? >> It's amazing when we talk to customers they really can't get enough flash. As an industry we just came out of a major shortage of flash memory, and now a lot of new technologies are coming online. So, we at Toshiba, just announced our 96 layer 3D flash, our QLC flash. This is all in an attempt to get more flash storage into the hands of these customers so that they can bring these new applications to market. And this transformation, it's happening quickly although maybe not as quickly as people think because there's a very long road ahead of us. Still you look out 10 years into the future, you're talking about 40 or 50% growth per year, at least for the next decade. >> So I want to get to that in a second, but I want to touch upon something that you said that many of the naysayers about flash predicted that there would be shortfalls and they were very Chicken Little like. Oh my gosh, the sky is going to fall, the prices are going to go out of control. We did have a shortage, and it was a pretty significant one, but we were able to moderate some of the price increases so it didn't lead to a whole bunch of design losses or a disruption in how we thought about new workloads, did it? >> True, no it didn't, and I think that's the value of flash memory. Basically what we saw was the traditional significant decline in pricing took a pause, and you look back 20 years ago, I mean flash was 1000 times more expensive. And as we move down that cost curve, it enables more and more applications to adopt it. 
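As a rough aside, Jeremy's figure of flash being about 1000 times more expensive 20 years ago implies a steep compound annual price decline. A minimal sketch of that arithmetic (the 1000x and 20-year figures come from the conversation; the rest is illustrative):

```python
# Back out the average annual price decline implied by "1000 times
# more expensive 20 years ago". Illustrative arithmetic only.

total_ratio = 1000  # price 20 years ago relative to today
years = 20

annual_factor = total_ratio ** (1 / years)          # ~1.41x cheaper per year
annual_decline_pct = (1 - 1 / annual_factor) * 100  # ~29% decline per year

print(f"Average annual price decline: {annual_decline_pct:.1f}%")
```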
Even in today's pricing, flash is an amazingly valuable tool to data centers and enterprise as they roll out new workloads and particularly around analytics, and artificial intelligence, machine learning, kind of all the interesting new technologies that you hear about. >> Yeah, and I think that's probably going to be the way that these kinds of blips in supply are going to be-- it'll perhaps lead to a temporary moderation in how fast the prices drop. >> That's right. >> It's not going to lead to massive disruption and craziness. And I will also say this, you mentioned 20 years ago stuff was really expensive and I cut my teeth on mainframe stuff. And I remember when disk drives on the mainframe were $3500 a megabyte, so it could be a lot worse. So, let's now-- flash is a great technology, SSD is a great technology, but it's made valuable by an overall ecosystem. >> That's right. >> There's a lot of other supporting technologies that are really crucial here. Disk has been dominated by interfaces like SATA for a long time. Done very well by us. Allowed for a fair amount of parallelism, a lot of pathing to mainly disk, but that's starting to change as we start thinking about flash coming on and being able to provide much much faster access times. What's going on with SATA and what's on the horizon? >> Yeah, so great question. Really what we saw with SATA in about 2010 was the introduction of a six gigabit SATA interface, and that was a doubling of the prior speed that was available, and then zero progress since then, and actually the SATA roadmap has nothing forward. So people have been stuck effectively with that SATA interface for the last eight years. Now they've had some choices. You look at the existing ecosystem, the existing infrastructure, SATA and SAS drives were both choices, and SAS is a faster interface today up to 12 gigabit. 
It's full duplex where SATA is half duplex, so you can read and write in parallel, so actually you can get four times the speed on a SAS drive that you would get on a SATA drive today. The challenge with SAS, why everyone went to SATA-- I won't say everyone went to SATA, but maybe three or four times the adoption rate of SATA versus SAS was the SAS products that were available on the market really didn't deliver the most economical deployment of-- >> They were more expensive. >> They were more expensive. >> Alright, but that's changing. >> That is changing, so what we've been trying to do is prepare and work with our customers for a life after SATA. And it's been a long time coming, like I said eight years on this current interface. Recently we introduced what we call a value SAS product line. The value SAS product line brings a lot of the benefits of SAS, so the faster performance, the better reliability, and the better manageability, into the existing infrastructure, but at SATA-like economics. And that I think is going to be critical as customers look at the long-term life after SATA, which is the transition to NVMe and a flash-only world without having to be fully dependent on changing everything that they've ever done to move from SATA to NVMe. So, the life after SATA preparation on customers is how do I make the most out of my existing knowledge, my existing infrastructure capabilities. What's readily available from a support perspective as I prepare for that eventual transition to NVMe. >> Yeah I want to pick up on that notion of higher performance at improving cost of SAS and just make sure that we're clear here that SATA is an electrical interface. It has certain performance characteristics, but these new systems are putting an enormous amount of stress on that interface. And that means you can't put more work on top of that, not only from an application standpoint, but as you said crucially also from a management standpoint. 
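Jeremy's four-times figure falls straight out of the interface arithmetic above: a 6 gigabit SATA link that is half duplex versus a 12 gigabit SAS link that can read and write in parallel. A minimal sketch (link speeds and duplex behavior as stated in the conversation; the helper function is illustrative):

```python
# Compare aggregate link bandwidth for SATA (6 Gbit/s, half duplex)
# and SAS (12 Gbit/s, full duplex: read and write in parallel).

def effective_gbits(link_gbits: float, full_duplex: bool) -> float:
    """Aggregate read+write bandwidth available on the link."""
    return link_gbits * (2 if full_duplex else 1)

sata = effective_gbits(6, full_duplex=False)  # 6 Gbit/s total
sas = effective_gbits(12, full_duplex=True)   # 24 Gbit/s total

print(f"SAS vs SATA: {sas / sata:.0f}x")  # the "four times" quoted above
```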
When you put more reporting or you put more automation or you put more AI on some of these devices, that creates new load on those drives. Going to SAS releases that headroom, so now we can bring more management workloads. That's important, and this is what I want to test. That's important because as we do these more complex applications, we're pushing more work down closer to the data, and we're using a lot more data, it's going to require more automation. Is SAS going to provide the headroom that we need to actually bring new levels of reliability to more complex work? >> I believe it will, absolutely. SAS is the world's most trusted interface. So, when it comes to reliability, our SAS drives in the field are the most reliable product that our customers purchase today. And we take that same core technology and package in a way to make it truly an economical replacement for SATA. >> So we at Wikibon now have observed NVMe, so I want to turn a little bit of attention to that. We have observed that NVMe is in fact going to have a significant impact. But when Toshiba Memory is looking at what kinds of things customers are looking for, you're saying not so much SATA, let's focus on SAS, and let's bring NVMe online as the system designs are there. Is that kind of what it's about? >> You know I think it's a complicated situation. Not everyone is ready for everything at the same time. Even today, there's some major cloud providers that have just about fully transitioned to NVMe SSDs. And that transition has been challenging. So what we see is customers over the course of the next four or five years, their readiness for that transition from today to five years from now, that's happening based on the complexity of what they need to manage from a physical infrastructure, a software ecosystem perspective. So some customers have already migrated, and other customers are years away. And that is really what we're trying to help customers with.
We have a very broad NVMe offering. Actually we have more NVMe SSDs than any other product line, but for a lot of those customers who want to continue with the digital transformation into data analytics, into realizing the value of all the data that they have available and transforming that into improved business processes, improved business results. Those customers don't want to have to wait for their infrastructure to catch up to NVMe. Value SAS gives them a means to make that transition, while continuing on to take advantage of all the capabilities of flash. One of the things that we always talk about, one of my responsibilities is product planning and product definition, and one of the things that we always talk about is that in our ideal SSD, the bottleneck is the flash. In other words if you look at a drive there's so many things that could bottleneck performance. It could be the interface, it could be the power that you can consume and dissipate, it could be the megahertz in your controller. >> You sound like an electrical engineer. >> I am an electrical engineer, but I'm a marketing guy, right? So, there's all kinds of bottlenecks, and when we design an SSD we want the flash to be the bottleneck 'cause at the end of the day, that's fundamentally what people need and want. And so, you look at SATA, and it's like, not only is it a bottleneck, but it's clamping the performance at 50% or less than 50% of what's achievable in the same power footprint, in the same cost footprint, so it's just not practical. I mean the thing's eight years old so-- >> Yeah. Yeah. >> In technology eight years is a lot of time. >> Especially these days, and so to simplify that perhaps, or say that a little bit differently, bottom line is SAS is a smaller step for existing customers who don't have the expertise necessary to re-engineer an entire system and infrastructure. >> That's right, it gives them that stepping stone.
>> So you also mentioned that there's a difference between the flash and the SSD, and that difference is an enormous amount of value-add engineering that leads to automation, reliability, types of things you can do down at the drive. Talk to us a little bit about Toshiba, Toshiba Memory, as a supplier of that differentiating engineering that's going to lead to even superior performance at better cost and greater manageability and time to value on some of these new flash-based workloads. >> So I'm amazed at the quality of our engineering team and the challenges that they face to constantly be bringing out new technologies that keep up with the flash memory curve. And I actually joke sometimes, I say it's like being on a hamster wheel. It never stops, the second that you release a product you're developing the next product. I mean it's one of the fastest product life cycles in the entire industry, and you're talking about extremely complicated, complex systems with tight firmware development. So what we do at Toshiba Memory, we actually engineer our own SoCs and controllers, develop the RTL, manage that from basically architecture to production. We write all our own firmware, we assemble our own drives, we put it all together. The process for actually defining a product to when we release it is about five years. So we have meetings now, we're talking about what are we going to release in 2023? And that is one of the big challenges, because these design cycles are very long so anticipating where innovation is going, and today's innovation is at the speed of software, right? Not the speed of hardware. So how do you build that kind of flexibility and capability into your product so that you can keep up with new innovations no one might have seen five years ago? That's where Toshiba Memory's engineering team really shows its mettle.
>> So let's get you back in theCUBE in the not-too-distant future to talk about what 2023 is going to look like, but for right now, Jeremy Werner, Vice President of SSD Marketing at Toshiba Memory, thank you very much for being on theCUBE. >> Thank you, Peter. >> And once again, thanks for watching this CUBE Conversation. (upbeat orchestral music)
Ravi Pendekanti, Dell EMC and Steve Fingerhut, Toshiba Memory America | Dell Technologies World 2018
>> Narrator: Live from Las Vegas, it's theCUBE, covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> Welcome back to the Sands! We continue here live on theCUBE, our coverage here of Dell Technologies World 2018. 14,000 attendees wrapping up day 3. We are live as I said with Stu Miniman. I'm John Walls, and it is now our pleasure to welcome to the set Steve Fingerhut, who is the SVP and GM of SSD and Cloud Software Business Units at Toshiba Memory Americas. Steve, good to see you, sir. >> Great to be here. >> And Ravi Pendekanti, who is the SVP of Server Solutions Product Management and Marketing at Dell. >> Thank you, John. >> Ravi, good to see you, sir. >> Same here, sir. >> Yeah, let's talk about, first off, show theme. Make it real, right? Digital transformation, but make it real. >> Ravi: Yup. >> So, what does it mean to the two of you? We've heard that theme over and over again, and what do you think that means to your customers as well? How do you make it real for them? >> First and foremost, I think the whole idea of new workloads coming into play. People talk about machine learning and deep learning as you, I'm sure, are aware of. People talk about analytics. The fact is, each of us is collecting a lot more data than a year ago. Which is good for my friend Steve and others, and obviously, we like the fact that customers are looking at making more real-time, if not near real-time, analysis. And the whole notion of governmental agencies across the world trying to go into more of a digital world where if you look at a country like India, for example, I mean, they have a billion people who are looking at Aadhaar cards where they didn't have a form of identification for each individual.
Now they've gone through a new transformation phase where they want to ensure that every single one of them actually has a way of identification, and it's all done digitally with accounts and everything else that goes on. This is just some of the manifestations of the digital transformation we see, whether it is in your industries, pick your favorite one, whether it's the financial sector, manufacturing, health care, all the way to governmental agencies. I think each of them is looking at how they provide the right set of services, either for their customers or their communities at large, and, you know, we can't be more excited about the opportunity this provides for us to go back and give them a way to communicate and do some cool things. >> Steve? >> Yeah, Ravi, you mentioned the workloads that are driving the new campaign, or that you're highlighting in the new campaign, Make It Real, and many of those workloads are new architectures, and they were basically built from day one on SSDs, right? Counting on that performance, reliability, etc. And so obviously, that's what we're here to promote at the show. And you can see the new workloads, obviously anything cloud very much counts on SSDs and flash. And then as you get into machine learning, different types of artificial intelligence, those are certainly counting on the performance of SSDs. And nothing makes it more real than actual products in hand, so with Ravi's products and ours, we have a number of demos, including the new AMD platforms that the PowerEdge team is rolling out, running all of these new workloads on Toshiba SSDs. So it's a good way to make it real. >> Yeah, Steve, maybe bring us in a little bit on the state of storage, though. We have talked about SSDs, and we're now a decent way into it. Dell's announcement talking a lot about NVMe. Maybe give us the Toshiba viewpoint on memory and storage and some of those transitions we're going through.
>> Right, well, I guess the secret's out that SSDs are a great addition. Right? You take pretty much any environment, and you add SSDs, and it will go faster. So it's pretty much the biggest bang for the buck in terms of incremental performance. So what that means is just tremendous growth. And the last couple years have been, really for the industry, about keeping up with that increased demand. So there are inherent efficiencies in the SSDs. We're trying to build as many as we can, and then obviously trying to help our customers use them in the most efficient ways possible. >> Yeah, I agree with Steve. I mean, it is an efficiency equation. The fact of the matter is, you really do need to provide customers with a better way of ensuring that timely information is made available. Again, it's information, and it has to be timely. Because if you really don't provide it at the time when our customers need it, there's really no advantage to having the right infrastructure, right? Or lack of it, for that matter. Case in point, if you look at what we just announced, Stu. Yesterday, we had talked about the R840, for example, which is a 4-socket server. And we actually announced it with 44 NVMe drives, believe it or not. That's about two times more than the nearest competitor. That just gives you an idea of the amount of data that customers are consuming on the applications, obviously. And more importantly, when we were coming up with this notion, we felt that 12 was probably a good number. Maybe 24 was going to be a stretch. And the number of customers we have talked to even in the last two days, I mean, it's been huge. We're hearing them saying, "Wow, we can't wait to go get this product in our hands." Because that really shows you that there is already a pretty big demand for these kinds of technologies to be brought in. >> Yeah, I like what you were saying there, Ravi, because I'd like both of you to help connect the dots for us a little bit.
'Cause when I think back to, okay, what speed disk did I have? Or was the flash piece in? This was something that was traditionally the server admin's call. Maybe there was some application person that came in. But you're talking about C-level discussions here. The trends that Jeff Clark talked about in his keynote as to, you know, this is what the business is driving, things like AI/ML and some of those. Steve, how are the conversations changing to get this piece of the infrastructure up to more of a C-level discussion? >> Right, it certainly is part of the transformation that's been talked about several times this week. IT has moved from being a cost center to a revenue center, and that puts it on the CEO's radar much more squarely. You definitely want to, if you're the CIO, CTO, infrastructure leader, your goal is to try to deliver that agility, right? Don't stand in the way of revenue, while managing security, managing cost. And it's those dynamics and, you know, it's not a new conversation, but it's the public versus private versus hybrid. What exactly should go where? And those are still top-of-mind for all the customers we're talking to. >> Actually, Steve hit on something else, if I may, which is about security. And I can't tell you, Stu, a good 70% of the customers on average today do not finish a conversation in the 30-minute chunks we have had without talking about what is it you guys are going to do for security. And that's a huge increase from where we were just even a year or two ago. And having said that, if you really had a longer conversation, security obviously is one of those fundamental pillars that everybody comes down to. Because everybody's worried about data, and the fact that there's leakage of information, if I may, pertaining to this. And more importantly, you know, making it real, if I may, to your point earlier on, John, as well. Which is, customers don't want to look at just the buzzwords.
They're now asking for proof points. Proof points on, "Hey, what does this really mean in terms of security?" For example, when we talk about, you know, secure erase, which is, how do you go retire an old data server or a box without necessarily worrying about the bits and bytes being left on the disk drives? So we have come up with new technologies which enable all the drives to be wiped. Makes it a lot easier, of course, with some of the stuff we do with Toshiba, and some of their technologies as well. But my point, again, being that I think now, our C-level execs are coming in asking us for not just the major themes, but they're actually more interested in finding out how and what is it we're doing to deliver on some of those major themes. And I think the number of requests we have had for some of the white papers we have come out with, Steve, I think has only grown now. >> Absolutely. >> Which, I don't think was happening in the past from the C-level execs. So it's absolutely a valid statement. >> Yeah, well, there were Senate hearings last year and some pretty famous data breaches, and you have senators grilling CEOs, and it was shocking. There was a senator who used the term full disk encryption, and took a CEO to task for not using full disk encryption, and so I think that might help, talking about getting on the C-level radar. That helps. >> That was good staff work there. >> Exactly, exactly. That was a good plant. >> Yeah, right. But to the point of security. Obviously with this exponential growth of data, unstructured, blowing up, then all of a sudden you become a much riper target, if you will, and you've got a lot more to manage. And so with that, how much more at risk are people, and is that what's raising the awareness now in the C-suite? Is it that they realize they're a much bigger target now than maybe when data wasn't as plentiful, you know, back in the old days, if you will. Is that part of this?
Or is that it? >> I believe that's a big part of it. And one of the other things that obviously goes with this is, if you really look at the disclosures that any of us have to go through, even in terms of something as simple as a credit card application. I don't know if you've ever seen those. As we were doing some of the analysis, we noticed that for a simple credit card application, the security and, you know, personal information clauses have actually grown by about 120% in terms of the number of things they ask for. And making sure that the consumer is aware as well. Right? I don't think that happened before. And the fact of the matter is, I don't think there's a single day that we can go through any of the trade press without somebody coming out with a security breach maybe, or a security feature, whether it's hardware or software. And for the whole category of security encryption devices or drives, I think there's a huge demand for that as well, right? >> Absolutely. And you talk about the data growth. It's obviously been phenomenal. In his keynote Monday, Michael Dell talked about the data growth from machine to machine, and it's going to make this look like a little bit of data. So like you said, just that risk, the exposure is much larger, and you have to keep that data secure. So as Ravi mentioned, we work closely with Dell. There's a lot of, it's not an easy problem to solve, right? So there's a lot of engineering to make sure that you have that end-to-end security, and that's why we work on things like the instant system erase, right? So you can, one button, erase the system in minutes, versus in the past, it might take hours and days. And do you really trust that it's gone? Those types of things, so I think those are enabling much more robust security, and you basically have to make it easy, right? >> Letting people sleep at night. >> Exactly. >> That's what you're doing. >> It's interesting.
In the past, the only way you could do that was you had to write a series of 0's and 1's onto the drive. And that would take, you know, hours together. That's how you would erase your data, right? I love when you talk about autonomous vehicles. There's a whole big discussion about how do you make sure that you have the, that's kind of edge computing, as Jeff, I think, mentioned on stage yesterday. You want to not have latency come in between making a deterministic turn, right? Or an object appears. You don't want the braking system to wait because some decision needs to be made in a remote center. Right? Which essentially means now you have got data being collected and analyzed and acted upon. And there are things like that, and you've probably heard all the insurance companies are working on, you know, what kind of data can we collect, because when crashes happen, right? How do you make sure that, you know, there are privacy laws in place and what-not, who has access to it, plenty of stuff. >> John: Sure. >> Steve, want to get your viewpoint. We're not far from the end of the show. Why don't you give, in general, the partner viewpoint of Dell Technologies World, and specifically Toshiba. I know you've got, there's the booth, there's the party, there's demos, there's labs, so a lot of activity your team's doing, for those that haven't been here. And, you know, Toshiba's worked with both legacy Dell, legacy EMC. Any commentary to close on that coming together? >> Right. I think last year, I used the Jordan/Pippen analogy, but it's only gotten better since then. So it's a great partnership. We're definitely growing strong together, and like you said, that doesn't happen overnight. That's years of hard work and trust that makes that a possibility. But I truly believe we're only getting started.
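The erase-time gap described above — hours of writing 0's and 1's over every block versus a one-button instant erase that simply discards the drive's encryption key — can be sketched with rough numbers. The capacity and throughput figures below are illustrative assumptions, not Dell or Toshiba specifications:

```python
# Toy model: retiring a drive by overwriting vs. cryptographic erase.
# All figures below are illustrative assumptions, not vendor specs.

def overwrite_erase_hours(capacity_tb: float, write_mb_s: float, passes: int = 1) -> float:
    """Hours to overwrite every block with 0's and 1's, per pass."""
    total_mb = capacity_tb * 1_000_000  # decimal TB -> MB
    return passes * total_mb / write_mb_s / 3600

def crypto_erase_seconds() -> float:
    """Instant secure erase: throw away the media encryption key.
    Roughly constant time, independent of drive capacity."""
    return 1.0  # on the order of seconds

# A hypothetical 8 TB nearline drive sustaining ~200 MB/s of writes:
print(f"overwrite: ~{overwrite_erase_hours(8, 200):.1f} hours")   # ~11.1 hours
print(f"crypto-erase: ~{crypto_erase_seconds():.0f} second(s)")   # ~1 second
```

This is the "hours and days" versus "minutes" difference the panel is pointing at: overwrite time grows linearly with capacity (and with the number of passes), while a key-discard erase does not.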
And you know, one of the goals we're working on together is how do we make these important capabilities like security more common, more accessible, lower cost, those types of things. So that's a major focus area for us going forward. But we definitely see this is just the beginning. >> Any key highlight from the show or activities that your team's been doing here that you'd like to leave us with? >> Sure. Yeah, we have a significant presence here. We have eight server demos running. I mentioned the AMD servers, running multiple of these new emerging workloads. And then the hands-on demo zone, where the developers can actually use the systems and software they want to evaluate. They can use them in the cloud. Those are all being driven by Toshiba, and of course, as part of the Dell solution. Yeah, we're happy. Honored to be a big part of the show this year. >> Jordan/Pippen, I was thinking more like Curry/Durant. That's where I was going with that. >> Exactly. That might be a little more up-to-date, right? >> I'm good with Jordan. No, he wasn't bad. Pretty good pair, like you two are. Thanks for joining us both. We appreciate it, Ravi, Steve. >> Thank you. >> Thank you. >> Good seeing you here. Back with more of our continuing live coverage here on theCUBE at Dell Technologies World 2018, and we are in Las Vegas.
Steve Fingerhut, Toshiba & Ravi Pendekanti, Dell EMC | Dell EMC World 2017
>> Narrator: Live from Las Vegas, it's the Cube, covering Dell EMC World 2017. Brought to you by Dell EMC. >> Okay, welcome back everyone. We are here live at Dell EMC World 2017. This is the Cube's coverage of Dell EMC, the combination, the big news here. I'm John Furrier with SiliconANGLE and my co-host Paul Gillin. And our next guest is Steve Fingerhut, senior vice president and general manager of the storage products business unit at Toshiba, and Ravi Pendekanti, SVP of Server Solutions product management and marketing at Dell EMC. Guys, welcome to the Cube. Good to see you, Ravi. Steve, nice to meet you. >> Thanks for having us. >> Steve, so tell us what's going on at Toshiba, 'cause I want to hear what you guys are doing and your role in the relationship with Dell EMC. And what is going on with your architecture, because we've been hearing a ton about IoT at the edge, centralized, pushing the intelligence to the edge, new architectures. The world is kind of moving to a new architecture. What's your pitch? >> Sure. Well, Dell and Toshiba have a long history, 20-plus years of working together, and both strong innovators. We're engaged both in our hard drive products as well as our SSD products, really across every aspect of Dell's portfolio: client, server and storage. And we're really taking the architecture, both of those product categories are really popular as everyone, the data explosion, is happening. A lot of that is ending up on storage, and our focus areas on hard drives are around nearline storage, which are the high capacity, eight terabyte and higher, really popular with the cloud architectures. We have a 14 and 16 terabyte helium-based drive coming out next year, which will put us in a strong leadership position. And then on the SSD side, what we're highlighting at the show today is our latest generation NAND.
And we've moved into 3-D NAND, and we're showing our wafer with 64-layer 3-D flash, as well as the first public demonstration by any company of an SSD using that 64-layer 3-D flash. So we're on that cutting edge, and we see that really growing. And you mentioned IoT, that's really driving a lot of the big data growth. A lot of that data will reside on hard drives, right, for the long-term storage, but then you bring that into an SSD tier for the very rapid analytics work that you want to do to make decisions with that data. >> Steve, talk about the impact of the latest state-of-the-art, because to me it's, oh my God, it's speeds and feeds, but storage, people always care about storage. Go back to the original iPod, then iPhone, things are in devices, you mentioned IoT, state-of-the-art has to get better, faster, cheaper. What's the impact of some of those specs that you guys just released in terms of the media, the SSD? What's the impact going to be for customers? Scenario-wise, what's some of the impact going to look like? >> Sure, I think the number one impact, as I talk to customers here at the event, and it's no surprise but-- >> Give me more of that, they say. (laughs) >> Every customer, every Dell executive says we need more. So really it's just the SSD adoption >> Ravi: Yes, we do need more. >> Exactly, so that's exploding. So the number one thing this will do is, each individual die on the wafer doubles in capacity and will soon double again and double again after that. So this 3-D technology really allows us to drive density. And that means lower cost, it means more capacity. It also means we can develop denser SSDs. So more in the same space, or a smaller space.
For the consumer it's obvious, it's all the devices, the wearables, but the business side is really more fundamental than that: things that are going to be connected to the network, the microwave, the air conditioning, all the sensors in the world are going to be digitally connected, once analog, now digital. I mean, that's kind of where, does that kind of get that right? >> Absolutely, and those are, that same technology will be used in a lot of end devices. It's in your smart phone, it's in your smart watch. It'll be in a lot of those smart devices capturing the temporary data. But then that all gets consolidated in a massive pool, and companies are looking for how they efficiently scale to capture and analyze that data and turn it into revenue and profit. And that's where the performance of SSDs, and in the future the higher capacity levels, will allow efficient scaling at the data center. >> Ravi, in the hyperconverged market, now all of a sudden you've got the storage coming back into the server-- >> Ravi: Yep. >> What are customers looking for in terms of performance on the storage side? Are they driving you with the same kind of constant drive for more capacity and better performance? >> Absolutely, Paul, I mean if you think about it, the workloads of today are vastly different than the workloads of the past. Think about it. Today people are not looking for data to be just collected. It doesn't have the complete value, or in my view it doesn't give you anything other than just lots of bits and bytes. What really gives you the power to act is information, and so to create information you need to take the data, go process it, and get to the level of detail you can act upon, right? So that's the analytics extension.
So having said that, today when you look at any of the industries, whether it's genomics, whether you're looking at machine learning, deep learning, these require a sense of performance to be provided for our customers, because they are looking at analyzing data quickly enough. That's when they can act on it. So our customers are absolutely asking for better performance and higher capacity, and they need it now. >> So Toshiba's not a new player to you though, they've been a supplier to the PowerEdge, right? >> Oh, absolutely true. They've been a fantastic supplier for the last 20 years. We look at them more as a partner. They've been with us through the journey. We've been, if you think about it, for the last couple of decades we've been shipping your product, and they've been working with us. We've been working together; it's not just a supplier kind of relationship. We actually track their new technologies. Steve just talked about 3D XPoint and things of that kind. We are working on those technologies together to ensure that we give our customers not just the latest technology but also provide them with the right price performance. Again, I emphasize price performance because it's not just one of them on its own that has merit to our customers. >> Is brand important to your customers in terms of a storage provider? Do they ask for the Toshiba brand? Or does it matter? >> What they do ask for is they ask for reliability. Right, they want to make sure that they have a reliable product. And then if you think about it, that really translates for them to certain vendors. So yeah, they could have a potential propensity for a certain vendor. But it all starts with reliability. If you really can't have a reliable component in the servers that we sell, it really doesn't help our customers.
And that's where, it goes back to the point I was making earlier, which is this long-standing relationship between the companies, because we have built on that reliable product that Toshiba's been providing for us. >> Steve, tell us about the relationship with SSDs and the enterprise. Everybody knows people want more solid state; everyone kind of sees the consumer product. Where's the progress bar in terms of adoption? Because I hear stories, and we actually report them on SiliconANGLE: I'm buying capacity, I'm all flash drives. Servers certainly have their share of flash as well. David Floyer and Wikibon have been covering that for years, but now in the enterprise and all the other mainstream products, where's the analogy here, what's the tipping point? Are we there or?
>> The SSD has been exciting because sort of hard disc performance peaked out about 10 years ago, and we've been jerry-rigging ways to make it faster but SSDs genuinely are getting faster and faster. What is the upper limit on speed right now? Are we looking at Moore's law type of growth in performance or does that top out at some point? >> We can, we get to saturating the interface with performance but I'll tell you the most customers aren't asking us for more IOPS performance or more bandwidth, certainly they'll take it but when you put several of these in a server or storage box, it's more than the interface can consume. So certainly there's been, if you look at the bi-segment type of growth rates, it's moving into how cheap can we make it, can we reduce the endurance. It's still plenty fast and kind of opening that up. That's a growing tier. And so we're really seeing that kind of good enough performance driving a lot of the expansion. >> Ravi, how about the architectural challenges? I was joking with Dave Vallant, a couple Cube things ago about Dell, oh Dell, their supply chain was their big innovation and everyone kind of knows that story of how they, I said data is the new supply chain. Data is now coming in and you got the form factor on storage memory, which everyone wants more SSD, give me more, we heard that. How are you guys going to build your server architecture to handle the tsunami of data coming in from stuff that this is going to enable. I mean, everything in the business will be instrumented with data. >> Absolutely. >> Devices and sensors are coming in. >> Yep. >> Is there a server for that and how do you >> Steve: It's called streaming. >> It's a moving train architecturally. >> Ravi: Yes it is. >> So what do you guys doing, give us an overview. >> It's interesting that you ask, John, because when you look at a server today it does have to deal with lots of data coming in. 
And it's just not data but if you look at it, there are, we used to talk about storage tiering, now I think we got to start looking at memory tiering. And what this means is we have to fundamentally change the way the architecture of the system is put in place and for example in 14G, we are now coming out with more of our important talent sets. It's all about scalable business architecture. Again, this goes into the whole premise of, we talk about work loads, as work loads change, you talk about IoT, you talked about how all the data is coming in, you got to synthesize it. You also need to have an architecture that essentially says I have to go get this data in. I get it the right time. It's not just getting data in. So we are working on things called MCA, which is memory centered architectures. 'Cause at the end of the day, it's analogous to, and I'm from California, we have in the Bay area, we have the 101, that kind of is the nerve of the entire Bay area. >> John: It's crowded, we need more. >> It's crowded. >> We need flying cars. >> A lot of bottlenecks. >> Absolutely right. >> Io problems. (laughs) >> Absolutely right. >> Yeah, right. >> That's your IOPS. >> Elon Musk is going to figure this out. >> Yeah, that's the goal right. >> Flying cars. >> We on the service side are trying to do the same thing, which is as more data, like more cars are on the road, we now have to go to ensure that the connectivity between the memories of system, your storage subsystem, and the CPU actually comes out to be a low latency, high bandwidth kind of a solution, which is what goes back into what I call memory centered architectures. So that's essentially what we're working on, to ensure that we have an optimal performance at the application level because that's what customers need. >> Cool, well what is tiered memory and is that actually a thing now in the 14G server? >> So tiered memory is something that, I mean, we are setting the stage for the future, right? 
So we talk about tiered storage. There are tier one, tier two, tier three storage. If data was not being utilized, you basically took the data and put it on tapes, for example, right? In the current generation, a lot of people use hard drives as a way of putting data out. So likewise in memory, I mean, if you really think about it, you have the registers, you've got the L1 cache, the L2 cache, those caches. Then we are coming into all kinds of NVMe drives. So that's what I mean by the kind of tiering we have to deal with. There is normal memory, you've got persistent memory, right? So those are the new memory-- >> By the way, stateless cloud native really and microservices use state and stateless apps and, you differentiate between the two, and SSD is great for that. >> Yes, so this is where I was going back to your question, Paul, is that's the way I think we are in the early stages of how we evolve. So that's where you'll see we're going to support persistent memory, for example. When people look at SAP HANA, they want it in memory. It's basically an in-memory database. So these are the kinds of things we are doing. So with 14G, for example, we are working on things like that. We'll have, with 14G, I mean, about 19X more NVMe than we had in the prior generation. I wish I could give you more specifics, but we will as we get into the formal shipment of the product, but-- >> John: Shipment's in the summer, though right? This summer is what I heard? >> Summer. >> Summer time frame, a few months away. >> Yeah. >> Okay, talk about the relationship between you guys. Obviously you're partners, this is a significant component, I would worry about as a customer, availability concerns, allocation of products. Are we good, supply solid? I didn't mean to put you on the spot. >> No, absolutely. >> Let's put him on the spot, we need more. >> It's a great question. >> Get the checkbook out, I get a commission. >> You know, it's great teamwork.
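The tiering idea Ravi sketches above — registers and caches at the top, persistent memory and NVMe in the middle, hard drives and tape at the bottom, with hot data promoted upward — can be illustrated with a toy two-tier store. This is only a sketch of the general promote-on-access pattern, not Dell's memory centered architecture:

```python
from collections import OrderedDict

# Toy two-tier store: a small, fast "memory" tier in front of a big, slow
# "storage" tier, with promote-on-access and LRU eviction. Illustrative
# only -- a sketch of the tiering concept, not any vendor's design.

class TieredStore:
    def __init__(self, mem_slots: int):
        self.mem = OrderedDict()  # hot tier: bounded, ordered by recency
        self.storage = {}         # cold tier: unbounded
        self.mem_slots = mem_slots

    def put(self, key, value):
        self.storage[key] = value           # new data lands in the cold tier

    def get(self, key):
        if key in self.mem:                 # hot hit: fast path
            self.mem.move_to_end(key)
            return self.mem[key], "mem"
        value = self.storage[key]           # cold read: slow path
        self.mem[key] = value               # promote hot data upward
        if len(self.mem) > self.mem_slots:  # evict least-recently-used
            self.mem.popitem(last=False)
        return value, "storage"

store = TieredStore(mem_slots=2)
for k, v in [("a", 1), ("b", 2), ("c", 3)]:
    store.put(k, v)
print(store.get("a")[1])  # storage  (first touch hits the slow tier)
print(store.get("a")[1])  # mem      (promoted after one access)
```

The same shape repeats at every level of the hierarchy: each tier acts as a bounded cache over the one below it, which is why "memory tiering" ends up looking structurally like the storage tiering people already know.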
You think about like the great teams in history, the Jordan-Pippen, they worked together. >> John: Bird and McHale. >> Exactly, and they can anticipate each other's next steps, and that's really how we're operating. Ravi mentioned that we've worked hard to make sure we have product alignment up and down, and the next thing is Dell Technologies has massive scale, so aligning the supply chains is key, and we've done that to make sure we have the right products in the right place for Dell's customers. But in terms of supply, yeah, it really is about getting to that next generation where we can double our capacity per wafer, or even more in some cases. So that will really allow us to open the spigots, and we think 2018 is going to be a--
They're an indicator of where you can go based upon the tech, the state of the art. >> Absolutely, this is where our collaboration is. It's a constant feedback mechanism we have built. I mean, they know the SSD drive market, the NAND flash technologies, better than we do. Right, they do. And we understand the overall customer side and what the impact is from the compute, for example, in our case. And then we go back in and try to see how we can do a better job of shaping the demand and ensuring that the right product is available at the right time. >> Is relief in sight with the shortages? >> I think it's going to be linked to those next-generation technologies. As we ramp those and get them into production, into the SSDs and into Dell EMC systems, then you will see the balance come back in the industry. >> Paul: A year, two years, less? >> I think most people are saying it's going to last through this year. We're obviously working very hard to get the right products in the right place, but I think most people are saying it'll last through this year, but we'll see. It's hard to predict. >> I think the consistent message we get is at least three to four quarters before things stabilize. >> Well, Ravi, congratulations on the scale. I think it's a huge advantage, and certainly you've got some great supplier relationships with that scale. Congratulations to Steve on the state-of-the-art new stuff coming. More, faster, come on, bring it on. >> Absolutely. >> John: Internet of Things is waiting. >> It is. That market is waiting for you guys. Congratulations, thanks for coming on theCube. We appreciate you sharing insights. >> Thank you. I mean, we couldn't have found a better partner as we announce our 14G, and we are excited about it. Thank you for having us both, John and Paul. >> Great stuff. >> Thank you for having us.
>> Bringing you state-of-the-art content here in theCube, but more importantly faster memory, SSDs in the enterprise taking over from the hard disk drive; certainly a ton of data, a tsunami of data, coming in from all angles, IoT and the enterprise and everywhere else. It's theCube, sharing hard data with you. Be right back with more live coverage. Stay with us. (upbeat electronic music)
>> Okay, this is my presentation on coherent nonlinear dynamics and combinatorial optimization. This is going to be a talk to introduce an approach we're taking to the analysis of the performance of coherent Ising machines. So let me start with a brief introduction to Ising optimization. The Ising model represents a set of interacting magnetic moments or spins, with the total energy given by the expression shown at the bottom left of this slide. Here, the sigma variables take binary values. The matrix element J_ij represents the interaction strength and sign between any pair of spins i and j, and h_i represents a possible local magnetic field acting on each spin. The Ising ground-state problem is to find an assignment of binary spin values that achieves the lowest possible value of total energy, and an instance of the Ising problem is specified by giving numerical values for the matrix J and vector h. Although the Ising model originates in physics, we understand the ground-state problem to correspond to what would be called quadratic binary optimization in the field of operations research, and in fact, in terms of computational complexity theory, it can be established that the Ising ground-state problem is NP-complete. Qualitatively speaking, this makes the Ising problem a representative sort of hard optimization problem, for which it is expected that the runtime required by any computational algorithm to find exact solutions should asymptotically scale exponentially with the number of spins N for worst-case instances. Of course, there's no reason to believe that the problem instances that actually arise in practical optimization scenarios are going to be worst-case instances. And it's also not generally the case in practical optimization scenarios that we demand absolute optimum solutions.
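To make the ground-state problem concrete, here is a minimal brute-force sketch in Python; the energy convention with a minus sign on both terms is the standard Ising form, and the tiny 3-spin instance is invented purely for illustration:

```python
import itertools
import numpy as np

def ising_energy(s, J, h):
    """E(s) = -1/2 * s^T J s - h^T s, for spins s_i in {-1, +1}."""
    return -0.5 * s @ J @ s - h @ s

def brute_force_ground_state(J, h):
    """Exhaustively search all 2^N spin assignments (feasible only for small N)."""
    n = len(h)
    best_s, best_e = None, np.inf
    for bits in itertools.product([-1, 1], repeat=n):
        s = np.array(bits)
        e = ising_energy(s, J, h)
        if e < best_e:
            best_s, best_e = s, e
    return best_s, best_e

# A 3-spin example: ferromagnetic couplings favor fully aligned spins.
J = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
h = np.zeros(3)
s, e = brute_force_ground_state(J, h)
```

The exponential loop over 2^N assignments is exactly the scaling the talk refers to; heuristics exist to avoid it.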
Usually we're more interested in just getting the best solution we can within an affordable cost, where cost may be measured in terms of time, service fees, and/or energy required for a computation. This focuses great interest on so-called heuristic algorithms for the Ising problem and other NP-complete problems, which generally get very good but not guaranteed-optimum solutions and run much faster than algorithms that are designed to find absolute optima. To get some feeling for present-day numbers, we can consider the famous traveling salesman problem, for which extensive compilations of benchmarking data may be found online. A recent study found that the best-known TSP solver required median run times, across the library of problem instances, that scaled as a very steep root exponential for N up to approximately 4,500. This gives some indication of the change in runtime scaling for generic as opposed to worst-case problem instances. Some of the instances considered in this study were taken from a public library of TSPs derived from real-world VLSI design data. This VLSI TSP library includes instances with N ranging from 131 to 744,710. Instances from this library with N between 6,880 and 13,584 were first solved just a few years ago, in 2017, requiring days of run time on a 48-core 2-GHz cluster, while instances with N greater than or equal to 14,233 remain unsolved exactly by any means. Approximate solutions, however, have been found by heuristic methods for all instances in the VLSI TSP library, with, for example, a solution within 0.14% of a known lower bound having been discovered for an instance with N equal to 19,289, requiring approximately two days of run time on a single core at 2.4 GHz.
Now, if we simple-mindedly extrapolate the root-exponential scaling from the study, which was fit up to N approximately 4,500, we might expect that an exact solver would require something more like a year of run time on the 48-core cluster used for the N equals 13,584 instance, which shows how much a very small concession on the quality of the solution makes it possible to tackle much larger instances with much lower cost. At the extreme end, the largest TSP ever solved exactly has N equal to 85,900. This is an instance derived from a 1980s VLSI design, and it required 136 CPU-years of computation, normalized to a single core at 2.4 GHz. But the much larger so-called World TSP benchmark instance, with N equal to 1,904,711, has been solved approximately, with an optimality gap bounded below 0.474%. Coming back to the general practical concerns of applied optimization, we may note that a recent meta-study analyzed the performance of no fewer than 37 heuristic algorithms for Max-Cut and quadratic binary optimization problems, and found that different heuristics work best for different problem instances selected from a large-scale heterogeneous test bed, with some evidence of cryptic structure in terms of what types of problem instances were best solved by any given heuristic. Indeed, there are reasons to believe that these results for Max-Cut and quadratic binary optimization reflect a general principle of performance complementarity among heuristic optimization algorithms. In the practice of solving hard optimization problems, there thus arises a critical pre-processing issue of trying to guess which of a number of available good heuristic algorithms should be chosen to tackle a given problem instance. Assuming that any one of them would incur high cost to run on a large problem instance, making an astute choice of heuristic is a crucial part of maximizing overall performance.
Unfortunately, we still have very little conceptual insight about what makes a specific problem instance good or bad for any given heuristic optimization algorithm. This has certainly been pinpointed by researchers in the field as a circumstance that must be addressed. So adding this all up, we see that a critical frontier for cutting-edge academic research involves both the development of novel heuristic algorithms that deliver better performance with lower cost on classes of problem instances that are underserved by existing approaches, as well as fundamental research to provide deep conceptual insight into what makes a given problem instance easy or hard for such algorithms. In fact, these days, as we talk about the end of Moore's law and speculate about a so-called second quantum revolution, it's natural to talk not only about novel algorithms for conventional CPUs, but also about highly customized special-purpose hardware architectures on which we may run entirely unconventional algorithms for combinatorial optimization, such as the Ising problem. So against that backdrop, I'd like to use my remaining time to introduce our work on the analysis of coherent Ising machine architectures and associated optimization algorithms. These machines, in general, are a novel class of information-processing architectures for solving combinatorial optimization problems by embedding them in the dynamics of analog, physical, or cyber-physical systems, in contrast both to more traditional engineering approaches that build Ising machines using conventional electronics and to more radical proposals that would require large-scale quantum entanglement. The emerging paradigm of coherent Ising machines leverages coherent nonlinear dynamics in photonic or optoelectronic platforms to enable near-term construction of large-scale prototypes that leverage post-CMOS information dynamics. The general structure of current CIM systems is shown in the figure on the right.
The role of the Ising spins is played by a train of optical pulses circulating around a fiber-optic storage ring. A beam splitter inserted in the ring is used to periodically sample the amplitude of every optical pulse, and the measurement results are continually read into an FPGA, which uses them to compute perturbations to be applied to each pulse by synchronized optical injections. These perturbations are engineered to implement the spin-spin coupling and local magnetic field terms of the Ising Hamiltonian, corresponding to a linear part of the CIM dynamics. A synchronously pumped parametric amplifier, denoted here as PPLN waveguide, adds a crucial nonlinear component to the CIM dynamics as well. In the basic CIM algorithm, the pump power starts very low and is gradually increased. At low pump powers, the amplitudes of the Ising spin pulses behave as continuous complex variables, whose real parts, which can be positive or negative, play the role of soft, or perhaps mean-field, spins. Once the pump power crosses the threshold for parametric self-oscillation in the optical fiber ring, however, the amplitudes of the Ising spin pulses become effectively quantized into binary values. While the pump power is being ramped up, the FPGA subsystem continuously applies its measurement-based feedback implementation of the Ising Hamiltonian terms. The interplay of the linearized Ising dynamics implemented by the FPGA and the threshold quantization dynamics provided by the sync-pumped parametric amplifier results in a final state of the optical pulse amplitudes, at the end of the pump ramp, that can be read out as a binary string, giving a proposed solution of the Ising ground-state problem.
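A minimal numerical caricature of this pump-ramp behavior, not the actual hardware or FPGA implementation and with invented parameter values, is the mean-field amplitude model dx_i/dt = (p - 1) x_i - x_i^3 + eps * sum_j J_ij x_j, where p is the normalized pump and the cubic term stands in for gain saturation:

```python
import numpy as np

def cim_ramp(J, steps=4000, dt=0.01, eps=0.1, seed=0):
    """Toy mean-field CIM: ramp pump p from 0 to 2 while integrating
    dx_i/dt = (p - 1) x_i - x_i**3 + eps * sum_j J_ij x_j."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    x = 1e-3 * rng.standard_normal(n)    # near-vacuum initial amplitudes
    for t in range(steps):
        p = 2.0 * t / steps              # linear pump ramp
        x += dt * ((p - 1.0) * x - x**3 + eps * J @ x)
    return np.sign(x)                    # read out binary spins at ramp end

# Ferromagnetic ring of 4 spins: the lowest-loss state is all spins aligned,
# and the aligned collective mode crosses threshold first during the ramp.
J = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
spins = cim_ramp(J)
```

The sign readout at the end of the ramp plays the role of the binary string described above; on this un-frustrated instance the ramp lands in the aligned ground state.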
This method of solving the Ising problem seems quite different from a conventional algorithm that runs entirely on a digital computer, as a crucial aspect of the computation is performed physically by the analog, continuous, coherent, nonlinear dynamics of the optical degrees of freedom. In our efforts to analyze CIM performance, we have therefore turned to the tools of dynamical systems theory, namely, a study of bifurcations, the evolution of critical points, and topologies of heteroclinic orbits and basins of attraction. We conjecture that such analysis can provide fundamental insight into what makes certain optimization instances hard or easy for coherent Ising machines, and hope that our approach can lead to both improvements of the core CIM algorithm and a pre-processing rubric for rapidly assessing the CIM suitability of new instances. Okay, to provide a bit of intuition about how this all works, it may help to consider the threshold dynamics of just one or two optical parametric oscillators in the CIM architecture just described. We can think of each of the pulse time slots circulating around the fiber ring as representing an independent OPO. We can think of a single OPO degree of freedom as a single resonant optical mode that experiences linear dissipation, due to out-coupling loss, and gain in a pumped nonlinear crystal, as shown in the diagram on the upper left of this slide. As the pump power is increased from zero, as in the CIM algorithm, the nonlinear gain is initially too low to overcome linear dissipation, and the OPO field remains in a near-vacuum state. At a critical threshold value, with gain equal to dissipation, the OPO undergoes a sort of lasing transition, and the steady states of the OPO above this threshold are essentially coherent states.
There are actually two possible values of the OPO coherent amplitude at any given above-threshold pump power, which are equal in magnitude but opposite in phase. When the OPO crosses this threshold, it basically chooses one of the two possible phases randomly, resulting in the generation of a single bit of information. If we consider two uncoupled OPOs, as shown in the upper-right diagram, pumped at exactly the same power at all times, then as the pump power is increased through threshold, each OPO will independently choose a phase, and thus two random bits are generated. For any number of uncoupled OPOs, the threshold power per OPO is unchanged from the single-OPO case. Now, however, consider a scenario in which the two OPOs are coupled to each other by a mutual injection of their out-coupled fields, as shown in the diagram on the lower right. One can imagine that, depending on the sign of the coupling parameter alpha, when one OPO is lasing, it will inject a perturbation into the other that may interfere either constructively or destructively with the field that it is trying to generate by its own lasing process. As a result, one can easily show that for alpha positive there's an effective ferromagnetic coupling between the two OPO fields, and their collective oscillation threshold is lowered from that of the independent-OPO case, but only for the two collective oscillation modes in which the two OPO phases are the same. For alpha negative, the collective oscillation threshold is lowered only for the configurations in which the OPO phases are opposite. So then, looking at how alpha is related to the J_ij matrix of the Ising spin-coupling Hamiltonian, it follows that we could use this simplistic two-OPO CIM to solve the ground-state problem of a ferromagnetic or antiferromagnetic N equals 2 Ising model, simply by increasing the pump power from zero and observing what phase relation occurs as the two OPOs first start to lase.
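The threshold shift described here can be read off from the linearized coupling matrix alone. In a hedged two-OPO sketch (invented coupling values, not the experimental parameters), the collective mode with the largest eigenvalue of the coupling matrix reaches threshold at the lowest pump, and the sign pattern of its eigenvector gives the phase relation:

```python
import numpy as np

def first_lasing_mode(alpha):
    """Two mutually injected OPOs: linearized growth rates go as (p - 1) + mu,
    where mu are the eigenvalues of the coupling matrix [[0, a], [a, 0]].
    The mode with the largest mu reaches threshold at the lowest pump power."""
    C = np.array([[0.0, alpha],
                  [alpha, 0.0]])
    mu, vecs = np.linalg.eigh(C)
    top = vecs[:, np.argmax(mu)]      # eigenvector of the first-lasing mode
    return np.sign(top[0] * top[1])   # +1: in-phase mode, -1: anti-phase mode

# Positive coupling: the in-phase (ferromagnetic) mode lases first;
# negative coupling: the anti-phase (antiferromagnetic) mode lases first.
```

The product of the eigenvector components is invariant under the overall sign ambiguity of the eigenvector, so the readout is well defined.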
Clearly, we can imagine generalizing this story to larger N; however, the story doesn't stay as clean and simple for all larger problem instances. And to find a more complicated example, we only need to go to N equals 4. For some choices of J, for N equals 4, the story remains simple, like the N equals 2 case. The figure on the upper left of this slide shows the energy of various critical points for a non-frustrated N equals 4 instance, in which the first bifurcated critical point, that is, the one that bifurcates at the lowest pump value a, flows asymptotically into the lowest-energy Ising solution. In the figure on the upper right, however, the first bifurcated critical point flows to a very good but sub-optimal minimum at large pump power. The global minimum is actually given by a distinct critical point that first appears at a higher pump power and is not adiabatically connected to the origin. The basic CIM algorithm is thus not able to find this global minimum. Such non-ideal behaviors seem to become more common at larger N. For the N equals 20 instance shown in the lower plots, where the lower-right plot is just a zoom into a region of the lower-left plot, it can be seen that the global minimum corresponds to a critical point that first appears at a pump parameter a around 0.16, at some distance from the adiabatic trajectory of the origin. It's curious to note that in both of these small-N examples, however, the critical point corresponding to the global minimum appears relatively close to the adiabatic trajectory of the origin, as compared to most of the other local minima that appear. We're currently working to characterize the phase-portrait topology between the global minimum and the adiabatic trajectory of the origin, taking clues as to how the basic CIM algorithm could be generalized to search for non-adiabatic trajectories that jump to the global minimum during the pump ramp.
Of course, N equals 20 is still too small to be of interest for practical optimization applications, but the advantage of beginning with the study of small instances is that we're able reliably to determine their global minima and to see how they relate to the adiabatic trajectory of the origin in the basic CIM algorithm. In the small-N limit, we can also analyze fully quantum-mechanical models of CIM dynamics, but that's a topic for future talks. Existing large-scale prototypes are pushing into the range of N equals 10^4 to 10^5 to 10^6, so our ultimate objective in theoretical analysis really has to be to try to say something about CIM dynamics in the regime of much larger N. Our initial approach to characterizing CIM behavior in the large-N regime relies on the use of random matrix theory, and this connects to prior research on spin glasses, SK models, and the TAP equations, et cetera. At present, we're focusing on statistical characterization of the CIM gradient-descent landscape, including the evolution of critical points and their eigenvalue spectra as the pump power is gradually increased. We're investigating, for example, whether there could be some way to exploit differences in the relative stability of the global minimum versus other local minima. We're also working to understand the deleterious, or potentially beneficial, effects of non-idealities, such as asymmetry in the implemented Ising couplings. Looking one step ahead, we plan to move next in the direction of considering more realistic classes of problem instances, such as quadratic binary optimization with constraints. So in closing, I should acknowledge the people who did the hard work on these things that I've shown.
My group, including graduate students Edwin Ng, Daniel Wennberg, Tatsuya Nagamoto, and Atsushi Yamamura, has been working in close collaboration with Surya Ganguli, Marty Fejer, and Amir Safavi-Naeini, all of us within the Department of Applied Physics at Stanford University, and also in collaboration with Yoshihisa Yamamoto over at NTT PHI research labs. And I should acknowledge funding support from the NSF via the Coherent Ising Machines Expedition in Computing, and also from NTT PHI research labs, the Army Research Office, and ExxonMobil. That's it. Thanks very much. >> Thanks to NTT Research for putting together this program, and also for the opportunity to speak here. My name is Alireza Marandi and I'm from Caltech, and today I'm going to tell you about the work that we have been doing on networks of optical parametric oscillators, how we have been using them for Ising machines, and how we're pushing them toward quantum photonics. I'd like to acknowledge my team at Caltech, which is now eight graduate students and five researchers and postdocs, as well as collaborators from all over the world, including NTT Research, and also the funding from different places, including NTT. So this talk is primarily about networks of resonators, and these networks are everywhere, from nature, for instance the brain, which is a network of oscillators, all the way to optics and photonics. Some of the biggest examples are metamaterials, which are arrays of small resonators, and more recently the field of topological photonics, which is trying to implement a lot of the topological behaviors of models in condensed-matter physics in photonics. And if you want to extend it even further, some of the implementations of quantum computing are technically networks of quantum oscillators.
So we started thinking about these things in the context of Ising machines, which are based on the Ising problem, which is based on the Ising model, which is the simple summation over the spins, where spins can be either up or down and the couplings are given by the J_ij. And the Ising problem is: if you know J_ij, what is the spin configuration that gives you the ground state? This problem is shown to be an NP-hard problem, so it's computationally important because it's representative of the NP problems, and NP problems are important because, first, they're hard for standard computers if you use a brute-force algorithm, and they're everywhere on the application side. That's why there is this demand for making a machine that can target these problems, and hopefully it can provide some meaningful computational benefit compared to standard digital computers. So I've been building these Ising machines based on this building block, which is a degenerate optical parametric oscillator. What it is is a resonator with nonlinearity in it; we pump these resonators and we generate the signal at half the frequency of the pump. One photon of the pump splits into two identical photons of signal, and they have some very interesting phase and frequency locking behaviors. And if you look at the phase-locking behavior, you realize that you can actually have two possible phase states as the oscillation result of these OPOs, which are off by pi, and that's one of the important characteristics of them. So I want to emphasize that a little more, and I have this mechanical analogy, which is basically two simple pendula. But they are parametric oscillators, because I'm going to modulate a parameter of them in this video, which is the length of the string, and that modulation will act as the pump. That will make an oscillation, a signal, which is at half the frequency of the pump.
And I have two of them to show you that they can acquire these phase states, so they're still phase- and frequency-locked to the pump, but each can land in either the zero or pi phase state. The idea is to use this binary phase to represent the binary Ising spin, so each OPO is going to represent a spin, which can be either zero or pi, or up or down. To implement the network of these resonators, we use the time-multiplexing scheme, and the idea is that we put pulses in the cavity. These pulses are separated by the repetition period that you put in, or T_R, and you can think about these pulses in one resonator as temporally separated synthetic resonators. If you want to couple these resonators to each other, you can introduce delays, each of which is a multiple of T_R. If you look at the shortest delay, it couples resonator 1 to 2, 2 to 3, and so on. If you look at the second delay, which is two times the repetition period, it couples 1 to 3, 2 to 4, and so on. And if you have N minus 1 delay lines, then you can have any potential couplings among these synthetic resonators. And if I can introduce modulators in those delay lines, so that I can control the strength and the phase of these couplings at the right times, then I can have a programmable all-to-all connected network in this time-multiplexed scheme, and the whole physical size of the system scales linearly with the number of pulses. So the idea of the OPO-based Ising machine is this: having these OPOs, each of them can be either zero or pi, and I can arbitrarily connect them to each other. Then I start with programming this machine to a given Ising problem, by just setting the couplings and setting the controllers in each of those delay lines. So now I have a network which represents an Ising problem; then the Ising problem maps to finding the phase state that satisfies the maximum number of coupling constraints.
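The bookkeeping of this time-multiplexed coupling is easy to sketch; the following is a hypothetical illustration, not the lab's control code. Delay line k, with delay k times T_R, connects the pulse in slot i to the pulse in slot i + k, so N - 1 delay lines reach every pair:

```python
def delay_line_pairs(n_pulses, delay_k):
    """Pulse pairs coupled by a delay line of length delay_k * T_R:
    the pulse in slot i is injected into slot i + delay_k."""
    return [(i, i + delay_k) for i in range(n_pulses - delay_k)]

def all_pairs_covered(n_pulses):
    """With delay lines k = 1 .. N-1, every unordered pair appears exactly once."""
    pairs = []
    for k in range(1, n_pulses):
        pairs.extend(delay_line_pairs(n_pulses, k))
    return pairs

# For 4 pulses: delay 1 gives (0,1),(1,2),(2,3); delay 2 gives (0,2),(1,3);
# delay 3 gives (0,3) -- six pairs in total, i.e. full all-to-all coupling.
pairs = all_pairs_covered(4)
```

Putting a modulator in delay line k then sets the strength and sign of J_ij for every pair with j - i = k, which is the programmability described above.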
And the way it happens is that the Ising Hamiltonian maps to the linear loss of the network, and if I start adding gain, by just putting pump into the network, then the OPOs are expected to oscillate in the lowest-loss state. We have been doing this for the past six or seven years, and I'm just going to quickly show you the transition, especially what happened in the first implementation, which was using a free-space optical system, then the guided-wave implementation in 2016, and the measurement-feedback idea, which led to increasing the size and doing actual computation with these machines. So I just want to make this distinction here that the first implementation was an all-optical interaction; we also had an N equals 16 implementation, and then we transitioned to this measurement-feedback idea, which I'll tell you quickly what it is. There's still a lot of ongoing work, especially on the NTT side, to make larger machines using measurement feedback, but I'm going to mostly focus on the all-optical networks, how we're using all-optical networks to go beyond simulation of the Ising Hamiltonian on both the linear and nonlinear sides, and also how we're working on miniaturization of these OPO networks. So the first experiment, which was the four-OPO machine, was a free-space implementation, and this is the actual picture of the machine. We implemented a small N equals 4 Max-Cut problem on the machine, so one problem for one experiment, and we ran the machine 1,000 times; we looked at the states, and we always saw it oscillate in one of the ground states of the Ising Hamiltonian. Then the measurement-feedback idea was to replace those couplings and the controller with a simulator: we basically simulated all those coherent interactions on an FPGA, and we replicated the coherent pulse with respect to all those measurements, and then we injected it back into the cavity, and the nonlinearity still remains.
So it is still a nonlinear dynamical system, but the linear side is all simulated. There are lots of questions about whether this system preserves important information or not, or whether it's going to behave better computation-wise, and that's still a lot of ongoing study. But nevertheless, the reason that this implementation is very interesting is that you don't need the N minus 1 delay lines, you can just use one. Then you can implement a large machine, and you can run several thousands of problems in the machine, and then you can compare the performance from the computational perspective. So I'm going to split this idea of the OPO-based Ising machine into two parts. One is the linear part, which is: if you take the nonlinearity out of the resonator and just think about the connections, you can think about this as a simple matrix-multiplication scheme, and that's basically what gives you the Ising Hamiltonian modeling; the optical loss of this network corresponds to the Ising Hamiltonian. If I just want to show you the example of the N equals 4 experiment, with all those phase states and the histogram that we saw, you can actually calculate the loss of each of those states, because all those interferences in the beam splitters and the delay lines are going to give you different losses, and then you will see that the ground states correspond to the lowest loss of the actual optical network. If you add the nonlinearity, the simple way of thinking about what the nonlinearity does is that it provides the gain, and then you start bringing up the gain so that it hits the loss. Then you go through the gain saturation, or the threshold, which is going to give you this phase bifurcation, so you go to either the zero or pi phase state. And the expectation is that the network oscillates in the lowest possible loss state.
There are some challenges associated with this intensity Durban face transition, which I'm going to briefly talk about. I'm also going to tell you about other types of non aerodynamics that we're looking at on the non air side of these networks. So if you just think about the linear network, we're actually interested in looking at some technological behaviors in these networks. And the difference between looking at the technological behaviors and the icing uh, machine is that now, First of all, we're looking at the type of Hamilton Ian's that are a little different than the icing Hamilton. And one of the biggest difference is is that most of these technological Hamilton Ian's that require breaking the time reversal symmetry, meaning that you go from one spin to in the one side to another side and you get one phase. And if you go back where you get a different phase, and the other thing is that we're not just interested in finding the ground state, we're actually now interesting and looking at all sorts of states and looking at the dynamics and the behaviors of all these states in the network. So we started with the simplest implementation, of course, which is a one d chain of thes resonate, er's, which corresponds to a so called ssh model. In the technological work, we get the similar energy to los mapping and now we can actually look at the band structure on. This is an actual measurement that we get with this associate model and you see how it reasonably how How? Well, it actually follows the prediction and the theory. One of the interesting things about the time multiplexing implementation is that now you have the flexibility of changing the network as you are running the machine. And that's something unique about this time multiplex implementation so that we can actually look at the dynamics. And one example that we have looked at is we can actually go through the transition off going from top A logical to the to the standard nontrivial. 
I'm sorry, to the trivial behavior of the network. You can then look at the edge states, and you can also see the trivial end states and the topological edge states actually showing up in this network. We have just recently implemented a 2D network with the Harper-Hofstadter model, and I don't have the results here, but one of the other important characteristics of time multiplexing is that you can go to higher and higher dimensions while keeping that flexibility and dynamics. And we can also think about adding nonlinearity, both in the classical and quantum regimes, which is going to give us a lot of exotic classical and quantum nonlinear behaviors in these networks. So I told you mostly about the linear side; let me just switch gears and talk about the nonlinear side of the network. The biggest thing that I talked about so far in the Ising machine is this phase transition at threshold. Below threshold we have squeezed states in these OPOs; if you increase the pump, we go through this intensity-driven phase transition, and then we get the phase states above threshold. And this is basically the mechanism of the computation in these OPOs, which is through this phase transition from below to above threshold. So one of the characteristics of this phase transition is that below threshold you expect to see quantum states, and above threshold you expect to see more classical states, or coherent states, and that basically corresponds to the intensity of the driving pump. So it's really hard to imagine that you can go above threshold, or have this phase transition happen, all in the quantum regime. And there are also some challenges associated with the intensity homogeneity of the network: for example, if one OPO starts oscillating and then its intensity goes really high, it's going to ruin this collective decision-making of the network, because of the intensity-driven nature of the phase transition.
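The below-to-above-threshold behavior described here is often captured by the textbook pitchfork-bifurcation ODE for the normalized signal amplitude, dx/dt = (p - 1)x - x^3, where p is the pump relative to threshold. This is a generic sketch of that mechanism, not the speaker's specific device model; the parameter values are arbitrary.

```python
def evolve_opo(p, x0=1e-3, dt=1e-3, steps=200_000):
    """Euler-integrate dx/dt = (p - 1) * x - x**3 (normalized OPO amplitude)."""
    x = x0
    for _ in range(steps):
        x += dt * ((p - 1.0) * x - x ** 3)
    return x

below = evolve_opo(p=0.5)               # below threshold: amplitude decays to ~0
above = evolve_opo(p=2.0)               # above threshold: settles near +sqrt(p-1)
flipped = evolve_opo(p=2.0, x0=-1e-3)   # opposite-sign seed -> the "pi" phase state

print(below, above, flipped)
```

Below threshold the amplitude collapses toward zero (the squeezed-vacuum regime in the real device); above threshold a tiny seed of either sign is amplified into one of the two stable amplitudes, which is the zero-or-pi phase bifurcation the computation relies on.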
So the question is, can we look at other phase transitions? Can we utilize them for computing, and can we also bring them to the quantum regime? I'm going to specifically talk about the phase transition in the spectral domain, which is the transition from the so-called degenerate regime, which is what I mostly talked about, to the non-degenerate regime, which happens by just tuning the phase of the cavity. And what is interesting is that this phase transition corresponds to a distinct phase-noise behavior. So in the degenerate regime, which we call the ordered state, the phase is locked to the phase of the pump, as I talked about. In the non-degenerate regime, however, the phase is mostly dominated by the quantum diffusion of the phase, which is limited by the so-called Schawlow-Townes limit. And you can see that transition from the degenerate to the non-degenerate, which also has distinct symmetry differences: this transition corresponds to a symmetry breaking. In the non-degenerate case, the signal can acquire any of the phases on the circle, so it has a U(1) symmetry. And if you go to the degenerate case, then that symmetry is broken and you only have the zero and pi phase states. So now the question is, can we utilize this phase transition, which is a phase-driven phase transition, and can we use it for a similar computational scheme? That's one of the questions that we're also thinking about. And this phase transition is not just important for computing; it's also interesting from a sensing perspective. And you can easily bring this phase transition below threshold and just operate it in the quantum regime, either Gaussian or non-Gaussian. If you make a network of OPOs, then we can see all sorts of more complicated and more interesting phase transitions in the spectral domain.
One of them is a first-order phase transition, which you get by just coupling two OPOs, and that's a very abrupt phase transition compared to the single-OPO phase transition. And if you do the couplings right, you can actually get a lot of non-Hermitian dynamics and exceptional points, which are actually very interesting to explore, both in the classical and quantum regimes. And I should also mention that you can think about the couplings being nonlinear couplings as well. That's another behavior you can see, especially in the non-degenerate regime. So with that, I have basically told you about these OPO networks: how we can think about the linear scheme and the linear behaviors, and how we can think about the rich nonlinear dynamics and nonlinear behaviors, both in the classical and quantum regimes. I want to switch gears and tell you a little bit about the miniaturization of these OPO networks. And of course, the motivation is: if you look at electronics and what we had 60 or 70 years ago with vacuum tubes, and how we transitioned from relatively small-scale computers on the order of thousands of nonlinear elements to the billions of nonlinear elements where we are now; with optics, we are probably very similar to 70 years ago, which is a tabletop implementation. And the question is, how can we utilize nanophotonics? I'm going to just briefly show you the two directions we're working on. One is based on lithium niobate, and the other is based on even smaller resonators. So the work on nanophotonic lithium niobate was started in collaboration with Marko Loncar at Harvard, and also Marty Fejer at Stanford. And we could show that you can do the periodic poling in thin-film lithium niobate and get all sorts of very highly nonlinear processes happening in this nanophotonic, periodically poled lithium niobate. And now we're working on building OPOs based on that kind of nanophotonic thin-film lithium niobate.
And these are some examples of the devices that we have been building in the past few months, which I'm not going to tell you more about, but the OPOs and the OPO networks are in the works. And that's not the only way of making large networks. I also want to point out that the reason these nanophotonic platforms are actually exciting is not just because you can make large networks and make them compact in a small footprint. They also provide some opportunities in terms of the operation regime. One of them is about making cat states in an OPO, which is: can we have the quantum superposition of the zero and pi states that I talked about? And the nanophotonic lithium niobate provides some opportunities to actually get closer to that regime, because of the spatio-temporal confinement that you can get in these waveguides. So we're doing some theory on that. We're confident that the ratio of nonlinearity to loss that you can get with these platforms is actually much higher than what you can get with the other existing platforms. And to go even smaller, we have been asking the question of what is the smallest possible OPO that you can make. You can think about really wavelength-scale-type resonators, adding the chi-two nonlinearity, and seeing how and when you can get the OPO to operate. And recently, in collaboration with USC and CREOL, we have demonstrated that you can use nanolasers and get some spin Hamiltonian implementations on those networks. So if we can build the OPOs, we know that there is a path for implementing OPO networks on such a nanoscale. We have looked at these calculations and tried to estimate the threshold of the OPOs, let's say for a wavelength-scale resonator, and it turns out that it can actually be even lower than the kind of bulk PPLN OPOs that we have been building in the past 50 years or so.
So we're working on the experiments, and we're hoping that we can actually make larger and larger scale OPO networks. So let me summarize the talk. I told you about the OPO networks and our work that has been going on on Ising machines with measurement feedback. I told you about the ongoing work on the all-optical implementations, both on the linear side and also on the nonlinear behaviors. And I also told you a little bit about the efforts on miniaturization and going to the nanoscale. So with that, I would like to thank you. >> Hello, I am from the University of Tokyo. Before I start, I would like to thank Yoshi and all the staff of NTT for the invitation and the organization of this online meeting, and I would also like to say that it has been very exciting to see the growth of this new PHI lab. I'm happy to share with you today some of the recent works that have been done either by me or by collaborators. The title of my talk is "A neuromorphic in silico simulator for the coherent Ising machine." And here is the outline: I would like to make the case that the simulation in digital electronics of the CIM can be useful for better understanding or improving its function principles, by introducing some ideas from neural networks. This is what I will discuss in the first part. Then I will show some proof of concept of the gain in performance that can be obtained using this simulation in the second part, and a projection of the performance that can be achieved using a very large-scale simulator in the third part, and finally talk about future plans. So first, let me start by comparing recently proposed Ising machines using this table, which is adapted from a recent Nature Electronics paper, and this comparison shows that there is always a trade-off between energy efficiency, speed, and scalability that depends on the physical implementation.
So in red here are the limitations of each of these hardware approaches. Interestingly, the FPGA-based systems, such as the digital annealer, the Toshiba bifurcation machine, or a recently proposed restricted Boltzmann machine on FPGA by a group in Berkeley, offer a good compromise between speed and scalability. And this is why, despite the unique advantages that some of these other hardware platforms have, such as those of photonic CIMs or the energy efficiency of memristors, FPGAs are still an attractive platform for building large-scale Ising machines in the near future. The reason for the good performance of FPGAs is not so much that they operate at high frequency, nor that they are particularly energy efficient, but rather that the physical wiring of their elements can be reconfigured in a way that limits the von Neumann bottleneck, the large fan-in and fan-out, and the long propagation of information within the system. In this respect, FPGAs are interesting from the perspective of the physics of complex systems, rather than the physics of electrons and photons. So to put the performance of these various hardware platforms in perspective, we can look at the computation of the brain: the brain computes using billions of neurons, using only 20 watts of power, and operates, theoretically, very slowly, if we can say so. And these impressive characteristics motivate us to try to investigate what kind of neuro-inspired principles might be useful for designing better Ising machines. The idea of this research project, and of the future collaboration, is to temporarily alleviate the limitations that are intrinsic to the realization of an optical coherent Ising machine, shown in the top panel here,
By designing a large-scale simulator in silicon, in the bottom here, that can be used for testing better organization principles for the CIM. In this talk, I will talk about three neuro-inspired principles. The first is the asymmetry of connections: neural dynamics are often chaotic because of asymmetry in connectivity. The second is the microstructure: neural networks are not composed of the repetition of always the same types of nonlinear elements, the neurons; there is a local structure that is repeated. So here is a schematic of the microcolumn in the cortex. And lastly, the hierarchical organization of connectivity: connectivity is organized in a tree structure in the brain, and here you see a representation of the hierarchical organization of the monkey cerebral cortex. So how can these principles be used to improve the performance of Ising machines and their in silico simulation? So, first, about the two principles of asymmetry and microstructure. We know that the classical approximation of the coherent Ising machine, which is analogous to rate-based neural networks — in the case of the CIM, this classical approximation can be obtained using the truncated Wigner approximation, for example. So the dynamics of both systems can be described by the following ordinary differential equations, in which, in the case of the CIM, the x_i represent the in-phase component of one DOPO, the function f represents the nonlinear optical part, the degenerate optical parametric amplification, and the sum of omega_ij x_j represents the coupling, which is done, in the case of the measurement-feedback CIM, using homodyne detection and an FPGA, and then injection of the computed coupling term. And these dynamics, in both cases of CIM and neural networks, can be written as gradient descent of a potential function V, written here, and this potential function includes the Ising Hamiltonian.
So this is why it's natural to use this type of dynamics to solve Ising problems, in which the omega_ij are the Ising couplings and h is the external field of the Ising Hamiltonian. Note that this potential function can only be defined if the omega_ij are symmetric. So the well-known problem of this approach is that the potential function V that we obtain is very non-convex at low temperature, and so one strategy is to gradually deform this landscape, using some annealing process. But there is no theorem, unfortunately, that guarantees convergence to the global minimum of the Ising Hamiltonian using this approach. And so this is why we propose to introduce a microstructure into the system, where one analog spin, or one DOPO, is replaced by a pair of one analog spin and one error-correction variable. The addition of this microstructure introduces an asymmetry in the system, which in turn induces chaotic dynamics: a chaotic search, rather than an annealing process, for the ground state of the Ising Hamiltonian. Within this microstructure, the role of the error variable is to control the amplitude of the analog spins, to force the amplitude of the spins to become equal to a certain target amplitude a. And this is done by modulating the strength of the Ising couplings: see here, the error variable e_i multiplies the Ising coupling in the dynamics of each DOPO. The whole dynamics is then described by these coupled equations. Because the e_i do not necessarily take the same value for the different i, this introduces an asymmetry in the system, which in turn creates chaotic dynamics, which I show here for solving a certain size of SK problem, in which the x_i are shown here, the e_i are shown here, and the value of the Ising energy is shown in the bottom plots.
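The coupled equations described above — an analog spin x_i whose injected coupling is modulated by an error variable e_i that forces the amplitude toward a target a — can be sketched with a minimal Euler integration. This is an illustrative toy under assumed parameter values (p, beta, dt and the 3-spin coupling matrix are all invented for the example), not the speaker's actual FPGA model.

```python
import random

def simulate_cim_cac(J, a=1.0, p=0.9, beta=0.3, dt=0.01, steps=20000, seed=1):
    """Euler integration of a CIM with error-correction variables:
         dx_i/dt = (p - 1) x_i - x_i^3 + e_i * sum_j J_ij x_j
         de_i/dt = -beta * e_i * (x_i^2 - a)
    The e_i grow while |x_i| < sqrt(a), amplifying the injection, and relax
    once the amplitudes reach the target, equalizing the analog spins."""
    rng = random.Random(seed)
    n = len(J)
    x = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    e = [1.0] * n
    for _ in range(steps):
        inj = [sum(J[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + dt * ((p - 1) * x[i] - x[i] ** 3 + e[i] * inj[i])
             for i in range(n)]
        e = [e[i] + dt * (-beta * e[i] * (x[i] ** 2 - a)) for i in range(n)]
    return [1 if xi > 0 else -1 for xi in x]

# Tiny hypothetical instance: spins 0 and 1 ferromagnetic, both
# antiferromagnetic to spin 2 (ground state: s0 = s1 = -s2).
J = [[0, 1, -1],
     [1, 0, -1],
     [-1, -1, 0]]
spins = simulate_cim_cac(J)
energy = -sum(J[i][j] * spins[i] * spins[j]
              for i in range(3) for j in range(i + 1, 3))
print(spins, energy)
```

For this easy instance the dominant eigenvector of the coupling matrix already points along the ground-state configuration, so the dynamics settle into it; the interesting chaotic-search behavior the speaker describes shows up on frustrated problems where plain gradient descent would get stuck.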
You see this chaotic search that visits various local minima of the Ising Hamiltonian and eventually finds the global minimum. It can be shown that this modulation of the target amplitude can be used to destabilize all the local minima of the Ising Hamiltonian, so that we do not get stuck in any of them. Moreover, the other types of attractors that can eventually appear, such as limit-cycle attractors or chaotic attractors, can also be destabilized using the modulation of the target amplitude. And so we have proposed in the past two different modulations of the target amplitude. The first one is a modulation that ensures the entropy production rate of the system becomes positive, and this forbids the creation of any non-trivial attractors. But in this work I will talk about another, simplified modulation, which is given here, that works as well as the first modulation but is easier to implement on FPGA. So these coupled equations, which represent the simulation of the coherent Ising machine with some error correction, can be implemented especially efficiently on an FPGA. And here I show the time that it takes to simulate the system; in red you see the time that it takes to simulate the x_i term, the e_i term, the dot product, and the Ising Hamiltonian, for a system with 500 spins and error variables, equivalent to 500 DOPOs. So in FPGA, the nonlinear dynamics corresponding to the degenerate optical parametric amplification, the OPA of the CIM, can be computed in only 13 clock cycles at 300 MHz, which corresponds to about 0.1 microseconds. And this is to be compared to what can be achieved in the measurement-feedback CIM, in which, if we want to get 500 time-multiplexed DOPOs with a 1 GHz repetition rate, then we would require 0.5 microseconds to do this. So the simulation on FPGA can be at least as fast as a 1 GHz repetition-rate pulsed-laser CIM.
Then the dot product that appears in this differential equation can be computed in 43 clock cycles, that is to say, about 0.1 microseconds. So for problem sizes that are larger than 500 spins, the dot product clearly becomes the bottleneck, and this can be seen by looking at the scaling of the number of clock cycles it takes to compute either the nonlinear optical part or the dot product, with respect to the problem size. If we had an infinite amount of resources on the FPGA to simulate the dynamics, then the nonlinear optical part could be done in O(1), and the matrix-vector product could be done in O(log N), as it scales as the logarithm of N. That is because computing the dot product involves summing all the terms in the product, which is done on an FPGA by an adder tree, whose height scales logarithmically with the size of the system. But this is in the case where we had an infinite amount of resources on the FPGA. For dealing with larger problems of more than 100 spins, usually we need to decompose the matrix into smaller blocks, with a block size that I note u here, and then the scaling becomes, for the nonlinear parts, linear in N over u, and for the dot products, (N over u) squared. Typically, for a low-end FPGA, the block size u of this matrix is about 100. So clearly we want to make u as large as possible, in order to maintain this scaling in O(log N) for the number of clock cycles needed to compute the dot product, rather than the squared scaling that occurs if we decompose the matrix into smaller blocks. But the difficulty in having these larger blocks is that a very large adder tree introduces large fan-in and fan-out and long-distance data paths within the FPGA.
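The scaling argument above can be made concrete with a simplified cycle-count model: an N-input adder tree reduces a row in about log2(N) cycles, while a blocked decomposition processes (N/u)^2 blocks sequentially, each through a u-input tree. This is my own back-of-the-envelope model of the talk's argument, not the actual FPGA schedule.

```python
import math

def cycles_full_tree(n):
    """Idealized: one matrix row reduced by an n-input adder tree
    -> depth ceil(log2 n) clock cycles."""
    return math.ceil(math.log2(n))

def cycles_blocked(n, u):
    """Matrix decomposed into (n/u) x (n/u) blocks of size u: each block
    is reduced by a u-input adder tree (depth log2 u), and the blocks are
    processed sequentially (a simplified model, ignoring pipelining)."""
    blocks = math.ceil(n / u) ** 2
    return blocks * math.ceil(math.log2(u))

for n in (512, 1024, 2048):
    print(n, cycles_full_tree(n), cycles_blocked(n, 100))
```

Even this crude model shows why a larger block size u matters: the full tree stays logarithmic, while the blocked version inherits the (N/u)^2 factor that dominates for large problems.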
So the solution to get higher performance for a simulator of the coherent Ising machine is to get rid of this bottleneck for the dot product by increasing the size of this adder tree. And this can be done by organizing hierarchically the electrical components within the FPGA, in a way which is shown here in this right panel, in order to minimize the fan-in and fan-out of the system and to minimize the long-distance data paths in the FPGA. I'm not going into the details of how this is implemented on the FPGA, but just to give you an idea of why the hierarchical organization of the system becomes extremely important to get good performance for simulating Ising machines. So instead of getting into the details of the FPGA implementation, I would like to give some benchmark results for this simulator, which was used as a proof of concept for this idea, and which can be found in this arXiv paper here. Here I show results for solving SK problems: fully connected, randomly chosen, plus-or-minus-one spin-glass problems. We use as a metric the number of matrix-vector products, since it is the bottleneck of the computation, needed to get the optimal solution of the SK problem with 99% success probability, against the problem size. In red here is this proposed FPGA implementation; in blue is the number of matrix-vector products that are necessary for the CIM without error correction to solve these SK problems; and in green here is noisy mean-field annealing, which has a behavior similar to the coherent Ising machine. And so clearly you see that the number of matrix-vector products necessary to solve this problem scales with a better exponent than these other approaches.
So that's an interesting feature of the system, and next we can see what is the real time-to-solution to solve these SK instances. So on this slide, I show the time-to-solution in seconds to find a ground state of SK instances with 99% success probability for different state-of-the-art hardware. In red is the FPGA implementation proposed in this paper, and the other curves represent breakout local search, in orange, and simulated annealing, in purple, for example. And you see that the scaling of this proposed simulator is rather good, and that for larger problem sizes we can get orders of magnitude faster than the state-of-the-art approaches. Moreover, the relatively good scaling of the time-to-solution with respect to problem size indicates that the FPGA implementation would be faster than other recently proposed Ising machines, such as the Hopfield neural network implemented on memristors, in blue here, which is very fast for small problem sizes but whose scaling is not good; and the same thing for the restricted Boltzmann machine implemented on FPGA, proposed by a group in Berkeley recently: again, very fast for small problem sizes, but its scaling is bad, so that it is worse than the proposed approach. So we can expect that for problem sizes larger than 1000 spins, the proposed approach would be the faster one. Let me jump to this other slide, and another confirmation that the scheme scales well: we can find maximum-cut values for the benchmark set, the G-set, better than the cut values that have been previously found by any other algorithms, so they are the best known cut values to the best of our knowledge.
And, moreover, as is shown in this table here, in particular for the instances G14 and G15 of this G-set, we can find better cut values than previously known, and we can find these cut values 100 times faster than the state-of-the-art algorithm for doing this. Note that getting these good results on the G-set does not require particularly hard tuning of the parameters. The tuning used here is very simple: it just depends on the degree of connectivity within each graph. And so these good results on the G-set indicate that the proposed approach would be good not only at solving SK problems, but at all types of graph Ising problems, such as max-cut problems. So given that the performance of the design depends on the height of this adder tree, we can try to maximize the height of this adder tree on a large FPGA by carefully routing the components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future, based on the implementation that we are currently working on. So here you see projections of the time-to-solution with 99% success probability for solving SK problems with respect to the problem size, compared to different published Ising machines, in particular the digital annealer, shown in green here, the green line without dots. And we show two different hypotheses for these projections: either that the time-to-solution scales as an exponential of N, or that the time-to-solution scales as an exponential of the square root of N.
So it seems, according to the data, that the time-to-solution scales more like an exponential of the square root of N, and these projections show that we probably can solve SK problems of size 2000 spins, finding the real ground state of the problem with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, some of the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the system of the measurement-feedback CIM. And to do this, what is simulatable on the FPGA is the quantum Gaussian model that is described in this paper, proposed by people in the NTT group. And the idea of this model is that, instead of having the very simple ODEs shown previously, it includes pairs of ODEs that take into account not only the mean of the amplitude of the in-phase component, but also its variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. Then we plan to make the simulator open access for members to run their instances on the system. There will be a first version in September that will be based on simple command-line access to the simulator, and which will have just the classical approximation of the system, with binary weights and no Zeeman term. But then we will propose a second version that will extend the current Ising machine to an array of FPGAs, in which we will add the more refined models, such as the quantum Gaussian model I just talked about, and support real-valued weights for the Ising problems, and support the Zeeman term.
So we will announce later when this is available, and we are working hard on it. >> I come from the University of Notre Dame, physics department, and I'd like to thank the organizers for their kind invitation to participate in this very interesting and promising workshop. I'd also like to say that I look forward to collaborations with the PHI lab, Yoshi, and collaborators on the topics of this workshop. So today I'll briefly talk about our attempt to understand the fundamental limits of analog continuous-time computing, at least from the point of view of Boolean satisfiability problem solving, using ordinary differential equations. But I think the issues that we raise on this occasion actually apply to other analog approaches as well, and to other problems as well. I think everyone here knows what Boolean satisfiability problems are: you have Boolean variables, you have M clauses, each a disjunction of literals; a literal is a variable or its negation. And the goal is to find an assignment to the variables such that all the clauses are true. This is a decision-type problem from the NP class, which means you can check in polynomial time the satisfiability of any assignment. And 3-SAT is NP-complete, with k of three or larger, which means an efficient 3-SAT solver implies an efficient solver for all the problems in the NP class, because all the problems in the NP class can be reduced in polynomial time to 3-SAT. As a matter of fact, you can reduce the NP-complete problems into each other: you can go from 3-SAT to set packing, or to maximum independent set, which is set packing in graph-theoretic terms, to the decision version of the Ising spin-glass problem. This is useful when you're comparing different approaches working on different kinds of problems. When not all the clauses can be satisfied, you look at the optimization version of SAT, called MAX-SAT.
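The NP-membership point made above — that any assignment can be checked in polynomial time — is a one-liner in practice. A minimal checker, with a small hypothetical formula in DIMACS-style signed-integer clauses (the formula is invented for illustration):

```python
def check_sat(clauses, assignment):
    """Check a CNF formula in polynomial time. Clauses are lists of nonzero
    integers: +v means variable v, -v means its negation (DIMACS style);
    assignment maps variable numbers to booleans."""
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

# (x1 or not x2 or x3) and (not x1 or x2) and (not x3)
clauses = [[1, -2, 3], [-1, 2], [-3]]
assignment = {1: True, 2: True, 3: False}
print(check_sat(clauses, assignment))
```

The check visits each literal once, so it runs in time linear in the formula size; it is finding such an assignment, not verifying one, that is hard.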
And the goal here is to find an assignment that satisfies the maximum number of clauses; this is from the NP-hard class. In terms of applications: if we had an efficient SAT solver, or NP-complete problem solver, it would literally, positively influence thousands of problems and applications in industry and in science. I'm not going to read this list, but it of course gives a strong motivation to work on this kind of problem. Now, our approach to SAT solving involves embedding the problem in a continuous space, and we use ODEs to do that. So instead of working with zeros and ones, we work with minus ones and plus ones, and we allow the corresponding variables to change continuously between the two bounds. We formulate the problem with the help of a clause matrix: if a clause does not contain a variable or its negation, the corresponding matrix element is zero; if it contains the variable in positive form, it's plus one; if it contains the variable in negated form, it's minus one. And then we use this to formulate these products, called clause violation functions, one for every clause, which vary continuously between zero and one, and they are zero if and only if the clause itself is true. Then, in order to define the dynamics in this N-dimensional hypercube where the search happens — if solutions exist, they are sitting in some of the corners of this hypercube — we define this energy potential, or landscape function, shown here, in a way that it is zero if and only if all the clause violation functions, all the K_m, are zero, that is, all the clauses are satisfied, keeping these auxiliary variables a_m always positive. And therefore, what you do here is a dynamics that is essentially a gradient descent on this potential energy landscape. If you were to keep all the a_m constant, it would get stuck in some local minimum.
However, what we do here is couple it with the dynamics of the auxiliary variables through the clause violation functions, as shown here. If you didn't have this a_m here — just the K's, for example — you would essentially have positive feedback, you have increasing variables, but in that case you would still get stuck: it behaves better than the constant version but still gets stuck. Only when you put in this a_m, which makes the dynamics in this variable exponential-like, does it keep searching until it finds a solution. And there is a reason for that which I'm not going to talk about here, but it essentially boils down to performing a gradient descent on a globally time-varying landscape, and this is what works. Now I'm going to talk about the good, the bad, and maybe the ugly. What's good is that it's a hyperbolic dynamical system, which means that if you take any domain in the search space that doesn't have a solution in it, then the number of trajectories in it decays exponentially quickly, and the decay rate is an invariant characteristic of the dynamics itself, called in dynamical systems the escape rate. The inverse of that is the timescale on which you find solutions with this dynamical system. And you can see here some sample trajectories that are chaotic, because it's nonlinear; but it's transiently chaotic — there are no chaotic attractors, of course, because eventually it converges to the solution. Now, in terms of performance: what is shown here, for a bunch of constraint densities, defined by M over N, the ratio between clauses and variables, for random 3-SAT problems, is the monitored wall-clock time as a function of N, and it behaves quite well: it behaves polynomially, until you actually start to reach the SAT/UNSAT transition, where the hardest problems are found.
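The scheme described above — gradient descent on V = sum_m a_m K_m^2, with the auxiliary variables growing as da_m/dt = a_m K_m on violated clauses — can be sketched as follows. This is an illustrative toy under assumed step sizes, with states clamped to the hypercube for numerical safety; the instance is a hypothetical 3-variable formula, not one from the talk.

```python
import random

def ctds_sat_solve(clauses, n_vars, dt=0.05, max_steps=20000, seed=0):
    """Continuous-time dynamical-system SAT solver sketch: gradient descent on
    V = sum_m a_m * K_m^2, with K_m = 2^-k * prod_i (1 - c_mi * s_i) and
    exponentially growing auxiliary variables a_m. DIMACS-style clauses."""
    rng = random.Random(seed)
    s = [rng.uniform(-1, 1) for _ in range(n_vars + 1)]  # s[0] unused (1-indexed)
    a = [1.0] * len(clauses)

    def K(m, skip=None):
        # Clause violation function; with skip=i, the factor for variable i
        # is omitted (this is the K_mi appearing in the gradient).
        val = 2.0 ** -len(clauses[m])
        for lit in clauses[m]:
            i = abs(lit)
            if i == skip:
                continue
            c = 1.0 if lit > 0 else -1.0
            val *= (1.0 - c * s[i])
        return val

    for _ in range(max_steps):
        km = [K(m) for m in range(len(clauses))]
        if all(k < 1e-9 for k in km):   # every clause (numerically) satisfied
            break
        grad = [0.0] * (n_vars + 1)     # grad[i] = -dV/ds_i
        for m, clause in enumerate(clauses):
            for lit in clause:
                i, c = abs(lit), (1.0 if lit > 0 else -1.0)
                grad[i] += 2.0 * a[m] * c * K(m, skip=i) * km[m]
        s = [0.0] + [max(-1.0, min(1.0, s[i] + dt * grad[i]))
                     for i in range(1, n_vars + 1)]
        a = [a[m] + dt * a[m] * km[m] for m in range(len(clauses))]

    return [s[i] > 0 for i in range(1, n_vars + 1)]

# Tiny hypothetical instance: (x1 or x2) and (not x1 or x2) and (not x2 or x3).
clauses = [[1, 2], [-1, 2], [-2, 3]]
sol = ctds_sat_solve(clauses, n_vars=3)
print(sol)
```

Any satisfying assignment of this formula must set x2 and x3 true (x1 is free), so the returned corner of the hypercube can be checked directly against the clauses.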
But what's more interesting is if you monitor the continuous time t, the performance in terms of the analog continuous time t, because that seems to be polynomial. The way we show that is to consider random 3-SAT at a fixed constraint density, to the right of the threshold, where it is really hard, and to monitor the fraction of problems that have not been solved. We select thousands of problems at that constraint ratio, run them through our algorithm, and monitor the fraction of problems that have not yet been solved by continuous time t. As you see, this decays exponentially, with different decay rates for different system sizes, and this plot shows that the decay rate behaves polynomially, actually as a power law, in N. If you combine these two facts, you find that the time needed to solve all problems, except maybe an epsilon fraction of them, scales polynomially with the problem size. So you have polynomial continuous-time complexity. And this is also true for other types of very hard constraint-satisfaction problems, such as exact cover, because you can always transform them into 3-SAT, as we discussed before, and Ramsey coloring; on these problems, even algorithms like survey propagation will fail. But this doesn't mean that P equals NP, for the following reason. If you were to implement these equations in a device whose behavior is described by these ODEs, then of course t, the continuous-time variable, becomes physical wall-clock time, and that would indeed scale polynomially. But you have the other variables, the auxiliary variables, which can grow in an exponential manner. So if they represent currents or voltages in your realization, it would be an exponential-cost algorithm. This is some kind of trade-off between time and energy. I don't know how to generate time. 
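The exponential decay of the unsolved fraction described above can be fit numerically to estimate the escape rate. A small sketch, with naming of my own; the synthetic data in the test stands in for real per-instance solution times:

```python
import numpy as np

def escape_rate(solve_times, t_grid):
    """Estimate the escape rate kappa from p(t), the fraction of instances
    not yet solved by analog time t, assuming p(t) ~ exp(-kappa * t).
    solve_times: one continuous solution time per instance."""
    st = np.asarray(solve_times, float)
    t_grid = np.asarray(t_grid, float)
    p = np.array([(st > t).mean() for t in t_grid])
    good = p > 0                       # drop empty tail bins before taking logs
    slope, _ = np.polyfit(t_grid[good], np.log(p[good]), 1)
    return -slope                      # log p(t) has slope -kappa
```

The inverse of the returned rate is the solution time scale mentioned earlier in the talk.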
But I do know how to generate energy, so we can use energy to pay for it. But there are other issues as well, especially if you try to do this on a digital machine, and problems appear in physical devices too, as we'll discuss later. If you implement this on a GPU, you can get an order of magnitude or two of speedup. You can also modify this approach to solve Max-SAT problems quite efficiently; we are competitive with the best heuristic solvers on winner problems from the 2016 Max-SAT competition. So this definitely seems like a good approach, but there are of course limitations, interesting ones, I would say, because they make you think about what this means and how you can exploit these observations to better understand analog continuous-time complexity. If you monitor the number of discrete steps taken by the Runge-Kutta integrator (when you solve this on a digital machine, you are using some kind of integrator) and use the same approach, but now measure the number of problems you haven't solved after a given number of discrete integrator steps, you find that you have exponential discrete-time complexity, and of course this is a problem. If you look closely at what happens: even though the analog mathematical trajectory is the one drawn here, if you monitor what happens in discrete time, the integrator at first fluctuates very little, at the level of the third or fourth digit of precision, and then fluctuates like crazy. So the integrator essentially freezes out, and this is because of the phenomenon of stiffness, which I'll talk a little more about a bit later. It might look like an integration issue on digital machines that you could improve, and you could definitely improve it, but the issue is actually bigger than that. 
It's deeper than that, because on a digital machine there is no time-energy conversion: the auxiliary variables are efficiently represented on a digital machine, so there is no exponentially fluctuating current or voltage in your computer when you do this. So, if P is not equal to NP, then the exponential time complexity, or exponential cost complexity, has to hit you somewhere, and this is where. One would be tempted to think that maybe this wouldn't be an issue in an analog device, and to some extent that's true: analog devices can be orders of magnitude faster, but they also suffer from their own problems, because they are not going to be perfect either. Indeed, if you look at other systems, such as memristor-based machines, measurement-feedback machines, or oscillator networks, they all hinge on some ability to control your variables with arbitrarily high precision. In oscillator networks you want to read out phases or frequencies precisely; in the case of CIMs you require identical pulses, which are hard to keep identical; they fluctuate and shift away from one another, and if you could control that, then you could control the performance. So one can ask whether this is a universal bottleneck, and it seems so, as I will argue next. We can recall a fundamental result by Schönhage from 1978, a purely computer-science proof, which says that if you are able to compute the addition, multiplication, and division of real variables with infinite precision, then you can solve NP-complete problems in polynomial time. He doesn't actually propose a solver; he just shows mathematically that this would be the case. Now, of course, in the real world you have finite precision. So the next question is: how does that affect the computation of these problems? This is what we're after. Loss of precision means information loss, or entropy production. 
So what we're really looking at is the relationship between the hardness of a problem and the cost of computing it. According to Schönhage's result, there is this left branch, which in principle could be polynomial time; but whether that is achievable is the question, and something else is more likely, on the right-hand side: there is always going to be some information loss, some entropy generation, that can keep you away from polynomial time. This is what we would like to understand, and I will argue that the source of this information loss is present not just in any physical system but is also of an algorithmic nature, so this is a reasonable avenue of approach. Schönhage's result is purely theoretical; no actual solver is proposed. So we can ask, just theoretically, out of curiosity: since he does not propose a solver with such properties, would our solver, if you look mathematically and precisely at what it does, have the right properties? And I argue yes. I don't have a mathematical proof, but I have some arguments that this would be the case. In particular, for our continuous-time SAT solver: if you could compute its trajectory losslessly, then it would solve NP-complete problems in polynomial continuous time. Now, as a matter of fact, this is a somewhat more delicate question, because time in ODEs can be rescaled however you want. So what one says is that you actually have to measure the length of the trajectory, which is an invariant of the dynamical system, a property of the dynamical system itself and not of its parameterization. And we did that. My student did it first, improving on the stiffness of the integration using implicit solvers and some smart tricks, so that you actually stay closer to the true trajectory, and then using the same approach: monitoring what fraction of problems you can solve, 
but now against the length of the trajectory. You find that it scales polynomially with the problem size: we have polynomial-length complexity. That means our solver is both poly-length and, as time is defined through it, a poly-time analog solver. But if you look at it as a discrete algorithm, if you measure the discrete steps on a digital machine, it is an exponential solver. And the reason is, again, stiffness. Every integrator has to truncate (digitizing truncates the equations), and what it has to do is keep the integration within the so-called stability region of the scheme: you have to keep the product of the eigenvalues of the Jacobian and the step size delta-t within this region. If you use explicit methods, you want to stay inside this region; but for stiff problems some of the eigenvalues grow fast, and then you are forced to reduce delta-t so that the product stays inside this bounded domain, which means you are forced to take smaller and smaller time steps. You are freezing out the integration, and what I showed you is exactly that happening. Now, you can move to implicit solvers, which is a trick; in that case the domain you must stay in is actually the outside. But what happens then is that some of the eigenvalues of the Jacobian, again for stiff systems, start to move toward zero, and as they move toward zero they enter the instability region, so your solver tries to keep them out by increasing delta-t. But if you increase delta-t, you increase the truncation errors, so you get randomized in the large search space, so it is really not going to work out. Now, one can introduce a theory, or at least a language, for discussing computational complexity using the language of dynamical systems theory. 
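The trajectory-length measurement mentioned above is simple to state for a sampled trajectory: it is just the summed segment lengths, and unlike elapsed time it does not change if the dynamics is run faster or slower. A small sketch of my own:

```python
import numpy as np

def trajectory_length(S):
    """Arc length of a sampled trajectory S of shape (T, n): the sum of
    Euclidean distances between consecutive sample points.  Invariant under
    reparameterizing the speed at which the trajectory is traversed."""
    S = np.asarray(S, float)
    return float(np.linalg.norm(np.diff(S, axis=0), axis=1).sum())
```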
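The stability-region constraint for explicit methods can be seen on the classic one-dimensional test problem. A minimal sketch (not the talk's solver), contrasting forward and backward Euler on a stiff mode:

```python
def euler_explicit(lam, dt, steps, y0=1.0):
    """Forward Euler for y' = -lam * y.  The update multiplies y by
    (1 - lam*dt), so the scheme is stable only if |1 - lam*dt| <= 1,
    i.e. dt <= 2/lam: a stiff (large-lam) mode forces tiny steps."""
    y = y0
    for _ in range(steps):
        y = y + dt * (-lam * y)
    return y

def euler_implicit(lam, dt, steps, y0=1.0):
    """Backward Euler: y_{n+1} = y_n / (1 + lam*dt), stable for any dt > 0
    on this problem, which is why implicit solvers help with stiffness."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + lam * dt)
    return y
```

With lam = 1000, a step of 0.01 blows up under forward Euler but decays under backward Euler; this is the freezing-out effect in miniature.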
I don't have time to go into this in detail, but for hard problems there is an object, the chaotic saddle, sitting in the middle of the search space somewhere, and that dictates how the dynamics happens; the invariant properties of that saddle are what dictate the performance, among other things. So an important measure, which we find also helpful in describing this analog complexity, is the so-called Kolmogorov, or metric, entropy. Intuitively, what this describes is the rate at which the uncertainty contained in the insignificant digits of a trajectory flows toward the significant ones: you lose information as small errors are grown, or developed, into larger errors at an exponential rate, because you have positive Lyapunov exponents. And this is an invariant property, a property of the chaotic set itself and not of how you compute it; it is really the interesting intrinsic rate of the dynamical system. As I said, in such a high-dimensional system you have both positive and negative Lyapunov exponents, as many in total as the dimension of the space: the unstable-manifold directions give the positive ones, and the stable-manifold directions the negative ones. And there is an interesting and, I think, important equality, a Pesin-type equality, that connects the information-theoretic aspect, the rate of information loss, with the geometric rate at which trajectories separate, minus kappa, the escape rate that I already talked about. Now, one can actually prove simple theorems with back-of-the-envelope calculations. The idea is that you know the rate at which closely started trajectories separate from one another, so you can say: that is fine, as long as my trajectory finds the solution before the trajectories separate too quickly. 
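The equality mentioned here can be written out explicitly; in my notation (not the talk's slides), with Lyapunov exponents lambda_i and escape rate kappa:

```latex
% Pesin-type (Kantz--Grassberger) relation for a transiently chaotic system:
% the metric (Kolmogorov--Sinai) entropy on the chaotic saddle equals the sum
% of the positive Lyapunov exponents minus the escape rate.
h_{\mathrm{KS}} \;=\; \sum_{\lambda_i > 0} \lambda_i \;-\; \kappa
```

For a closed system, where kappa is zero, this reduces to the usual Pesin identity.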
In that case I can have the hope that if I start from some region of the phase space with several closely started trajectories, they will often end up in the same solution. And that is this upper bound, this limit, and it really shows that it has to be an exponentially small number. What matters is the N-dependence of the exponent here, which combines the information-loss rate and the solution-time performance: if this exponent has a strong, or even linear, dependence on N, then you really have to start trajectories exponentially closer to one another in order to end up in the same solution. So this is the sort of direction we are going in, and this formulation is applicable to all deterministic dynamical systems. I think we can expand this further, because there is a way of getting an expression for the escape rate in terms of N, the number of variables, from cycle expansions, which I don't have time to talk about; it is the kind of program one can try to pursue. And that's it. The conclusions, I think, are self-explanatory. I think there is a lot of future in analog continuous-time computing. It can be more efficient, by orders of magnitude, than digital computing for solving NP-hard problems because, first of all, many of the digital limits, like the von Neumann bottleneck, are absent; there is parallelism involved; and you also have a much larger spectrum of continuous-time dynamical algorithms than of discrete ones. But we also have to be mindful of the limits, and one very important open question is: what are these limits? Is there some kind of no-go theorem that tells you that you can never perform better than this limit or that one? I think that's the exciting part: to derive these limits.
Neuromorphic in Silico Simulator For the Coherent Ising Machine
>>Hi everyone, This system A fellow from the University of Tokyo before I thought that would like to thank you she and all the stuff of entity for the invitation and the organization of this online meeting and also would like to say that it has been very exciting to see the growth of this new film lab. And I'm happy to share with you today or some of the recent works that have been done either by me or by character of Hong Kong Noise Group indicating the title of my talk is a neuro more fic in silica simulator for the commenters in machine. And here is the outline I would like to make the case that the simulation in digital Tektronix of the CME can be useful for the better understanding or improving its function principles by new job introducing some ideas from neural networks. This is what I will discuss in the first part and then I will show some proof of concept of the game in performance that can be obtained using dissimulation in the second part and the production of the performance that can be achieved using a very large chaos simulator in the third part and finally talk about future plans. So first, let me start by comparing recently proposed izing machines using this table there is adapted from a recent natural tronics paper from the Village Back hard People. And this comparison shows that there's always a trade off between energy efficiency, speed and scalability that depends on the physical implementation. So in red, here are the limitation of each of the servers hardware on, Interestingly, the F p G, a based systems such as a producer, digital, another uh Toshiba purification machine, or a recently proposed restricted Bozeman machine, FPD eight, by a group in Berkeley. They offer a good compromise between speed and scalability. And this is why, despite the unique advantage that some of these older hardware have trust as the currency proposition influx you beat or the energy efficiency off memory sisters uh P. J. 
O are still an attractive platform for building large theorizing machines in the near future. The reason for the good performance of Refugee A is not so much that they operate at the high frequency. No, there are particle in use, efficient, but rather that the physical wiring off its elements can be reconfigured in a way that limits the funding human bottleneck, larger, funny and phenols and the long propagation video information within the system in this respect, the f. D. A s. They are interesting from the perspective, off the physics off complex systems, but then the physics of the actions on the photos. So to put the performance of these various hardware and perspective, we can look at the competition of bringing the brain the brain complete, using billions of neurons using only 20 watts of power and operates. It's a very theoretically slow, if we can see. And so this impressive characteristic, they motivate us to try to investigate. What kind of new inspired principles be useful for designing better izing machines? The idea of this research project in the future collaboration it's to temporary alleviates the limitations that are intrinsic to the realization of an optical cortex in machine shown in the top panel here. By designing a large care simulator in silicone in the bottom here that can be used for suggesting the better organization principles of the CIA and this talk, I will talk about three neuro inspired principles that are the symmetry of connections, neural dynamics. Orphan, chaotic because of symmetry, is interconnectivity. The infrastructure. No neck talks are not composed of the reputation of always the same types of non environments of the neurons, but there is a local structure that is repeated. So here's a schematic of the micro column in the cortex. And lastly, the Iraqi co organization of connectivity connectivity is organizing a tree structure in the brain. 
So here you see a representation of the Iraqi and organization of the monkey cerebral cortex. So how can these principles we used to improve the performance of the icing machines? And it's in sequence stimulation. So, first about the two of principles of the estimate Trian Rico structure. We know that the classical approximation of the Cortes in machine, which is a growing toe the rate based on your networks. So in the case of the icing machines, uh, the okay, Scott approximation can be obtained using the trump active in your position, for example, so the times of both of the system they are, they can be described by the following ordinary differential equations on in which, in case of see, I am the X, I represent the in phase component of one GOP Oh, Theo F represents the monitor optical parts, the district optical parametric amplification and some of the good I JoJo extra represent the coupling, which is done in the case of the measure of feedback cooking cm using oh, more than detection and refugee A then injection off the cooking time and eso this dynamics in both cases of CME in your networks, they can be written as the grand set of a potential function V, and this written here, and this potential functionally includes the rising Maccagnan. So this is why it's natural to use this type of, uh, dynamics to solve the icing problem in which the Omega I J or the Eyes in coping and the H is the extension of the rising and attorney in India and expect so. >>Not that this potential function can only be defined if the Omega I j. R. A. Symmetric. So the well known problem of >>this approach is that this potential function V that we obtain is very non convicts at low temperature, and also one strategy is to gradually deformed this landscape, using so many in process. But there is no theorem. Unfortunately, that granted convergence to the global minimum of there's even 20 and using this approach. 
And so this is >>why we propose toe introduce a macro structure the system or where one analog spin or one D o. P. O is replaced by a pair off one and knock spin and one error on cutting. Viable. And the addition of this chemical structure introduces a symmetry in the system, which in terms induces chaotic dynamics, a chaotic search rather than a >>learning process for searching for the ground state of the icing. Every 20 >>within this massacre structure the role of the ER variable eyes to control the amplitude off the analog spins to force the amplitude of the expense toe, become equal to certain target amplitude. A Andi. This is known by moderating the strength off the icing complaints or see the the error variable e I multiply the icing complain here in the dynamics off UH, D o p o on Then the dynamics. The whole dynamics described by this coupled equations because the e I do not necessarily take away the same value for the different, I think introduces a >>symmetry in the system, which in turn creates chaotic dynamics, which I'm showing here for solving certain current size off, um, escape problem, Uh, in which the exiled from here in the i r. From here and the value of the icing energy is shown in the bottom plots. And you see this Celtics search that visit various local minima of the as Newtonian and eventually finds the local minima Um, >>it can be shown that this modulation off the target opportunity can be used to destabilize all the local minima off the icing hamiltonian so that we're gonna do not get stuck in any of them. On more over the other types of attractors, I can eventually appear, such as the limits of contractors or quality contractors. They can also be destabilized using a moderation of the target amplitude. 
And so we have proposed in the past two different motivation of the target constitute the first one is a moderation that ensure the 100 >>reproduction rate of the system to become positive on this forbids the creation of any non tree retractors. And but in this work I will talk about another modulation or Uresti moderation, which is given here that works, uh, as well as this first, uh, moderation, but is easy to be implemented on refugee. >>So this couple of the question that represent the current the stimulation of the cortex in machine with some error correction, they can be implemented especially efficiently on an F B G. And here I show the time that it takes to simulate three system and eso in red. You see, at the time that it takes to simulate the X, I term the EI term, the dot product and the rising everything. Yet for a system with 500 spins analog Spain's equivalent to 500 g. O. P. S. So in f b d a. The nonlinear dynamics which, according to the digital optical Parametric amplification that the Opa off the CME can be computed in only 13 clock cycles at 300 yards. So which corresponds to about 0.1 microseconds. And this is Toby, uh, compared to what can be achieved in the measurements tobacco cm in which, if we want to get 500 timer chip Xia Pios with the one she got repetition rate through the obstacle nine narrative. Uh, then way would require 0.5 microseconds toe do this so the submission in F B J can be at least as fast as, ah one gear repression to replicate the post phaser CIA. Um, then the DOT product that appears in this differential equation can be completed in 43 clock cycles. That's to say, one microseconds at 15 years. So I pieced for pouring sizes that are larger than 500 speeds. The dot product becomes clearly the bottleneck, and this can be seen by looking at the the skating off the time the numbers of clock cycles a text to compute either the non in your optical parts, all the dog products, respect to the problem size. 
And and if we had a new infinite amount of resources and PGA to simulate the dynamics, then the non in optical post can could be done in the old one. On the mattress Vector product could be done in the low carrot off, located off scales as a low carrot off end and while the kite off end. Because computing the dot product involves the summing, all the terms in the products, which is done by a nephew, Jay by another tree, which heights scares a logarithmic any with the size of the system. But this is in the case if we had an infinite amount of resources on the LPGA food but for dealing for larger problems off more than 100 spins, usually we need to decompose the metrics into ah smaller blocks with the block side that are not you here. And then the scaling becomes funny non inner parts linear in the and over you and for the products in the end of you square eso typically for low NF pdf cheap P a. You know you the block size off this matrix is typically about 100. So clearly way want to make you as large as possible in order to maintain this scanning in a log event for the numbers of clock cycles needed to compute the product rather than this and square that occurs if we decompose the metrics into smaller blocks. But the difficulty in, uh, having this larger blocks eyes that having another tree very large Haider tree introduces a large finding and finance and long distance started path within the refugee. So the solution to get higher performance for a simulator of the contest in machine eyes to get rid of this bottleneck for the dot product. By increasing the size of this at the tree and this can be done by organizing Yeah, click the extra co components within the F p G A in order which is shown here in this right panel here in order to minimize the finding finance of the system and to minimize the long distance that the path in the in the fpt So I'm not going to the details of how this is implemented the PGA. 
But just to give you a new idea off why the Iraqi Yahiko organization off the system becomes extremely important toe get good performance for simulator organizing mission. So instead of instead of getting into the details of the mpg implementation, I would like to give some few benchmark results off this simulator, uh, off the that that was used as a proof of concept for this idea which is can be found in this archive paper here and here. I should result for solving escape problems, free connected person, randomly person minus one, spin last problems and we sure, as we use as a metric the numbers >>of the mattress Victor products since it's the bottleneck of the computation, uh, to get the optimal solution of this escape problem with Nina successful BT against the problem size here and and in red here there's propose F B J implementation and in ah blue is the numbers of retrospective product that are necessary for the C. I am without error correction to solve this escape programs and in green here for noisy means in an evening which is, uh, behavior. It's similar to the car testing machine >>and security. You see that the scaling off the numbers of metrics victor product necessary to solve this problem scales with a better exponents than this other approaches. So so So that's interesting feature of the system and next we can see what is the real time to solution. To solve this, SK instances eso in the last six years, the time institution in seconds >>to find a grand state of risk. Instances remain answers is possibility for different state of the art hardware. So in red is the F B G. A presentation proposing this paper and then the other curve represent ah, brick, a local search in in orange and center dining in purple, for example, and So you see that the scaring off this purpose simulator is is rather good and that for larger politicizes, we can get orders of magnitude faster than the state of the other approaches. 
>>Moreover, the relatively good scanning off the time to search in respect to problem size uh, they indicate that the FBT implementation would be faster than risk Other recently proposed izing machine, such as the Hope you know network implemented on Memory Sisters. That is very fast for small problem size in blue here, which is very fast for small problem size. But which scanning is not good on the same thing for the >>restricted Bosman machine implemented a PGA proposed by some group in Brooklyn recently again, which is very fast for small promise sizes. But which canning is bad So that, uh, this worse than the purpose approach so that we can expect that for promise sizes larger than, let's say, 1000 spins. The purpose, of course, would be the faster one. >>Let me jump toe this other slide and another confirmation that the scheme scales well that you can find the maximum cut values off benchmark sets. The G sets better cut values that have been previously found by any other >>algorithms. So they are the best known could values to best of our knowledge. And, um, or so which is shown in this paper table here in particular, the instances, Uh, 14 and 15 of this G set can be We can find better converse than previously >>known, and we can find this can vary is 100 times >>faster than the state of the art algorithm and cp to do this which is a recount. Kasich, it s not that getting this a good result on the G sets, they do not require ah, particular hard tuning of the parameters. So the tuning issuing here is very simple. It it just depends on the degree off connectivity within each graph. And so this good results on the set indicate that the proposed approach would be a good not only at solving escape problems in this problems, but all the types off graph sizing problems on Mexican province in communities. 
So given that the performance of the design depends on the height of the adder tree, we can try to maximize the height of this adder tree on a large FPGA by carefully routing the critical components within the FPGA, and we can draw some projections of what type of performance we can achieve in the near future based on the implementation that we are currently working on. So here you see projections for the time to solution with 99% success probability for solving these SK problems with respect to the problem size, compared to different digital Ising machines, in particular the Digital Annealer, which is shown in green here, the green line with dots. And we show two different hypotheses for these projections: either that the time to solution scales as an exponential of N, or that the time to solution scales as an exponential of the square root of N. It seems, according to the data, that the time to solution scales more as an exponential of the square root of N. And based on this, these projections show that we probably can solve SK problems of size 2000 spins, finding the real ground state of the problem with 99% success probability, in about 10 seconds, which is much faster than all the other proposed approaches. So, some of the future plans for this coherent Ising machine simulator. The first thing is that we would like to make the simulation closer to the real DOPO optical system, in particular, as a first step, to get closer to the measurement-feedback CIM. And to do this, what is simulatable on the FPGA is this quantum Gaussian model that is described in this paper, proposed by people in the NTT group.
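One way to discriminate the two scaling hypotheses mentioned here, time to solution growing as an exponential of N versus an exponential of the square root of N, is a least-squares fit of log(TTS) under each model. A minimal sketch, with synthetic data standing in for the measured benchmark points:

```python
import numpy as np

def preferred_scaling(sizes, tts):
    """Fit log(TTS) as a linear function of N and of sqrt(N); return the
    model with the smaller sum of squared residuals."""
    sizes = np.asarray(sizes, dtype=float)
    log_tts = np.log(np.asarray(tts, dtype=float))
    residuals = {}
    for name, x in (("exp(a*N)", sizes), ("exp(a*sqrt(N))", np.sqrt(sizes))):
        A = np.vstack([x, np.ones_like(x)]).T
        _, res, rank, _ = np.linalg.lstsq(A, log_tts, rcond=None)
        residuals[name] = float(res[0]) if res.size else 0.0
    return min(residuals, key=residuals.get)

# Synthetic data generated with sqrt(N) scaling is classified correctly.
sizes = list(range(100, 1100, 100))
tts = [0.01 * np.exp(0.3 * np.sqrt(n)) for n in sizes]
print(preferred_scaling(sizes, tts))   # -> exp(a*sqrt(N))
```

On real benchmark data the residual gap between the two fits gives a rough sense of how strongly the measurements favor the square-root hypothesis.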
And so the idea of this model is that instead of having the very simple ODEs I have shown previously, it includes pairs of ODEs that take into account not only the mean of the amplitude of the in-phase component, but also its variance, so that we can take into account more quantum effects of the DOPO, such as squeezing. And then we plan to make the simulator open access for the members to run their instances on the system. There will be a first version in September that will be based just on simple command-line access to the simulator, and which will have just a classical approximation of the system, with no noise term and binary weights, in the near term. But then we will propose a second version that will extend the current Ising machine to a rack of eight FPGAs, in which we will add the more refined model, the truncated Wigner quantum Gaussian model that I just talked about, and which will support real-valued weights for the Ising problems and support measurement feedback. So we will announce later when this is available, and Farah is working hard to get the first version available sometime in September. Thank you all, and we'll be happy to answer any questions that you have.
Stanley Zaffos, Infinidat | CUBEConversation, October 2019
>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation. >> Hi, and welcome to the Cube Studios for another Cube Conversation, where we go in-depth with thought leaders driving innovation across the tech industry. I'm your host, Peter Burris. If there's one thing we know about cloud, it's that it's going to drive new data, and a lot of it, and that places a lot of load on storage technologies, which have to be able to capture, persist, and ultimately deliver that data to new classes of applications in support of whatever the digital business is trying to do. So how are the whole storage industry and the relationship between data and storage going to evolve? I can't think of a better person to have that conversation with than Stanley Zaffos, Senior Vice President of Product Marketing at Infinidat. Stan, welcome to the Cube. >> Thank you, it's my pleasure to be here, and I'm flattered by that introduction. >> Well, hold on, look, you and I have known each other for a long time. We have been walking into user presentations and you've been walking out. Until recently, though, you were generally regarded as the thought leader when it came to user-side concerns about storage. What is the problem that users are fundamentally focused on as they think about their data, data management, and storage requirements? >> The fundamental problem, and this afflicts all classes of users, whether in a financial institution, a university, government, a small business, or a medium-sized business, is that they're coping with a number of primal forces that don't change. The first is that the environment is becoming ever more competitive, and with the environment being ever more competitive, that means that they're always under budget constraints. They're usually suffering from skill shortages, especially now, when we see so many new technologies, and the realization that we can coax value out of the information that we capture and store is creating new demands elsewhere within the IT organization. So what we
see historically is that users understand that they have an insatiable demand for capacity, they have finite budgets, they have limited skills, and they realize that recovering from a loss of data integrity is a far more painful process than recovering from an application blowing up or a networking issue, and they have to do it faster. So what we see, in some ways, is in effect the perfect storm, and this is part of the reason that we've seen a number of the technical evolutions that we've witnessed over the past decade, or two decades, or however long we'd like to admit we've been tracking this industry, occurring and growing in importance. What we've also seen is that many of the technologies that are useful in helping to deliver usable availability to the application are in some ways becoming more commoditized. So when we look across these industries, some of the things that we're looking for are cost efficiency, increasing levels of automation, and increases in data mobility, with the ultimate objective being, of course, to allow data to reside where it naturally belongs. And we're trying to deliver these new capabilities at scale in infrastructures that were built with storage arrays that were designed for a terabyte world instead of a petabyte world, and it won't be too long before we start talking about exabytes, as we're already seeing. So to be able to satisfy new scale problems with traditional and well-understood issues, there are three basic types of storage companies that are targeting this problem. The first is the established storage companies, the incumbents. And the incumbents, I really don't envy them, because they have to maintain backwards compatibility, which limits their ability to innovate. At the same time, they're competing against privately held, newer companies that aren't constrained by the need for backwards compatibility and are therefore able to take better advantage of the technology
improvements that we're seeing to deliver it. And when I say technology improvements, that's not just in hardware, but also in terms of software, and also in terms of management and governing philosophies. So beginning with the point that all companies, large and small, have some basic problems that are similar, what we then see is that there are three types of storage companies addressing them. One is the established, incumbent vendors. The others, and they've gotten a lot of press, are the companies that realized that flash media, media that delivers one to two orders of magnitude improvements in terms of performance, in terms of bandwidth, in terms of environmentals, meant that they could create storage solutions that address real pain points within a data center, within an organization, but at a very high price point. And then there was the third approach, and this is the approach that Infinidat chose to take, and that is to define the customer problem, define the customer market, and then create an architecture, underpinned by brilliant software, to solve these problems in a way that is both cost-effective and extensible, and of course meets all of the critical capabilities that users are looking for. >> So we've got the situation where we've got the incumbents, who have install bases and are trying to bring their customers forward, but have to do so within the constraints of past technology choices. We've got the new folks who are basically technology-first and saying jump to a new innovation curve. And we've got other companies that are trying to bring the best of the technology to the best of the customer reality and marry it, and you're asserting that's what Infinidat is. >> That's precisely what we've done. >> So let's talk about, why did you then come to Infinidat? What is it about Infinidat that gets you excited? >> Well, a number of things got me excited about it. The first is that when I look at this, I approach these things as
an engineer who's steeped in aerospace and weapons systems design. So you look at the problem, you superimpose capabilities on it, and then you blow it up. Well, we do blow it up, but we blow it up using economics, we blow it up using superior post-sale support effectiveness, we blow it up with a fundamentally different approach to how we give our install base access to new capabilities. So where established storage companies, and to some extent media-based storage companies, are forcing upgrades to avoid architectural obsolescence, that is, to gain access to new features and functions that can improve their staff productivity or deliver new capabilities to support new applications and workloads, we're not forcing a cadence of infrastructure refreshes to gain access to that. If you take a look at our history, our past behavior, today we're allowing current software to run on n-minus-2-generation hardware, so that when you're doing a refresh on your hardware, you're doing a refresh because you've outgrown it, because it's so old that it's moved past its useful service life, which hasn't happened to us yet, because that's usually on the order of about eight years, and sometimes longer if it's kept in a clean data center. And we have a steady cadence of product announcements. And we understood some underlying economics. Whether I talk to banking institutions, colleges, manufacturing companies, telcos, or service providers, everybody's in general agreement that roughly two-thirds of the data that they have online and accessible is stale data, meaning that it hasn't been accessed in 60 to 90 days. And then when I take a look at industry forecasts in terms of dollar-per-terabyte pricing for HDDs, for disk drives, and I look at dollar-per-terabyte forecasts for flash technologies, there's an order of magnitude difference, meaning 10x, and even if you want to be a pessimist, call it only 5x. What you see is that we have a built-in advantage for storing 60% of the data
that's already up and spinning. And there are those questions of whether or not the availability of flash is going to come under pressure over the next few years, because we're not expanding the number of fabs out there that are generating flash. >> So let me come back to a couple of core points there. Right now you guys are trying to bring the economics of HDD to the challenges of faster, more reliable, more scalable data delivery, right? So that you can think about not only persisting your data from transactional applications, but also delivering that data to new uses, new requirements, new applications, new business needs. So Infinidat has made some choices about how to bring technology together that are somewhat unique. First thing is the team that did this. Tell us a little bit about the team, and then let's talk about some of those choices. >> So one of the draws for me personally is that we have a development team that has had the unique, possibly the unique, experience of having done three, not one, not two, but three clean-sheet designs of storage arrays. Now, if you believe that practice makes perfect, and you're starting off with very bright people that had that experience before they designed a storage array, when we look at the InfiniBox, what we see is the benefit of three clean-sheet designs. >> And what does that design look like? How did you guys bring these different pieces of technology together to improve the delivery of it? >> All right, so what we looked at, we looked at trends. Instead of being married to a technology or married to an architecture, we defined the user's problem. We understood that they have an insatiable need for data; we can argue whether they're growing at fifteen percent, 30 percent, or 100 percent per year, but data growth is insatiable, stale data being a constant given. And of course now, with digital business initiatives and moving the infrastructure to the
edge, where we can capture ever more data, if anything the amount of stale data that we're storing is likely to increase. So we've all seen survey after survey that 80% of all the data created is unstructured data, meaning we're collecting it, we know it may be of value at some point, but we're not quite sure when. So this is not data that you want to store in the most expensive media that we know how to manufacture or sell, right? Not happening. So we have a built-in economic advantage for at least 60% of the data that users want to keep online. We understand that if you implement an archiving solution, that archive data still has to be stored somewhere, and for practical purposes that's either disk or tape, and we're not here to talk about the fact that I can take tape and store it in a bunker for years, but if I want to recover something, if I have to answer a problem, I want it on disk. So the economic gap, the price delta, between an archive storage solution per se and our approach is much narrower, because we're using a common technology, and when Seagate or Western Digital or Toshiba sell an HDD, they're not asking you where you're putting it. They're saying, you want this capacity, this RPM, this mean time between failures, this is how much it's going to cost. So when we take a look at a lot of the innovation and go-to-market models, what they really are are revenue protection schemes for the existing, established vendors. And for the emerging companies, the differences are in the problems that they're solving. Am I creating a backup-restore solution? Backup and restore is always a high-impact pain point. Am I building a system for primary storage? Am I targeting virtualized environments? Am I targeting VDI? Now, the bulk of our install base, I'm not sure we should share percentages, but it's well north of 50, and if you take a look at some virtualization estimates, probably 80% of workloads today are virtualized. We
understood that to satisfy this environment we had to have a built-in advantage that's memorable after the marketing presentations are done, in other words, treating these things as black boxes. So if we take a look at my high-level description of an InfiniBox array installed at a customer site: consistent sub-millisecond response times. And we're able to do that because we service over 80% of all I/Os out of DRAM, which is probably about four orders of magnitude faster than NAND flash, and then we have a large read cache to increase our cache hit ratio even further. And when I say large, we're not talking about single digits of terabytes, we're talking about 20-plus terabytes, and that can grow as necessary, so that when we're done, we're achieving cache hit ratios that are typically in excess of 90%. Now, if I'm servicing I/Os out of cache, do I really care what's on the back end? The answer is no. But what I do care about, for certain analytics applications, is that I want lots of bandwidth, and if I have workloads with high write content, I don't want to be spending a lot of time paying my RAID write penalty. So what we've done is to take the obvious solution and coalesce writes, so that instead of doing partial-stripe writes, we're always doing full-stripe writes. And we have double-bit protection on data stored on HDDs, which means that the world is likely to come to an end before we lose data. >> Slight exaggeration, I think we're expecting the world to come to an end in 14 billion years. >> Yeah, so if I'm wrong, get back to me. >> And it's a little bit less than that, but it doesn't matter. Okay, keep going. >> So we've got a built-in economic advantage. We've got a built-in performance advantage, because when I'm servicing most I/Os out of DRAM, which is four orders of magnitude faster than NAND flash, I've got a lot of room to do a lot of very clever things in terms of metadata and still be faster. >> And you've got a team that's done it before. >> And we've got a
team that's done it before and experimented, because remember, this is a team that has experience with scale-up architectures, as in Symmetrix. They have experience with scale-out architectures, which is XIV, which was very disruptive to the market, well, so was Symmetrix back then. And now, of course, we've got this third bite at the apple with Infinidat, where they also understood that the rate of microprocessor performance improvement was going up a lot faster than our ability to transfer data on and off of HDDs or SSDs. So what they realized is that they could change the ratio: they can have a much lower microprocessor, or controller, to back-end storage ratio and still be able to deliver this tremendous performance. And now, if you have fewer parts, and you're not affecting the MTBF by driving more I/Os through, I've lowered my overall cost of goods. So now I've got an advantage in back-end media, I have an advantage in terms of the number of controllers I need to deliver sub-millisecond response time, I have an advantage in terms of delivering usable availability. So I'm now in a position to be able to unashamedly compete on price, unashamedly compete on performance, unashamedly compete on a better post-sale support experience, because remember, if there's less stuff to break, we're taking fewer calls. And because of the way we're organized, our support generally goes to what other vendors might think of as third-level support, because if the guy that answers the phone from us doesn't solve the problem, he's calling development. So if you take a look at Gartner Peer Insights, we're off the scale in terms of having great reviews, and when you have, I think it's 99%, I may be off by a percent, ninety-eight to a hundred percent of our customers saying they'd recommend our kit to their peers, that's a pretty positive endorsement. >> Yeah. So let me break in and kind of wrap up a little bit, let me make this quick observation, because the other thing
that you guys have done is you've demonstrated that you're not bound to a single technology. So, smart people with a great architecture that's capable of utilizing any technology to serve a customer problem at a price point that reflects the value of the problem that's being solved. >> Right, and in fact it's a very insightful observation, because when you recognize that we've built a multimedia integrated architecture, that makes it very easy for us to include storage-class memory, and because of the way we've done our drivers, we're also going to be NVMe-over-Fabrics ready when that starts to gain traction as well. >> Excellent. Stanley Zaffos, Senior Vice President of Product Management, Infinidat, thanks very much for being in the Cube, we'll have you back. >> Oh, it's my pleasure, it's been a blast. >> And once again, I want to thank you for joining us for another Cube Conversation. I'm Peter Burris, see you next time. [Music]
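Two of the quantitative claims in this conversation, the cache-hit arithmetic behind sub-millisecond response times and the stale-data cost blend, reduce to one-line formulas. A back-of-the-envelope sketch; the latency and dollar-per-terabyte figures below are illustrative assumptions, not numbers from the interview:

```python
def effective_latency_us(hit_ratio, dram_us=0.1, backend_us=4000.0):
    """Average access time when a fraction of I/Os is served from DRAM
    and the rest falls through to the HDD back end (microseconds)."""
    return hit_ratio * dram_us + (1.0 - hit_ratio) * backend_us

def blended_cost_per_tb(stale_fraction, hdd_per_tb, flash_per_tb):
    """Blended $/TB when stale data sits on HDD and hot data on flash."""
    return stale_fraction * hdd_per_tb + (1.0 - stale_fraction) * flash_per_tb

# A 90% DRAM hit ratio keeps average latency well under a millisecond
# even with a roughly 4 ms disk back end.
print(effective_latency_us(0.90))                 # ~400 us
# With two-thirds of data stale on HDD at a 10x price gap, the blend
# costs roughly 40% of an all-flash configuration.
print(blended_cost_per_tb(2 / 3, 20.0, 200.0))    # 80.0
```

The same two formulas also show why the economics shift as the flash-to-HDD price gap narrows from 10x to 5x: the blended cost roughly doubles while the latency picture is unchanged.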
Day 2 Kick off | Pure Accelerate 2019
>> Announcer: From Austin, Texas, it's The Cube, covering Pure Storage Accelerate 2019, brought to you by Pure Storage. >> Good morning. From Austin, Texas, Lisa Martin with Dave Vellante at Pure Accelerate 2019. This is our second day. We just came from a very cool, interesting keynote. Dave, whenever there are astronauts, my inner NASA geek from the early 2000s just comes right back up. Leland Melvin was on >> Amazing, right? >> with a phenomenal story. Talking about technology and the feeling of innovation, but also a great story of inspiration from a STEAM perspective, science, technology, engineering, arts, math. I loved that and, >> Dave: And fun. >> Very fun. But also... >> One of the better talks I've ever seen. >> It really was. It had so many elements that I think you didn't have to be a NASA fan or a space geek to appreciate all of the lessons that Leland Melvin learned along the way, that he really is inspiring everybody in the audience to take note of. >> And incredibly accomplished, right? I mean, scientist, MIT engineer, played in the NFL, went to space. He had some really fun stuff when they were, you know, messing around with gravity. >> Lisa: Yes. >> I never knew you could do that. He had like this water >> Lisa: Water, yeah. >> bubble. >> I'd never seen that before, and they were throwing M&M's inside (laughter) and he, you know, consumed it, choked on it, which was pretty funny. >> Yeah, well it was near and dear to me. I worked with NASA in my first job out of grad school. >> Dave: Really? >> I did, and managed biological payloads that flew on the space shuttle, and the mission that he talked about that didn't land, Columbia, that was the mission that I worked on. So when he talked about that countdown clock going positive, I was there on the runway with that. So for me, it just struck a chord. >> Dave: So this is of course the 50th anniversary of the moonwalk.
And you know I have this thing about watches, kind of like what you have with shoes. (chuckles) >> Lisa: Hey, handbags. >> Is that not true? Oh, it's handbags for you? (laughing) >> Dave: I know, really, that was a terrible thing for me to say. >> That's okay. >> Dave: You have great shoes, so I just assumed. It's not good to make assumptions. So I bought a moon watch this year, which was the watch that Neil Armstrong used, not the exact one, but a similar one, right? >> Lisa: Yeah. >> And it actually has an acrylic face, because they're afraid if it cracked in space you'd have glass all over the place. >> Lisa: Right. >> So that's a little nostalgia there. >> Well, one of the main things too, as you look at the mission that President John F. Kennedy established in the 60's for getting a man into space in that 10-year period, that being accomplished, there's kind of a parallel with what Pure Storage has done in its first 10 years of tremendous innovation. This keynote again, Day 2, standing room only, at least about 3000 people or so here. And as James Governor, who keynoted after Leland this morning, said, you know, (mumbles) software's eating the world, storage is eating the world; we have to have secure locations to store all this data so that we can extract maximum value from it. So, a nice parallel between the space program and Pure Storage. >> James is really good, isn't he? I mean, he had to follow Leland, and I mean, again, one of the better talks I've ever heard, but James is very strong. He's funny, he's witty, he cuts to the chase. >> Lisa: Yes. >> He always tells it like it is.
He's... Monkchips is very focused on developers, and they do a really good job there. One of the things he talked about was S3, and how Amazon uses this working-backwards methodology, which maybe a lot of people don't know about. What they do is they write and rewrite and vet and rewrite the press release before they announce the product, and even before they develop the product; they write the press release and then they work backwards from there. So, this is the outcome that we are trying to achieve. It's a very disciplined process that they use, and as he said, they may revise it hundreds and hundreds of times. And he put up Andy Jassy's quote from 2004, around S3. That actually surprised me. >> Lisa: No, it was 2004. >> Because S3 came out after EC2, which was 2006, so I don't know. Maybe I'm getting my dates wrong, or I think James actually got his dates wrong, but who knows? Maybe, you know what? Maybe he got a copy of that from the internal working document, the working-backwards doc. That could be what it was. But again, the point being, they envisioned this simple storage that developers didn't have to think about, >> Lisa: Right. >> that was virtually unlimited in capacity, highly available, and, you know, dirt cheap, which is what people want. So he talked about that, and then he gave a little history of the Dell technology families, and I tweeted this out in a funny little, you know, basically Pivotal, VMware, EMC, and Dell, and their history. Dell was basically IPO 1984 and then today.
There were a few things in between, I know, but he's got a great perspective on things, and I think it resonated with the audience. Then he talked a lot about Kubernetes, jokingly, tongue-in-cheek, how everybody thought Kubernetes was going to kill VMware, but his big takeaway was: look, you've got all these skills, (mumbles) skills, core database skills; I would even add to that, you know, understanding how storage works, and I always joke, if your career is based on managing LUNs you might want to rethink your career. But his point, which I liked, was: look, all those skills you've learned are valuable, but you now have to step up your game and learn new skills. You have to build on top of those skills, so the history you have and the knowledge that you've built up is very valuable, but it's not going to propel you to the next decade. And so I thought that was a good takeaway, and it was an excellent talk.
I don't necessarily think that this is going to be a huge product near term for Pure in terms of meaningful revenue, but I think it's interesting that they're embracing the trend of the Cloud and are actually architecting Cloud solutions using Amazon services and blending in their own, I mean, it's not really superglue, but blending in their own software for their customers to extend. Now, you know, some of the nuances: I don't think they're going to have better write performance; I think they'll have better read performance; clearly they have better availability; I think it's going to be a little bit more expensive. All these things are TBD; that's just my take based on looking at what I've seen and talking to some people, but to me the important thing is that Pure's embracing that Cloud model. Historically, companies that are trying to defend an existing business retreat. You know, they denigrate, they don't embrace. We know that Pure's going to make more money on-prem than it does in the Cloud. At least I think. And so it's to their advantage for companies to stay on-prem, but at the same time they understand that the trend is your friend, and they're embracing that, so that was kind of one thing. The second thing I learned is, Charlie Giancarlo, I spent a lot of time with him last night, as did you. He's a bit of a policy wonk in certain narrow areas. He shared with me some of the policy work that he's done around IP protection, and not necessarily on the side that you would think. You would think that, okay, IP protection, that's a good thing, but a lot of the laws that were trying to be promoted for IP protection were there to help big companies essentially crush small companies, so he fought against that. He shared with me some things around net neutrality. You think you know which side of net neutrality he'd be on; not necessarily so. He had some really interesting perspectives on that.
We also talked to, and I won't share the name of the company, a very large financial institution that's betting a lot on Pure, which was very interesting to me. This is one of the brand names; everybody would know it if you heard it. And their head of storage infrastructure was here, at the show. Now, I know this individual, and this person doesn't go to a lot of shows. >> Maybe a couple a year. >> This person chose to come to this show because they're making an investment in Pure, in a fairly big way, and they spent a lot of time with Pure management, expressing their desires as part of an executive forum that Pure holds. They didn't really market that a lot; they didn't really tell us too much about it because it was a little private thing, but I happen to know this individual, and I learned several things. They like Pure a lot, they use it for a lot of their workloads, but they have a lot of other storage, and they can't necessarily get rid of that other storage for a lot of reasons: inertia, technical debt, good tickets at the baseball game, all kinds of politics going on there. I also asked specifically about some hybrid products where the cost structure's a little bit better, so this gets me to FlashArray//C, and we talked to Charlie Giancarlo about this, about how as flash prices come down it opens up new markets. I got some other data yesterday and today that, you know, FlashArray//C is not going to be priced, we don't think, quite as well as hybrid arrays. It's closing the gap: it's between one and one and a quarter, one and a half dollars per gigabyte, whereas with hybrid arrays you're seeing half that, 70 cents a gigabyte, sometimes as low as 60 cents a gigabyte, sometimes higher, sometimes as high as a dollar, but on average around 65-70 cents a gigabyte, so there's still a gap there. Flash prices have to come down further. Another thing I learned; I'm going to just keep going. >> Lisa: Go ahead!
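As a rough back-of-the-envelope check on the per-gigabyte gap described above, here is a small sketch. The dollar figures are the speaker's informal estimates from the conversation, not vendor pricing, so treat them as assumptions.

```python
# Hedged sketch: compare the quoted $/GB estimates for FlashArray//C
# against the quoted hybrid-array average. All numbers are the
# speaker's rough estimates, not official pricing.

def price_gap(flash_per_gb: float, hybrid_per_gb: float) -> float:
    """Return how many times more expensive flash is per gigabyte."""
    return flash_per_gb / hybrid_per_gb

flash_low, flash_high = 1.00, 1.50  # quoted range for FlashArray//C, $/GB
hybrid_avg = 0.675                  # quoted hybrid average, ~65-70 cents/GB

print(f"{price_gap(flash_low, hybrid_avg):.2f}x to "
      f"{price_gap(flash_high, hybrid_avg):.2f}x")  # prints "1.48x to 2.22x"
```

By this arithmetic the gap is somewhere between roughly 1.5x and a little over 2x, which matches the "closing but still there" framing in the conversation.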
>> The other thing I learned is that China is really building a lot of fab capacity in NAND to try to take out the thumb-drive marketplace, so they're going to go after the low end. So companies like Samsung and Toshiba, well, Toshiba just renamed the company, I can't remember the name of the company, but Micron and the NAND flash manufacturers are going to have to now go use their capacity and go after the enterprise, because China fab is going to crush the low end and bomb the low-end pricing. Somebody else told me about a third of flash consumption is in China now. So interesting things going on there. So near term, FlashArray//C is not going to just crush spinning disk and hybrid; it's going to get closer, and it's going to slowly eat away at that. As NAND prices come down, it really could more rapidly eat away at that. So I just learned some other stuff too, but I'll take a breath. (laughter) >> So one of the things that I think resounded, that we heard not just yesterday on the program day but even last night at the executive event we were at, is that, from this large financial services company that you mentioned, Pure Storage is a strategic partner to many organizations, from small to large, that is incredibly valued. To your point, this gentleman only goes to maybe a couple of events a year, and this is one of them? >> Dave: Right. >> This is a company that in its first 10 years has embraced competition head on, and I loved how you talked about yesterday, 10 years ago they just drove a truck through EMC's market, sort of ripping and replacing. They're bold, but they're also doing it in a way that's very methodical. They're working on, you know, changing companies' perspectives of even backup data as becoming an asset, to put it on flash. Because if you can't rapidly restore that, if there's an outage, whether it's an attack or it's unintentional, human-related, and that data can't be recovered quickly, you're in a big, big problem.
And so their being a strategic component of this, in any industry, I think was a very resounding sentiment that I heard and felt yesterday. >> Yeah, this ties into TAM expansion, which we talked to Charlie Giancarlo about: new workloads, with AI as an example; FlashArray//C lowering prices will open up some of those new workloads; and data protection, backup, is clearly an opportunity. And I think it's interesting, you're seeing a lot of vendors announce flash-based recovery systems. I'll call them recovery systems because I don't even consider them backup anymore; it's not about backup, it's about recovery. Oracle was actually one of the first to use that kind of concept with the Zero Data Loss Recovery Appliance; they call it recovery. So it's all about fast, near-instantaneous recovery. Why is that important? It's because companies are moving toward digital transformation, and what does that mean? What is a digital business? A digital business is all about how you use data, leveraging data in new ways to create new value, to monetize or cut cost. And so being able to have access to that data, and recover from any loss of access to that data in a split second, is crucial. So Pure can participate in that. Now, Pure's not alone. You know, it's no coincidence that Veritas and Veeam and Cohesity and Rubrik work with Pure; they work with HPE. They work with a lot of the big players, and so Pure has, you know, some work to do to win its fair share. Staying on backup for a moment, you know, it's interesting to see, behind us, Veritas and Veeam have the biggest sort of presence here. Rubrik has a presence here. I'm sure Cohesity is here maybe someway, somehow, but I haven't seen them. >> I haven't either. >> Maybe they're not here. I'll have to check that out, but you know, Veeam is actually doing very well, particularly with lower ASPs; we know that about Veeam. They've always come at it from the mid-market and SMB.
Whereas Cohesity and Rubrik and Veritas are traditionally coming at it from the higher end; certainly Cohesity and Rubrik at higher ASPs. Veeam's doing very well with Pure. They're also doing very well with HPE, which is interesting. Cohesity announced a deal with HPE recently, I don't know, about six months ago, and somebody thought, "Oh, maybe Veeam's on the outs." No, Veeam's doing very well with HPE. It's different parts of the organization; one works with the server group, one works with the storage group, and both companies are actually doing quite well. I actually think Veeam is ahead of the curve 'cause they've been working with HPE for quite some time, and they're doing very well in the Pure base. By partnering with companies, Pure is able to enter that market much in the same way that NetApp did in the early days. They have a very tight relationship, for example, with Commvault. So, the other thing, I was talking to Keith Townsend last night, a total non sequitur, but he was talking about Outposts and how Amazon is going to be challenged to service Outposts. Outposts is the on-prem Amazon stack that VMware and Amazon announced they're co-marketing. So who is going to service Outposts? It's not going to be Amazon; that's not their game, professional services. It's going to have to be the ecosystem: the large SIs, or the VARs, the partners, VMware partners, 'cause that's not VMware's play either. So Keith Townsend's premise, and I'd love to have him on The Cube to talk about this, is they're going to have trouble scaling Outposts because of that service issue. Believe it or not, when we come to these conferences, we talk about other things than just Pure. There's a lot of stuff going on. New Relic is happening this week. Oracle OpenWorld is going on this week. John Furrier just got back from AWS Bahrain, and of course we're here at Pure Accelerate. >> We are, and this is our second day of two days of coverage. We've got Coz on next, who I think has never been on The Cube.
>> Dave: Not to my knowledge. >> We've got Kix on later. A great lineup; more customers; Rob Lee is going to be on. So we're going to be digging more into Pure's Cloud strategy, the next ten years, and how they're going to accelerate that and pack it into the next couple of years. >> I'll tell you one of the things I want to do, Lisa; I'll just call it out. An individual from Dell EMC wrote a blog ahead of Pure Accelerate, I think it was last week, about four or five days ago, and this individual called out, like, one, two, three, four... five things that we should ask Pure. So we should ask them; we should ask Coz; we should ask Kix. There was criticism; of course they're biased. These guys, they always fight. >> Lisa: Naturally. >> They have these internecine wars. >> Lisa: Yep. >> Sometimes I like to call them... no, I won't say it. So, scale-out: question mark there; we want to ask Coz about that, and Kix. Pure uses proprietary flash modules. They do that because it allows them to do things that you can't do with off-the-shelf flash; I want to ask and challenge them on that. I want to ask about their philosophy on tiering. They don't really believe in tiering; why not? I want to understand that better. They've made some acquisitions; Compuverde is one acquisition, it's a file system. What does that mean for the flash play? >> Now, we didn't hear anything about that yesterday, so that's a good point; we should dig into that. >> Yeah, so we'll bring that up. And then Evergreen: competitors hate Evergreen because Pure was first with it; they caught everybody off guard. I said it yesterday, competitors hate Evergreen because competitors live off of maintenance, and if you're not on their maintenance they just keep jacking up the maintenance prices, and if you don't move to the new system, maintenance just keeps getting more and more expensive, and so they force you, you're locked in. They force you to move. Pure introduced this different model.
You pay for the CapEx up front and then, you know, after three years you get a controller swap. You know, so... >> To your point: competitors hate it, customers love it. We heard a lot about that yesterday. We've got a couple more customers on our packed program today, Dave, so let's get right to it! >> Great. >> Let's wrap up so we can get Coz on stage. >> Dave: Alright, awesome. >> Alright, for Dave Vellante, I'm Lisa Martin. You're watching The Cube from Pure Accelerate 2019, day two. Stick around: 'Coz', John Colgrove, CTO and founder of Pure, will be on next. (upbeat music)
Micron Analysis | Micron Insight'18
Live from San Francisco, it's theCUBE, covering Micron Insight 2018, brought to you by Micron. >> Welcome to San Francisco, everybody. This is theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my co-host David Floyer. This is our special presentation of Micron Insight '18, hashtag #MicronInsight18, where the theme is accelerated intelligence: the blending together of memory, storage and artificial intelligence. Micron is a 40-year-old company. It's a dominant player in the DRAM marketplace. Years and years ago there used to be 19, 20 manufacturers of DRAM; there are really three companies now that dominate that market, and they own 96% of it: Micron, Samsung and Toshiba, I believe, is the third one. And so Micron is a 30 billion dollar company; they've got about a 50 billion, just under 50 billion dollar market cap, growing like crazy. 70% of their business comes from DRAM; the balance comes from alternative storage and other memory systems that they've built. And traditionally, David, memories have been a very cyclical business. Micron is the number two semiconductor manufacturer worldwide behind Intel, obviously competing with a lot of overseas players, and Micron is putting forth the premise that they've begun to be able to dampen the fluctuations, the peaks and the valleys, in this business. Why? Because first of all, the capital expense required to participate in this business is enormous; that's why so many companies have been shaken out. And secondly, the technology transitions are getting much, much more difficult. And so the premise that Micron put forth in May at their financial analyst conference is that the cyclicality of this business is starting to moderate. We've certainly seen this in some regards in the last several years with component shortages; it's been a boon to Micron's financials. The stock, you know, up until recently, has been climbing like crazy. This is a company that literally last quarter had seventy percent gross margins in its DRAM business. It's
not as much, you know, if you look at the SSD business, the flash business: smaller gross margin, maybe 48, 50 percent. They're going to start blending those together and reporting on a blended basis. I think they don't want, you know, Michael Dell, advertising to Michael Dell, that we're getting 70% gross margins on DRAM, so they're going to stop giving that guidance out, ostensibly to thwart competition, but really it's probably, on examination, something that's not sustainable. But David, so, we're seeing sort of a moderation in supply growth; we're seeing a very well-run company. This company is growing like crazy. Let me break down some of the businesses, and I want to bring you into the conversation. The compute and networking business: very strong, grew at 53 percent year-over-year. The mobile business: up sixty percent last year; mobile's taking tons of memory, of course, and storage. The embedded business, which is sort of automobiles and industrial markets, is up about 12%. And the storage business unit actually is going to be flat to down; they expect growth, but, you know, the storage business has been a bit of a challenge for them, even though they're doing very well and gaining share; they've gone through some transitions that we'll talk about with some of the executives here. But David, the theme really is about bringing artificial intelligence to the world, and the intersection between AI and memory and storage. Obviously you need memory, obviously you need storage to make AI happen, and Micron, in the value chain at the lowest level, is right there, making tons of money, shipping a lot of product, driving a lot of innovation and competing very effectively. So, your thoughts on Micron and this event. >> Micron is crushing it. I mean, the growth in their revenues from DRAM, 70 percent year-on-year over the last four quarters; and that's, actually, it's 70 percent of their business,
whereas before it was about 50 percent, 47 to 50 percent growth. So yeah, that's for the DRAM piece of the business; NAND is about 25, 26 percent of the business, and growing, you know, about 20% a year. I think the figures we're using are even better than that. So I think, fundamentally, they're crushing it from a business perspective, and they're, as you said, in a very good place, because as AI takes hold, what I call the matrix applications are coming on board: that's virtual reality, augmented reality, the modern gaming machines, all of these types of compute, and then on top of that IoT as well, with all the sensors and the requirements of memory and compute very, very close to the sensors themselves. All of these different areas are relying on AI to make a difference, relying on that type of workload, that matrix workload. And some of the figures are very interesting to look at: when you're looking at new workloads, you need at least around six times as much DRAM, and more storage as well, more NAND storage as well. Six times; you're talking about the ratio between what you need for traditional processing and these new workloads: you need six times. That's an interesting figure. And similarly with NAND. On top of that, when you're looking at graphics work, all the graphics work is very, very bandwidth intensive, and that requires the very latest technology, and again premium technology, to go into the graphics side of things as well. So they are in the right place at the right time, in terms of the speed at which memory is developing and the opportunities to make a difference. >> So if you think about some of the tailwinds and headwinds in their business, there's a lot of tailwind. I mean, their manufacturing efficiencies, they've really started to see a flywheel effect there, and Micron has made a lot of investment in technology transitions. What's happening is the bit density
growth for each new technology transition is starting to moderate; presumably Moore's Law is starting to moderate, that's really what's going on there. But they've really done a good job of investing in technology transitions ahead of their competition, and so they're getting some good returns on that investment. They lead in a lot of these markets; they're a very well-run company. Pricing has been pretty firm for them over the last several years, so that's been a nice tailwind, and supply has been short in the last several years. Now, the headwinds: there are CPU shortages in the marketplace today, and so if you can't get the CPU, you can't necessarily make the box, you can't ship the PC; you need CPU, memory and storage to go together. And as a result there's a pending oversupply, it looks like, and so they're having to manage some of that inventory. Import tariffs from China: not, I would say, a huge deal for these guys, it's something they can manage, but President Trump's tariff posture doesn't help a company like Micron. Their tax rate is much higher this year than it was last year, going from about 4% to like 28%. And so those are some of the headwinds, and that's had the stock moderate a little bit, but the stock has been on fire for the last several years, and the company has done very, very well. Cash flow: it's nine billion in free cash flow, which is important because they have to spend eight billion dollars a year, or even more; they're growing that capex spending from 8 billion this year to 10 and a half billion next year. So you get a sense of the barriers to entry in this marketplace. It takes a lot of tenacity, which I like; Micron has exhibited it over the last 40 years, when you think about all the ebbs and flows. But the big change is: this used to be kind of driven by PCs, it used to be a PC-centered world, and now we're seeing a much more diverse customer base, probably driven by
mobile, no question about it; the data center guys, the big hyperscalers, the autonomous vehicle folks, the industrial internet, edge computing: they all need memory, they all need storage. The other piece of this is the transition from spinning hard disk to flash. Even though it's not a majority of their business today, Micron is very well positioned to take advantage of that, David, something that you were the first in the industry to call; you were the very first analyst that said that SSD flash is going to replace spinning disk. It's clearly happening, and it happened first in laptops, and it's clearly happening in the data center, you know, with some exceptions, but generally speaking that trend is pretty substantial. >> Oh, absolutely. The technology changes; we keep on saying each year we've witnessed the most change in technology that we've ever seen, and next year it gets faster, and it gets faster. It's absolutely amazing. I think there's another area coming into play. When you're looking at the traditional marketplaces, the PC and the servers, that's where most of the DRAM went. We're seeing a change, with mobile taking an increasing portion of that. You're looking at PCs now: they're introducing the ARM PCs as well, ARM processors in the PCs, and that's growing very fast as well, and we're predicting that will grow fast. And we're looking also at a very aggressive entry into the marketplace of ARM processors in general, all the way through from the edge all the way up to the top, and those are really being designed for this matrix computing I was talking about: much more attention to parallelism, to the ability to have GPUs inside, neural networks inside. That change, and that requirement to fit in with this new way of doing things, is a fantastic opportunity, and they have an opportunity really to lead in powering some of these new workloads. >> So
we're going to be unpacking this all day here at Micron Insight, hashtag #MicronInsight18. You're watching theCUBE, Dave Vellante for David Floyer; we'll be right back right after this short break.
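As a quick illustration of the rule of thumb quoted earlier in this segment, that matrix/AI workloads need around six times the DRAM of traditional processing, here is a minimal sketch. The 6x multiplier is the speaker's rough estimate, so treat both the multiplier and the helper function as assumptions, not a sizing formula.

```python
# Minimal sketch of the ~6x DRAM rule of thumb mentioned in the
# conversation. The multiplier is an informal estimate, not a
# measured constant; this is illustrative only.

def ai_dram_estimate(traditional_dram_gb: float, multiplier: float = 6.0) -> float:
    """Estimate DRAM for an AI/matrix workload from a traditional baseline."""
    return traditional_dram_gb * multiplier

print(ai_dram_estimate(256))  # a 256 GB baseline would imply 1536.0 GB
```

The same shape of estimate would apply to NAND capacity, with whatever multiplier one assumes for storage.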
Matt Burr, Pure Storage & Rob Ober, NVIDIA | Pure Storage Accelerate 2018
>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE! Covering Pure Storage Accelerate 2018, brought to you by Pure Storage. >> Welcome back to theCUBE's continuing coverage of Pure Storage Accelerate 2018. I'm Lisa Martin, sporting the clong, and apparently this symbol actually has a name, the clong; I learned that in the last half an hour. I know, who knew? >> Really? >> Yes! Is that a C or a K? >> Is that a Prince orientation or, what is that? >> Yes, I'm formerly known as. >> Nice. >> Who of course played at this venue, as did Roger Daltrey, and The Who. >> And I might have been staff for one of those shows. >> You could have been, yeah, could I show you to your seat? >> Maybe you're performing later. You might not even know this. We have a couple of guests joining us. We've got Matt Burr, the GM of FlashBlade, and Rob Ober, the Chief Platform Architect at NVIDIA. Guys, welcome to theCUBE. >> Hi. >> Thank you. >> Dave: Thanks for coming on. >> So, lots of excitement going on this morning. You guys announced Pure and NVIDIA just a couple of months ago, a partnership with AIRI. Talk to us about AIRI: what is it? How is it going to help organizations in any industry really democratize AI? >> Well, AIRI, so AIRI is something that we announced, the AIRI Mini, today here at Accelerate 2018. AIRI was originally announced at GTC, NVIDIA's GPU Technology Conference, back in March, and what it is, essentially, is it brings NVIDIA's DGX servers, connected with either Arista or Cisco switches, down to the Pure Storage FlashBlade. So this is something that sits in less than half a rack in the data center, that replaces something that was probably 25 or 50 racks of compute and storage, so I think Rob and I like to talk about it as kind of a great leap forward in terms of compute potential. >> Absolutely, yeah. It's an AI supercomputer in a half rack.
>> So, one of the things we saw during the general session this morning, that Charlie talked about, and I think Matt, (mumbles) kind of a really brief history of the last 10 to 20 years in storage: why is modern external storage essential for AI? >> Well, Rob, you want that one, or you want me to take it? Coming from the non-storage guy, maybe? (both laugh) >> Go ahead. >> So, when you look at the structure of GPUs, and servers in general, we're talking about massively parallel compute, right? We're now taking not just tens of thousands of cores but even more cores, and we're actually finding a path for them to communicate with storage that is also massively parallel. Storage has traditionally been something that's been kind of serial in nature. Legacy storage has always waited for the next operation to happen. You actually want to get things that are parallel, so that you can have parallel processing both at the compute tier and parallel processing at the storage tier. But you need to have big network bandwidth, which was what Charlie was alluding to, when Charlie said-- >> Lisa: You like his stool? >> When Charlie was, one of his stools, or one of the legs of his stool, was talking about: 20 years ago we were still, or 10 years ago, we were at 10 gig networks, and the emergence of 100 gig networks has really made the data flow possible.
>> The overall infrastructure? I mean, it is incredibly data intensive. I mean a typical training set is terabytes, in the extreme it's petabytes, for a single run, and you will typically go through that data set again and again and again, in a training run, (mumbles) and so you have one massive set that needs to go to multiple compute engines, and the reason it's multiple compute engines is people are discovering that as they scale up the infrastructure, you actually, you get pretty much linear improvements, and you get a time to solution benefit. Some of the large data centers will run a training run for literally a month and if you start scaling it out, even in these incredibly powerful things, you can bring time to solution down, you can have meaningful results much more quickly. >> And to give you a sense of, sort of, a practical application of that, there's a large hedge fund based in the U.K. called Man AHL. They're a systems-based quantitative trading firm, and what that means is, humans really aren't doing a lot of the trading, machines are doing the vast majority if not all of the trading. What the humans are doing is they're essentially quantitative analysts. The number of simulations that they can run is directly correlated to the number of trades that their machines can make. And so the more simulations you can make, the more trades you can make. The shorter your simulation time is, the more simulations that you can run. So we're talking about in a sort of a meta context, that concept applies to everything from retail and understanding, if you're a grocery store, what products are not on my shelves at a given time. In healthcare, discovering new forms of pathologies for cancer treatments. Financial services we touched on, but even broader, right down into manufacturing, right?
Looking at, what are my defect rates on my lines, and if it used to take me a week to understand the efficiency of my assembly line, if I can get that down to four hours, and make adjustments in real time, that's more than just productivity, it's progress. >> Okay so, I wonder if we can talk about how you guys see AI emerging in the marketplace. You just gave an example. We were talking earlier again to Rob Lee about, it seems today to be applied in narrow use cases, and maybe that's going to be the norm, whether it's autonomous vehicles or facial recognition, natural language processing, how do you guys see that playing out? Will it be this kind of ubiquitous horizontal layer, or do you think the adoption is going to remain along those sort of individual lines, if you will? >> At the extreme, like when you really look out at the future, let me start by saying that my background is processor architecture. I've worked in computer science, the whole thing is to understand problems, and create the platforms for those things. What really excited me and motivated me about AI deep learning is that it is changing computer science. It's just turning it on its head. And instead of explicitly programming, it's now implicitly programming, based on the data you feed it. And this changes everything and it can be applied to almost any use case. So I think that eventually it's going to be applied in almost any area that we use computing today. >> Dave: So another way of asking that question is how far can we take machine intelligence and your answer is pretty far, pretty far.
So as processor architect, obviously this is very memory intensive, you're seeing, I was at the Micron financial analyst meeting earlier this week and listening to what they were saying about these emerging, you got T-RAM, and obviously you have Flash, people are excited about 3D XPoint, I heard it, somebody mentioned 3D XPoint on the stage today, what do you see there in terms of memory architectures and how they're evolving and what do you need as a systems architect? >> I need it all. (all talking at once) No, if I could build a GPU with more than a terabyte per second of bandwidth and more than a terabyte of capacity I could use it today. I can't build that, I can't build that yet. But I need, it's a different stool, I need teraflops, I need memory bandwidth, and I need memory capacity. And really we just push to the limit. Different types of neural nets, different types of problems, will stress different things. They'll stress the capacity, the bandwidth, or the actual compute. >> This makes the data warehousing problem seem trivial, but do you see, you know what I mean? Data warehousing, it was like always a chase, chasing the chips and snake swallowing a basketball I called it, but do you see a day that these problems are going to be solved, architecturally, with Moore's Law moderating, or is this going to be this perpetual race that we're never going to get to the end of? >> So let me put things in perspective first. It's easy to forget that the big bang moment for AI and deep learning was the summer of 2012, so slightly less than six years ago. That's when AlexNet hit the scene and people went wow, this is a whole new approach, this is amazing. So a little less than six years in. I mean it is a very young, it's a young area, it is in incredible growth, the change in state of the art is literally month by month right now. So it's going to continue on for a while, and we're just going to keep growing and evolving.
Maybe five years, maybe 10 years, things will stabilize, but it's an exciting time right now. >> Very hard to predict, isn't it? >> It is. >> I mean who would've thought that Alexa would be such a dominant factor in voice recognition, or that a bunch of cats on the internet would lead to facial recognition. I wonder if you guys can comment, right? I mean. >> Strange beginnings. (all laughing) >> But, I wonder if I can ask you guys about the black box challenge. I've heard some companies talk about how we're going to white box everything, make it open and, but the black box problem meaning if I have to describe, and we may have talked about this, how I know that it's a dog. I struggle to do that, but a machine can do that. I don't know how it does it, probably can't tell me how it does it, but it knows, with a high degree of accuracy. Is that black box phenomenon a problem, or do we just have to get over it? >> Up to you. >> I think it's certain, I don't think it's a problem. I know that mathematicians, who are friends, it drives them crazy, because they can't tell you why it's working. So it's an intellectual problem that people just need to get over. But it's the way our brains work, right? And our brains work pretty well. There are certain areas I think where for a while there will be certain laws in place where if you can't prove the exact algorithm, you can't use it, but by and large, I think the industry's going to get over it pretty fast. >> I would totally agree, yeah. >> You guys are optimists about the future. I mean you're not up there talking about how jobs are going to go away and, that's not something that you guys are worried about, and generally, we're not either. However, machine intelligence, AI, whatever you want to call it, it is very disruptive. There's no question about it. So I got to ask you guys a few fun questions.
Do you think large retail stores are going to, I mean nothing's in the extreme, but do you think they'll generally go away? >> Do I think large retail stores will generally go away? When I think about retail, I think about grocery stores, and the things that are going to go away, I'd like to see standing in line go away. I would like my customer experience to get better. I don't believe that 10 years from now we're all going to live inside our houses and communicate over the internet and text and half of that be with chat bots, I just don't believe that's going to happen. I think the Amazon effect has a long way to go. I just ordered a pool thermometer from Amazon the other day, right? I'm getting old, I ordered readers from Amazon the other day, right? So I kind of think it's that spur of the moment item that you're going to buy. Because even in my own personal habits like I'm not buying shoes and returning them, and waiting through five to ten cycles to get there. You still want that experience of going to the store. Where I think retail will improve is understanding that I'm on my way to their store, and improving the experience once I get there. So, I think you'll see, they need to see the Amazon effect that's going to happen, but what you'll see is technology being employed to reach a place where my end user experience improves such that I want to continue to go there. >> Do you think owning your own vehicle, and driving your own vehicle, will be the exception, rather than the norm? >> It pains me to say this, 'cause I love driving, but I think you're right. I think it's a long, I mean it's going to take a while, it's going to take a long time, but I think inevitably it's just too convenient, things are too congested, by freeing up autonomous cars, things that'll go park themselves, whatever, I think it's inevitable. >> Will machines make better diagnoses than doctors? >> Matt: Oh I mean, that's not even a question. Absolutely. >> They already do.
>> Do you think banks, traditional banks, will keep control of the payment systems? >> That's a good one, I haven't thought about-- >> Yeah, I'm not sure that's an AI related thing, maybe more of a blockchain thing, but, it's possible. >> Blockchain and AI, kind of cousins. >> Yeah, they are, they are actually. >> I fear a world though where we actually end up like WALL-E in the movie and everybody's on these like floating chaise lounges. >> Yeah let's not go there. >> Eating and drinking. No but I'm just wondering, you talked about, Matt, in terms of the number of, the different types of industries that really converge in here. Do you see maybe the consumer world with our expectation that we can order anything on Amazon from a thermometer to a pair of glasses to shoes, as driving other industries to kind of follow what we as consumers have come to expect? >> Absolutely no question. I mean that is, consumer drives everything, right? All flash arrays were driven by, you have your phone there, right? The consumerization of that device was what drove Toshiba and all the other fab manufacturers to build more NAND flash, which is what commoditized NAND flash, which is what brought us faster systems, these things all build on each other, and from a consumer perspective, there are so many things that are inefficient in our world today, right? Like let's just think about your last call center experience. If you're the normal human being-- >> I prefer not to, but okay. >> Yeah you said it, you prefer not to, right? My next comment was going to be, most people's call center experiences aren't that good.
But what if the call center technology had the ability to analyze your voice and understand your intonation, and your inflection, and that call center employee was being given information to react to what you were saying on the call, such that they either immediately escalated that call without you asking, or they were sent down a decision path, which brought you to a resolution that said that we know that 62% of the time if we offer this person a free month of this, that person is going to go away a happy customer, and rate this call 10 out of 10. That is the type of thing that's going to improve with voice recognition, and all of the voice analysis, and all this. >> And that really gets into how far we can take machine intelligence, the things that machines, or the humans can do, that machines can't, and that list changes every year. The gap gets narrower and narrower, and that's a great example. >> And I think one of the things, going back to your, whether stores'll continue being there or not but, one of the biggest benefits of AI is recommendation, right? So you can consider it usurious maybe, or on the other hand it's great service, where a lot of, something like an Amazon is able to say, I've learned about you, I've learned about what people are looking for, and you're asking for this, but I would suggest something else, and you look at that and you go, "Yeah, that's exactly what I'm looking for". I think that's really where, in the sales cycle, that's really where it gets up there. >> Can machines stop fake news? That's what I want to know. >> Probably. >> Lisa: To be continued. >> People are working on that. >> They are. There's a lot, I mean-- >> That's a big use case. >> It is not a solved problem, but there's a lot of energy going into that. >> I'd take that before I take the floating WALL-E chaise lounges, right? Deal. >> What if it was just for you?
What if it was just a floating chaise lounge, it wasn't everybody, then it would be alright, right? >> Not for me. (both laughing) >> Matt and Rob, thanks so much for stopping by and sharing some of your insights and we should have a great rest of the day at the conference. >> Great, thank you very much. Thanks for having us. >> For Dave Vellante, I'm Lisa Martin, we're live at Pure Storage Accelerate 2018 at the Bill Graham Civic Auditorium. Stick around, we'll be right back after a break with our next guest. (electronic music)
David Hatfield, Pure Storage | Pure Storage Accelerate 2018
>> Announcer: Live from the Bill Graham Auditorium in San Francisco, it's theCUBE, covering Pure Storage Accelerate 2018. Brought to you by Pure Storage. >> Welcome back to theCUBE, we are live at Pure Storage Accelerate 2018 in San Francisco. I'm Lisa Prince Martin with Dave The Who Vellante, and we're with David Hatfield, or Hat, the president of Pure Storage. Hat, welcome back to theCUBE. >> Thank you Lisa, great to be here. Thanks for being here. How fun is this? >> The orange is awesome. >> David: This is great. >> Super fun. >> Got to represent, we love the orange here. >> Always a good venue. >> Yeah. >> There's not enough orange. I'm not as blind yet. >> Well it's the Bill Graham, I mean it's a great venue. But not generally one for technology conferences. >> No it's not. You guys are not conventional. >> So far so good. >> But then-- >> Thanks for keeping us out of Las Vegas for a change. >> Over my dead body I think I've said once or twice before. >> Speaking of-- Love our customers in Vegas. Unconventional, you've said recently this is not your father's storage company. What do you mean by that? >> Well we just always want to do things a little bit less conventional. We want to be modern. We want to do things differently. We want to create an environment where it's community so our customers and our partners, prospective customers can get a feel for what we mean by doing things a little bit more modern. And so the whole orange thing is something that we all opt in for. But it's more about really helping transform customers' organizations to think differently, think out of the box, and so we wanted to create a venue that forced people to think differently, and so the last three years, one was on Pier 48, we transformed that. Last year was in a big steelworkers, you know, 100 year old steel manufacturing, ship building yard which is now long since gone.
But we thought the juxtaposition of that, big iron rust relative to what we're doing from a modern solid state perspective, was a good metaphor. And here it's about making music, and how can we together as an industry, develop new things and develop new songs and really help transform organizations. >> For those of you who don't know, spinning disk is known as spinning rust, right? Eventually, so very clever sort of marketing. >> The more data you put on it the slower it gets and it gets really old and we wanted to get rid of that. We wanted to have everything be online in the data center, so that was the point. >> So Hat, as you go around and talk to customers, they're going through a digital transformation, you hear all this stuff about machine intelligence, artificial intelligence, whatever you want to call it, what are the questions that you're getting? CEOs, they want to get digital right. IT professionals are wondering what's next for them. What kind of questions and conversations are you having? >> Yeah, I think it's interesting, I was just in one of the largest financial services companies in New York, and we met with the Chief Data Officer. The Chief Data Officer reports into the CEO. And he had right next to him the CIO. And so they have this development of a recognition that moving into a digital world and starting to harness the power of data requires a business context. It requires people that are trying to figure out how to extract value from the data, where does our data live? But that's created a different organization. It drives devops. I mean, if you're going to go through a digital transformation, you're going to try and get access to your data, you have to be a software development house. And that means you're going to use devops.
And so what's happened from our point of view over the last 10 years is that those folks have gone to the public cloud because IT wasn't really meeting the needs of what devops needed and what the data scientists were looking for, and so what we wanted to create not only was a platform and a tool set that allowed them to bridge the gap, make things better today dramatically, but have a platform that gets you into the future, but also create a community and an ecosystem where people are aware of what's happening on the devops side, and connect the dots between IT and the data scientists. And so we see this exploding as companies digitize, and somebody needs to be there to help kind of bridge the gap. >> So what's your point of view and advice to that IT ops person who may be really good at provisioning LUNs, should they become more dev like? Maybe ops dev? >> Totally, I mean I think there's a huge opportunity to kind of advance your career. And a lot of what Charlie talked about and a lot of what we've been doing for nine years now, coming up on nine years, is trying to make our customers heroes. And if data is a strategic asset, so much so they're actually going to think about putting it on your balance sheet, and you're hiring Chief Data Officers, who knows more about the data than the storage and infrastructure team? They understand the limitations that we had to go through over the past. They've recognized they had to make trade offs between performance and cost. And in a shared accelerated storage platform where you have tons of IO and you can put all of your applications (mumbles) at the same time, you don't have to make those trade offs. But the people that really know that are the storage leads. And so what we want to do is give them a path for their career to become strategic in their organization. Storage should be self driving, infrastructure should be self driving.
These are not things that in a boardroom people care about, gigabytes and petabytes and petaflops, and whatever metric. What they care about is how they can change their business and have a competitive advantage. How they can deliver better customer experiences, how they can put more money on the bottom line through better insights, etc. And we want to teach and work with and celebrate data heroes. You know, they're coming from the infrastructure side and connecting the dots. >> So the value of that data is obviously something that's new in terms of it being front and center. So who determines the value of that data? You would think it's the business line. And so there's got to be a relationship between that IT ops person and the business line. Which maybe heretofore was somewhat adversarial. Business guys are calling, the clients are calling again. And the business guys are saying, oh IT, they're slow, they say no. So how are you seeing that relationship changing? >> It has to come together because, you know, it does come down to what are the insights that we can extract from our data? How much more data can we get online to be able to get those insights? And that's a combination of improving the infrastructure and making it easy and removing those trade offs that I talked about. But also being able to ask the right questions. And so a lot has to happen. You know, we have one of the leaders in devops speaking tomorrow to go through, here's what's happening on the software development and devops side. Here's what the data scientists are trying to get at. So our IT professionals understand the language, understand the problem set. But they have to come together. We have Dr. Kate Harding as well from MIT, who's brilliant and thinking about AI. Well, only .5% of all the data has actually been analyzed. You know, it's all in these piggy banks as Burt talked about onstage.
And so we want to get rid of the piggy banks and actually create it and make it more accessible, and get more than .5% of the data to be usable. You know, bring as much of that online as possible, because it's going to provide richer insights. But up until this point storage has been a bottleneck to making that happen. It was either too costly or too complex, or it wasn't performing enough. And with what we've been able to bring through solid state natively into sort of this platform is an ability to have all of that without the trade offs. >> That number of half a percent, or less than half a percent of all data in the world is actually able to be analyzed, is really really small. I mean we talk about, often you'll hear people say data's the lifeblood of an organization. Well, it's really a business catalyst. >> David: Oil. >> Right, but catalysts need to be applied to multiple reactions simultaneously. And that's what a company needs to be able to do to maximize the value. Because if you can't do that there's no value in that. >> Right. >> How are you guys helping to kind of maybe abstract storage? We hear a lot, we heard the word simplicity a lot today from Mercedes Formula One, for example. How are you partnering with customers to help them identify, where do we start narrowing down to find those needles in the haystack that are going to open up new business opportunities, new services for our business? >> Well I think, first of all, we recognize at Pure that we want to be the innovators. We want to be the folks that are, again, making things dramatically better today, but really future-proofing people for what applications and insights they want to get in the future. Charlie talked about the three-legged stool, right? There's innovations that's been happening in compute, there's innovations that have been happening over the years in networking, but storage hasn't really kept up.
It literally was sort of the bottleneck that was holding people back from being able to feed the GPUs in the compute that's out there to be able to extract the insights. So we wanted to partner with the ecosystem, but we recognize an opportunity to remove the primary bottleneck, right? And if we can remove the bottleneck and we can partner with firms like NVIDIA and firms like Cisco, where you integrate the solution and make it self driving so customers don't have to worry about it. They don't have to make the trade offs in performance and cost on the backend, but it just is easy to stamp out, and so it was really great to hear ServiceNow and Keith walk through his story where he was able to get a 3x level improvement and something that was simple to scale as their business grew without having an impact on the customer. So we need to be part of an ecosystem. We need to partner well. We need to recognize that we're a key component of it because we think data's at the core, but we're only a component of it. The one analogy somebody shared with me when I first started at Pure was you can date your compute and networking partner but you actually get married to your storage partner. And we think that's true because data's at the core of every organization, but it's making it available and accessible and affordable so you can leverage the compute and networking stacks to make it happen.
>> Yeah, so, I mean a platform, first of all I think if you're starting a disruptive technology company, being hyper-focused on delivering something that's better and faster in every dimension, it had to be 10x in every dimension. So when we started, we said let's start with tier one block, mission critical data workloads with a product, you know our Flash Array product. It was the fastest growing product in storage I think of all time, and it still continues to be a great contributor, and it should be a multi-billion dollar business by itself. But what customers are looking for is that same consumer like or cloud like experience, all of the benefits of that simplicity and performance across their entire data set. And so as we think about providing value to customers, we want to make sure we capture as much of that 99.5% of the data and make it online and make it affordable, regardless of whether it's block, file, or object, or regardless if it's tier one, tier two, and tier three. We talk about this notion of a shared accelerated storage platform because we want to have all the applications hit it without any compromise. And in an architecture that we've provided today you can do that. So as we think about partnering, we want to go, in our strategy, we want to go get as much of the data as we possibly can and make it usable and affordable to bring online and then partner with an API first open approach. There's a ton of orchestration tools that are out there. There's great automation. We have a deep integration with ACI at Cisco. Whatever management and orchestration tools that our customer wants to use, we want to make those available. And so, as you look at our Flash Array, Flash Deck, AIRI, and Flash Blade technologies, all of them have an API open first approach. 
And so a lot of what we're talking about with our cloud integrations is how do we actually leverage orchestration, and how do we now allow and make it easy for customers to move data in and out of whatever clouds they may want to run from. You know, one of the key premises to the business was with this exploding data growth and whether it's 30, 40, 50 zettabytes of data over the next you know, five years, there's only two and a half or three zettabytes of internet connectivity in those same period of time. Which means that companies, and there's not enough data platform or data resources to actually handle all of it, so the temporal nature of the data, where it's created, what a data center looks like, is going to be highly distributed, and it's going to be multi cloud. And so we wanted to provide an architecture and a platform that removed the trade offs and the bottlenecks while also being open and allowing customers to take advantage of Red Shift and Red Hat and all the container technologies and platform as a service technologies that exist that are completely changing the way we can access the data. And so we're part of an ecosystem and it needs to be API and open first. >> So you had Service Now on stage today, and obviously a platform company. I mean any time they do M and A they bring that company into their platform, their applications that they build are all part of that platform. So should we think about Pure? If we think about Pure as a platform company, does that mean, I mean one of your major competitors is consolidating its portfolio. Should we think of you going forward as a platform company? In other words, you're not going to have a stovepipe set of products, or is that asking too much as you get to your next level of milestone. >> Well we think we're largely there in many respects. 
You know, if you look at any of the competitive technologies that are out there, you know, they have a different operating system and a different customer experience for their block products, their file products, and their object products, etc. So we wanted to have a shared system that had these similar attributes from a storage perspective and then provide a very consistent customer experience with our cloud-based Pure One platform. And so the combination of our systems, you hear Bill Cerreta talk about, you have to do different things for different protocols to be able to get the efficiencies in the data servers as people want. But ultimately you need to abstract that into a customer experience that's seamless. And so our Pure One cloud-based software allows for a consistent experience. The fact that you'll have a, one application that's leveraging block and one application that's leveraging unstructured tool sets, you want to be able to have that be in a shared accelerated storage platform. That's why Gartner's talking about that, right? Now you can do it with a solid state world. So it's super key to say, hey look, we want consistent customer experience, regardless of what data tier it used to be on or what protocol it is and we do that through our Pure One cloud-based platform. >> You guys have been pretty bullish for a long time now where competition is concerned. When we talk about AWS, you know Andy Jassy always talks about, they look forward, they're not looking at Oracle and things like that. What's that like at Pure? Are you guys really kind of, you've been also very bullish recently about NVME. Are you looking forward together with your partners and listening to the voice of the customer versus looking at what's blue over the corner? >> Yes, so first of all we have a lot of respect for companies that get big. One of my mentors told me one time that they got big because they did something well. 
And so we have a lot of respect for the ecosystem and companies that build at scale. And we actually want to be one of those and are already doing that. But I think it's also important to listen and be part of the community. And so we always wanted to be the pioneers. We always wanted to be the innovators. We always wanted to challenge conventions. And one of the reasons why we founded the company, why Cos and Hayes founded the company originally, was because they saw that there was a bottleneck and it was a media-level bottleneck. In order to remove that you need to provide a file system that was purpose-built for the new media, whatever it was going to be. We chose solid state because it was a $40 billion industry thanks to our consumer products and devices. So it was a cost curve where R and D was going to happen by Samsung and Toshiba and Micron and all those guys that we could ride that curve down, allowing us to be able to get more and more of the data that's out there. And so we founded the company with the premise that you need to remove that bottleneck and you can drive innovation that was 10x better in every dimension. But we also recognized in doing so that by putting an evergreen ownership model in place, you can fundamentally change the business model that customers were really frustrated by over the last 25 years. It was fair because disk has lots of moving parts, it gets slower the more data you put on it, etc., and so you pass those maintenance expenses and software onto customers. But in a solid state world you didn't need that. So what we wanted to do was actually, in addition to providing innovation that was 10x better, we wanted to provide a business model that was evergreen and cloud-like in every dimension. Well, those two forces were very disruptive to the competitors. And so it's very, very hard to take a file system that's 25 years old and retrofit it to be able to really get the full value of what the stack can provide.
So we focus on innovation. We focus on what the markets are doing, and we focus on our customer requirements and where we anticipate the use cases to be. And then we like to compete, too. We're a company of folks that love to win, but ultimately the real focus here is on enabling our customers to be successful, innovating forward. And so it's less about looking sideways, who's blue and who's green, etc. >> But you said it before, when you were a startup, you had to be 10x better because those incumbents, even though it was an older operating system, people's processes were wired to that, so you had to give them an incentive to do that. But you have been first in a number of things. Flash itself, the sort of All-Flash, at a spinning disk price. Evergreen, you guys set the mark on that. NVMe, you're doing it again with no premium. I mean, everybody's going to follow. You can look back and say, look we were first, we led, we're the innovator. You're doing some things in cloud which are similar. Obviously you're doing this on purpose. But it's not just getting close to your customers. There's got to be a technology and architectural enabler for you guys. Is that? >> Well yeah, it's software, and at the end of the day if you write a file system that's purpose-built for a new media, you think about the inefficiencies of that media and the benefits of that media, and so we knew it was going to be memory, we knew it was going to be silicon. It behaves differently. Reads are effectively free. Writes are expensive, right? And so that means you need to write something that's different, and so you know, it's NVMe that we've been plumbing and working on for three years that provides 44,000 parallel access points. Massive parallelism, which enables this next generation of applications. So yeah we have been talking about that and inventing ways to be able to take full advantage of that.
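The read/write asymmetry mentioned here comes from how NAND flash works: pages can only be programmed once between erases, and erases happen a whole block at a time, so an in-place overwrite forces the controller to relocate the block's live data. A toy model of that effect (illustrative only; this is not Pure's file system or any real flash translation layer):

```python
# Toy model of flash write amplification: overwriting a page forces the
# controller to relocate the block's live pages and erase the block.
# Simplified illustration, not any vendor's actual FTL logic.

PAGES_PER_BLOCK = 4

class ToyFlashBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None = erased/empty page
        self.physical_writes = 0               # pages actually programmed

    def write_page(self, index, data):
        if self.pages[index] is None:          # fresh page: program in place
            self.pages[index] = data
            self.physical_writes += 1
        else:                                  # overwrite: copy live pages, erase, rewrite
            live = [(i, d) for i, d in enumerate(self.pages)
                    if d is not None and i != index]
            self.pages = [None] * PAGES_PER_BLOCK   # block erase
            for i, d in live:                  # relocate untouched live data
                self.pages[i] = d
                self.physical_writes += 1
            self.pages[index] = data           # finally write the new data
            self.physical_writes += 1

block = ToyFlashBlock()
for i in range(PAGES_PER_BLOCK):
    block.write_page(i, f"v0-{i}")             # 4 logical writes -> 4 physical
block.write_page(0, "v1-0")                    # 1 logical overwrite costs a full block rewrite
print(f"5 logical writes cost {block.physical_writes} physical page programs")
```

Flash-native designs reduce this penalty by writing new data sequentially and garbage-collecting stale pages in the background instead of overwriting in place.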
There's 3D XPoint and SCM and all kinds of really interesting technologies that are coming down the line that we want to be able to take advantage of and future-proof for our customers, but in order to do that you have to have a software platform that allows for it. And that's where our competitive advantage really resides, is in the software. >> Well there are lots more software companies in Silicon Valley and outside Silicon Valley. And you guys, like I say, have achieved that escape velocity. And so that's pretty impressive, congratulations. >> Well thank you, we're just getting started, and we really appreciate all the work you guys do. So thanks for being here. >> Yeah, and just a couple days ago with the Q1FY19 results, 40% year over year growth, you added 300 more customers. Now what, 4800 customers globally. So momentum. >> Thank you, thank you. Well we only do it if we're helping our customers one day at a time. You know, I'll tell you that this whole customer-first philosophy, a lot of customers, a lot of companies talk about it, but it truly has to be integrated into the DNA of the business from the founders, and you know, Cos's whole pitch at the very beginning of this was we're going to change the media, which is going to be able to transform the business model. But ultimately we want to make this as intuitive as an iPhone. You know, infrastructure should just work, and so we have this focus on delivering simplicity and delivering ownership that's future-proofed from the very beginning. And you know that sort of permeates, and so you think about our growth, our growth has happened because our customers are buying more stuff from us, right? If you look underneath the covers at our growth, 70 plus percent of our growth every single quarter comes from customers buying more stuff, and so, as we think about how we partner and we think about how we innovate, you know, we're going to continue to build and innovate in new areas.
We're going to keep partnering. You know, on the data protection stack, we've got great partners like Veeam and Cohesity and Rubrik that are out there. And we're going to acquire. We do have a billion dollars of cash in the bank to be able to go do that. So we're going to listen to our customers on where they want us to do that, and that's going to guide us to the future. >> And expansion overseas. I mean, North America's 70% of your business? Is that right? >> Rough and tough. Yeah, we had 28%-- >> So it's some upside. >> Yeah, yeah, no, any mature B2B systems company should line up to be 55-45, 55 North America, 45 international, in line with GDP and in line with IT spend, so we made investments from the beginning knowing we wanted to be an independent company, and knowing that to support global 200 companies you have to have operations across multiple countries. And so globalization is always going to be key for us. We're going to continue our march on doing that. >> Delivering evergreen from an orange center. Thanks so much for joining Dave and me on the show this morning. >> Thanks Lisa, thanks Dave, nice to see you guys. >> We are theCUBE, live from Pure Accelerate 2018 in San Francisco. I'm Lisa Martin for Dave Vellante, stick around, we'll be right back with our next guests.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lisa | PERSON | 0.99+ |
NVIDIA | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
David Hatfield | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Toshiba | ORGANIZATION | 0.99+ |
30 | QUANTITY | 0.99+ |
Vegas | LOCATION | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Bill Cerreta | PERSON | 0.99+ |
San Francisco | LOCATION | 0.99+ |
Lisa Prince Martin | PERSON | 0.99+ |
Charlie | PERSON | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
New York | LOCATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
David | PERSON | 0.99+ |
Silicon Valley | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Kate Harding | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
nine years | QUANTITY | 0.99+ |
28% | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
$40 billion | QUANTITY | 0.99+ |
Micron | ORGANIZATION | 0.99+ |
Hat | PERSON | 0.99+ |
Last year | DATE | 0.99+ |
Gartner | ORGANIZATION | 0.99+ |
10x | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Pure Storage | ORGANIZATION | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
25 years | QUANTITY | 0.99+ |
4800 customers | QUANTITY | 0.99+ |
three years | QUANTITY | 0.99+ |
Rubrik | ORGANIZATION | 0.99+ |
half a percent | QUANTITY | 0.99+ |
99.5% | QUANTITY | 0.99+ |
less than half a percent | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
40% | QUANTITY | 0.99+ |
.5% | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
55 | QUANTITY | 0.99+ |
one day | QUANTITY | 0.98+ |
twice | QUANTITY | 0.98+ |
3x | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
two forces | QUANTITY | 0.98+ |
Pure One | COMMERCIAL_ITEM | 0.98+ |
five years | QUANTITY | 0.98+ |
44,000 parallel access points | QUANTITY | 0.98+ |
Cohesity | ORGANIZATION | 0.97+ |
200 companies | QUANTITY | 0.97+ |
tomorrow | DATE | 0.97+ |
North America | LOCATION | 0.97+ |
one application | QUANTITY | 0.97+ |
first approach | QUANTITY | 0.97+ |
45 | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
Pure | ORGANIZATION | 0.97+ |
once | QUANTITY | 0.96+ |
ACI | ORGANIZATION | 0.96+ |
300 more customers | QUANTITY | 0.95+ |
70 plus percent | QUANTITY | 0.95+ |
NVME | ORGANIZATION | 0.95+ |
billion dollars | QUANTITY | 0.95+ |
two and a half | QUANTITY | 0.95+ |
Kirk Skaugen & Sudheesh Nair - Nutanix .NEXTconf 2017 - #NEXTconf - #theCUBE
>> Voiceover: Live, from Washington, DC, it's theCUBE, covering .NEXT Conference. (upbeat music) Brought to you by Nutanix. >> We're back at Nutanix .NEXT, everybody. This is theCUBE, the leader in live tech coverage. This is day two of our wall-to-wall coverage of .NEXT Conf. Kirk Skaugen is here, he's the president of the Lenovo Data Center Infrastructure Group. Sudheesh Nair is the president of Nutanix. Gentlemen, welcome to theCUBE. I'm Dave Vellante, this is Stu Miniman. We're part of the nerd herd here at the conference. So Kirk, let's start with you. We've been talking to Nutanix all week. You guys got the great booth, we've been looking at your booth all week. Transform, last week you guys had a big conference. Lenovo, obviously undergoing major transformations, as are your customers and your partners. Give us the update, how's it going? >> Well, it was a big event for us. We've been working for about two and a half years since the acquisition of the IBM xSeries team. So we launched basically our biggest data center portfolio in history, about 14 new servers, seven new storage boxes, five new network machines, and, probably more importantly to our relationship, we announced two big new brands. So Think System is kind of for the traditional infrastructure, and then Think Agile, and our appliances with Nutanix, for hyper-converged infrastructure. >> You guys have been talking to analysts and your community about what I call choice. You know, you've got a lot of different choices of partners, of even now processor types, hypervisors, etc. So talk about how that's important to your partnership strategy, generally, and specifically unpack some of the Lenovo specifics. >> I think it is important to have a point of view when you're talking to customers nowadays. The problem is: is the point of view about your own company's thought process and Wall Street expectations, or is the point of view driven by what is right for the customer?
Take, for example, an SSD, a commodity SSD from Samsung or Toshiba. If you take that SSD and put it inside a server and try to sell it, you probably will get X dollars for it. That same SSD, if you put it inside a high-end SAN, you can probably take like 10X more than that, right? Where do you-- >> Those were the days. (laughing) >> The thing is, where do you think you will be going first? What will you be trying to sell first? The thing I like about Lenovo is that they're made to be efficient. That it is going to be a software defined world. But hardware does matter, reliability matters, support matters, and along with Lenovo, we are able to go to customers and completely re-transform, you know, sort of change their architecture without being caged by any sort of Wall Street expectation that goes counter to what is right for customers. >> Kirk, I know there are many milestones you talked about at Lenovo Transform. I think if I remember it, one of them was the 20 millionth x86 server is going to be shipping sometime in the next couple weeks. >> That's right. >> The Think Agile line, to kind of look at software defined, how does Nutanix fit into that? You've been OEM-ing them since before you went into this branding, so tell us how that came together into the new line. >> So I think we're celebrating this year 25 years in x86 servers, and so you're right, we are looking at a software defined world, and what I constantly hear is that Lenovo is getting pulled in because we don't have a legacy infrastructure of a big SAN business or a big router business, so we're kind of unencumbered by that, but we're shipping our 20 millionth x86 server in July, next month.
But with Nutanix, what we're basically doing is we're tightly integrating our management software with their Prism software, and we're looking at integrating some of the network topology work now with innovation, because rather than kind of a legacy network that people are used to, now, as we've moved to a hyper-converged infrastructure, some of those pain points move onto networking. But we've been innovating together now for almost two years, and I think we're crossing almost 300 customer deployments now, almost 200% growth since we've started. At least Lenovo's goal is we're going to be Nutanix's fastest growing OEM partner this year. >> So talk more about that innovation strategy because, you know, the general consensus is, well, it's x86, they're all the same. How do you guys differentiate from an innovation standpoint? >> Well, what we talked about at Transform is our legacy now is we're number one in customer satisfaction in Lenovo on x86 systems in actually 21-22 categories. And that's a third party survey that's done across like 700 customers in 20 countries. Number one in reliability. So we're building off of this infrastructure, off of a really strong customer base. What we're trying to do on Think Agile is completely redefine the customer experience. From the way you configure the system, we can now do configure to order in three weeks. Which we think is about half of what anyone else in the industry can do relative to our competitors. And then we're innovating down to the manageability layer, the networking stack, all of those pieces to really build the best solutions together. >> Sudheesh, there's an interesting two differing things if I look at Lenovo and your partnership. Number one is Kirk says they don't have any legacy, but one of the reasons you're in OEM with them is because they do have history, they've got brand, they've got channel, how do those come together in the partnership?
>> So remember, I think before HCI, servers used to be stateless machines, meaning you would move the VMs back and forth because the data lives somewhere else in the storage system. So what you expect out of the server, when it comes to reliability and serviceability, is very different. What we did with HCI, when we came on for the first time, is we took the reliable storage piece, sharded it into small segments, and moved them inside the servers. All of a sudden, the reliability of the server has become exponentially more important. Availability, serviceability, how you do things like firmware management, all those things become important now because your entire core banking application is running inside a bunch of servers; there is no SAN sitting behind protecting all of this. One of the reasons why Lenovo's XClarity is one of the first apps on our app store is because we want to make sure that customers have a fully integrated, soup-to-nuts experience of not just managing the product but also experiencing the day one and day two. Upgrades, replacements, failure replacement, all of those things. So between our relationship with Lenovo's hardware and engineering, plus the support, we are able to deliver a one plus one equals three experience for our customers. >> So Sudheesh, I heard almost 300 customers you're at. Could you give us a little bit of kind of either verticals or geography where you're being successful? >> What we've seen with Lenovo that is a little different from the rest of the business that we do is that the majority of the business is coming from large customers, and second, I would say the financial sector was where the biggest initial momentum seemed to be. And the repeat business is following the same pattern, that the customers who buy are coming back and buying again.
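The sharding idea described here — cutting the storage layer into small segments and spreading them, with replicas, across the server nodes themselves — can be sketched with a simple hash-based placement scheme. This is a hypothetical illustration, not Nutanix's actual placement or replication logic:

```python
# Minimal sketch of spreading storage shards across hyper-converged nodes,
# with a replica on a second node so a server failure doesn't lose data.
# Hypothetical placement scheme for illustration only.

import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]
REPLICAS = 2

def place_shard(shard_key: str) -> list[str]:
    """Deterministically pick REPLICAS distinct nodes for a shard by hashing its key."""
    digest = int(hashlib.sha256(shard_key.encode()).hexdigest(), 16)
    primary = digest % len(NODES)
    return [NODES[(primary + i) % len(NODES)] for i in range(REPLICAS)]

# Place six hypothetical disk segments and show where each copy lands.
placement = {f"vm-disk-segment-{n}": place_shard(f"vm-disk-segment-{n}") for n in range(6)}
for shard, nodes in placement.items():
    print(f"{shard}: primary={nodes[0]}, replica={nodes[1]}")
```

The replica on a second node is what stands in for the SAN's protection: if one server dies, its shards remain readable elsewhere, which is why per-server reliability and firmware hygiene suddenly matter so much.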
In fact, one of the largest financial institutions in the country, in New York, bought last quarter a decent size, a seven figure plus deal, and they'll probably come back and buy again this quarter. So that pick-up is happening really fast and customers are happy with the overall experience. And it's also about the quoting process, the shipping process that he talked about; these are all simple things, but these are extremely important in the customer buying experience. >> I think from our perspective, we operate in over 160 countries, and a lot of people don't realize we have over 10,000 support specialists on call with more than a 90% customer sat rating. So when we're bringing in Think Agile, what we're bundling now with Think Agile and the Nutanix appliances is premiere customer support, so you don't even go to an automated system, you go directly to a local language speaking person on the phone immediately, and you get one vendor to support you across your server, your storage, your networking, and the whole configuration. That has gotten customers like, for us, Jiffy Lube, Holloway, Beam Suntory, who's the third largest premium spirits vendor in the world, one of the largest Japanese auto-manufacturers, I mean, I think it's been across all verticals that we've seen success together. >> I was in Asia last week, two weeks ago, and the business there is tremendously picking up speed. It goes through the story, you know, they have local language support, local marketing, local channel enablement; those things matter significantly. Lenovo's very strong in all those areas. >> We live in a world that's data driven. Data is the new oil. You've got to monetize your data. You guys have big volumes, you have a lot of data. In relation to partnerships, in this day and age, what role does the data play? Is there an integration of data, is there a way to get more value, how are you getting more value out of the data that you share with your customers?
It started maybe with us working in China as well, in one of the areas, but this is an extremely important question; don't think of this as a hardware and infrastructure software play, this is about what customers want. In one area, for example, SAP. One of SAP's largest partners is Lenovo, and by partnering with Lenovo, we are now able to deliver, in fact, there is a specific product SKU that we've built for Lenovo HANA customers called Bridge to HANA, where we deliver a certified HANA platform on Lenovo along with the Nutanix software for production and testing running right next to that. By leveraging the Lenovo SAP expertise, the hardware expertise, and Nutanix's infrastructure expertise, the customers can have a single one-stop shop for analytics, ERP, and everything. Those kinds of experiences are what customers are looking for. >> I think one of the reasons people are coming to Lenovo is we're not trying to compete with them necessarily far up the stack like we would think some of our competitors are doing. But if you look at SAP, we're excited because we've had a relationship in software defined with SAP since probably eight years ago. We were actually blazing the trail, I think, with them on software defined, and we got rid of the legacy SAN out of that solution probably in 2010, started eliminating some of the costs associated with that. And now we're proud that SAP runs Lenovo, and Lenovo runs SAP. We're starting to pull some big customers together like V-Grass, which is one of the largest, fastest growing clothing manufacturers in China, but we're not trying to like hoard the data and use the data, or compete with our customers on data. >> Alright, guys, we're out of time. But just a sort of last question that relates to the future. Where do you guys want to take this? A couple years down the road, where are we going to see this partnership, what's your shared vision? >> You saw today, we moved from that hyper-converge to a multi-cloud world.
A multi-cloud world where we are redefining what hybrid cloud really means. There's a lot of work to be done to bring applications, infrastructure, and users together. And partners like Lenovo are how we are going to get there. >> Yeah, absolutely, I think this is just the beginning. We're looking at a composable world; hyper-convergence is one path along the way. We've been participating in public cloud, and now the world is moving into hybrid cloud. We've got great partnerships; I think we'll see strong growth with both companies for the next few years. >> Can't do it alone. Kirk and Sudheesh, thanks very much for coming to theCUBE, I really appreciate it. >> Thanks so much. >> You're welcome. Keep right there, buddy, Stu and I will be back with our next guest right after this short break. We're live from Nutanix .NEXT, we'll be right back. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Lenovo | ORGANIZATION | 0.99+ |
Sudheesh | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
Toshiba | ORGANIZATION | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Sudheesh Nair | PERSON | 0.99+ |
Kirk | PERSON | 0.99+ |
China | LOCATION | 0.99+ |
Asia | LOCATION | 0.99+ |
2010 | DATE | 0.99+ |
Lenovo Data Center Infrastructure Group | ORGANIZATION | 0.99+ |
Washington, DC | LOCATION | 0.99+ |
Kirk Skaugen | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
10X | QUANTITY | 0.99+ |
Stu | PERSON | 0.99+ |
July | DATE | 0.99+ |
last week | DATE | 0.99+ |
20 countries | QUANTITY | 0.99+ |
New York | LOCATION | 0.99+ |
20 millionth | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
25 years | QUANTITY | 0.99+ |
next month | DATE | 0.99+ |
700 customers | QUANTITY | 0.99+ |
one | QUANTITY | 0.99+ |
first time | QUANTITY | 0.99+ |
HANA | TITLE | 0.99+ |
SAP | ORGANIZATION | 0.99+ |
today | DATE | 0.98+ |
Think System | ORGANIZATION | 0.98+ |
last quarter | DATE | 0.98+ |
eight years ago | DATE | 0.98+ |
two weeks ago | DATE | 0.98+ |
three weeks | QUANTITY | 0.98+ |
five new network machines | QUANTITY | 0.98+ |
over 10,000 support specialists | QUANTITY | 0.98+ |
seven new storage boxes | QUANTITY | 0.98+ |
One | QUANTITY | 0.97+ |
over 160 countries | QUANTITY | 0.97+ |
90% | QUANTITY | 0.97+ |
second | QUANTITY | 0.97+ |
three | QUANTITY | 0.97+ |
seven figure | QUANTITY | 0.96+ |
about two and a half years | QUANTITY | 0.96+ |
Holloway | ORGANIZATION | 0.96+ |
this year | DATE | 0.96+ |
almost 200% | QUANTITY | 0.96+ |
single | QUANTITY | 0.96+ |
one area | QUANTITY | 0.96+ |
almost 300 customer deployments | QUANTITY | 0.96+ |
Day 3 Wrap-Up - Dell EMC World 2017
>> Narrator: Live from Las Vegas, it's theCUBE. Covering Dell EMC World 2017, brought to you by Dell EMC. >> Welcome back everyone, we are here live in Las Vegas for a wrap up on day three of three days of wall-to-wall theCUBE coverage at Dell EMC World, our eighth year covering EMC World, now our first year covering Dell EMC World. It's part of the big story of Dell and EMC combining entities, forming Dell Technologies, all those brands. I'm John Furrier with SiliconANGLE. My co-hosts this week, Paul Gillin and Keith Townsend, the CTO Advisor. Guys, great week, I thought I'd be wrecked at this point. But, I mean, a lot of energy here, but we heard every story. We heard all the commentary, we heard the EMC people trotting in their customer references. We heard the executives on message. Bottom line, let's translate it for everybody. (laughing) All the messaging, pretty tight. >> Yes. >> All singing the same songs. My takeaway real quickly on messaging: they want to portray that this is all good. Everything's fine. No icebergs ahead. We are going to help customers, try to move from speeds and feeds to a bigger message. Not getting there yet. Still speeds and feeds. 14 (mumbles), 14G, that's kind of the high level, thoughts? >> This company wants to dominate. I mean, what we heard again and again these last few days is number one, number one. They want to own the top market share in every market in which they play, and they have a broad array of products to do that. They have a huge mix of products, maybe too many products, with some overlap, but that's okay, but they clearly are trying to blanket or carpet bomb those markets where they think they can win. Interestingly, there are some markets like big data, like software or cloud infrastructure, where they are choosing not to be a big player, and that's okay too. It means they are focused. >> John: Keith, your thoughts. >> So, again, I agree with you, tight marketing. They wanted to get out this message.
I think at the press and analyst get-together at the beginning, they emphasized 310 analysts and press, from all over the world. They get out the message. They get these guys and gals in here to cover that message: that Dell Technologies, Dell EMC, is the leader in this space. You know what? When big mergers like this happen, I can't think of one that happened well. They are usually rocky to begin with. We haven't seen those rocks at the beginning. We haven't seen that at the show. It seems like the messaging has been consistent, the customers more or less get it, and we can't detect the chinks in the armor, so I think they did a great job of getting that out there, and portraying the strength of the brand, throughout the show. It was a great show for them. >> They have a good story. Their story, better together, obviously that's the whole theme. My impression is, in weaving through all the messaging, is generally, authentically, the people are pretty happy with it. I think EMC people have been trying to break out of this, we're a storage company, you know, and I won't say they had a little bit of VMware envy, but VMworld events were always different than the EMC World, so those cultures weren't necessarily too divergent, but different. You had storage guys, and you had VMware developers, right, so I think EMC was always trying to break out and be bigger, and just couldn't get there. Dell wanted to be more enterprise, right, so I think the two together actually is better, in my opinion. Now will it work? I still think your post is still open. They merged for the right reasons, but look, they're not done. They got a boat-load of work to do.
>> I think they're aware, to Keith's point, they are aware that history is against them, that mega mergers don't work, never have worked in this industry, and so that creates a lot of pressure to make this one work, and the good thing about that for both companies is that they're aware of what went wrong in the past. I mean, we had Howard Elias on this show the first day. Howard Elias went through two of the worst tech mega mergers in history with Compaq Digital and HP Compaq, knows where a lot of those landmines are, so they seemed hyper aware of getting everyone on message, getting everyone talking positively about synergy, and as you said John, the language was consistent from the start. >> Alright, I want to ask you guys a pointed question on that point cause it kind of brings out the next question. Management team, do they have the chops, because, to your point, history's against them, okay? We sat down with Michael Dell, founder, lead entrepreneur, still at the helm. He's a billionaire. They're private, so no shot clock on the public markets. Marius Haas, he's a pro. Howard Elias, a pro. Goulden, he does his thing. On and on and on. I think they got a pretty deep bench. I mean, your thoughts guys? >> So, let's think about that. How many bad mergers has EMC gone through? Data domain >> Paul: Home run. >> Incredible. >> Paul: Home run. >> Home run. >> Paul: DSSD. >> DSSD, well, not so much, but that wasn't really that big of a merger. >> They kind of cleaned that up pretty quickly. >> Yeah, they did, what doesn't work they get it out quick, so great management team understands the complexities of mergers. VMware merger, or acquisition, probably one of the best in the history of tech, so the management team has the chops to understand where the value is added, extract that value, and expand it. >> That's a great point. And they know when to leave stuff alone too. 
>> John: Yeah, engineering lead but they're also, because we heard Jeff Boudreau on talking about the storage challenges. He's like, we know what to do, we took the lumps trying to, late to the game on Flash, we're not going to be late to the game in these other areas, and he is very hyper focused. But the other thing that we didn't talk about is that EMC has just been an impeccably, credible sales organization. They know how to sell, they know how to motivate sales people. They know how to tell the tell the enterprise sales motion, which is the biggest challenge in today's industry. Every company I talked to, startup to growing IPO is we need better enterprise sales. Look at Google. Look what they're doing in the cloud. They are struggling because they have great tech and horrid sales people. They are hiring young people doing phone work. Enterprise sales is a tricky game. >> Arguably the best enterprise sales force on the planet was EMC. I mean, these are the guys who would get on a plane at midnight, would charter a plane at midnight, to get to the customer's site to fix a problem, and no other company does it like that, and Dell has a lot to learn from that. If Dell can really take that knowledge and that culture and absorb it into their own enterprise sales force, they are going to have huge opportunity with their server division. >> I want to take a minute just to thank our sponsors for their awesome CUBE coverage. You guys did great. Dell EMC, Toshiba, Virtustream, Cisco, Dato, Nutanix, Druva, and VMware. Thanks to your support, we had two CUBEs covering VM World, 20 plus videos a day, for 3 straight days. All that's on youtube.com/siliconangle. Of course, siliconangle.com for all the journalism and reporting. Wikibon.com for all the great research, and also a shoutout to Keith at @CTOAdvisor. Check him out on Twitter, always part of the conversation, super influential. Guys, great job this week. Just high level marks. My take away? 
Hyper converged, big time focus on these guys. VMware is the glue, Hybrid Cloud, and they're defensively using Pivotal to hold the line on Amazon, so thoughts on that point? I see you rolling your eyes. >> I just got out of an interview with James Watters, the SVP within that business unit. Pivotal is a key part of this. You know Michael has stressed on theCUBE, on Twitter, how important Pivotal is to their long term success. One of Dell's challenges, Dell EMC's, and this is not just Dell EMC, it's infrastructure companies throughout the landscape, is getting out of that conversation with their VP of infrastructure, getting into the offices of the CIOs, CEOs, CMOs, and having these business conversations, and it's going to take a Pivotal type of solution to get that done. I thought Michael made a very great point about the white glove services, that's basically their service organization, is basically the older EMC services organization that's used to getting on a flight, solving the problem. Whatever the original statement of work was, they are willing to tear that up, and get down, get dirty, and get that done. They need to translate that-- >> The question for you then is this, without Pivotal, they have no play for the app developers? >> Keith: None. >> Amazon would mop that up and they'd have no position, so I would say it is certainly a placeholder, a good one, I'm not going to deny that. The question is how big is that market for them. Can they get there, can they hold the line on Pivotal and bring in some resources and cavalry to keep that going, thoughts? >> This is where VMware comes into play. VMware has the relationship with the software layer at least, and they have a great story to tell. They need to get in front of the right people and tell that story, that CrossCloud story of being able to develop using CF and then move that to any cloud using NSX. Great story, but John, to your point, they have to get into the right rooms and have the right conversations, >> Yeah.
>> Keith: That's a tough thing to do. >> I've also got to give them some time. I mean, this merger happened eight months ago. I think it's pretty remarkable what they have pulled off here in such a short time, and the developers are probably not their first priority right now. >> Alright, so we are going to do the metadata segment of our wrap up, which I just made up since it's such a good name, metadata, in the spirit of surveillance. What metadata can you pull out of your interviews, guys, that's a tea leaf that we could read, and just nuance points, I'll start. Pat Gelsinger talked about Pivotal, sharing in between the conversations, kind of weaved in, yeah, we had to spin out Pivotal, but I could almost see it in his eyes, he didn't say it specifically, but he's like, we shouldn't have sold it, right? And they had to because he said he had to work on the foundational stuff, get NSX done, get that right, but you can almost see that now as, I'd like to bring that back in, although as a separate company. To me, I find that a very interesting data point, that that actually makes a lot of sense to your point about VMware. That might be an interesting combination. Why take Pivotal public? Roll it into VMware. >> Yeah, I think that is going to be an interesting space to watch over the next few months. VMware and Pivotal have started to once again come back together with solutions. This NSX, CrossCloud talk makes it very compelling. It's hard to predict Dell EMC being relevant long term. They understand the value short term. They have a short rope to take advantage of this cross selling between the Dell and EMC customers. They can grow this business, get revenue short term, but there will be a cliff where they need to make that transition. Cisco is trying to make that transition into a services company, a software company, and it's hard to turn down one knob and turn the other one up. We'll see if Pat, Michael Dell, and the team have the chops to get it done.
>> Cisco has to endure the public markets while they are doing that, which is one advantage Dell has. >> A data point that you can extract, that you take away from this? >> Synergy, synergy. I mean, when two companies this big come together, you're looking at a lot of product line overlap. I came out of this, though, thinking that there really isn't that much product line overlap. You've got a company that's strong in the mid market, with the small companies, and a company that's strong in the enterprise, storage, servers, not a lot of overlap there. The big question for me, so I think the synergy question is, this merger makes sense from that perspective, and the big question is software, what are they going to do with those software assets, and to your point, the future is going to be software-defined everything, and that's not a story they're telling yet. >> Keith, an extracted insight that you observed, that was notable, that you kind of picked out of the pile of the interviews. Anything notable to you? Something obscure that made you go, wow, I didn't know that, oh, that's a good piece of the puzzle to put together. >> You know what, just the scale of, you look at the merger, 57 billion dollars, and on paper you are like, okay, that's interesting, but a lot of the numbers coming out, you know, we talked to the senior VP of marketing and he says, you know, my guys are making bank, actually that's to paraphrase him. You said that, John, that they are making bank, and one of the things that I worried about was the culture, the sales culture between Dell and EMC. Dell sales culture, very web based, very, you know, I had a Dell rep and there was not an awful lot of value add, EMC-- >> Paul: Value add. >> The value add, and those guys earned their money, and bringing those two together and making sure that customers don't miss a beat and still get that EMC value, but Dell is able to maintain that same cost structure, I thought that was a really complicated thing to do.
It seems like they are executing really well on that, and I thought from a customer's perspective, you want your supplier to make money and you want it to not be too disruptive, but you know, you want to see some value. >> Great point, that was one of the highlights of my takeaways, Marius Haas' interview around sales and comp and structure. They are used to a lot of bank, those sales guys, and now it's like, hey, we're going to give you a haircut, what? I was about to make a million dollars on commissions this year. >> This merger will not work unless the sales organization is in sync. >> Other notables for me, just what jumped out at me, that kind of made me go, that amplifies a point, that's memorable, is Michael Dell's interview hits home the point of entrepreneur founder, lead guy, and there are only three left in the industry, Ellison, Dell, and Bezos, in my mind, that are billionaires that are actively, not mailing it in, they are actively driving their business, have a great ethos and culture that is creating durability. I find that a key point for me, that was a moment. I think he does sell Pivotal a little too much, which gives me a little red flag, like hmm, why is he pushing Pivotal so much, what is he hiding, but that's a different story. Michael Dell, founder. Gelsinger shared some personal commentary around his personal life. 2016 was the hardest year of his life. >> Keith: That was a mean story. >> Personal and business. Almost got fired. Remember last year? >> Yeah. >> Pat Gelsinger, you're fired. So, he had a tough year, now he's kicking ass, taking names, valuation's on the rise. That jumped out at me. And finally, the little nuance in this merger is the alliance opportunity. Dell had the Intel, Wintel, Microsoft relationships from day one, that history, Intel was on stage. EMC's had it, but not at the deep level that Dell did. So I see the alliance teams really grooving here, so that's going to impact channel marketing, SIs.
I think you are going to see a massive power base, to your scale point, around alliances in the industry, the ecosystem. It's either going to blow up big or blow up bad. Either way it's high-octane power, Intel. >> Keith: It is a big bet. >> It's a big bet. Those are my points. Anything that jumped out at you, final thoughts, interviews? >> Jeff Townsend threw out an interesting statistic, 70% of the traffic on the internet will be video by 2020. I've never heard that one before, but that has some pretty interesting implications for how infrastructure has to manage it. >> Yeah, great for our business. We're doing video right now. Keith, anything that jumped out at you, anything else? >> The scale of this show compared to, I've been at Dell World, I've been to EMC World. The energy is different here. I can say that for sure, from the EMC Worlds and the Dell Worlds that I've been at. Customers, I think, are wide eyed. I've been to plenty of VM Worlds. It doesn't quite have the flavor of a VM World, but I think customers are starting to understand the scale of Dell EMC, the entire portfolio. You walk the show floor, you're like, wow, I didn't know-- >> John: The relevance has increased. >> Just little bits of this larger Dell Technologies that customers are picking up on, that they're keying on, that there's value there. >> The 800 pound gorilla, the very relevant impact, people are taking notice. >> If you are a one product Dell customer or a one product EMC customer and you are coming to the show for the first time, I think you're a little bit wowed. >> Alright, guys, great job. Keith, great to have you host theCUBE. Great job, as always. Really appreciate you bringing the commentary to theCUBE. Great stuff. >> Always great being here. >> Paul, great editorial, great insight, great questions. Great to work with you guys. Great to the team. Thanks to our sponsors.
Go to siliconangle.com, wikibon.com, and go to youtube.com/siliconangle and check out all the videos and the playlists, more coverage, great. Thanks for watching our special coverage of Dell EMC World 2017. See you next time.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Michael | PERSON | 0.99+ |
Keith | PERSON | 0.99+ |
Toshiba | ORGANIZATION | 0.99+ |
Michael Dell | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Jeff Townsend | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Pat Gelsinger | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Marius Haas | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Gelsinger | PERSON | 0.99+ |
Compaq Digital | ORGANIZATION | 0.99+ |
James Watters | PERSON | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
Keith Townsend | PERSON | 0.99+ |
wintel | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Jeff Boudreau | PERSON | 0.99+ |
2016 | DATE | 0.99+ |
Howard Elias | PERSON | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
Virtustream | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Dato | ORGANIZATION | 0.99+ |
two companies | QUANTITY | 0.99+ |
70% | QUANTITY | 0.99+ |
Marius Haas' | PERSON | 0.99+ |
Paul | PERSON | 0.99+ |
310 analysts | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
eighth year | QUANTITY | 0.99+ |
John Furrier | PERSON | 0.99+ |
800 pound | QUANTITY | 0.99+ |
57 billion dollars | QUANTITY | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
2020 | DATE | 0.99+ |
3 straight days | QUANTITY | 0.99+ |
eight months ago | DATE | 0.99+ |
Druva | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
both companies | QUANTITY | 0.99+ |
VM World | EVENT | 0.99+ |
three days | QUANTITY | 0.99+ |
20 plus videos a day | QUANTITY | 0.99+ |
Pat | PERSON | 0.99+ |
Paul Gillin | PERSON | 0.99+ |
siliconangle.com | OTHER | 0.99+ |
Craig Bernero, Dell EMC & Pierluca Chiodelli, Dell - Dell EMC World 2017
>> Narrator: Live from Las Vegas, it's the Cube. Covering Dell EMC World 2017. Brought to you by Dell EMC. >> Okay, welcome back, everyone. We are here live at Dell EMC World 2017, our eighth year of coverage with the Cube. Formerly EMC World, now Dell EMC World. This is the Cube's coverage. I'm John Furrier, with my cohost, Paul Gillin. Our next two guests are Craig Bernero, who is the senior vice president and general manager of the midrange and entry storage solutions at Dell EMC. And Pierluca Chiodelli, VP of appliance management at Dell. Guys, welcome to the Cube. Great to see you guys. >> Likewise. >> Thank you. >> Give us the update. We're hearing a ton of stories, 'cause the top stories, obviously the combination, merger, acquisition, whichever side you want to call acquired who. But all good, good stories. I mean, some speed bumps, little bumps along the way, but nothing horrific. Great stories. Synergies was the word we've been hearing. So you've got to have some great growth with the Dell scale. Entry-level touchpoint growth, high end, get more entry level, give us the update. >> Yeah, absolutely. So again, first and foremost, I wanted to call out to all our customers and partners that are critical for the success that we've seen. No doubt, and actually, we've committed better together one fourth, which is why you saw two of our launches, both on the Unity line and the SC line, which historically were part of EMC and Dell respectively prior. And the main point is, a lot of the feedback we got from customers was they really respected and appreciated our customer-choice-first philosophy. But also understanding that there's a clear demarcation where each of those technologies plays in its sweet spot-- >> Well, how are you demarcating them right now?
>> Absolutely, so traditionally, pre the EMC acquisition, what we actually ended up determining is, when you define the midrange market segment we were looking at, it was more in the upper range, the upper level of it, where they're driving value from a technology aspect with their Unity product set. We are focusing heavily on the all-Flash market segment, too, which is one of the major refreshes we did here. And then in the Dell storage, which has very strong server affinity, a direct-attached construct at the entry through the lower end of the midrange band, there were actually some very clear swim lanes of where each of the respective products played to their strengths as well. And so as a result of that, we've really taken that to heart with our hybrid offering on the SC side to get your economics. Again, effectively, our 10 cents per gig, as outlined on Monday, as far as the most affordable hybrid solution on the market. And then you go to the upper, premium level of value capability, with all-Flash to deal with your performance workloads and other characteristics, too. >> Pierluca, talk about the overlap, because we addressed that, we hit it head on. Turns out, not a lot of overlap. But as you guys come together, we just had Toshiba on earlier. Flash is obviously a big part of the success. Getting those price points down to the entry, midrange, enabling that kind of performance and cost is key, but as you look at the product portfolio, where are the areas you guys are doubling down on, and where is some of the overlap taken care of, if any? >> Yeah, so let me tell you, the first thing that is very important, and we have it in the show, is the reaffirmation of the investment in the two products. So we had a panel yesterday, a panel with 120 customers. 
Divided 50/50 between the legacy, the heritage Dell and the EMC customers, and the amazing thing there was that Flash adoption is very strong, but they also want to have it economical, so Four I is very strong. So this really fits our two products. Because if you remember, Compellent was created as the best storage for data progression. And we doubled down on Unity, now that we have a completely full line of Unity products today. So on the other side, on the SC line, we reaffirmed the completion of the family with the new 5020. That provides more performance, more capacity, much more resilience. And we'll drive our 4020 customers to a very new product. So yes, some people before, they think, "Oh, these guys, they have a lot of overlap." But actually, we have two amazing products that play together in this market. >> And talk about the customer dynamic, 'cause that's interesting. Almost the 50/50 split, as you mentioned. They've got to be, I mean, they're probably like, "Bring on the better product." I'm not hearing any revolts. Right, no one's really revolting. Can you just share the perspective of some of the insight that they're telling you about, what they're expecting from you guys? >> So I think it's very fun to be in this position where we are right now, where we have such a good portfolio of products, where customers, the company, people inside of our company start to learn how these products work. Because you sell what you know, right? Or you use what you know. People, right, try to do the same things every day. So we are forced now to look outside of our own product and say, "You know, we have two products. What is the benefit?" And now we spark this discussion with the customer. And we had before a tremendous amount of common customers, right?
The customer, they have a preference, but now they say, an EMC customer says, "Maybe I have a huge case for an all-Flash upgrade with Unity." And the SC customer says, "Oh, maybe now I can run this application on Unity or SC, or open up to different things." What we say is, this is the line I use: we are the top one now because we can solve any use case. Right, if you look at our competitors, they try to cover everything with one product, right? >> John: You can mix and match. >> Yes, you can mix and match, and we have a very clear differentiation between the two. And we said, "SC: drive economy, with the fact that we can have dedupe and compression on spinning media." Unity, optimized for Flash. >> Is there any incompatibility between the two? Do the two platforms work pretty seamlessly together? >> Pierluca: Yes. >> Yeah, so I'm going to expand a little further on that. So one of the things we did highlight as part of the all-Flash offering for Unity, the 350 through the 650, the four new entry models, customers were surprised, you know. And there were some questions on the level of innovation we're driving. A year later, getting a full platform refresh was a very big surprise for customers. It's typically two years, 18 months, for other vendors in the field, and they're like, "You just launched the product last year, and you already have a refresh." And we did that 'cause we listened to customer requirements, and for all-Flash, the performance is absolutely critical, hence the controller upgrade. We went from a Haswell to a Broadwell design. We actually added additional core capabilities and memory, all with the architecture built to do an online data-in-place upgrade that we'll be driving later in the year, too.
So, and the SC 5020 that we announced too, as a separate product line complementing it, as Pierluca stated. But the third area that hasn't necessarily been amplified, but customers have raved about seeing in the showroom area, is our Cloud IQ technology, which is actually built off of Cloud Foundry. That's a value of the portfolio of the company and a strategically aligned business. And actually, it does not only preemptive and proactive monitoring; we're taking that from Jeff Boudreau's keynote today, that whole definition of autonomous and self-aware storage. Well, in midrange, 'cause of all the use cases and requirements, we're driving that into it. And actually, we have compatibility between Unity and SC in Cloud IQ. As that one pane of glass, it's not element manager, but more to take that value to a whole new level. And we're going to continue to drive that level of innovation beyond, not just through software, but clearly leveraging better-together talent to really solve some key business needs for customers. >> As David Goulden always says in the Cube, it's better to have overlap than holes in a product line. So that's cool that you guys got that addressed, and certainly mixing and matching, that's the standard operating procedure these days for a lot of guys in IT. They know how to do that. The key is, does it thread together? So, congratulations. The hard question that I want to ask you guys, and what everyone wants to know about: where are the customer wins? Okay, because at the end of the day, you could be number one at whatever category on the old scoreboard. >> Craig: Sure. >> The scoreboard of customers is what we're looking at. Are you getting more customers? Are they adopting, are they implementing a variety of versions? Give us the updates on the wins and what the combination is of Dell EMC coming together. What has that done for sales and wins? >> Yeah, so there's a public blog I posted for Dell EMC World, and it's about the one-two punch with midrange storage.
>> John: What was the title of that blog post? >> It was basically a one-two punch, our midrange storage. And I'll provide you the link in a follow-up. >> John: I'll look at it later. >> The reason we preemptively provided that was, the biggest question I would get from customers was, which product are you going to choose? And our point was, both, right? Both products, the power of the portfolio. We don't need to choose one. Our install base on both those technologies is significant. But in that post, I also did quote some of the publicly available IDC data, which showed us in our last quarter, in Q4, where you compare Q3 to Q4, we actually had double-digit quarter growth for both Unity and SC, our primary leading lines in both portfolios, which actually allowed us to get effectively back into the midrange market share segment. Now that's for purpose-built. 
That reflects a very positive trend for the Dell EMC midrange storage portfolio. I'm quoting directly from your blog post: one-two punch drives midrange storage momentum. >> Craig: Correct. >> And it's not only the storage, right? I've been with a very big customer of ours. I was telling an analyst this morning, it's amazing to see the motion of the business that we can do now that we are Dell EMC. So being a private company in one sense allows us to do creative things that we didn't do before. So we can actually position not only one product or two products, but the entire portfolio. And as you see, with the server business, the affinity that some of the storage has with the server, we can drive more and more adoption for our workloads. 
Just quickly, how is your channel reacting to all this? Are they fully on board, do they understand? Are they out there selling both solutions? >> 100%, we put a lot of investment into our channel enablement across the midrange storage products in the portfolio as well, 'cause that's the primary motion that we drive as well.
And that allowed us to actually enable them for success, both in education enablement and, clearly, proper incentives in play. They're very well received. The feedback we've gotten has been overwhelmingly positive. And we've been complementing that more and more with constant refreshers of not only our technology, but sharing roadmap delivery so that they can plan ahead as that storage is used. 
I asked Marius Haas and David Goulden the question, and they both had the same answers. It's good to see them on the same page. But I said, you know, where are the wins? And they both commented that where there's EMC storage, they bring more Dell in. Where there's Dell, they bring more EMC storage in. 
Yes, that's why they judged this with this customer. The new business motion that we can now propose: we have a very loyal customer from Arita GMC, for example, but now we can also offer servers, software-defined on top of all that, and the storage, right? And you can enter from the other one, from the server, and position now a full portfolio of storage. >> Alright, I'm going to ask you a personal question. I'd like to get your reactions. Take your EMC hat off for a second. Put your industry participant, individual hat on. What's the biggest surprise from the combination, from your area of expertise and your jobs, that you've personally observed? Customer adoption, technology that wasn't there, chaos, mayhem, what? >> Yeah, so I'll comment first. I think, I mean, recognizing the real power of global scale, and what I mean by that is the combined set. So from an organization and R and D investment, being able to have global scale, where you have engineering working literally 24 by five, right, based on effectively a follow-the-sun model, that's how you're seeing that innovation engine just cranking into high gear. 
And that was further extended with the power of the supply chain; bringing the innovation together has been, in my opinion, super powerful, right? 'Cause a couple of customers had shared with me, it's like, my concern if I go with a startup is that it may not be in business, relative to the supply chain leverage and the level of innovation, breadth, and depth of products that we have. 
Craig, that's a great point. Before we go to Pierluca, I just want to comment on that. We're seeing the same thing in the marketplace. A lot of the startups can't get into the pure storage play because scale requirements are now the new barrier to entry, not necessarily the technology. >> Exactly. >> Not necessarily the technology, so that kind of reaffirms it; that's why the startups are kind of doing a lot of that data protection, white space stuff. And their valuations, by the way, are skyrocketing. Go ahead, your comment, observation that surprised you or didn't surprise you, took you by storm, what? >> I need to say that I'm living a dream in this moment, because it's only a few times in life that you can experience a transformation. And you have the ability, actually, in the role that I have right now, to accelerate this transformation. And that's not a common thing to do in a company that is already established. So this shape, this coming together, gives you more and more opportunity. So I'm very excited to do what I'm doing, and I love it. 
>> Injection of the scale, and more capabilities; it's like, go to the gym and you're like pumped up, you're in shape. >> Actually, I started to go to the gym after 20 years. (laughing) >> It's like getting a good meal. You're Italian, you appreciate a good buffet of resources, right? >> That's right. >> Dell's got the gourmet-- >> You know, every day, I find something new, some product that I didn't know, something that we did, innovation that we have in the company that we can actually use together. It's very, very exciting. 
>> And the management teams are pretty solid. They didn't really just come in and decimate EMC. Essentially, it was truly a combination. Some say that EMC acquired Dell, some say Dell acquired EMC. But the fact that it's even discussed shows a nice balance, in terms of a lot of EMC at the helm. Its great sales force, great commercial business with Dell, very well played, I think. You guys feel the same way? >> I appreciate that, and couldn't agree more. And I think it shows as you look at business results and even at an employee satisfaction level. We continue to see that being record high, 'cause there's always that uncertainty, but the interesting piece is people have really been jazzed based on the opportunity ahead. 
>> Alright, we're done complimenting. Let's get to the critical analysis. What's on the roadmap? >> Craig: A lot. >> Tell us what's coming down the pike. I know you privately do your earnings call, but you guys have been transparent on some of the things. What can you say about what's coming out for customers? What can they expect from you guys in the storage? >> I'll let Pierluca take that; he runs the product management team. He drives that every day. >> So I cannot say much about the things that are coming. >> Share all, come on. You're telling, just spill it out. Come on. You and your dream, come on, sell it. >> We have only 20 minutes, so, really, as I said, we announced the 5020, right; we added the 7020 in August. We are planning to finish the lineup of the new family of SC for sure. We announced the ability to tier to the Cloud; we're going to expand that. Also, we announced a full new family of all-Flash Unity. So we're going down that trajectory to offer more and more. And we are going to be very bold to offer also upgrades from the old gen to the new gen, non-disruptive upgrades, and also online upgrades. So it's a very, very beefy roadmap that we show with our customers in the NDA sessions. 
I need to say the feedback is tremendous, and to your point at the beginning, what is the ecosystem? How do you integrate the thing? You're going to see more and more, for example, the UI, the experience for the customer, being the same. So the experience from the UI perspective-- >> Paul: Simplicity. >> Yes, simplicity. >> Paul: Simplicity is the new norm. >> Cloud IQ is key, but also going between the products that have the same kind of philosophy. >> Hey, I always say, it's a great business model: make things super fast, really easy to use, and really intuitive. Can't go wrong with that triple threat right there. So that's like what you guys are doing. >> Yes. >> Absolutely. >> Guys, thanks so much for coming on the Cube and sharing insight and updates. Congratulations on the one-two punch and the momentum and the success. That's the scoreboard we look at on the Cube. Are customers adopting it? Sharing all the data here inside the Cube, live in Las Vegas with Dell EMC World 2017; stay with us for more coverage after this short break.
Stanley Toh, Broadcom - ServiceNow Knowledge 2017 - #Know17 - #theCUBE
(exciting, upbeat music) >> (Announcer) Live from Orlando, Florida. It's theCUBE, covering ServiceNow Knowledge '17. Brought to you by ServiceNow. >> We're back. Dave Vellante with Jeff Frick. This is theCUBE and we're here at ServiceNow Knowledge '17. Stanley Toh is here, he's the Global IT Director at semiconductor manufacturer Broadcom. Stanley, thanks for coming to theCUBE. >> Nice to be here. >> So, semiconductor, hot space right now. Things are going crazy and it's a good market, booming. That's good, it's always good to be in a hot space. But we're here at Knowledge. Maybe talk a little bit about your role, and then we'll get into what you're doing with ServiceNow. >> Sure. You're right. Semiconductor is booming. But we don't do anything sexy. Everything is components that go into your iPhones and stuff like that. They do the sexy stuff. We do the things that make it work. So, I'm what we call the Enterprise and User Services Director, so basically anything that touches the end user, from the help desk to collaboration to your PC support desk, everything is under us. Basically anything that touches the end user, even onboarding, and now, with the latest, we actually moved our old customer support portal over to ServiceNow CSM. >> Okay, so what led you to ServiceNow? Maybe take us back, and take us through the before and the after. >> Okay. Broadcom Limited, before we changed our name to Broadcom, we were Avago Technologies. We are very cloud centric. Anything that we can move to the cloud, we moved to the cloud. So we were the first multi-billion dollar company to move to Google, back in 2007. That was 10 years ago. And we never stopped since. We have Okta, we have Workday. And if you look at it, all this cloud technology works so well with ServiceNow. And ServiceNow is a platform that has all the APIs and connectors to all these other cloud platforms.
So, when we were looking and evaluating, first as just the ITSM replacement, we selected ServiceNow because of the ease of integration. But as we got into ServiceNow, and as we learned ServiceNow, we found that it's not just an ITSM platform. You can use it for HR, for finance, for legal, for facilities. Recently, probably about six months ago, we launched the HR module. And then three weeks ago, we went live with a CSM portal for the external customer. >> When you say you go back to 2007 with Google, you're talking about what, Google Docs? >> Everything. >> Dave: Everything. >> Email, calendar, docs, sites, Drive, but it was unknown. >> Dave: All the productivity stuff. >> Everything. >> Dave: Outsourced stuff. >> They were unknown then, >> Jeff: Right, right, right. >> And it's a risk. >> So what was the conversation to take that risk? Because obviously there was a lot of concern at the enterprise level on some of these cloud services beyond test/dev in the early days. Obviously you made the right bet, it worked out pretty well. (Stanley laughing) But I'm curious, what were the conversations and why did you ultimately decide to make that bet? >> Okay. So 2007 was just after the downturn. >> Jeff: Right. >> So everyone was looking at cost, at supportability. But at the same time, the mobile phone, the smartphone, was just exploding in the market. So we wanted something that was very flexible, very scalable, and very easy to integrate, plus something that also gives you mobility. So that's why we went with Google as the first cloud platform, but then we started adding. So right now, we can basically do everything on your smartphone. We have Okta as our single sign-on. From one portal, I go everywhere. >> Dave: Okay, so that's good. So you talked about some of the criteria for the platform. How has that affected how you do business, how you do IT business? >> See, IT has always been looked upon as a cost center. And we are always slow, legacy systems, hard to use, we don't listen to you.
(Jeff laughing) >> Dave: What do those guys do? >> You know, why are we paying those guys, right? And then you look at all the consumer stuff. They are sexy, they are mobile, they have pretty pictures. Now all your internal users want the same experience. So, the experience has changed. The old UNIX command line doesn't work anymore. They want something touch, GUI, mobile. They want the feel, the color, you know. >> That might be the best description (Stanley laughing) of the consumerization of IT, Dave, that we've ever had on theCUBE. >> It's really honest. Coming from an IT person, it is, it is honest. And now you've driven ServiceNow into other areas beyond IT. >> Stanley: Yes. >> You mentioned HR. >> HR. We went live six months ago. >> Okay. And these other areas, are you thinking about it, looking at it, or? >> So we are also looking at legal, because they have a lot of legal documents and NDAs and stuff like that. And ServiceNow has a very nice integration to DocuSign and Box. So we are looking at that. But the latest one, which we went live with three weeks ago, is the CSM, the customer support management portal. And that one actually replaced one of our legacy systems that had a stack of sixteen applications running. We collapsed that, and went live on ServiceNow CSM three weeks ago. >> And what have been the two impacts - the business impact and, I'm curious, the culture impact? You sort of set it up with the attitude. We had fun with it, but it's true. What's the business impact? And what has the cultural impact been? >> The last few years, we have been doing a lot of acquisitions. So we have been bringing in a lot of new BUs - business units. And they want things to move fast, and we want to integrate them into one brand. So speed and agility are key when you do acquisitions. That's why we are moving onto a platform where we can integrate all these new companies easily. We found that in ServiceNow, and we can integrate them.
So for example, when we acquired Broadcom Corporation, they had 18,000 employees. We onboarded them on day one, and usually when you do an acquisition, they don't give you the employee information until the last minute. Two days is all I need to bring them all on, onboarded into my collaboration suite. I only need the information two days ahead, and on day one, we turn it on and they are live. Their information is in, they have an email account. All their information is in ServiceNow. They call one help desk, they call our help desk, they get all the help and services. So it's fully integrated on day one itself. >> And you guys also own LSI now, right? >> Yes, LSI. >> Emulex? >> Emulex, PLX. >> PLX. >> The latest acquisition is Brocade, which we will close in the summer. And then, the rumored Toshiba NAND business. So, yeah, we are doing a lot of acquisitions. >> Yeah, quite a roll-up there. >> Correct. So as you can see, they are all very different companies. So when they come in, they have different cultures. They have different workflows, they have different processes. But if you integrate them into a platform that we are very familiar with right now, with the consumerized look and feel, it's very easy to bring them in. >> And that is the cultural change that has occurred. >> Yes, it's huge. >> So do people love IT now? >> They still hate IT. (Jeff and Dave laughing) They still say IT is a cost center. But right now, they are coming around. They see that we are bringing value to them. So right now, IT is not just there to provide you the basics. IT is there to enable the business to be better and more competitive. >> A true partner for the business. >> Yes, correct. >> Stanley, thanks very much for coming to theCUBE. It was great to hear your story, we appreciate it. >> Stanley: Thanks for having me. >> You're welcome. All right, keep it right there, buddy. We'll be back with our next guest. This is theCUBE, we're live from ServiceNow Knowledge '17. We'll be right back. (upbeat music)
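Editor's note: the day-one onboarding workflow Stanley describes - employee data arriving two days ahead, accounts and records live the moment the deal closes - is the kind of bulk provisioning ServiceNow's public REST Table API supports. The sketch below is a hypothetical illustration of how such a script might shape its requests; the instance name, hire data, and choice of fields are assumptions for illustration, not Broadcom's actual integration.

```python
# Sketch: building day-one onboarding requests for ServiceNow's REST
# Table API. Instance name, field names, and hire data below are
# illustrative assumptions, not Broadcom's actual integration.

def user_record(name, email, department):
    """Map a new hire onto common sys_user table fields."""
    return {"name": name, "email": email,
            "department": department, "active": "true"}

def table_api_url(instance, table):
    """Endpoint for the Table API on a given ServiceNow instance."""
    return f"https://{instance}.service-now.com/api/now/table/{table}"

def onboarding_requests(instance, hires):
    """One (url, payload) pair per new hire, targeting sys_user."""
    url = table_api_url(instance, "sys_user")
    return [(url, user_record(*hire)) for hire in hires]

hires = [
    ("Ada Lovelace", "ada@example.com", "Engineering"),
    ("Alan Turing", "alan@example.com", "Engineering"),
]
for url, payload in onboarding_requests("acme", hires):
    # In a real script each pair would be sent as an authenticated
    # HTTP POST, e.g. requests.post(url, json=payload, auth=(...)).
    print(payload["email"], "->", url)
```

Separating request construction from the actual HTTP send keeps the mapping testable offline and makes a dry run (as above) trivial before pointing the script at a live instance.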