Andy Brown, Broadcom
(upbeat music) >> Hello and welcome to theCUBE. I'm Dave Nicholson, Chief Technology Officer at theCUBE, and we are here for a very special Cube Conversation with Andy Brown from Broadcom. Andy, welcome to theCUBE, tell us a little about yourself. >> Well, a little bit about myself: my name is Andy Brown, I'm currently the Senior Director of Software Architecture and Performance Analysis here within the Data Center Solutions Group at Broadcom. I've been doing that for about seven years. Prior to that, I held various positions within the systems architecture, systems engineering, and IC development organizations, and also spent some time in our support organization managing our support team. But ultimately I've landed in the architecture organization as well as performance analysis. >> Great, so a lot of what you do is around improving storage performance, tell us more about that. >> So let me give you a brief history of storage from my perspective. As I mentioned, I go back about 30 years in my career, and that would've started back in the NCR Microelectronics days, originally with parallel SCSI. That would be, if anyone remembers it, the 5380 controller, which was one of the original parallel SCSI controllers, built by NCR Microelectronics at the time. I've seen the advent of parallel SCSI, a stint of Fibre Channel, ultimately leading into the serialization of the SCSI standard into SAS, as well as SATA, and then ultimately leading to NVMe protocols and the advent of flash, moving from hard drives to flash-based media. That's on the storage side. On the host side, we've moved from parallel interfaces, ISA if everybody remembers that, to PCI and PCI Express, and that's where we land today. >> So Andy, we are square in the middle of the era of both NVMe and SAS. What kinds of challenges does that overlap represent? >> Well, I think obviously we've seen SAS around for a while. It was the conversion from parallel into serial attached SCSI, and SAS brings with it the ability to connect a really high number of devices; it was kind of the original scaling of devices. It was also one of the things that enabled flash-based media, given the speed and performance it brought to the table. Of course NVMe came in as well with the promise of even higher speeds. And as we saw flash media really take a strong role in storage, NVMe came around and was really focused on trying to address that, whereas SAS originated with hard drive technology. NVMe was really born out of how do we most efficiently deal with flash-based media. But SAS still carries a benefit on scalability, and NVMe, I don't want to say has challenges there, but it definitely was not designed to scale as broadly across many, many, say high hundreds or thousands of devices. But it definitely addressed some of the performance issues that were coming up as flash media took hold. So it was increasing the overall storage performance that we could experience, if you will. >> Let's talk about host interfaces, PCIe. What's the significance there? >> Really, all the storage in the world, all the performance in the world on the storage side, is not of much use to you unless you can really feed it into the beast, if you will, into the CPU and into the rest of the server subsystem. And that's really where PCI comes into play.
PCI originally was in parallel form and then moved to serial with PCI Express as we know it today, and it really has created a pathway to enable not only storage performance but any other adapter, networking, or other type of technology to open up that pathway and feed the processor. And as we've moved from PCI to PCI Express, 2.0, 3.0, 4.0, just opening up those pipes has enabled a tremendous flow of data into the compute engine, allowing it to be analyzed and sorted, used for big data and AI-type applications. Those pipes are critical in those types of applications. >> We know we've seen dramatic increases in performance going from one generation of PCIe to the next. But how does that translate into the worlds of SAS, SATA and NVMe? >> So from a performance perspective, when we look at these different types of media, whether it be SATA, SAS or NVMe, of course there are performance differences inherent in that media, SATA being probably the lowest performing and NVMe topping out as the highest performing, although SAS can perform quite well as a protocol connected to flash-based media. And of course, from an individual device scaling standpoint, going from a x1 to a x4 interface is really where NVMe has enabled a bigger pipe directly to the storage media, being able to scale up to x4, whereas SAS is limited to x1, maybe x2 in some cases, although most servers only connect a SAS device at x1. So from that perspective, you really want to create a solution, or enable the infrastructure, that can consume the performance NVMe is going to give you. And I think that is something where our solutions have really shined in the recent generation: their ability to keep up with storage performance and NVMe, as well as provide that connectivity back down into the SAS and SATA world. >> Let's talk about your perspective on RAID today. >> So there have been a lot of views and opinions on RAID over the years, and those have been changing over time. RAID has been around for a very, very long time; going back over my 30-year career, it's been around for almost the entire time. Obviously RAID was originally viewed as something that was very, very necessary: devices fail. They don't last forever, but the data that's on them is very, very important and people care about that. So RAID was brought about knowing that the individual devices storing that data are going to fail, and it really took hold as a primary mechanism of protection. But as time went on, and as performance moved up both in the server and in the media itself once we start talking about flash, people started to look at traditional server storage RAID with maybe more of a negative connotation. I think that's because, to be quite honest, it fell behind a little bit. If you look at things like parity RAID 5 and RAID 6, they are very effective, efficient means of protecting your data, very storage efficient, but they ultimately had some penalty, primarily around write performance; random writes to RAID 5 volumes were not keeping up with what really needed to be there. And I think that really shifted opinions of RAID toward, "Hey, it's just not going to keep up and we need to move on to other avenues."
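A rough sketch of the random-write penalty Andy is describing, in Python. This is the textbook RAID 5 read-modify-write path, not anything specific to Broadcom's controllers, and the block contents below are invented for illustration; the point is simply that one small logical write costs four device I/Os, which is why parity RAID historically lagged on random writes.

```python
# Illustrative sketch: the classic RAID 5 small-write penalty.
# Updating one data block in place requires reading the old data and old
# parity, then writing the new data and new parity -- four I/Os per write.

def raid5_small_write(old_data: bytes, old_parity: bytes, new_data: bytes):
    """Return the new parity block and the device I/Os needed for the update."""
    assert len(old_data) == len(old_parity) == len(new_data)
    # new_parity = old_parity XOR old_data XOR new_data
    new_parity = bytes(p ^ od ^ nd
                       for p, od, nd in zip(old_parity, old_data, new_data))
    io_operations = ["read old data", "read old parity",
                     "write new data", "write new parity"]
    return new_parity, io_operations

if __name__ == "__main__":
    old_data   = bytes([0x11] * 8)
    old_parity = bytes([0xFF] * 8)
    new_data   = bytes([0x22] * 8)
    parity, ios = raid5_small_write(old_data, old_parity, new_data)
    print(f"{len(ios)} device I/Os for one random write:", ios)
```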
And we've seen that shift; we've seen disaggregated storage and other solutions pop up to protect your data, obviously in cloud environments and things like that, and they have been successful. >> So one of the drawbacks with RAID has always been the performance tax associated with generating parity for parity RAID. What has Broadcom done to address those potential bottlenecks? >> We've really solved the RAID performance issue, the write performance issue. In our latest generation of controllers we're exceeding a million RAID 5 write IOPS, which is enough to satisfy many, many applications, as a matter of fact even in virtual environments and aggregated solutions where we have multiple applications. And then as well, in the rebuild arena, through our architecture and our hardware automation we have been able to move the bar to where not only have rebuild times been brought down dramatically in flash-based solutions, but the performance impact you observe while those rebuilds are going on is almost immeasurable. So in most applications you would observe almost no performance deficiency during a rebuild operation, which is really night and day compared to where things were just a few short years ago. >> So the fact that you've been able to dramatically decrease the time necessary for a RAID rebuild is obviously extremely important. But give us your overall performance philosophy from Broadcom's point of view. >> Over the years we have recognized that performance is obviously critically important for our products, and the ability to analyze performance from many, many angles is critically important. There are literally infinite ways you can look at performance in a storage subsystem. What we have done in our labs and in our solutions, not only through hardware scaling in our labs but also through automation scripts and things like that, has allowed us to collect a substantial amount of data to look at the performance of our solutions from every angle: IOPS, bandwidth, application-level performance, small topologies, large topologies, just many, many aspects. It honestly still only scratches the surface of all the possible performance points you could gather, but we have moved the bar dramatically in that regard. And it's something that our customers really demanded of us. Storage technology has gotten more complex, and you have to look at it from a lot of different angles, especially on the performance front, to make sure there are no holes there that somebody's going to run into. >> So based on specific customer needs and requests, you look at performance from a variety of different angles. What are some of the trends that you're seeing specifically in storage performance today and moving into the future? >> Yeah, emerging trends within the storage industry. I think that to look at the emerging trends, you really need to go back and look at where we started. We started in compute where you would have basically your server under the desk in a small business operation; individual businesses would have their own set of servers, and the storage would really be localized to those. Obviously the industry has recognized that and, to some extent, disaggregated it; we see that in what's happening in cloud, in hyper-converged storage and things like that. Those afford a tremendous amount of flexibility and are obviously great players in the storage world today.
But with that flexibility has come some sacrifice in performance, actually quite a substantial sacrifice. And what we're observing is that it almost comes back full circle: the need for in-box, high-performing server storage that is well protected, where people have confidence that their data is protected and that they can extract the performance they need for the demanding database applications that still exist today and still operate in offices around the country and around the world, and that really need to protect their data on a local basis in the server. And I think from a trend perspective, that's what we're seeing. Also, NVMe itself really started out with, "Hey, we'll just software RAID that. We'll just wrap software around that, we can protect the data." We had so many customers come back to us saying, you know what? We really need hardware RAID on NVMe. And when they came to us, we were ready. We had a solution ready to go and were able to provide that, and now we're seeing ongoing demand. We are complementary to other storage solutions out there. Server storage is not necessarily going to rule the world, but it surely has a place in the broader storage spectrum. And we think we have the right solution for that. >> Speaking of servers and server-based storage, why would, for example, a Dell customer care about the Broadcom components in that Dell server? >> So let's say you're configuring a Dell server and you're asking, why does hardware RAID matter? What's important about that? Well, I think when you look at today's hardware RAID, first of all, you're going to see dramatically better performance. It's going to enable you to use RAID 5 volumes, a very effective and efficient mechanism for protecting your data, a storage-efficient mechanism, where you weren't able to do that before, because when you're in the millions of IOPS range you really can satisfy a lot of application needs out there. And then you're also going to have rebuild times that are lightning fast. Your performance is not going to degrade when you're running those applications, especially database applications, but not only database; for streaming applications, bandwidth to protected RAID volumes is almost imperceptibly different from raw bandwidth to the media. So the RAID configurations in today's Dell servers really afford you the opportunity to make use of that storage where you may have already written it off as, "well, RAID is just not going to get me there." Quite frankly, in the storage servers that Dell is providing with RAID technology, there are huge windows open in what you can do today with applications. >> Well, all of this is obviously good news for Dell and Dell customers. Thanks again, Andy, for joining us for this Cube Conversation. I'm Dave Nicholson for theCUBE. (upbeat music)
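As a footnote to the rebuild discussion above, here is a minimal sketch of what a RAID 5 rebuild actually computes: each block of the failed member is the XOR of the corresponding blocks on the surviving members, which is why a rebuild must read the full capacity of the survivors. This is the generic algorithm, not Broadcom's hardware automation, and the drive contents are made up for illustration.

```python
# Illustrative sketch: reconstructing a failed RAID 5 member from the
# surviving members by XOR-ing their corresponding blocks.

from functools import reduce

def rebuild_block(surviving_blocks: list[bytes]) -> bytes:
    """Recover one block of the failed member from the surviving members."""
    return bytes(reduce(lambda a, b: a ^ b, column)
                 for column in zip(*surviving_blocks))

if __name__ == "__main__":
    d0, d1, d2 = bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])
    parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
    # The drive holding d1 fails; recover its data from the survivors.
    recovered = rebuild_block([d0, d2, parity])
    assert recovered == d1
    print("recovered block:", list(recovered))
```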
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Nicholson | PERSON | 0.99+ |
Andy | PERSON | 0.99+ |
Andy Brown | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
Data Center Solutions Group | ORGANIZATION | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
millions | QUANTITY | 0.99+ |
today | DATE | 0.98+ |
one | QUANTITY | 0.98+ |
hundreds | QUANTITY | 0.97+ |
both | QUANTITY | 0.97+ |
theCUBE | ORGANIZATION | 0.97+ |
PCI 2.0 3.0 4.0 | OTHER | 0.97+ |
NCR Microelectronics | ORGANIZATION | 0.94+ |
about 30 years | QUANTITY | 0.94+ |
PCI Express | OTHER | 0.94+ |
one generation | QUANTITY | 0.93+ |
a million | QUANTITY | 0.92+ |
thousands of devices | QUANTITY | 0.9+ |
four | QUANTITY | 0.88+ |
few short years ago | DATE | 0.87+ |
Parallel SCSI | OTHER | 0.85+ |
RAID 5 | OTHER | 0.84+ |
RAID 5 | TITLE | 0.77+ |
30 year | QUANTITY | 0.73+ |
NCR | ORGANIZATION | 0.72+ |
RAID 6 | TITLE | 0.71+ |
5380 Controller | COMMERCIAL_ITEM | 0.71+ |
Parallel SCSI | OTHER | 0.71+ |
one of | QUANTITY | 0.7+ |
RAID | TITLE | 0.68+ |
NVMe | TITLE | 0.64+ |
SaaS | TITLE | 0.63+ |
about seven years | QUANTITY | 0.6+ |
things | QUANTITY | 0.57+ |
PCI | OTHER | 0.54+ |
IOPS | QUANTITY | 0.47+ |
Joel Dedrick, Toshiba | CUBEConversation, February 2019
(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation. >> Hi, I'm Peter Burris, and welcome again to another Cube Conversation from our studios here in beautiful Palo Alto, California. With every Cube Conversation, we want to bring smart people together and talk about something that's relevant and pertinent to the industry. Now, today we are going to be talking about the emergence of new classes of cloud provider, who may not be the absolute biggest, but are nonetheless crucial in the overall ecosystem and in how they're going to define new classes of cloud services for an expanding array of enterprise customers who need that. And to have that conversation, and some of the solutions that class of cloud service provider is going to require, we've got Joel Dedrick with us today. Joel is the Vice President and General Manager of Networks Storage Software, Toshiba Memory America. Joel, welcome to theCube. >> Thanks, very much. >> So let's start by, who are you? >> My name's Joel Dedrick. I'm managing a new group at Toshiba Memory America, involved with building software that will help our customers create a cloud infrastructure that's much more like those of the Googles and Amazons of the world, but without the enormous teams that are required if you're building it all yourself. >> Now, Toshiba is normally associated with a lot of hardware. How does software play into this? >> Well, Flash is changing rapidly, more rapidly than maybe the average guy on the street realizes, and one way to think about this is that inside of an SSD there's a processor that is not too far short of the average Xeon in compute power, and it's busy. So there's a lot more work going on in there than you might think. We're really bringing that up a level and doing that same sort of management across groups of SSDs to provide a network storage service that's simple to use and simple to understand, but under the hood, we're pedaling pretty fast, just as we are today in the SSDs. >> So the problem that I articulated up front was the idea that as we see greater specialization in enterprise needs from cloud, there's going to be greater numbers of different classes of cloud service provider, whether that be SaaS, or whether that be by location, by different security requirements, whatever else it might be. What is the specific issue that this emerging class of cloud service provider faces as they try to deliver really high quality services to these new, more specialized end users? >> Well, let me first kind of define terms. I mean, cloud service provider can mean many things. In addition to someone who sells infrastructure as a service or platform as a service, we can also think about companies that deliver a service to consumers through their phone, and have a data center backing that, because of the special requirements of those applications. So we're serving that panoply of customers. They face a couple of issues that are a result of the trajectory of Flash and storage of late. And one of those is that we as Flash manufacturers have an innovator's dilemma, that's a term we use here in the valley, that I think most people will know. Our products are too good, they're too big, they're too fast, they're too expensive, to be a good match to a single compute node. And so you want to share them.
And so the game here is, can we find a way to share this really performant, you know, this million-IOP dragon across multiple computers without losing that performance? So that's sort of step one: how do we share this precious resource? Behind that is an even bigger one, that takes a little longer to explain. And that is, how do we optimize the use of all the resources in the data center in the same way that the Googles and Amazons do, by moving work around between machines in a very fluid and very rapid way? To do that, you have to have the storage visible from everywhere and you have to be able to run any instance anywhere. That's a tall order, and we don't solve the whole problem, but we're a necessary step. And the step we provide is: we'll take the storage out of the individual compute nodes and serve it back to you over your network, but we won't lose the performance that you're used to having it locally attached. >> Okay, so let's talk about the technical elements required to do this. Describe from the SSD, from the Flash node, up. I presume it's NVMe? >> Um hm, so, NVMe. I'm not sure if all of our listeners today really know how big a deal that is. There have been two block storage command sets, sets of fundamental commands that you give to a block storage device, in my professional lifetime. SCSI was invented in 1986, back when high performance storage was two hard drives attached to the ribbon cable in your PC. And it's lasted up until now; if you go to a random data center and take a random storage wire, it's going to be transporting the SCSI command set. NVMe, what, came out in 2012? So 25 years later, the first genuinely new command set. There's an alphabet soup of transports; the interfaces and formats that you can use to transport SCSI around would fill pages, and we would sort of tune them out, and we should. We're now embarking on that same journey again, except with a command set that's ideal for Flash. We've sort of given up on, or left behind, the need to be backward compatible with hard disks, and said, let's build a command set and interface that's optimum for this new medium, and then let's transport that around. NVMe over Fabrics is the first transport for the NVMe command set, and so what we're doing is building software that allows you to take a conventional x86 compute node with a lot of NVMe drives, wrap our software around it, and present it out to your compute infrastructure, and make it look like locally attached SSDs, at the same performance as locally attached SSDs, which is the big trick, but now you get to share them optimally. We do a lot of optimization inside the box, but that ultimately doesn't matter to customers. What customers see is: I get to have the exact size and performance of Flash that I need at every node, for exactly the time I need it. >> So I'm a CTO at one of these emerging cloud companies. I know that I'm not going to be adding a million machines a year; maybe I'm only going to be adding 10,000, maybe I'm only adding 50,000, 100,000. So I can't afford the engineering staff required to build my own soup-to-nuts set of software. >> You can't roll it all yourself. >> Okay, so, how does this fit into that? >> This is the assembly kit for the lowest layer of that. We take the problem of turning raw SSDs into a block storage service and solve it for you. We have a very sharp line there. We aren't trying to be a filer, we're not trying to be EMC here. It's a very simple, but fast and rugged, storage service box.
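A conceptual sketch of one reason the NVMe command set Joel contrasts with SCSI suits both flash and sharing: rather than funneling commands through a single queue per device, an NVMe controller exposes many independent submission/completion queue pairs, for example one per CPU core. This is a toy model, not the spec's actual data structures, and the queue counts and commands below are arbitrary.

```python
# Conceptual sketch only -- not the real NVMe queue structures.
# Many deep submission/completion queue pairs let cores issue I/O in
# parallel without contending on one shared command queue.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class QueuePair:
    submission: deque = field(default_factory=deque)
    completion: deque = field(default_factory=deque)

class ToyNvmeController:
    def __init__(self, num_queue_pairs: int):
        # Real controllers can expose thousands of I/O queues; 4 is arbitrary.
        self.qps = [QueuePair() for _ in range(num_queue_pairs)]

    def submit(self, core_id: int, command: str) -> None:
        # Each core posts to its own submission queue.
        self.qps[core_id % len(self.qps)].submission.append(command)

    def process(self) -> None:
        # The device services each queue and posts completions independently.
        for qp in self.qps:
            while qp.submission:
                qp.completion.append(f"done: {qp.submission.popleft()}")

ctrl = ToyNvmeController(num_queue_pairs=4)
for core in range(4):
    ctrl.submit(core, f"read LBA {core * 8}")
ctrl.process()
print([list(qp.completion) for qp in ctrl.qps])
```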
That box interfaces to your provisioning system, to your orchestration system, to your telemetry systems, and no two of those are alike. So there's a fair amount of customization still involved, but we stand ready to do that. You can Tinker Toy this together yourself. >> Toshiba. >> Yeah, Toshiba does, yes. So, that's the problem we're solving: we're enabling the optimum use of Flash, and maybe subtly, but more importantly in the end, we're allowing you to disaggregate it, so that you no longer have storage pinned to a compute node, and that enables a lot of other things that we've talked about in the past. >> Well, that's a big feature of the cloud operating model, the idea that any application can address any resource and any resource can address any application, and you don't end up with dramatic or significant barriers in the infrastructure in how you provision and operate those instances. >> Absolutely. The example we see all the time, in the service providers that are providing some service through your phone, is they all have a time-of-day rush, or a Christmas rush, some sort of peak to their workloads, and how do they handle the demand peaks? Well today, they buy enough compute hardware to handle the peak, and the rest of the year it sits idle. And this can be 300% pretty easily; you can imagine the traffic to a shopping site on Black Friday versus the rest of the year. If the customer gets frustrated and goes away, they don't come back. So you have data centers worth of machines doing nothing. And then over on the other side of the house you have the machine learning crew, who could use infinite compute resource, but they don't have a time demand, it just runs 24/7. And they can't get enough machines, and they're arguing for more budget, and yet we have 100s of 1,000s of machines doing nothing. I mean, that's a pretty big piece of bait right there. >> Which is to say that the ML guys can't use the retail resources and the retail guys can't use the ML resources, and what we're trying to do is make it easier for both sides to be able to utilize the resources that are available on both sides. >> Exactly so, exactly so, and that requires more than one thing. One of the things it requires is that any given instance's storage can't be pinned to some compute node. Otherwise you can't move that instance. It has to be visible from anywhere. There are some other things that need to work in order to move instances around your data center under load, but this is a key one, and it's a tough one. And solving it without ruining performance is the hard part. Network storage isn't a new thing, that's been goin' on for a long time. Network storage at the performance of a locally mounted NVMe drive is a tough trick. And that's the new thing here. >> But it's also a tool kit, so that what appears to be a locally mounted NVMe drive, even though it may be remote, can also be oriented into other classes of services. >> Yes >> So how does this, for example, I'm thinking of Kubernetes clusters, stateless, but still having storage that's really fast, still really high performing, very reliable, very secure. How do you foresee this technology supporting and even catalyzing changes to that Kubernetes, that Docker class of container workloads? >> Sure, so for one, we implement the interface to Kubernetes. And Kubernetes is a rapidly moving target. I love their approach. They have a very fast version clock.
Every month or two there's a new version. And their support attitude is, if you're not within the last version or two, don't call. You know, keep up. And that's sort of not the way the storage world has worked. So our commitment is to connect to that, and make that connection stay put as you follow a moving target. But then, where this is really going is the need for really rapid provisioning. In other words, it's not the model of the IT guy sitting at a keyboard, attaching a disk to a stack of machines that's running some application, and coming back in six months to see if it's still okay. We're moving from containerized services to serverless kinds of ideas. In the serverless world, the average lifespan of an application is 20 seconds. So we better spool it up, load the code, give it its state, run, and kill it pretty quickly, millions of times a minute. And so, you need to be light of foot to do that. So we're pouring a lot of energy, behind the scenes, into making software that can handle that sort of dynamic environment. >> So how does this resource, which allows you to present a distant NVMe drive as if it were mounted locally, catalyze other classes of workloads? Or how does it catalyze new classes of workloads? You mentioned ML; are there other workloads that you see on the horizon that will turn into services from this new class of cloud provider? >> Well, I think one big one is the serverless notion. And to digress on that a little bit: we went from the classic enterprise, where the assignment of work to machines lasts for the life of the machine. That group of machines belongs to engineering, those are accounting machines, and so on. And no IT guy in his right mind would think of running engineering code on the accounting machine or whatever. In the cloud we don't have a permanent assignment there anymore. You rent a machine for a while, and then you give it back. But the user's still responsible for figuring out how many machines or VMs he needs, how much storage he needs, and doing the calculation, and provisioning all of that. In the serverless world, the user gives up all of that and says, here's the set of calculations I want to do, trigger it when this happens, and you, Mr. Cloud Provider, figure out whether this needs to be sharded out 500 ways or 200 ways to meet my performance requirements. And as soon as these are done, turn 'em back off again, on a timescale of tenths of seconds. And so, what we're enabling is the further movement in the direction of taking the responsibility for provisioning and scaling out of the user's hands and making it automatic. So we let users focus on what they want to do, not how to get it done. >> This really is not an efficiency play, when you come right down to it. This is really changing the operating model, so new classes of work can be performed, so that the overall compute infrastructure becomes more effective and matches the business needs better. >> It's really both. There's a tremendous efficiency gain, as we talked about with the ML versus the marketplace. But there are also things you just can't do without an infrastructure that works this way, and so there's an aspect of efficiency and an aspect of, man, this is just something we have to do to get to the next level of the cloud. >> Excellent, so do you anticipate this portends some changes to Toshiba's relationships with different classes of suppliers? >> I really don't.
Toshiba Memory Corporation is a major supplier of both Flash and SSDs to basically every class of storage customer, and that's not going to change. They are our best friends, and we're not out to compete with them. We're really serving an unmet need right now. We're serving a relatively small group of customers who are cloud first, cloud always. They want to operate in the sort of cloud style. But they really can't, as you said earlier, invent it all soup to nuts with their own engineering; they need some pieces to come from outside. And we're just trying to fill that gap. That's the goal here. >> Got it. Joel Dedrick, Vice President and General Manager of Networks Storage Software, Toshiba Memory America, thanks very much for being on theCube. >> My pleasure, thanks. >> Once again this is Peter Burris, it's been another Cube Conversation, until next time.
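To make the disaggregation idea concrete, here is a toy allocator in the spirit of what Joel describes: volumes carved from a shared flash pool, attached to any compute node for exactly as long as an instance needs them, then returned to the pool instead of idling in one server. Everything here (names, capacities, the API) is hypothetical; it illustrates the provisioning model, not Toshiba's software.

```python
# Toy sketch of disaggregated block storage provisioning.

class FlashPool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocated = {}  # volume name -> (node, size_gb)

    def free_gb(self) -> int:
        return self.capacity_gb - sum(size for _, size in self.allocated.values())

    def attach(self, volume: str, node: str, size_gb: int) -> None:
        # Any node can consume capacity from the shared pool over the fabric.
        if size_gb > self.free_gb():
            raise RuntimeError("pool exhausted")
        self.allocated[volume] = (node, size_gb)

    def detach(self, volume: str) -> None:
        # Capacity returns to the pool rather than sitting pinned to a server.
        self.allocated.pop(volume, None)

pool = FlashPool(capacity_gb=30_000)
pool.attach("db-vol", node="compute-07", size_gb=2_000)
pool.attach("scratch", node="compute-19", size_gb=500)
pool.detach("scratch")              # instance finished; space is reusable
print(pool.free_gb(), "GB free")
```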
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Joel | PERSON | 0.99+ |
Peter Burris | PERSON | 0.99+ |
2012 | DATE | 0.99+ |
20 seconds | QUANTITY | 0.99+ |
Toshiba | ORGANIZATION | 0.99+ |
Joel Dedrick | PERSON | 0.99+ |
1986 | DATE | 0.99+ |
100s | QUANTITY | 0.99+ |
500 ways | QUANTITY | 0.99+ |
February 2019 | DATE | 0.99+ |
200 ways | QUANTITY | 0.99+ |
Toshiba Memory America | ORGANIZATION | 0.99+ |
Googles | ORGANIZATION | 0.99+ |
300% | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Amazons | ORGANIZATION | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
today | DATE | 0.99+ |
six months | QUANTITY | 0.99+ |
both sides | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
10,000 | QUANTITY | 0.99+ |
Toshiba Memory Corporation | ORGANIZATION | 0.99+ |
25 years later | DATE | 0.98+ |
Black Friday | EVENT | 0.98+ |
both | QUANTITY | 0.98+ |
10ths of seconds | QUANTITY | 0.98+ |
one | QUANTITY | 0.97+ |
Saas | ORGANIZATION | 0.96+ |
Silicon Valley, | LOCATION | 0.96+ |
Every month | QUANTITY | 0.93+ |
50,000, 100,000 | QUANTITY | 0.92+ |
Flash | ORGANIZATION | 0.92+ |
EMC | ORGANIZATION | 0.91+ |
two hard drives | QUANTITY | 0.9+ |
Networks Storage Software | ORGANIZATION | 0.89+ |
millions of times a minute | QUANTITY | 0.88+ |
one way | QUANTITY | 0.88+ |
million machines a year | QUANTITY | 0.88+ |
first transport | QUANTITY | 0.87+ |
single compute | QUANTITY | 0.83+ |
Christmas | EVENT | 0.82+ |
Cloud Provider | ORGANIZATION | 0.81+ |
Kubernetes | TITLE | 0.78+ |
Flash | TITLE | 0.78+ |
two block storage command sets | QUANTITY | 0.77+ |
step one | QUANTITY | 0.75+ |
NVME | TITLE | 0.75+ |
1,000s of machines | QUANTITY | 0.75+ |
Cube | ORGANIZATION | 0.72+ |
couple | QUANTITY | 0.63+ |
NVME | ORGANIZATION | 0.62+ |
Cube Conversation | EVENT | 0.6+ |
SCSI | TITLE | 0.57+ |
Kubernetes | ORGANIZATION | 0.49+ |
CUBEConversation | EVENT | 0.49+ |
node | TITLE | 0.49+ |
President | PERSON | 0.48+ |
Conversation | EVENT | 0.36+ |
Jeff Boudreau, Dell EMC | Dell Technologies World 2018
>> Announcer: Live, from Las Vegas, it's theCUBE. Covering Dell Technologies World 2018. Brought to you by Dell EMC and its ecosystem partners. >> Well, good afternoon, or good evening if you're watching back in the Eastern Time Zone. Good to have you here live as theCUBE continues our coverage of Dell Technologies World 2018. I'm John Walls, along with Stu Miniman, and we now welcome Jeff Boudreau, who's the president and GM of storage at Dell EMC. >> Thank you! >> John: Jeff, good to see you. >> Good to see you guys, thank you for having me. >> Alright, it's been like a solid six hours since you launched your new product. >> That's right. >> The PowerMax. >> What's been, I'm curious, what's been the reaction, and what do people want to know from you when they get a little face time? >> Well, the big thing I have is, one, the reaction's been fantastic since we launched this morning. Obviously Jeff Clarke was on stage with my good friend Bob Decrescenzo, now known as PowerMax Bob, and Bob did a great job today announcing the product. The feedback has been phenomenal. People really want to understand, I kind of frame it as, we talk about the future of enterprise storage, and I put some bold statements out there saying it's the fastest storage array, it's the most intelligent storage array, and it's the most resilient storage array in the market today. And I kind of go through that, and a lot of people want to understand what we've done around NVMe as an interface, NVMe in the protocol stack, and also with the media itself, and truly unleashing the power of doing what I would call NVMe right. As you think about where we are and where we want to go with storage class memory, and making sure you unleash the whole value, that's a big one customers talk to me about. The other big one is around the ML and the AI. So, we've done a lot of great work. The team's done an amazing job with the OS, the PowerMax operating system, and we do a lot of work with application hinting, if you will. So we have some technology that we built that actually understands the applications, almost like putting a fingerprint on them, if you will. And then we have algorithms and heuristics within the array that do the pattern recognition across that and really understand it. So last year I talked a lot about autonomous storage; this is the real first step of actually trying to be truly autonomous storage. >> Yeah, Jeff, it's really interesting. For the people that have watched the storage industry, there are certain things that have kind of defined where we are. SCSI has been with us for-- >> Ever. >> Longer than my career. >> Jeff: Mine too. >> You look at NVMe and storage class memory and we're starting to get beyond that. I talked with Adnan earlier, and intelligent storage is something I've seen touted in lots of product announcements over the years! >> Yeah. >> But when you talk about billions of decisions being made by arrays underneath, bring us inside the product team a little bit and how much effort goes into this. >> The team, number one, is a phenomenal team. I think they're world-class in everything they do and all the products they build, it's been phenomenal. And they've done a ton of work underneath around the algorithms and heuristics.
I mean, if you think of our install base and how much data we store, protect, and secure, at the end of the day we do more than anybody else. So our data scientists and our engineers have done a lot of work to understand the I/O patterns, the heuristics off the drives, the telemetry streams, and then actually build the algorithms to really make sense of that data and provide useful insights. So, it's not easy, to your point, it's a lot of great work by a great team. So, Adnan, I'm glad you had him, because he's one of the key guys who makes sure that it all works and comes together. And then understanding the use case of the application, tied back to the system, is where the magic happens, really connecting that and putting it forward. >> You know, we talk about faster, and you probably can, maybe you can hear the music, it's gotten louder. >> It's loud. >> If not faster. How so, and what was your measurement for success there? How did you say, okay, this is the goal, this is what we're shooting for, or did you take technology and say, what can we squeeze out of this? >> Well, it's kind of funny. When we built the architecture, we actually did a lot of prototyping, and actually we did a lot of paperwork up front as we understood the customer requirements and the use cases we're trying to drive. We actually write a lot of that down on paper and say, okay, what do we need to do to hit that market need? And then we look at what we need to do from a hardware and software standpoint as we architect the system. And that's what the team really did here. So, what we're looking at is what the customers are looking at, not only for today, but into the future. So as you think about where we are today, and you heard Michael talk about 2020, I've actually been talking a lot about 2030. If you think about IoT, you think about AI and machine learning and all the sensor data, structured and unstructured, data's exploding. And at the end of the day, it's one thing to store and protect and secure that data; we've got to do a lot more than that. And this goes back to how do we make it accessible in real time, but also extract the value of that data to provide useful insights back to our customers, so they can provide them to their customers, either for better business decisions or more value, or what have you. And that's really where the power comes from. So I've been focusing a lot on the data, and to me it's really about the data, the data explosion that's coming. The customers really understand how big that's going to be, and over what period of time, and so what we worked on today is focused on what we're trying to do tomorrow; we want to make sure that we have a clear path to help our customers on that journey. So, going back to some of the performance characteristics that we looked at, it's not only what we model for today, making sure that we're the best in the industry, best in the market; we also want to look forward saying, okay, as data explodes over the next few years, can this technology evolve and support that growth and that data? And a lot of it's going to go back to the machine learning and AI, because there's going to be a lot of compute required to actually do a lot of that and provide that intelligence going back.
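A toy illustration of the "application fingerprinting" idea Jeff describes: classify a volume's workload from its telemetry so the array can treat it appropriately. The thresholds, labels, and volume names below are invented for the sketch; the real heuristics in the PowerMax operating system are far more involved and run continuously in the array.

```python
# Toy sketch only -- classifying a workload from per-volume telemetry.

from dataclasses import dataclass

@dataclass
class Telemetry:
    avg_io_kb: float        # average I/O size
    read_fraction: float    # share of reads vs writes
    random_fraction: float  # share of random vs sequential access

def fingerprint(t: Telemetry) -> str:
    """Return a coarse workload label from observed I/O behavior."""
    if t.avg_io_kb <= 16 and t.random_fraction > 0.7:
        return "oltp-like"          # small random I/O, e.g. a database
    if t.avg_io_kb >= 128 and t.random_fraction < 0.3:
        return "streaming-like"     # large sequential transfers
    return "mixed"

samples = {
    "vol-finance-db": Telemetry(avg_io_kb=8, read_fraction=0.7, random_fraction=0.9),
    "vol-backup":     Telemetry(avg_io_kb=256, read_fraction=0.1, random_fraction=0.1),
}
for name, t in samples.items():
    print(name, "->", fingerprint(t))
```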
So some big claims, as I think the team probably talked to you about today: we're 2x anybody in the industry, bar none, in this space, so it's ten million IOPS on an 8000, and you're talking 150 gigabytes per second of bandwidth. I mean, the latency and the performance are just phenomenal in this box, and it's got so much horsepower behind it. And we also did some creative things around efficiency, as hopefully Adnan and them talked to you guys about: we did inline dedupe and inline compression, and we offload that to an engine on the card so that we have no impact on the data services and don't impact performance for our customers. >> Yeah, Jeff, loved that discussion of data. I think there's been a great trend the last few years: it's not just about storing, whether it's structured, unstructured, block, file, object, it's about how businesses can leverage that data, get it into the business. Big themes in the keynotes, IT and business really coming together. Maybe look at your storage portfolio: how is that transforming businesses? How is, not just storage, but data impacting what's going on? >> I mean, to me data is the precious metal, it's the crucial asset, right? You can debate whether it's the most important asset for our customers, between their people and their data. For me, if you step back, data's the most crucial asset they have, so you've got to treat it as such. To me, it's about what we can do to unleash the power of that data to enable them to be more successful. And so, I think you're dead on, it's not just about infrastructure. Infrastructure's interesting, it's cool, it's modern, and we have to make sure we enable through that way, but it's really about having a data strategy. So, if you think about having the right data in the right place at the right SLA, it's things around how you manage the mobility, the infrastructure support, and all of the things you would do to drive that, and I think that's critical. So, we want to make sure that we as Dell Technologies, we as Dell EMC, and me as the storage guy, unleash the value of that data to enable our customers to make better business decisions and add more value to their businesses. That's what we're driving, and that's the whole strategy of what we're working on. >> All right, Jeff, talking about PowerMax, >> Jeff: Yeah. >> I've talked to the team about the X2, >> Jeff: Awesome. >> announcement, step back for a second, give us a snapshot of what's happening with the storage portfolio, and you came from what I guess we would call the legacy EMC side. >> Correct. >> Now that we've had more than a year under our belts with the companies together, give us that update on the portfolio. >> Yeah, so we still believe in the power of the portfolio, no ifs, ands, or buts, so I'm not going to shy away from that. It brings us a lot of strengths, but it also brings some weaknesses in regards to complexity. And the big thing, I think Michael talked about it a year ago, is we're going to leave no customer behind, and we're completely living up to that.
So, you've seen launches recently on Unity, you've seen launches recently on SC, you've seen launches recently on X2 and what have you, and we're going to continue to do that, because we have a large and loyal install base of legacy Dell and legacy EMC customers, who are obviously the most important people, and direct and indirect sellers that have some biases or confidence in certain things, and we want to make sure we take care of them. To be clear, simplification is part of our strategy and it will be. So, going from a lot of brands to fewer brands, we're absolutely going to do it, and I'm happy to share that in more detail when we have more detail. But we are working through that. But my commitment to the customer, going back to Michael's point, is really two-fold. One is on the data migration and the data mobility: it will be native and it will be seamless to move data from point A to point B. So, I want to be clear, everything will have a next gen. It might not be the same brand or tattoo that they were used to before, but it will be a system that meets the market need and the customer requirements, with the architecture and the future functions to support that. We'll provide the mobility natively. In addition to that, we're going to provide our Loyalty Programs, so not only on the technology side will we make sure that they're whole, but on the Loyalty Program side, the investment protection that our customers want, need, demand, and deserve, we're going to provide that as well. So we're going to take care of them on the technology side, but we're also going to take care of them on the business side. But, like I said, I'll share more details when we're here, probably more so next year. >> Right. (laughing) Simple, predictable, profitable, right? >> That's right. >> Keep it simple. >> It's really that simple. >> That's a good formula. Jeff, thanks for being with us. We appreciate the time. >> Awesome, thank you for having me. >> Jeff Boudreau from Storage at Dell EMC. Back with more and we are live here in the Sands at Dell Technologies World 2018. (upbeat music)
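For readers unfamiliar with the inline dedupe and compression Jeff mentions, here is a minimal, generic sketch of the idea: fingerprint each block as it is written, store only unique blocks, and compress them on the way in. This is not PowerMax's implementation; the point Jeff makes is that PowerMax offloads this work to dedicated hardware on the card so the data services don't cost host performance. Block size and hash choice below are arbitrary.

```python
# Minimal sketch of inline, hash-based deduplication plus compression.

import hashlib
import zlib

class InlineDedupeStore:
    def __init__(self):
        self.blocks = {}      # fingerprint -> compressed block
        self.refcount = {}    # fingerprint -> number of logical references

    def write(self, data: bytes) -> str:
        fp = hashlib.sha256(data).hexdigest()
        if fp in self.blocks:
            self.refcount[fp] += 1                   # duplicate: store nothing new
        else:
            self.blocks[fp] = zlib.compress(data)    # compress on the way in
            self.refcount[fp] = 1
        return fp

    def read(self, fp: str) -> bytes:
        return zlib.decompress(self.blocks[fp])

store = InlineDedupeStore()
a = store.write(b"x" * 4096)
b = store.write(b"x" * 4096)     # identical block, deduplicated
assert a == b and len(store.blocks) == 1
print("unique blocks stored:", len(store.blocks))
```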
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Jeff Boudreau | PERSON | 0.99+ |
Jeff | PERSON | 0.99+ |
John Walls | PERSON | 0.99+ |
Bob Decrescenzo | PERSON | 0.99+ |
Michael | PERSON | 0.99+ |
Jeff Clarke | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Adnan | PERSON | 0.99+ |
Stu Miniman | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Bob | PERSON | 0.99+ |
Dell EMC | ORGANIZATION | 0.99+ |
ten million | QUANTITY | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
8,000 | QUANTITY | 0.99+ |
next year | DATE | 0.99+ |
2020 | DATE | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
2030 | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
SCSI | ORGANIZATION | 0.99+ |
tomorrow | DATE | 0.99+ |
six hours | QUANTITY | 0.99+ |
a year ago | DATE | 0.98+ |
today | DATE | 0.98+ |
first step | QUANTITY | 0.98+ |
more than a year | QUANTITY | 0.98+ |
2x | QUANTITY | 0.97+ |
Dell Technologies World 2018 | EVENT | 0.97+ |
PowerMax | COMMERCIAL_ITEM | 0.95+ |
two-fold | QUANTITY | 0.94+ |
PowerMax | ORGANIZATION | 0.94+ |
NVME | TITLE | 0.94+ |
this morning | DATE | 0.92+ |
150 gigabytes per second | QUANTITY | 0.9+ |
X2 | COMMERCIAL_ITEM | 0.88+ |
one thing | QUANTITY | 0.86+ |
next few years | DATE | 0.82+ |
billions of decisions | QUANTITY | 0.81+ |
Eastern Time Zone | LOCATION | 0.74+ |
Unity | ORGANIZATION | 0.72+ |
last | DATE | 0.72+ |
PowerMax | TITLE | 0.68+ |
top | QUANTITY | 0.67+ |
theCUBE | ORGANIZATION | 0.66+ |
two | QUANTITY | 0.65+ |
second | QUANTITY | 0.64+ |
X2 | EVENT | 0.64+ |
Sands | LOCATION | 0.6+ |
Adnan | ORGANIZATION | 0.59+ |
years | DATE | 0.53+ |
Bob | COMMERCIAL_ITEM | 0.52+ |