Joel Dedrick, Toshiba | CUBEConversation, February 2019

(upbeat music)

>> From our studios in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation.

>> Hi, I'm Peter Burris, and welcome again to another Cube Conversation from our studios here in beautiful Palo Alto, California. With every Cube Conversation, we want to bring smart people together and talk about something that's relevant and pertinent to the industry. Today we're going to be talking about the emergence of new classes of cloud provider: companies that may not be the absolute biggest, but that are nonetheless crucial to the overall ecosystem, because they're going to define new classes of cloud services for an expanding array of enterprise customers who need them. To have that conversation, and to discuss some of the solutions that this class of cloud service provider is going to require, we've got Joel Dedrick with us today. Joel is the Vice President and General Manager of Network Storage Software at Toshiba Memory America. Joel, welcome to theCUBE.

>> Thanks very much.

>> So let's start with: who are you?

>> My name's Joel Dedrick. I'm managing a new group at Toshiba Memory America that's building software to help our customers create a cloud infrastructure much more like those of the Googles and Amazons of the world, but without the enormous teams that are required if you're building it all yourself.

>> Now, Toshiba is normally associated with hardware. How does software play into this?

>> Well, Flash is changing rapidly, more rapidly than maybe the average guy on the street realizes. One way to think about this is that inside an SSD there's a processor that's not too far short of the average Xeon in compute power, and it's busy; there's a lot more work going on in there than you might think. We're bringing that up a level, doing that same sort of management across groups of SSDs, to provide a network storage service that's simple to use and simple to understand. Under the hood, we're pedaling pretty fast, just as we are today inside the SSDs.

>> So the problem I articulated up front was the idea that, as we see greater specialization in enterprise needs from the cloud, we're going to see greater numbers of different classes of cloud service provider, whether defined by SaaS offering, by location, by different security requirements, or whatever else it might be. What is the specific issue that this emerging class of cloud service provider faces as they try to deliver really high-quality services to these new, more specialized end users?

>> Well, let me first define terms, because "cloud service provider" can mean many things. In addition to someone who sells infrastructure as a service or platform as a service, we can also think about companies that deliver a service to consumers through their phones and have a data center backing it, because of the special requirements of those applications. We're serving that whole panoply of customers. They face a couple of issues that result from the trajectory of Flash and storage of late. One of those is that we as Flash manufacturers have an innovator's dilemma, to use a term most people here in the valley will know: our products are too good, too big, too fast, and too expensive to be a good match for a single compute node. And so you want to share them. The game here is: can we find a way to share this really performant, million-IOPS dragon across multiple computers without losing that performance? That's step one: how do we share this precious resource? Behind that is an even bigger question, one that takes a little longer to explain: how do we optimize the use of all the resources in the data center the way the Googles and Amazons do, by moving work around between machines in a very fluid and very rapid way? To do that, you have to have the storage visible from everywhere, and you have to be able to run any instance anywhere. That's a tall order, and we don't solve the whole problem, but we're a necessary step. The step we provide: we'll take the storage out of the individual compute nodes and serve it back to you over your network, but we won't lose the performance you're used to from having it locally attached.
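(Editor's note: to make the sharing problem concrete, here is a minimal sketch, in Python, of the bookkeeping involved in carving one shared pool of big, fast drives into right-sized logical volumes for many compute nodes. It is an illustration only, not Toshiba's software; every name and number in it is invented.)

```python
# Toy model of disaggregated flash: one pool of large, fast drives is
# carved into right-sized volumes instead of stranding a whole drive
# in each server. All capacity and IOPS figures are made up.
from dataclasses import dataclass, field

@dataclass
class FlashPool:
    capacity_gb: int                      # total usable capacity
    iops: int                             # aggregate IOPS budget
    allocations: dict = field(default_factory=dict)

    def provision(self, node: str, gb: int, iops: int) -> None:
        """Carve a logical volume out of the shared pool for one node."""
        used_gb = sum(a[0] for a in self.allocations.values())
        used_iops = sum(a[1] for a in self.allocations.values())
        if gb > self.capacity_gb - used_gb or iops > self.iops - used_iops:
            raise RuntimeError("pool exhausted: add drives or reclaim volumes")
        self.allocations[node] = (gb, iops)

    def release(self, node: str) -> None:
        """Return a node's share to the pool when its workload ends."""
        self.allocations.pop(node, None)

# One 'million-IOPS dragon' shared across many computers rather than
# pinned, mostly idle, inside a single compute node:
pool = FlashPool(capacity_gb=15_360, iops=1_000_000)
pool.provision("web-01", gb=200, iops=50_000)
pool.provision("ml-07", gb=4_000, iops=300_000)
pool.release("web-01")    # capacity returns to the pool, not to a shelf
```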
>> Okay, so let's talk about the technical elements required to do this. Describe it from the SSD, from the Flash node, up. I presume it's NVMe?

>> Mm-hm. NVMe: I'm not sure all of our listeners today really know how big a deal that is. There have been two block storage command sets, sets of fundamental commands that you give to a block storage device, in my professional lifetime. SCSI was invented in 1986, back when high-performance storage was two hard drives attached to the ribbon cable in your PC. And it's lasted up until now: if you go to a random data center and pick a random storage wire, it's going to be transporting the SCSI command set. NVMe, what, came out in 2012? So, 25 years later, the first genuinely new command set. There's an alphabet soup of transports; the interfaces and formats you can use to transport SCSI around would fill pages, and we sort of tune them out, and we should. We're now embarking on that same journey again, except with a command set that's ideal for Flash. We've given up on, or left behind, the need to be backward compatible with hard disks, and said: let's build a command set and interface that's optimal for this new medium, and then let's transport that around. NVMe over Fabrics is the first transport for the NVMe command set. So what we're doing is building software that lets you take a conventional x86 compute node with a lot of NVMe drives, wrap our software around it, and present it out to your compute infrastructure so it looks like locally attached SSDs, at the same performance as locally attached SSDs, which is the big trick, but now you get to share them optimally. We do a lot of optimal things inside the box, but those ultimately don't matter to customers. What customers see is: I get to have the exact size and performance of Flash that I need at every node, for exactly the time I need it.
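(Editor's note: for readers who want to see the plumbing, the sketch below shows how a Linux host typically attaches an NVMe-oF namespace using the standard nvme-cli tool, invoked here from Python. This is generic NVMe over Fabrics mechanics, not Toshiba's product; the transport, address, and NQN are placeholders, and the host is assumed to have the appropriate kernel module, such as nvme-tcp or nvme-rdma, loaded.)

```python
# Sketch: attach a remote NVMe-oF namespace so it appears as a local
# block device (e.g. /dev/nvme1n1). Requires root and the standard
# nvme-cli package; all target details below are placeholders.
import subprocess

TARGET = {
    "transport": "tcp",                          # or "rdma" on an RDMA fabric
    "traddr": "10.0.0.5",                        # storage node IP (hypothetical)
    "trsvcid": "4420",                           # conventional NVMe-oF port
    "nqn": "nqn.2019-02.example:flash-pool-1",   # target's NVMe Qualified Name
}

def attach_remote_namespace(t: dict) -> None:
    # Ask the target what subsystems it exposes, then connect to one.
    subprocess.run(["nvme", "discover", "-t", t["transport"],
                    "-a", t["traddr"], "-s", t["trsvcid"]], check=True)
    subprocess.run(["nvme", "connect", "-t", t["transport"],
                    "-a", t["traddr"], "-s", t["trsvcid"],
                    "-n", t["nqn"]], check=True)

attach_remote_namespace(TARGET)
# From here the remote namespace is an ordinary /dev/nvmeXnY device.
# The 'big trick' described above is keeping its latency and IOPS
# close to those of a locally attached SSD.
```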
>> So, I'm a CTO at one of these emerging cloud companies. I know I'm not going to be adding a million machines a year; maybe I'm only adding 10,000, maybe 50,000 or 100,000. So I can't afford the engineering staff required to build my own soup-to-nuts set of software.

>> You can't roll it all yourself.

>> Okay, so how does this fit into that?

>> This is the assembly kit for the lowest layer of that. We take the problem of turning raw SSDs into a block storage service and solve it for you. We draw a very sharp line there: we aren't trying to be a filer, and we're not trying to be EMC here. It's a very simple, but fast and rugged, storage service box. It interfaces to your provisioning system, to your orchestration system, to your telemetry systems, and no two of those are alike. So there's a fair amount of customization still involved, but we stand ready to do that. You can Tinker Toy this together yourself.

>> Toshiba.

>> Yeah, Toshiba does, yes. So that's the problem we're solving: we're enabling the optimum use of Flash and, maybe subtly but more importantly in the end, we're allowing you to disaggregate it, so that you no longer have storage pinned to a compute node. That enables a lot of other things that we've talked about in the past.

>> Well, that's a big feature of the cloud operating model: the idea that any application can address any resource, and any resource can address any application, and you don't end up with dramatic or significant barriers in the infrastructure as you provision and operate those instances.

>> Absolutely. The example we see all the time, among service providers delivering some service through your phone, is that they all have a time-of-day rush, or a Christmas rush, some sort of peak in their workloads. How do they handle the demand peaks? Well, today they buy enough compute hardware to handle the peak, and the rest of the year it sits idle. That peak can be 300% pretty easily; imagine the traffic to a shopping site on Black Friday versus the rest of the year. And if the customer gets frustrated and goes away, they don't come back. So you have data centers' worth of machines doing nothing. Then, over on the other side of the house, you have the machine learning crew, who could use infinite compute resource but don't have a time demand; it just runs 24/7. They can't get enough machines, they're arguing for more budget, and yet we have hundreds of thousands of machines doing nothing. That's a pretty big piece of waste right there.

>> Which is to say the ML guys can't use the retail resources, and the retail guys can't use the ML machines, and what we're trying to do is make it easier for both sides to utilize the resources available on both sides.

>> Exactly so. And one of the things that requires is that any given instance's storage can't be pinned to some compute node; otherwise you can't move that instance. It has to be visible from anywhere. There are some other things that need to work in order to move instances around your data center under load, but this is a key one, and it's a tough one, because solving it without ruining performance is the hard part. Network storage isn't a new thing; that's been going on for a long time. Network storage at the performance of a locally mounted NVMe drive is a tough trick, and that's the new thing here.
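(Editor's note: the stranded-capacity argument can be put in rough numbers. The figures below are invented for illustration, not measured data; they simply show why pooling a peaky retail fleet with an always-hungry ML fleet leaves far fewer machines idle.)

```python
# Back-of-envelope view of the idle-machine problem. All numbers are
# hypothetical, chosen to echo the '300% peak' example above.
retail_peak_nodes = 3_000    # fleet sized for Black Friday
retail_avg_nodes = 1_000     # what the shopping site needs most of the year
ml_demand_nodes = 2_500      # ML training absorbs whatever it can get

# Dedicated clusters: ML is capped at its own budget, retail idles.
idle_dedicated = retail_peak_nodes - retail_avg_nodes
# One shared pool (enabled by unpinned storage): ML soaks up the slack.
idle_shared = max(0, retail_peak_nodes - retail_avg_nodes - ml_demand_nodes)

print(f"idle nodes with dedicated clusters: {idle_dedicated}")   # 2000
print(f"idle nodes with a shared pool:      {idle_shared}")      # 0
```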
>> But it's also a toolkit, so that what appears to be a locally mounted NVMe drive, even though it may be remote, can also be organized into other classes of services.

>> Yes.

>> So how does this work with, for example, Kubernetes clusters: stateless, but still having storage that's really fast, really high-performing, very reliable, very secure? How do you foresee this technology supporting, and even catalyzing, changes to Kubernetes and that Docker class of container workloads?

>> Sure. For one, we implement the interface to Kubernetes. And Kubernetes is a rapidly moving target. I love their approach; they have a very fast version clock. Every month or two there's a new version, and their support attitude is: if you're not within the last version or two, don't call. You know, keep up. That's not the way the storage world has worked. So our commitment is to connect to that, and to make that connection stay put as you follow a moving target. But then, where this is really going is the need for really rapid provisioning. In other words, it's not the model of the IT guy sitting at a keyboard, attaching a disk to a stack of machines that's running some application, and coming back in six months to see if it's still okay. As we move from containerized services to serverless kinds of ideas, the average lifespan of an application in the serverless world is 20 seconds. So we'd better spool it up, load the code, get its state, run it, and kill it pretty quickly, millions of times a minute. You need to be light of foot to do that. So we've poured a lot of energy, behind the scenes, into making software that can handle that sort of dynamic environment.

>> So how does this capability, presenting a distant NVMe drive as if it were locally mounted, catalyze other classes of workloads, or new classes of workloads? You mentioned ML. Are there other workloads on the horizon that will turn into services from this new class of cloud provider?

>> Well, I think one big one is the serverless notion. To digress on that a little: in the classic enterprise, the assignment of work to machines lasts for the life of the machine. That group of machines belongs to engineering, those are the accounting machines, and so on, and no IT guy in his right mind would think of running engineering code on the accounting machine, or whatever. In the cloud, we don't have that permanent assignment anymore: you rent a machine for a while, and then you give it back. But the user is still responsible for figuring out how many machines or VMs he needs, how much storage he needs, doing the calculation, and provisioning all of that. In the serverless world, the user gives all of that up and says: here's the set of calculations I want to do; trigger it when this happens; and you, Mr. Cloud Provider, figure out whether it needs to be sharded out 500 ways or 200 ways to meet my performance requirements. And as soon as these are done, turn them back off again, on a timescale of tenths of seconds. So what we're enabling is further movement toward taking the responsibility for provisioning and scaling out of the user's hands and making it automatic. We let users focus on what they want to do, not on how to get it done.
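(Editor's note: "light of foot" is easiest to picture as a volume lifecycle that fits inside one short-lived function instance. The StorageClient API below is entirely hypothetical, a stand-in for whatever provisioning interface is actually wired up, and its stubs return canned values so the sketch runs.)

```python
# Sketch: storage keeping pace with serverless churn. A volume is
# created, attached, used, and destroyed within the lifetime of one
# ~20-second function instance. StorageClient is invented for
# illustration; it is not a real library.
import time
from contextlib import contextmanager

class StorageClient:
    """Stand-in for a disaggregated-flash provisioning API."""
    def create_volume(self, gb: int, iops: int) -> str:
        return "vol-0001"                 # stub: would call the service
    def attach(self, volume_id: str, node: str) -> str:
        return "/dev/nvme1n1"             # stub: device path on the node
    def destroy(self, volume_id: str) -> None:
        pass                              # stub: capacity back to the pool

@contextmanager
def ephemeral_volume(client: StorageClient, node: str, gb: int, iops: int):
    """Create, attach, and guarantee teardown of a short-lived volume."""
    vol = client.create_volume(gb=gb, iops=iops)
    try:
        yield client.attach(vol, node)
    finally:
        client.destroy(vol)               # runs even if the workload dies

start = time.monotonic()
with ephemeral_volume(StorageClient(), "worker-17", gb=8, iops=10_000) as dev:
    pass  # spool up the code, load state from `dev`, run ~20s, exit
# For a 20-second application repeated millions of times a minute, the
# create/attach/destroy overhead must be a tiny fraction of the total.
print(f"storage lifecycle overhead: {time.monotonic() - start:.4f}s")
```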
>> So this really is not an efficiency play, when you come right down to it. This is really changing the operating model, so that new classes of work can be performed, and so that the overall compute infrastructure becomes more effective and matches the business needs better.

>> It's really both. There's a tremendous efficiency gain, as we talked about with ML versus the marketplace. But there are also things you simply can't do without an infrastructure that works this way. So there's an aspect of efficiency, and an aspect of: this is just something we have to do to get to the next level of the cloud.

>> Excellent. So do you anticipate this portends some changes to Toshiba's relationships with different classes of suppliers?

>> I really don't. Toshiba Memory Corporation is a major supplier of both Flash and SSDs to basically every class of storage customer, and that's not going to change. They are our best friends, and we're not out to compete with them. We're really serving an unmet need right now: a relatively small group of customers who are cloud first, cloud always. They want to operate in that cloud style, but, as you said earlier, they can't invent it all soup to nuts with their own engineering; they need some pieces to come from outside. We're just trying to fill that gap. That's the goal here.

>> Got it. Joel Dedrick, Vice President and General Manager, Network Storage Software, Toshiba Memory America, thanks very much for being on theCUBE.

>> My pleasure, thanks.

>> Once again, this is Peter Burris, and it's been another Cube Conversation. Until next time.

Published Date: February 28, 2019
