

Danny Allan, Veeam & James Kirschner, Amazon | AWS re:Invent 2021


 

(innovative music) >> Welcome back to theCUBE's continuous coverage of AWS re:Invent 2021. My name is Dave Vellante, and we are running one of the industry's most important and largest hybrid tech events of the year. Hybrid as in physical, not a lot of that going on this year. But we're here with the AWS ecosystem, AWS, and special thanks to AMD for supporting this year's editorial coverage of the event. We've got two live sets, two remote studios, more than a hundred guests on the program. We're going really deep, as we enter the next decade of Cloud innovation. We're super excited to be joined by Danny Allan, who's the Chief Technology Officer at Veeam, and James Kirschner who's the Engineering Director for Amazon S3. Guys, great to see you. >> Great to see you as well, Dave. >> Thanks for having me. >> So let's kick things off. Veeam and AWS, you guys have been partnering for a long time. Danny, where's the focus at this point in time? What are customers telling you they want you to solve for? And then maybe James, you can weigh in on the problems that customers are facing, and the opportunities that they see ahead. But Danny, why don't you start us off? >> Sure. So we hear from our customers a lot that they certainly want the solutions that Veeam is bringing to market, in terms of data protection. But one of the things that we're hearing is they want to move to Cloud. And so there's a number of capabilities that they're asking us for help with. Things like S3, things like EC2, and RDS. And so over the last, I'll say four or five years, we've been doing more and more together with AWS in, I'll say, two big categories. One is, how do we help them send their data to the Cloud? And we've done that in a very significant way. We support obviously tiering data into S3, but not just S3. We support S3, and S3 Glacier, and S3 Glacier Deep Archive. And more importantly than ever, we do it with immutability because customers are asking for security. 
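The combination Danny describes, tiering backups through the S3 storage classes while keeping them immutable, maps onto two standard S3 building blocks: lifecycle transitions and S3 Object Lock. As a rough sketch (the bucket name, prefix, and retention periods here are hypothetical, and Veeam manages its tiering itself rather than through raw lifecycle rules), the equivalent configuration with boto3 might look like:

```python
# Illustrative sketch only: bucket name, prefix, and retention values are
# hypothetical. This shows the raw S3 features behind the tiering and
# immutability discussed above, not Veeam's actual implementation.

lifecycle_config = {
    "Rules": [{
        "ID": "tier-backups",
        "Status": "Enabled",
        "Filter": {"Prefix": "backups/"},
        "Transitions": [
            {"Days": 30, "StorageClass": "GLACIER"},        # S3 Glacier
            {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},  # S3 Glacier Deep Archive
        ],
    }]
}

object_lock_config = {
    # Object Lock must be enabled when the bucket is created.
    "ObjectLockEnabled": "Enabled",
    "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},  # immutable for 90 days
}

# The corresponding boto3 calls (shown but not executed here):
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="example-backup-bucket", LifecycleConfiguration=lifecycle_config)
# s3.put_object_lock_configuration(
#     Bucket="example-backup-bucket", ObjectLockConfiguration=object_lock_config)
```

With a default retention in COMPLIANCE mode, backup objects cannot be deleted or overwritten until the retention period expires, which is the immutability ransomware-conscious customers are asking for.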
So a big category of what we're working on is making sure that we can store data and we can do it securely. Second big category that we get asked about is "Help us to protect the Cloud-Native Workloads." So they have workloads running in EC2 and RDS, and EFS, and EKS, and all these different services needing Cloud-Native Data Protection. So we're very focused on solving those problems for our customers. >> You know, James, it's interesting. I was out at the 15th anniversary of S3 in Seattle, in September. I was talking to Mai-Lan. Remember we used to talk about gigabytes and terabytes, but things have changed quite dramatically, haven't they? What's your take on this topic? >> Well, they sure have. We've seen the exponential growth of data worldwide, and that's made managing backups more difficult than ever before. We're seeing traditional methods like tape libraries and secondary sites fall behind, and many organizations are moving more and more of their workloads to the Cloud. They're extending backup targets to the Cloud as well. AWS offers the most storage services, data transfer methods and networking options, with unmatched durability, security and affordability. And customers who are moving their Veeam backups to AWS get all those benefits with a cost-effective offsite storage platform, providing physical separation from on-premises primary data with pay-as-you-go economics, no upfront fees or capital investments, and near zero overhead to manage. AWS and APN partners like Veeam are helping to build secure, efficient, cost-effective backup and restore solutions using the products you know and trust, with the scale and reliability of the AWS Cloud.
And we started to sort of chit-chat about how that's going to change and what their vision was. Well, back in 2020, you purchased Kasten, you formed the Veeam KBU- the Kubernetes Business Unit. What was the rationale behind that acquisition? And then James, I'm going to get you to talk a little bit about modern apps. But Danny, start with the rationale behind the Kasten acquisition. >> Well, one of the things that we certainly believe is that the next generation of infrastructure is going to be based on containers, and there's a whole number of reasons for that. Things like scalability and portability. And there's a number of significant value-adds. So back in October of last year, in 2020, as you mentioned, we acquired Kasten. And since that time we've been working, through Kasten and from Veeam, to add more capabilities and services around AWS. For example, we supported the Bottlerocket launch they just did, and actually EKS Anywhere. And so we're very focused on making sure that our customers can protect their data no matter whether it's a Kubernetes cluster, or whether it's on-premises in a data center, or if it's running up in the Cloud in EC2. We give this consistent data management experience, including, of course, the next generation of infrastructure that we believe will be based on containers. >> Yeah. You know, James, I've always noted to our audience that, "Hey AWS, they provide a rich set of primitives and API's that ISV's like Veeam can take advantage of." But I wonder if you could talk about your perspective, maybe what you're seeing in the ecosystem, maybe comment on what Veeam's doing. Specifically containers, app modernization in the Cloud, the evolution of S3 to support all these trends. >> Yeah. Well, it's been great to see Veeam expand support for more and more AWS services to help joint customers protect their data. Especially since Veeam stores their data in Amazon S3 storage classes.
And over the last 15 years, S3 has helped companies around the world optimize their work, so I'd be happy to share some insights into that with you today. When you think about S3, well, you can find virtually every use case across all industries running on S3. That ranges from backup, to (indistinct) data, to machine learning models, the list goes on and on. And one of the reasons is because S3 provides industry leading scalability, availability, durability, security, and performance. Those are characteristics customers want. To give you some examples, S3 stores exabytes of data across millions of hard drives and trillions of objects around the world, and regularly peaks at millions of requests per second. S3 can process in a single region over 60 terabytes a second. So in summary, it's a very powerful storage offering. >> Yeah, indeed. So you guys are always talking about, you know, working backwards, the customer centricity. I think frankly that AWS sort of changed the culture of the entire industry. So, let's talk about customers. Danny, do you have an example of a joint customer? Maybe how you're partnering with AWS to try to address some of the challenges in data protection. What are customers seeing today? >> Well, we're certainly seeing that migration towards the Cloud, as James alluded to today. And actually, if we're talking about Kubernetes, actually there's a customer that I know of right now, Leidos. They're a Fortune 500 information technology company. They deal in the engineering and technology services space, and focus on highly regulated industries. Things like defense and intelligence in the civil space. And healthcare in these very regulated industries. Anyway, they decided to make a big investment in continuous integration, continuous development.
There's a segment of the industry called portable DevSecOps, and they wanted to build infrastructure as code that they could deploy services, not in days or weeks or months, but they literally wanted to deploy their services in hours. And so they came to us, and with Kasten K10 actually around Kubernetes, they created a service that could enable them to do that. So they could be fully compliant, and they could deliver the services in, like I say, hours, not days or months. And they did that all while delivering the same security that they need in a cost-effective way. So it's been a great partnership, and that's just one example. We see these all the time, customers who want to combine the power of Kubernetes with the scale of the Cloud from AWS, with the data protection that comes from Veeam. >> Yes, so James, you know at AWS you don't get dinner if you don't have a customer example. So maybe you could share one with us. >> Yeah. We do love working backwards from customers and Danny, I loved hearing that story. One customer leveraging Veeam and AWS is Maritz. Maritz provides business performance solutions that connect people to results, ensuring brands deliver on their customer promises and drive growth. Recently Maritz moved over a thousand VM's and petabytes of data into AWS, using Veeam. Veeam Backup for AWS enables Maritz to protect their Amazon EC2 instances with the backup of the data in the Amazon S3 for highly available, cost-effective, long-term storage. >> You know, one of the hallmarks of Cloud is strong ecosystem. I see a lot of companies doing sort of their own version of Cloud. I always ask "What's the partner ecosystem look like?" Because that is a fundamental requirement, in my view anyway, and attribute. And so, a big part of that, Danny, is channel partners. And you have a 100 percent channel model. And I wonder if we could talk about your strategy in that regard. Why is it important to be all channel? 
How do consulting partners fit into the strategy? And then James, I'm going to ask you what's the fit with the AWS ecosystem. But Danny, let's start with you. >> Sure, so one of the things that we've learned, we're 15 years old as well, actually. I think we're about two months older, or younger I should say, than AWS. I think their birthday was in August, ours was in October. But over that 15 years, we've learned that our customers enjoy the services, and support, and expertise that comes from the channel. And so we've always been a 100 percent channel company. And so one of the things that we've done with AWS is to make sure that our customers can purchase both how and when they want through the AWS Marketplace. They have a program called Consulting Partner Private Offers, or CPPO, I think is what it's known as. And that allows our customers to consume through the channel, but with the terms and bill that they associate with AWS. And so it's a new route-to-market for us, but we continue to partner with AWS in the channel programs as well.
So my question to you is, where does Veeam fit on that spectrum, and specifically what Cloud-Native Services are you leveraging on AWS? And maybe what have been some outcomes of those efforts, if in fact that's what you're doing? And then James, I have a follow-up for you. >> Sure. So the outcomes clearly are just more success, more scale, more security. All the things that James is alluding to, that's true for Veeam and it's true for our customers. And so if you look at the Cloud-Native capabilities that we protect today, certainly it began with EC2. So we run things in the Cloud in EC2, and we wanted to protect that. But we've gone well beyond that today: we protect RDS, we protect EFS- the Elastic File System. We talked about EKS- the Elastic Kubernetes Service, ECS. So there's a number of these different services that we protect, and we're going to continue to expand on that. But the interesting thing is in all of these, Dave, when we do data protection, we're sending it to S3, and we're doing all of that management, and tiering, and security that our customers know and love and expect from Veeam. And so you'll continue to see these types of capabilities coming from Veeam as we go forward. >> Thank you for that. So James, as we know, S3 was the very first service offered in 2006 on the AWS Cloud. As I said, theCUBE was out in Seattle in September. It was a great, you know, a little semi-hybrid event. But so over the decade and a half, you really expanded the offerings quite dramatically. Including a number of, you've got on-premises services, things like Outposts. You've got other services with "Wintery" names. How have you seen partners take advantage of those services? Is there anything you can highlight maybe that Veeam is doing that's notable? What can you share? >> Yeah, I think you're right to call out that growth. We have a very broad and rich set of features and services, and we keep growing that.
Almost every day there's a new release coming out, so it can be hard to keep up with. And Veeam has really been listening and innovating to support our joint customers. Danny called out a number of the ways in which they've expanded their support. Within Amazon S3, I want to call out their support for our S3 Standard-Infrequent Access, S3 One Zone-Infrequent Access, S3 Glacier, and S3 Glacier Deep Archive storage classes. And they also support other AWS storage services like AWS Outposts, AWS Storage Gateway, AWS Snowball Edge, and the cold-themed storage offerings. So absolutely a broad set of support there. >> Yeah. There's those, winter is coming. Okay, great guys, we're going to leave it there. Danny, James, thanks so much for coming on theCUBE. Really good to see you guys.

Published Date : Nov 30 2021



Joel Dedrick, Toshiba | CUBEConversation, February 2019


 

(upbeat music) >> From our studios, in the heart of Silicon Valley, Palo Alto, California, this is a Cube Conversation. >> Hi, I'm Peter Burris, and welcome again, to another Cube Conversation from our studios here in beautiful Palo Alto, California. With every Cube Conversation, we want to bring smart people together, and talk about something that's relevant and pertinent to the industry. Now, today we are going to be talking about the emergence of new classes of cloud provider, who may not be the absolute biggest, but are nonetheless crucial in the overall ecosystem of how they're going to define new classes of cloud services to an expanding array of enterprise customers who need that. And to have that conversation, and some of the solutions that class of cloud service provider is going to require, we've got Joel Dedrick with us today. Joel is the Vice President and General Manager of Networks Storage Software, Toshiba Memory America. Joel, welcome to theCube. >> Thanks, very much. >> So let's start by, who are you? >> My name's Joel Dedrick, I'm managing a new group at Toshiba Memory America, involved with building software that will help our customers create a cloud infrastructure that's much more like those of the Googles and Amazons of the world. But without the enormous teams that are required if you're building it all yourself. >> Now, Toshiba is normally associated with a lot of hardware. The software angle is, how does software play into this? >> Well, Flash is changing rapidly, more rapidly than maybe the average guy on the street realizes, and one way to think about this is inside of a SSD there's a processor that is not too far short of the average Xeon in compute power, and it's busy. So there's a lot more work going on in there than you might think.
We're really bringing that up a level and doing that same sort of management across groups of SSDs to provide a network storage service that's simple to use and simple to understand, but under the hood, we're pedaling pretty fast. Just as we are today in the SSDs. >> So the problem that I articulated up front was the idea that we're going to see, as we get greater specialization in enterprise needs from cloud, greater numbers of different classes of cloud service provider. Whether that be SaaS, or whether that be by location, by different security requirements, whatever else it might be. What is the specific issue that this emerging class of cloud service provider faces as they try to deliver really high quality services to these new, more specialized end users? >> Well, let me first kind of define terms. I mean, cloud service provider can mean many things. In addition to someone who sells infrastructure as a service or platform as a service, we can also think about companies that deliver a service to consumers through their phone, and have a data center backing that, because of the special requirements of those applications. So we're serving that panoply of customers. They face a couple of issues that are a result of the trajectory of Flash and storage of late. And one of those is that we as Flash manufacturers have an innovator's dilemma, that's a term we use here in the valley, that I think most people will know. Our products are too good, they're too big, they're too fast, they're too expensive, to be a good match to a single compute node. And so you want to share them. And so the game here is, can we find a way to share this really performant, you know, this million-IOPS dragon across multiple computers without losing that performance. So that's sort of step one, is how do we share this precious resource. Behind that is even a bigger one, that takes a little longer to explain.
And that is, how do we optimize the use of all the resources in the data center in the same way that the Googles and Amazons do, by moving work around between machines in a very fluid and very rapid way. To do that, you have to have the storage visible from everywhere, and you have to be able to run any instance anywhere. That's a tall order, and we don't solve the whole problem, but we're a necessary step. And the step we provide is, we'll take the storage out of the individual compute nodes and serve it back to you over your network, but we won't lose the performance that you're used to from having it locally attached. >> Okay, so let's talk about the technical elements required to do this. Describe from the SSD, from the Flash node, up. I presume it's NVME? >> Um hm, so, NVME, I'm not sure if all of our listeners today really know how big a deal that is. There have been two block storage command sets- sets of fundamental commands that you give to a block storage device- in my professional lifetime. SCSI was invented in 1986, back when high performance storage was two hard drives attached to your ribbon cable in your PC. And it's lasted up until now, and it's still, if you go to a random data center, and take a random storage wire, it's going to be transporting the SCSI command set. NVME, what, came out in 2012? So 25 years later, the first genuinely new command set. There's an alphabet soup of transports. The interfaces and formats that you can use to transport SCSI around would fill pages, and we would sort of tune them out, and we should. We're now embarking on that same journey again, except with a command set that's ideal for Flash. And we've sort of given up on, or left behind, the need to be backward compatible with hard discs. And we said, let's build a command set and interface that's optimum for this new medium, and then let's transport that around.
NVME over Fabrics is the first transport for the NVME command set, and so what we're doing is building software that allows you to take a conventional X86 compute node with a lot of NVME drives and wrap our software around it and present it out to your compute infrastructure, and make it look like locally attached SSDs, at the same performance as locally attached SSDs, which is the big trick, but now you get to share them optimally. We do a lot of optimal things inside the box, but they ultimately don't matter to customers. What customers see is, I get to have the exact size and performance of Flash that I need at every node, for exactly the time I need it. >> So I'm a CTO at one of these emerging cloud companies, I know that I'm not going to be adding a million machines a year, maybe I'm only going to be adding 10,000, maybe I'm only adding 50,000, 100,000. So I can't afford the engineering staff required to build my own soup-to-nuts set of software. >> You can't roll it all yourself. >> Okay, so, how does this fit into that? >> This is the assembly kit for the lowest layer of that. We take the problem of turning raw SSDs into a block storage service and solve it for you. We have a very sharp line there. We aren't trying to be a filer, or we're not trying to be EMC here. It's a very simple, but fast and rugged, storage service box. It interfaces to your provisioning system, to your orchestration system, to your telemetry systems, and no two of those are alike. So there's a fair amount of customization still involved, but we stand ready to do that. You can Tinker Toy this together yourself. >> Toshiba. >> Yeah, Toshiba does, yes. So, that's the problem we're solving. Is we're enabling the optimum use of Flash, and maybe subtly, but more importantly in the end, we're allowing you to dis-aggregate it, so that you no longer have storage pinned to a compute node, and that enables a lot of other things that we've talked about in the past.
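The "looks local, is actually remote" trick Joel describes is what an NVMe over Fabrics host connection provides. As an illustrative sketch (the NQN, address, and choice of transport are made-up placeholders, and this shows generic nvme-cli usage rather than Toshiba's actual tooling), a provisioning agent might compose the standard host-side commands like this:

```python
def nvmeof_connect_cmds(nqn: str, addr: str, port: int = 4420, transport: str = "tcp"):
    """Compose the nvme-cli commands a host would run to discover and
    attach a remote NVMe-oF namespace so it appears as a local block device."""
    discover = f"nvme discover -t {transport} -a {addr} -s {port}"
    connect = f"nvme connect -t {transport} -n {nqn} -a {addr} -s {port}"
    return [discover, connect]

# Hypothetical target: an NQN and address a provisioning system might hand out.
cmds = nvmeof_connect_cmds("nqn.2019-02.example:vol0", "192.0.2.10")
for c in cmds:
    print(c)
# After 'nvme connect' succeeds on a real host, the remote namespace shows up
# as a local device node (e.g. /dev/nvme1n1), indistinguishable to applications
# from a locally attached SSD.
```

The value a layer like the one described above adds is doing this attach/detach dance automatically, per instance, at data-center scale, rather than an administrator running these commands by hand.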
>> Well, that's a big feature of the cloud operating model, is the idea that any application can address any resource and any resource can address any application. And you don't end up with dramatic or significant barriers in the infrastructure, is how you provision those instances and operate those instances. >> Absolutely, the example that we see all the time, and the service providers that are providing some service through your phone, is they all have a time of day rush, or a Christmas rush, some sort of peaks to their work loads, and how do they handle the peaks, how do they handle the demand peaks? Well today, they buy enough compute hardware to handle the peak, and the rest of the year it sits idle. And this can be 300% pretty easily, and you can imagine the traffic to a shopping site Black Friday versus the rest of the year. If the customer gets frustrated and goes away, they don't come back. So you have data centers worth of machines doing nothing. And then over on the other side of the house you have the machine learning crew, who could use infinite compute resource, but they don't have a time demand, it just runs 24/7. And they can't get enough machines, and they're arguing for more budget, and yet we have 100s of 1,000s of machines doing nothing. I mean, that's a pretty big piece of bait right there. >> Which is to say that the ML guys can't use the retail guys' resources and the retail resources can't use the ML, and what we're trying to do is make it easier for both sides to be able to utilize the resources that are available on both sides. >> Exactly so, exactly so, and that requires more than, one of the things that requires is any given instance's storage can't be pinned to some compute node. Otherwise you can't move that instance. It has to be visible from anywhere. There's some other things that need to work in order to move instances around your data center under load, but this is a key one, and it's a tough one.
And it's one that, to solve it without ruining performance, is the hard part. We've had, network storage isn't a new thing, that's been goin' on for a long time. Network storage at the performance of a locally mounted NVME drive is a tough trick. And that's the new thing here. >> But it's also a tool kit, so that, that, what appears to be a locally mounted NVME drive, even though it may be remote, can also be oriented into other classes of services. >> Yes >> So how does this, for example, I'm thinking of Kubernetes clusters: stateless, still having storage that's really fast, still really high performin', very reliable, very secure. How do you foresee this technology supporting and even catalyzing changes to that Kubernetes, that Docker-class container world of workloads? >> Sure, so for one, we implement the interface to Kubernetes. And Kubernetes is a rapidly moving target. I love their approach. They have a very fast version clock. Every month or two there's a new version. And their support attitude is, if you're not within the last version or two, don't call. You know, keep up, this is. And that's sort of not the way the storage world has worked. So our commitment is to connect to that, and make that connection stay put, as you follow a moving target. But then, where this is really going is the need for really rapid provisioning. In other words, it's not the model of the IT guy sitting at a keyboard attaching a disc to a stack of machines that's running some application, and coming back in six months to see if it's still okay. As we move from containerized services to serverless kind of ideas. In the serverless world, the average lifespan of an application's 20 seconds. So we better spool it up, load the code, get its state, run, and kill it pretty quickly, millions of times a minute. And so, you need to be light of foot to do that. So we've poured a lot of energy, behind the scenes, into making software that can handle that sort of a dynamic environment.
So how does this, the resource that allows you to present a distant NVME drive as though it were mounted locally, how does that catalyze other classes of workloads? Or how does that catalyze new classes of workloads? You mentioned ML, are there other workloads that you see on the horizon that will turn into services from this new class of cloud provider? >> Well, I think one big one is the serverless notion. And to digress on that a little bit. You know, we went from the classic enterprise, where the assignment of work to machines lasts for the life of the machine. That group of machines belong to engineering, those are accounting machines, and so on. And no IT guy in his right mind would think of running engineering code on the accounting machine or whatever. In the cloud we don't have a permanent assignment there anymore. You rent a machine for a while, and then you give it back. But the user's still responsible for figuring out how many machines or VMs he needs. How much storage he needs, and doing the calculation, and provisioning all of that. In the serverless world, the user gives up all of that. And says, here's the set of calculations I want to do, trigger it when this happens, and you, Mr. Cloud Provider, figure out does this need to be sharded out 500 ways or 200 ways to meet my performance requirements. And as soon as these are done, turn 'em back off again, on a timescale of 10ths of seconds. And so, what we're enabling is the further movement in the direction of taking the responsibility for provisioning and scaling out of the user's hands and making it automatic. So we let users focus on what they want to do, not how to get it done. >> This really is not an efficiency play, when you come right down to it. This is really changing the operating model, so new classes of work can be performed, so that the overall computer infrastructure, the overall infrastructure becomes more effective and matches to the business needs better. >> It's really both.
There's a tremendous efficiency gain, as we talked about with the ML versus the marketplace. But there's also things you just can't do without an infrastructure that works this way, and so, there's an aspect of efficiency and an aspect of, man, this is just something we have to do to get to the next level of the cloud. >> Excellent, so do you anticipate this portends some changes to Toshiba's relationship with different classes of suppliers? >> I really don't. Toshiba Memory Corporation is a major supplier of both Flash and SSDs, to basically every class of storage customer, and that's not going to change. They are our best friends, and we're not out to compete with them. We're serving really an unmet need right now. We're serving a relatively small group of customers who are cloud first, cloud always. They want to operate in the sort of cloud style. But they really can't, as you said earlier, they can't invent it all soup to nuts with their own engineering, they need some pieces to come from outside. And we're just trying to fill that gap. That's the goal here. >> Got it. Joel Dedrick, Vice President and General Manager, Networks Storage Software, Toshiba Memory America. Thanks very much for being on theCube. >> My pleasure, thanks. >> Once again, this is Peter Burris, it's been another Cube Conversation, until next time.
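The "sharded out 500 ways or 200 ways" decision Joel described is, at bottom, a capacity calculation the platform makes on the user's behalf. A toy sketch (the request rates and per-shard throughput below are invented purely for illustration):

```python
import math

def shards_needed(peak_requests_per_sec: float, per_shard_capacity: float) -> int:
    """How many instances/shards a platform must spin up to absorb a burst,
    rounding up so the peak is always covered."""
    return math.ceil(peak_requests_per_sec / per_shard_capacity)

# Hypothetical numbers: bursts hitting shards that each handle 2,000 req/s.
print(shards_needed(1_000_000, 2_000))  # -> 500
print(shards_needed(400_000, 2_000))    # -> 200
```

The point of the serverless model is that this arithmetic, plus attaching storage to each of those short-lived shards, happens automatically on a timescale of seconds, which is why storage can't stay pinned to any one compute node.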

Published Date : Feb 28 2019


SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Joel | PERSON | 0.99+
Peter Burris | PERSON | 0.99+
2012 | DATE | 0.99+
20 seconds | QUANTITY | 0.99+
Toshiba | ORGANIZATION | 0.99+
Joel Dedrick | PERSON | 0.99+
1986 | DATE | 0.99+
100s | QUANTITY | 0.99+
500 ways | QUANTITY | 0.99+
February 2019 | DATE | 0.99+
200 ways | QUANTITY | 0.99+
Toshiba Memory America | ORGANIZATION | 0.99+
Googles | ORGANIZATION | 0.99+
300% | QUANTITY | 0.99+
two | QUANTITY | 0.99+
Amazons | ORGANIZATION | 0.99+
Palo Alto, California | LOCATION | 0.99+
today | DATE | 0.99+
six months | QUANTITY | 0.99+
both sides | QUANTITY | 0.99+
first | QUANTITY | 0.99+
10,000 | QUANTITY | 0.99+
Toshiba Memory Corporation | ORGANIZATION | 0.99+
25 years later | DATE | 0.98+
Black Friday | EVENT | 0.98+
both | QUANTITY | 0.98+
10ths of seconds | QUANTITY | 0.98+
one | QUANTITY | 0.97+
Saas | ORGANIZATION | 0.96+
Silicon Valley | LOCATION | 0.96+
Every month | QUANTITY | 0.93+
50,000, 100,000 | QUANTITY | 0.92+
Flash | ORGANIZATION | 0.92+
EMC | ORGANIZATION | 0.91+
two hard drives | QUANTITY | 0.9+
Networks Storage Software | ORGANIZATION | 0.89+
millions of times a minute | QUANTITY | 0.88+
one way | QUANTITY | 0.88+
million machines a year | QUANTITY | 0.88+
first transport | QUANTITY | 0.87+
single compute | QUANTITY | 0.83+
Christmas | EVENT | 0.82+
Cloud Provider | ORGANIZATION | 0.81+
Kubernetes | TITLE | 0.78+
Flash | TITLE | 0.78+
two block storage command sets | QUANTITY | 0.77+
step one | QUANTITY | 0.75+
NVME | TITLE | 0.75+
1,000s of machines | QUANTITY | 0.75+
Cube | ORGANIZATION | 0.72+
couple | QUANTITY | 0.63+
NVME | ORGANIZATION | 0.62+
Cube Conversation | EVENT | 0.6+
SCSI | TITLE | 0.57+
Kubernetes | ORGANIZATION | 0.49+
CUBEConversation | EVENT | 0.49+
node | TITLE | 0.49+
President | PERSON | 0.48+
Conversation | EVENT | 0.36+

John Rydning, IDC | Western Digital the Next Decade of Big Data 2017


 

>> Announcer: Live from San Jose, California, it's theCUBE, covering innovating to fuel the next decade of big data. Brought to you by Western Digital. >> Hey, welcome back everybody. Jeff Frick, here with theCUBE. We are at the Western Digital Headquarters in San Jose, California. It's the Al-Mady Campus, a historic campus that's had a lot of great innovation, especially in hard drives, for years and years. This event's called Innovating to Fuel the Next Decade of Big Data. And we're excited to have a big brain on. We like to get smart people who've been watching this story for a while and can give us a little bit of historical perspective. It's John Rydning. He is the Research Vice President for Hard Drives at IDC. John, welcome. >> Thank you, Jeff. >> Absolutely. So, what is your take on today's announcement? >> I think it's a very meaningful announcement, especially when you consider that the previous big technology announcement for the industry was helium, about four or five years ago. But, really, the last big technology announcement prior to that was back in 2005, 2006, when the industry announced making the transition to what they called at that time "Perpendicular Magnetic Recording." And when that was announced, it was kind of a similar problem at that time in the industry that we have today, where the industry was just having a difficult time putting more data on each disc inside that drive. And so they kind of hit this technology wall. They announced Perpendicular Magnetic Recording, and it really put them on a new S curve in terms of their ability to pack more data on each disc. And just to put it in some perspective: after they announced Perpendicular Magnetic Recording, the capacity per disc increased about 30% a year for about five years, and then over, really, a ten year period, increased by an average of about 20% a year. And so in today's announcement, I see a lot of parallels to that.
You know, back when Perpendicular Magnetic Recording was announced, the capacity per platter was growing very slowly. That's where we are today. And with this announcement of MAMR technology, the direction that Western Digital's choosing really could put the industry on a new S curve in terms of putting more storage capacity on each one of those discs. >> It's interesting. Always reminds me kind of back to the OS battles between Microsoft and Intel. Right? Intel would come out with a new chip, and then Microsoft would make a bigger OS, and they'd go back and forth and back and forth. >> John: Yeah, that's very >> And we're seeing that here, right? Cuz the demands for the data are growing exponentially. I think one of the numbers that was thrown out earlier today is that the data thrown off by people and the data thrown off by machines is so exponentially larger than the data thrown off by business, which has been kind of the big driver of IT spend. And it's really changing. >> It's a huge fundamental shift. It really is >> They had to do something. Right? >> Yeah, the demand for storage capacity by these large data centers is just phenomenal, and yet at the same time, they don't want to just keep building new data center buildings and putting in more and more racks. They want to put more storage density in that footprint, inside that building. So, that's what's really pushing the demand for these higher capacity storage devices. They want to really increase the storage capacity per cubic meter. >> Right, right. >> Inside these data centers. >> It's also just fascinating that our expectation is that they're going to somehow pull it off, right? Our expectation is that Moore's Law continues, things are going to get better, faster, cheaper, and bigger. But, back in the back room, somebody's actually got to figure out how to do it. And as you said, we hit these kind of seminal moments where >> Yeah, that's right.
>> You do get on a new S curve, and without that it does flatten out over time. >> You know, what's interesting though, Jeff, is that about the time Perpendicular Magnetic Recording was announced, way back in 2005, 2006, the industry was already talking about these thermal assist technologies like MAMR that Western Digital announced today. And it's always been a little bit of a question for those folks that are either in the industry or watching the industry, like IDC. And maybe even more importantly for some of the HDD industry customers. They're kind of wondering, so what's really going to be the next technology race horse that takes us to that next capacity point? And it's always been a bit of a horse race between HAMR and MAMR, and there's been this lack of clarity, kind of a huge question mark hanging over the industry, about which one it's going to be. And Western Digital certainly put a stake in the ground today that they see MAMR as that next technology for the future. >> (mumbles words) Just read a quote today (rushes through name) key alumni just took a new job. And he's got a pinned tweet at the top of his thing. And he says, "The smart man looks for ways to solve the problem, or looks at new solutions. The wise man really spends his time studying the problem." >> I like that. >> And it's really interesting here cuz it seems kind of obvious there. Heat's never necessarily a good thing with electronics, and data centers, as you mentioned, are trying to get efficiency up. There's pressure as these things have become huge energy-consuming machines. That said, they're relatively efficient compared with other ways we've been doing the compute, and the demand for this compute continues to increase, increase, increase. >> Absolutely >> So, as you kind of look forward, is there anything kind of?
Any gems in the numbers that maybe those of us at a layman level, on a first read, are missing, that we should really be paying attention to, that give us a little bit of a clue of what the future looks like? >> Well, there's a couple of major trends going on. One is that, at least for the hard drive industry, if you kind of look back the last ten years or so, a pretty significant percentage of the revenue that they've generated, and a pretty good percentage of the petabytes that they ship, have really gone into the PC market. And that's fundamentally shifting. So now it's really the data centers, so that by the time you get to 2020, 2021, about 60 plus percent of the petabytes that the industry's shipping is going into data centers, where if you look back a few years ago, 60% was going into PCs. That's a big, big change for the industry. And it's really that kind of change that's pushing the need for these higher capacity hard drives. >> Jeff: Right. >> So, that's, I think, one of the biggest shifts taking place. >> Well, the other thing that's interesting in that comment, because we know scale drives innovation better than anything, and clearly Intel microprocessors rode the PC boom to get the scale to drive the innovation. And so if you're saying now that the biggest scale is happening in the data center, then that's a tremendous force for innovation there, versus Flash, which is really piggy-backing on the growth of these jobs, because that's where it's getting its scale. So, when you look at kind of the Flash hard drive comparison, right? Obviously, Flash is the shiny new toy getting a lot of buzz over the last couple years. Western Digital has a play across the portfolio, but the announcement earlier today said you're still going to have like this 10x cost differentiation. >> Yeah, that's right. >> Even through, I think it was 2025. I don't want to say what the numbers were. Over a long period of time.
You see that kind of continuing DC&E kind of conflict between those two? Or is there a pretty clear stratification between what's going to go into Flash systems, and what's going to hard drives? >> That's a great question. So, even in the very large HyperScale data centers, we definitely see that Flash and hard disk drives are very complementary. They're really addressing different challenges, different problems, and so I think one of the charts that we saw today at the briefing really is something that we agree with strongly at IDC. Today, maybe about 7% or 8% of all of the combined HDD and SSD petabytes shipped for enterprise are SSD petabytes. And then that grows to maybe ten. >> What was it? Like 7% you said? >> 6% to 7%. >> 6% to 7%, okay. >> Yeah, so we still have 92, 93%, 94% of all petabytes, again combined HDD and SSD petabytes for enterprise, that are still HDD petabytes. And even when you get out to 2020, 2021, it's again still about 90%. We agree with what Western Digital talked about today. About 90% of the combined HDD and SSD petabytes that are shipping for enterprise continue to be HDD. So, we do see the two technologies as very complementary. You talked about SSDs kind of getting their scale on PCs, and that's true. They really are going to quickly continue to become a bigger slice of the storage devices attached to new PCs. But, in the data center you really need that bulk storage capacity, low cost capacity. And that's where we see that the two, SSDs and HDDs, are going to live together for a long time. >> Yeah, and as we said, the complementary nature of the two different applications: they're very different. You need the big data to build the models, to run the algorithms, to do stuff. But, at the same time, you need the fast data that's coming in. You need the real time analytics to make modifications to the algorithms and learn from the algorithms >> That's right, yeah.
It's the two of those things together that are a one-plus-one-makes-three type of solution. >> Exactly, and especially to address latency. Everybody wants their data fast. When you type something into Google, you want your response right away. And that's where SSDs really come into play. But when you do deep searches, you're looking through a lot of data that has been collected over years, and a lot of that's probably sitting on hard disc drives. >> Yeah. The last piece of the puzzle I just want you to address before we sign off: what's interesting is that it's not just the technology story, but the ecosystem story. And I thought that was really the most interesting part of the MAMR announcement: it fits in the same form factor, there's no change to the OS, there's no kind of change in the ecosystem components into which you plug this in. >> Yeah, that's right. It's just you take out the smaller drive, the 10, or the 12, or whatever, or 14 I guess is coming up, and plug in. They showed a picture of a 40 terabyte drive. >> Right. >> You know, that's the other part of the story that maybe doesn't get as much play as it should. You're playing in an ecosystem. You can't just come up with this completely independent, radical, new thing, unless it's so radical that people are willing to swap out their existing infrastructure. >> I completely agree. It can be very difficult for the customer to figure out how to adopt some of these new technologies, and actually, the hard disk drive industry has thrown a couple of technologies at their customers over the past five, six years that have been a little challenging for them to adopt. So, one was when the industry went from native 512-byte sectors to 4K sectors. Seems like a pretty small change that you're making inside the drive, but it actually presented some big challenges for some of the enterprise customers. And even the shingled magnetic recording technologies.
So, it was a way to get more data on the disc, and Western Digital certainly talked about that today. But for the customer trying to plug and play that into a system, SMR technology actually created some real challenges for them to figure out how to adopt it. So, I agree that what was shown today about the MAMR technology is definitely plug and play. >> Alright, we'll give you the last word as people are driving away today from the headquarters. They've got a bumper sticker as to why this is so important. What's it say on the bumper sticker about MAMR? >> It says that we continue to get more capacity at a lower cost. >> (chuckles) Isn't that just always the goal? >> I agree. >> (chuckles) Alright, well thank you for stopping by and sharing your insight. Really appreciate it. >> Thanks, Jeff. >> Alright. Jeff Frick here at Western Digital. You're watching theCUBE! Thanks for watching. (futuristic beat)

Published Date : Oct 12 2017

SUMMARY :

Brought to you by Western Digital. At Western Digital headquarters in San Jose, Jeff Frick talks with John Rydning, Research Vice President for Hard Drives at IDC, about Western Digital's MAMR announcement. Rydning compares it to the 2005-2006 transition to Perpendicular Magnetic Recording, which put capacity per disc on a new S curve, and notes that data centers, rather than PCs, will soon take about 60% of the petabytes the HDD industry ships. He sees SSDs and HDDs as complementary, with HDDs still about 90% of combined enterprise petabytes through 2021, and highlights that MAMR is plug-and-play for customers, unlike the 4K-sector and shingled magnetic recording transitions.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Jeff Frick | PERSON | 0.99+
John Rydning | PERSON | 0.99+
Jeff | PERSON | 0.99+
Western Digital | ORGANIZATION | 0.99+
John | PERSON | 0.99+
2005 | DATE | 0.99+
two | QUANTITY | 0.99+
7% | QUANTITY | 0.99+
90% | QUANTITY | 0.99+
2006 | DATE | 0.99+
60% | QUANTITY | 0.99+
2020 | DATE | 0.99+
two technologies | QUANTITY | 0.99+
94% | QUANTITY | 0.99+
92, 93% | QUANTITY | 0.99+
each disc | QUANTITY | 0.99+
Microsoft | ORGANIZATION | 0.99+
2021 | DATE | 0.99+
ten year | QUANTITY | 0.99+
Today | DATE | 0.99+
San Jose, California | LOCATION | 0.99+
20 | QUANTITY | 0.99+
today | DATE | 0.99+
8% | QUANTITY | 0.99+
one | QUANTITY | 0.99+
BIGIT | ORGANIZATION | 0.99+
two different applications | QUANTITY | 0.99+
40 terabyte | QUANTITY | 0.99+
25 | QUANTITY | 0.99+
MAMR | ORGANIZATION | 0.98+
ten | QUANTITY | 0.98+
IDC | ORGANIZATION | 0.98+
12 | QUANTITY | 0.98+
first read | QUANTITY | 0.98+
MAMR Technology | ORGANIZATION | 0.98+
Intel | ORGANIZATION | 0.97+
theCUBE | ORGANIZATION | 0.97+
about 20% a year | QUANTITY | 0.97+
about 30% a year | QUANTITY | 0.97+
about five years | QUANTITY | 0.97+
One | QUANTITY | 0.96+
Google | ORGANIZATION | 0.96+
about 7% | QUANTITY | 0.96+
HAMR | ORGANIZATION | 0.96+
14 | QUANTITY | 0.95+
next decade | DATE | 0.95+
five years ago | DATE | 0.95+
10 | QUANTITY | 0.94+
6% | QUANTITY | 0.94+
512 | QUANTITY | 0.93+
About 90% | QUANTITY | 0.91+
about 60 plus percent | QUANTITY | 0.91+
last couple years | DATE | 0.91+
earlier today | DATE | 0.9+
single | QUANTITY | 0.89+
six years | QUANTITY | 0.89+
few years ago | DATE | 0.88+