Jon Masters, Red Hat | AWS re:Invent 2018
(upbeat music) >> Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2018, brought to you by Amazon Web Services, Intel, and their ecosystem partners. >> Well, welcome back here, as we continue our coverage at AWS re:Invent, along with Justin Warren, I'm John Walls, we are live in Las Vegas in the Sands. Day one of our coverage here, three days, with you all week. We're with Jon Masters now, who's the chief architect at Red Hat. Jon, good to see you this afternoon. >> Thank you, nice to be here. >> First off, give me your impression of what you've seen so far on the show floor, what's the feeling you've got as you come in this week? >> Well, it's been absolutely fabulous for me. It's my first time at re:Invent, so I've not had a chance to witness firsthand the growth over the last few years, but I've heard stories that we're up to 75,000 people, some very high number this year, and the growth is absolutely amazing. Very, very passionate people, and it's very clear that the story of containerization and microservices is foremost this year, and yeah, it's just a fabulous experience to be here. >> Great, now yesterday there was an announcement from AWS about the A1 instance. Tell me a little bit about how that comes into play at Red Hat, and just your take on the release. >> Yeah, so Amazon did announce yesterday the new A1 instance type, and it's based on the Arm architecture. I think the interesting thing for me is that it's based on a processor that they themselves built, called the Graviton. You know, this is really the culmination of what we've seen in the industry in the past few years.
As the cloud vendors get bigger and have greater resources and greater capabilities, what they can do is take that self-determination aspect and say, you know what, we're now big enough, and we now understand, and we're sophisticated enough that we can say we would like to deliver this to our customers, and we don't have to wait for someone to build it for us, we can just go and do it. And so what they did is they licensed an Arm design from Arm Holdings, the actual core inside the processor, and then they built the chip themselves, contracted out to a foundry, manufactured and deployed these, and then, you know, they can snap their fingers and, surprise, now we have Arm-based instances, so it's been very interesting. >> So I'm curious, 'cause we keep getting told that software is eating the world, and yet here we are, building hardware and customized hardware. So what is it about the Arm architecture in particular, but also the fact that you can build custom silicon, what is it that Amazon, or indeed any other cloud vendor, what benefit do they get from manufacturing their own silicon here? >> That's a very good question. Well, I think there's multiple aspects to it. At the end of the day, people tell me that the future is serverless, and I remind them that there are still servers somewhere, right? So we still need to have computers. Of course, we're going to have a smaller number of very big vendors on which we rely, I mean, we're seeing that with the adoption of public cloud, and as these vendors get bigger, they have that scale that they can invest what, for them, is a modest amount of money, for anybody else it'd be a fortune, but a modest amount, and they can go and build a design. Now, with a traditional microprocessor design, you'd take a team of four people, and you would spend many hundreds of millions of dollars, maybe 300 million dollars over four years, to build a high-performance processor.
What you can do with Arm is work with Arm Holdings, which is now a part of SoftBank, to license kind of cookie-cutter, pre-made pieces, so you can license a processor core, and you can stamp it out and say, well, I'll have 16 of those in my chip. So you don't have to do the heavy lifting to design many of the building blocks, but you can integrate them together, so you get a lot of cost-efficiency there, you don't have to go and do all that design, but you can integrate building blocks. And the key piece there, I think, is the ability to choose how you want to integrate that and what you want to build. Right? And then, what we're seeing in the industry is that compute is becoming boring, right? I mean, everyone needs compute, but what are we talking about? We're talking about machine learning and GPUs and tensors and all kinds of other accelerators, right? So, the interesting thing for me is, once you've made the compute so commodity that you can just license it from somebody and stamp out your own design, what opportunity does that bring later to maybe integrate various accelerators and other hardware goodies? I don't know what Amazon plan to do, but if I had a crystal ball, I would say this is probably not the end. This is kind of the beginning of a journey, and now they will have the ability to integrate some very interesting and novel hardware advances of their own as well. >> Okay, 'cause that does sort of lead into what my next question was going to be. Which is, for a customer of Amazon, it's like, well, I don't know anything about the internals of chip design, why would I want to choose the A1 instance type over one of the other existing instance types? What's in it for me? >> Yeah, very good question. I think when Amazon announced it last night, the top line that the media picked up on first was the price benefit there, which was advertised as being 40% lower for certain workloads.
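The price argument can be made concrete with back-of-the-envelope arithmetic. All numbers below are invented for illustration, they are not real AWS prices or benchmark results: for a workload that is not compute-bound, what matters is cost per unit of work, not peak performance.

```python
def cost_per_million_requests(hourly_price, requests_per_second):
    """Dollars spent to serve one million requests on a single instance."""
    requests_per_hour = requests_per_second * 3600
    return hourly_price / requests_per_hour * 1_000_000

# Hypothetical numbers, purely illustrative -- not real AWS pricing or benchmarks.
general_purpose = cost_per_million_requests(hourly_price=0.10, requests_per_second=500)
arm_based = cost_per_million_requests(hourly_price=0.06, requests_per_second=400)

print(f"general purpose: ${general_purpose:.3f} per million requests")  # $0.056
print(f"arm based:       ${arm_based:.3f} per million requests")        # $0.042
# Even at lower peak throughput, the cheaper instance wins on cost per unit
# of work -- the trade that matters when a service is not compute-bound.
```

Under these made-up assumptions, the cheaper, lower-peak-performance instance still comes out ahead per request, which is exactly the memory-bound, web-serving case Jon describes next.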
Now, the design that they've chosen today is not about having that top-shelf performance, that top-line performance. If you want that level of performance, clearly you're going to use one of the existing instance types. But if you want to have something that is more cost-effective for at-scale deployments, maybe where you're not using all the compute resources that you have, you're more memory-bound, or you're doing web app-serving, this kind of thing, in that case, you don't really need that level of compute. You still need the instances, and so this brings your cost down when you're doing that at-scale kind of deployment. And that seems to be where they're targeting. And in addition, they're targeting, I think, developers, and those that want to invest in the Arm ecosystem, because clearly this is the beginning of a journey. I don't know exactly where they'll go next, but one could imagine that it will continue from here. >> Okay, now you are an Arm fan. >> I am. >> But you don't actually work for Arm, you work for Red Hat, so what's the Red Hat angle here? >> Well, so I'll tell you a story. >> Okay, I like stories. (men laughing) >> Me too, so back at the end of-- >> I like stories too, Jon, go ahead. >> Well, I'll spare you the long form. At the end of 2010, I was in one of my execs' offices, and I've been with Red Hat since 2006, and I had done a couple of things before that that were very useful for the company but kind of dull, so they said, "All right, you choose something exciting to work on next," right? So I held up a BeagleBoard, which is a bit like a Raspberry Pi, and I told one of my execs, "This will be a server one day." And I walked through Moore's law and the pace of innovation, and fast-forwarded and said, if these things were to happen, this technology would be in a server. Now why is that relevant to Red Hat?
Well, if you look at it from Red Hat's point of view, we don't pick winners and losers, what we do is we work with customers and what they want to adopt, but we also need to be able to respond to our customers' needs, so kind of the concern was, this Arm thing looks like it could be interesting in a few years' time, what if it is? And if it is interesting, and it's kind of a zoo, as I used to call it, a free-for-all, you know, it's kind of an embedded mess, that works fine, well "fine" in quotes, if you're building cell phone widgets and so on, because it's kind of a different ecosystem there, but if you want to have a mainstream server play, we had to have a few of us in industry come in and say, all right, this looks interesting, but let's make sure that the level of standardization is there, so that if this does take off, standard operating systems and standard software can run on it, that's why we cared, was just in case it takes off. And then fans like me, of course, want to kind of promote it as well, but I think that's why Red Hat cared. >> You know, and this is kind of off-topic, but I'm just curious, because you've talked about the acceleration of change, you've talked about innovation, you've talked about new wrinkles, and Moore's law, is it possible, or do you see that the acceleration of change is so rapid that we're almost outpacing ourselves in a way? And that change is happening so dramatically and so quickly that to make a decision on a particular solution or service is difficult because you're afraid of missing the next flavor in eight months or nine months, instead of three or five years? >> That's right, and I think there's another piece there where the cloud makes even more sense, doesn't it? 
Because if you are a customer, or an end-user, and you're deploying an app, you could say, well, this Arm thing could be interesting, I don't know, I don't want to go and build out physical infrastructure and go and pay that tax to go and figure this out, what I want to do is I just want to try it out right now. And the fabulous thing that Amazon did yesterday, that no one had done, you know, there'd been some efforts out there to provide Arm to the mainstream, right? But Amazon put a giant rubber stamp on it and said, this is good enough for us, and it works. Now anyone who's used to a workflow in EC2, they can just use exactly the same flow to spin up one of these instances and try it out. It's a 30-second thing, just try it out, see what you think. If you like it, great, if you don't, then don't use it. And because you are able to just consume it, according to whatever you want, you don't have that commitment either, yeah. >> So a test drive? >> You can test drive it, if it works well, you can adopt it. There's no obligation, and that's, I think, key to exploring new technologies as well. >> Yeah, it does require you to have that software layer on top of it that runs, we were talking before, that Red Hat has invested a lot to actually get the Red Hat software suite to run on Arm. >> That's right. >> So I'm sure that with this announcement, there's going to be a whole lot of other people suddenly discovering how to compile to the Arm architecture. (Jon laughs) That'll be fun. >> That's right, we've invested for the last eight years in this, and what we have now is a strategy we call our multi-architecture strategy. So again, we don't pick winners and losers, we have all these different architectures that we support, obviously x86, also Power, and Mainframe, and now Arm, and all these architectures are treated equally going forward, so in RHEL 8, which we just announced the beta of RHEL 8, you'll see all these architectures treated just the same. 
And so the rule for our developers is, whenever they make a change, it has to run on all the architectures equally. >> Democratize it, and then make it so that it is standard across the board. >> That's right. >> Makes sense. Jon, thanks for the time. >> Oh, absolutely. >> Good to see you here at re:Invent, and wish you all the success down the road. >> Thank you very much. >> You bet. Jon Masters joining us from Red Hat. Back with more, we are here at AWS re:Invent, we're live in Las Vegas, and Justin and I'll be back in just a moment.
Dave Tang, Western Digital & Martin Fink, Western Digital | CUBEConversation Feb 2018
(inspirational music) >> Hey, welcome back everybody. Jeff Frick here with theCUBE. We are in our Palo Alto studio. The conference season hasn't really kicked off yet into full swing, so we can do a lot more kind of intimate stuff here in the studio, for a CUBE Conversation. And we're really excited to have a many-time CUBE alum on, and a new guest, both from Western Digital. So Dave Tang, Senior Vice President at Western Digital. Great to see you again, Dave. >> Great to be here, Jeff. >> Absolutely, and Martin Fink, he is the Chief Technology Officer at Western Digital, a longtime HP alum. I'm sure people recognized you from that and the great Machine keynotes we were talking about. So great to finally meet you, Martin. >> Thank you, nice to be here. >> Absolutely, so you guys are here talking about, and we've got an ongoing program actually with Western Digital, about Data Makes Possible, right. With all the things that are going on in tech, at the end of the day, right, there's data, it's got to be stored somewhere, and then of course there's processes and things going on. We've been exploring media and entertainment, sports, healthcare, autonomous vehicles, you know. All the places that this continues to reach out, and it's such a fun project, because you guys are a rising-tide-lifts-all-boats kind of company, and we really enjoy watching this whole ecosystem grow. So I really want to thank you for that. But now there's some new things that we want to talk about that you guys are doing to continue really in that same theme, and that's the support of this RISC-V. So first off, for people who have no idea, what is RISC-V? Let's jump into that, and then kind of what is the announcement and why it's important. >> Sure, so RISC-V is, you know, the tagline is, it's an open source instruction set architecture. So what does that mean, just so people can kind of understand. So today the world is dominated by two instruction set architectures.
For the most part, what we'll call the desktop and enterprise world is dominated by the Intel instruction set architecture, and that's what's in most PCs, what people talk about as x86. And then the embedded and mobile space tends to be dominated by Arm, or by Arm Holdings. And so both of those are great architectures, but they're also proprietary, they're owned by their respective companies. So RISC-V is essentially a third entrant, we'll say, into this world, but the distinction is that it's completely open source. So everything about the instruction set is available to all, and anybody can implement it. We can all share the implementations. We can share the code that makes up that instruction set architecture, and very importantly for us, and part of our motivation, is the freedom to innovate. So we now have the ability to modify the instruction set or change the implementation of the instruction set, to optimize it for our devices and our storage and our drives, etc. >> So is this the first kind of open source play in microprocessor architecture? >> No, there's been other attempts at this. OpenSPARC kind of comes to mind, and things like that, but the ability to get a community of individuals to kind of rally around this in a meaningful way has really been a challenge. And so I'd say that right now, RISC-V presents probably the best sort of clean slate, let's take something new to the market out there. >> So open source, obviously we've seen it, you know, take over the software world, first in the operating system, which everybody is familiar with Linux, but then we see it time and time again in different applications, Hadoop. I mean, there's just a proliferation of open source projects. The benefits are tremendous. Pretty easy to ascertain in a typical software case, how is that going to be applied, do you think, within the microprocessor world? >> So it's a little bit different.
When we're talking about open source hardware or open source chips and microprocessors, you're dealing with a physical device. So even though you can open source all of the designs and the code associated with that device, you still have to fabricate it. You still have to create a physical design, and you still have to call up a fab and say, will you make this for me at these particular volumes? And so that's the difference. So there are some differences from open source software, where it's, you know, you create the bits and then you distribute those bits through the Internet and all is good. Whereas here, you still have a physical need to fabricate something. >> Now, how much more flexibility do you have, then, for the output, when you can actually impact the architecture, as opposed to just creating a custom chip design on top of somebody else's architecture? >> Well, let me give you probably a really simple, concrete example that people can internalize, of some of our motivation behind this, because that might sort of help get people through this. If you think of a very typical surveillance application, you have a camera pointed into a room or a hallway. The reality is we're basically grabbing a ton of video frames, but very few of them change, right? So the typical surveillance application is, it never changes, and you really only want to know when stuff changes. Well, today, in very simple terms, all of those frames get routed up to some big server somewhere, and that server spends a lot of time trying to figure out, okay, have I got a frame that changed? Have I got a frame that changed, and so on. And then eventually it'll find maybe two or three or five frames that have got something interesting. So in our world, what we're trying to do is to say, okay, well why don't we take that find-no-changes work and push it right down to the device?
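As an aside, the device-side filtering Martin describes can be sketched in a few lines. This is a simplified illustration (grayscale frames as flat lists of pixel values, a naive pixel-difference test), not Western Digital's actual implementation:

```python
def changed_fraction(frame, reference, pixel_tol=8):
    """Fraction of pixels differing from the reference by more than pixel_tol."""
    changed = sum(1 for a, b in zip(frame, reference) if abs(a - b) > pixel_tol)
    return changed / len(frame)

def frames_worth_shipping(frames, change_threshold=0.05):
    """Yield only frames that changed meaningfully since the last shipped one."""
    reference = None
    for frame in frames:
        if reference is None or changed_fraction(frame, reference) > change_threshold:
            yield frame
            reference = frame  # new baseline: compare future frames to this one

# Simulated feed: a static hallway, then someone walks through for one frame.
static = [10] * 100                # 100 grayscale pixels, all background
person = [10] * 60 + [200] * 40    # 40% of pixels change when someone appears
feed = [static, static, static, person, static]

shipped = list(frames_worth_shipping(feed))
print(len(shipped))  # 3: the first frame, the change, and the return to static
```

Run at the storage device, a filter like this discards the unchanging frames before they ever cross the network, which is the point of the example that follows.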
So we basically store all those frames, why don't we go figure out all the frames that mean nothing, and only ship up to that big bad server the frames that have something interesting and something you want to go analyze and do some work on? So that's a very typical application that's quite meaningful because we can do all of that work at the device. We can eliminate shipping a whole bunch of data to where it's just going to get discarded anyways, and we can allow the end customer to really focus on the data that matters, and get some intelligence. >> And that's critical as we get more and more immersed in a data-centric world, where we have realtime applications like Martin described as well as large data-centric applications like of course, big data analytics, but also training for AI systems or machine learning. These workloads are going to become more and more diverse and they're going to need more specialized architectures and more specialized processing. So big data is getting bigger and faster and these realtime fast data applications are getting faster and bigger. So we need ways to contend with that, that really go beyond what's available with general purpose architectures. >> So that's a great point because if we take this example of video frames, now if I can build a processor that is customized to only do that, that's the only thing it does. It can be very low power, very efficient, and do that one thing very very well, and the cost adder, if you want to call it that, to the device where we put it, is a tiny fraction, but the cost savings of the overall solution is significant. So this ability to customize the instruction set to only do what you need it to do for that very special purpose, that's gold. >> So I just wanted to, Dave, we've talked about a lot of interesting innovations that you guys have come up with over the years, with the helium launch. 
Which, I don't know, a couple, two, three years ago, and you were just at the MAMR event, really energy-assisted recording. So this is really kind of foundational within the storage and the media itself, and how you guys do better and take advantage of an evolving landscape. This is kind of a different play for Western Digital, this isn't a direct kind of improvement in the way that storage media and architecture works, but this is really more of, I'm going to ask you. What is the Western Digital play here? Why is this an important space for you guys in your core storage business? >> Well, we're really broadening our focus to really develop and innovate around technologies that really help the world extract more value from data as a whole, right. So it's way beyond storage these days, right. We're looking for better ways to capture, preserve, access, and transform the data. And unless you transform it, you can't really extract the value out of it, so as we see all these new applications for data and the vast possibilities for data, we really want to pave the path and help the industry innovate to bring all those applications to reality. >> It's interesting too, because one of the great topics always in computing is, you know, you've got compute and store, which has to go to which, right. And nobody wants to move a lot of data, that's hard, and it may or may not be easy to get compute. Especially these IoT applications, remote devices, tough conditions and power, which we mentioned a little bit before we went on air. So the landscape for the need for compute and store and networking is radically changing compared to either the desktop era or the consolidation we're seeing in clouds. So what's interesting here, where does the scale come, right? At the end of the day, scale always wins.
And that's where, historically, the general-purpose microprocessor architectures have dominated, where there used to be a slew of special-purpose architectures, but now there's an opportunity to bring scale to this. So how does that scale game continue to evolve? >> So it's a great point that scale does matter, and we've seen that repeatedly, and so it's a significant part of the reason why we decided to go early with a significant commitment, which was to tell the world that we were bringing scale to the equation. And so what we communicated to the marketplace is that we ship on the order of a billion processor cores a year. Most people don't realize that all of our devices, from USB sticks to hard drives, all have processors on them. And so we said, hey, we're going to basically go all-in and go big, and that translates into a billion cores that we ship every year, and we're going to go on a program to essentially migrate all of those cores to RISC-V. It'll take a few years to get there, but we'll migrate all of those cores, and so we basically were signaling to the market, hey, scale is now here. Scale is here, you can make the investments, you can go forward, you can make that commitment to RISC-V, because essentially we've got your back. >> So just to make sure we get that clear. So you guys have announced that you're going to slowly migrate, over time, your microprocessors that power your devices, to the tune of approximately a billion, with a B, cores per year, to this new architecture. >> That is correct. >> And has that started? >> So the design has started. So we have started to design and develop our first two cores, but the actual manifestation into devices is probably in the early stage of 2020. >> Okay, okay. But that's a pretty significant commitment, and again, the idea is, you explicitly said, it's a signal to the ecosystem, this is worth your investment because there is some scale here. >> Martin: That's right. >> Yeah, pretty exciting.
And how do you think it's going to open up the ability for you to do new things with your devices that before you either couldn't do or were too expensive, with dollars or power? >> Martin: So we're going to step and iterate through this, and one key point here is a lot of people tend to want to start in this processor world at the very high end, right. I'm going to go take on a Xeon processor, or something like that. It's not what we're doing. We're basically saying, we're going to go at the small end, the tiny end, where power matters. Power matters a lot in our devices, and where can we achieve the optimum combination of power and performance. So even in our small devices, like a USB stick or a client SSD or something like that, if we can reduce power consumption and even just maintain performance, that's a huge win for our customers, you know. If you think about your laptop, if I reduce the power consumption of that SSD in there so that you have longer battery life and you can get, you know, through the day better, that's a huge win, right. And I don't impact performance in the process, that's a huge win. So what we're doing right now is developing the cores based on the RISC-V architecture, and then what we're going to do, once we've got that sort of design complete, is take all of the typical client workloads and profile them on that. Then we want to find out, okay, where are the hot spots? What are the two or three things that are really consuming all the power, and how do we go optimize, by either creating two or three instructions or by optimizing the micro-architecture for an existing instruction. And then iterate through that a few times so that we really get a big win, even at the very low end of the spectrum, and then we just iterate through that with time.
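The profile-then-optimize loop Martin describes can be sketched as a simple frequency count over an execution trace. The trace format and the workload mix below are hypothetical, and real profiling would use hardware performance counters or an ISA simulator rather than a list of tuples:

```python
from collections import Counter

def hot_spots(trace, top_n=3):
    """Rank instructions by how often they occur in an execution trace."""
    counts = Counter(op for op, *_ in trace)
    total = sum(counts.values())
    return [(op, count / total) for op, count in counts.most_common(top_n)]

# Hypothetical trace of a client-SSD style workload: (mnemonic, operands...)
trace = (
    [("lw", "x5", 0, "x10")] * 400 +     # loads dominate
    [("xor", "x6", "x5", "x7")] * 300 +  # checksum-style bit manipulation
    [("sw", "x6", 0, "x11")] * 200 +
    [("addi", "x10", "x10", 4)] * 100
)

for op, share in hot_spots(trace):
    print(f"{op}: {share:.0%}")  # lw: 40%, xor: 30%, sw: 20%
# If the top two ops always occur together, a fused custom instruction could
# retire both in one step -- saving cycles, and therefore power, on every pass.
```

The two or three entries at the top of a ranking like this are exactly the candidates for a new instruction or a micro-architecture tweak in the next iteration.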
>> We're in a unique position, I think, in that the technologies that we develop span everything from the actual media where the bits are stored, whether it's solid-state flash or rotating magnetic disk and the recording heads. We take those technologies and build them all the way up into devices and platforms and full-fledged data center systems. And if we can optimize and tune all the way from that core media level all the way up through into the system level, we can deliver significantly higher value, we believe, to the marketplace. So this is the start of that, that enables us to customize command sets and optimize the flow of data so that we can allow users to access it when and where they need it. >> So I think there's another actually really cool point, which goes back to the open source nature of this, and we try to be very clear about this. We're not going to develop our cores for all applications. We want the world to develop all sorts of different cores. And so for many applications, somebody else might come in and say, hey, we've got a really cool core. So one of the companies we've partnered with and invested in, for example, is Esperanto. They've actually decided to go at the high end and do a machine learning accelerator. Hey, maybe we'll use that for some machine learning applications in our system-level performance. So we don't have to do it all, but we've got a common architecture across the portfolio, and that speaks to that sort of open source nature of the RISC-V architecture, is we want the world to get going. We want our competitors to get on board, we want partners, we want software providers, we want everybody on board.
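The "customize the command set" idea running through this exchange can be made concrete with a toy sketch. This is not real RISC-V (no actual RV32I encodings or semantics), just an illustration of an instruction set as the contract between software and hardware, with one hypothetical vendor-added instruction alongside two base ones:

```python
def run(program):
    """Execute a program against a tiny 8-register file; return final registers."""
    regs = [0] * 8                      # x0..x7, with x0 hardwired to zero
    for op, *args in program:
        if op == "addi":                # base: rd = rs1 + immediate
            rd, rs1, imm = args
            regs[rd] = regs[rs1] + imm
        elif op == "add":               # base: rd = rs1 + rs2
            rd, rs1, rs2 = args
            regs[rd] = regs[rs1] + regs[rs2]
        elif op == "popcnt":            # hypothetical vendor extension:
            rd, rs1 = args              # count the set bits in rs1
            regs[rd] = bin(regs[rs1]).count("1")
        else:
            raise ValueError(f"unknown instruction: {op}")
        regs[0] = 0                     # x0 always reads as zero
    return regs

final = run([
    ("addi", 1, 0, 6),    # x1 = 6
    ("addi", 2, 0, 7),    # x2 = 7
    ("add", 3, 1, 2),     # x3 = 13
    ("popcnt", 4, 3),     # x4 = number of set bits in 13 (0b1101) = 3
])
print(final[3], final[4])  # 13 3
```

In real silicon the same move happens in hardware: the base ISA stays standard so stock compilers and operating systems keep working, while an extension like the hypothetical `popcnt` here handles the vendor's special-purpose work.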
It's this really interesting kind of bifurcation of the market really, you don't really want to be in the big general-purpose middle anymore. That's not a great place to be, there's all kinds of specialty places where you can build the competence and with software and you know with, thank goodness for Moore's law decreasing prices of the power of the compute and now the cloud, which is basically always available. Really a exciting time to develop a myriad of different applications. >> Right and you talked before about scale in terms of points of implementation that will drive adoption and drive this to critical mass but there's another aspect of scale relative to the architecture within a single system that's also important that I think RISC-V helps to break down some barriers. Because with general purpose computer architectures, they assume a certain ratio of memory and storage and processing and bandwidth for interconnect and if you exceed those ratios, you have to add a whole new processor. Even though you don't need to need the processing capability, you need it for scale. So that's another great benefit of these new architectures is that the diversity of data needs where some are going to be large data sets, some are going to be small data sets that need need high bandwidth. You can customize and blend that recipe as you need to, you're not at the mercy of these fixed ratios. >> Yeah and I think you know it's so much of kind of what is cloud computing. And the atomic nature of it, that you can apply the ratios, the amount that you need as you need, you can change it on the fly, you can tone it up, tone it down. And I think the other interesting thing that you touched on is some of these new, which are now relatively special-purpose but are going to be general-purpose very soon in terms of machine learning and AI and applying those to different places and applying them closer to the problem. 
It's a very, very interesting evolution of the landscape, but what I want to do is kind of close on you, Martin, especially because, again, kind of back to The Machine. Not The Machine specifically, but you have been in the business of looking way down the road for a long time. So you came out, I'd looked at your LinkedIn, you retired for three months, congratulations. (laughs) Hope you got some golf in, but you came back to Western Digital, so why did you come back? And as you look down the road a ways, what do you see that excites you, that got you off that three-month little tour around the golf course, and I'm sorry I had to tease about that. But what do you see? What are you excited about that you came back and got involved in an open source microprocessor project? >> So the short answer was that I saw the opportunity at Western Digital to be where data lives. So I had spent my entire career at, we'll call it the compute or the server side of things, and the interesting thing is I had a very close relationship with SanDisk, which was acquired by Western Digital. And so I had, we'll call it, an insider view of what was possible there, and so what triggered this was essentially what we're talking about here: given that about half the world's data lands on Western Digital devices, taking that from a real position of strength in the marketplace and saying, what could we go do to make data more intelligent, rather than starting kind of at that server end? And so I saw that potential there, and it was just incredible, so that's what made me want to join. >> Exciting times. Dave, good get. (laughs) >> We're delighted to have Martin with us. >> All right, well, we look forward to watching it evolve. We've got another whole set of events we're going to do again together with Western Digital that we're excited about.
Again, covering Data Makes Possible, but, you know, kind of uplifting into the application space and a lot of the cool things that people are doing in innovation. So, Martin, great to finally meet you, and thanks for stopping by. >> Thanks for the time. >> David, as always, and I think we'll see you in a month or so. >> Right, always a pleasure, Jeff, thanks. >> All right, Martin Fink, Dave Tang, I'm Jeff Frick, you're watching theCUBE. Thanks for watching, we'll catch you next time. (inspirational music)