Kumaran Siva, AMD | VMware Explore 2022
>>Good morning, everyone. Welcome to theCUBE's day two coverage of VMware Explore 2022, live from San Francisco. Lisa Martin here with Dave Nicholson. We're excited to kick off day two of great conversations with VMware partners, customers, and its ecosystem. We've got a CUBE alumni back with us: Kumaran Siva, corporate VP of business development at AMD, joins us. Great to have you on the program in person. >>Great to be here. Yes, in person. Indeed. >>So yesterday brought a lot of announcements, and AMD had an announcement with VMware, which we will unpack. There are about 7,000 to 10,000 people here. People are excited, ready to be back, ready to hear from this community, which is so nice. Yesterday AMD announced it is optimizing the AMD Pensando distributed services card to run on VMware vSphere 8, which was announced yesterday. Tell us a little bit about that. >>Yeah, absolutely. The Pensando SmartNIC DPU provides a whole bunch of capabilities, including offloads such as encryption and decryption, and even functions like compression. But with the combination of VMware's Project Monterey and Pensando, we're able to go further and offload some of vSphere itself, integrating parts of the hypervisor into the DPU card. It's pretty interesting and pretty powerful technology, and we're excited about it. I think this could potentially bring some of the cloud value into the mainstream on-premises enterprise: in terms of manageability, in terms of being able to manage bare metal servers, and in terms of better-secured infrastructure using cloud-like techniques. >>Okay. Talk a little bit about the DPU, the data processing unit. They talked about it on stage yesterday, but help me understand that versus the CPU and GPU. >>Yeah, so it's a different point in the system, right?
So normally you'd have the CPU plus what I'd call a dumb networking card. And I say dumb, but it's just designed to process packets, put them onto PCIe, and have the CPU do all of the packet processing, the virtual switching, all of those functions. What the DPU allows you to do is offload a bunch of those functions directly onto the DPU card itself. It has a combination of special-purpose processors that are programmable in a language called P4, which is one of the key things that Pensando brings. It's a really easy-to-program, easy-to-use environment, so some of our larger enterprise customers can actually go in and do some custom coding depending on what their network infrastructure looks like. You can run things like the vSwitch in the DPU rather than having all of that done on the CPU. That frees up some of the CPU cores and makes the infrastructure run more efficiently. But probably even more importantly, it provides you with greater security: greater separation between the networking side and the CPU side. >>So that's a key point, because a lot of us remember the era of the TOE NIC, the TCP/IP offload engine. This isn't simply offloading CPU cycles. This is actually providing a sort of isolation, so that the network >>That's right. >>has intelligence that is separate from the server. Is that absolutely key? >>Yeah, that's a good way of looking at it. And if you look at some of the techniques used in the cloud, this in fact brings some of those technologies into the enterprise, right?
So where you want that level of separation and management, you're now able to utilize the DPU card. That's a really big part of the value proposition: the manageability. Not just offload, but a better network for the enterprise. >>Right. >>Can you expand on that value proposition? If I'm a customer, what's in this for me? How does this help power my multi-cloud organization? >>Yeah. So we actually have a number of these in real customer use cases today. Folks will use, for example, the compression and decompression; that's definitely an application on the storage side. But also, just as a DPU card in the mainstream, general-purpose server fleet, they're able to use the encryption and decryption to make sure their infrastructure is safe from point to point within the network. So every connection is actually encrypted, and managing those policies and orchestrating all of that is done through the DPU card. >>So what you're saying is that if you have a DPU involved, the server itself and the CPUs become completely irrelevant, and basically it's just a box of sheet metal at that point. That's my segue into talking about the value proposition of the actual AMD CPU. >>No, no, absolutely not; the CPUs are always going to be central in this. Look, having the DPU is extremely powerful, and it does allow you to have better infrastructure, but the key to having better infrastructure is to have the best CPU. >>Well, tell us about that.
>>So this is where a lot of the great value proposition between VMware and AMD comes together. VMware really allows enterprises to take advantage of these high-core-count, really modern CPUs: our EPYC line, especially our Milan-based 7003 series. To be able to take advantage of 64 cores, VMware is critical. So what they've enabled, for example, is that if you have workloads running on legacy, say five-year-old servers, you're able to take a whole bunch of those servers and consolidate them down into a single node. And the power VMware gives you is the manageability and the reliability; it brings all of those factors and allows you to take advantage of the latest-generation CPUs. We've actually done some TCO modeling where we can show that even if you have fully depreciated hardware, five-plus years old, where the acquisition cost has already been written off, just the cost of running it, the power and the administration, the OPEX costs associated with it, is greater than the cost of acquiring a smaller set of new AMD servers and consolidating those workloads onto them. Run VMware on top to provide that great user experience, especially with vSphere 8.0 and the hooks VMware has built in for AMD processors, and you see a really good result. It's a great user experience, it's more efficient, it's better for the planet, and it's also better on the pocketbook, which is a really cool thing these days, because our value in TCO translates directly into a value in terms of sustainability.
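A rough sketch of the TCO comparison Kumaran describes, comparing the yearly cost of keeping a depreciated legacy fleet against consolidating onto a few new servers, might look like the following. All of the inputs here are illustrative assumptions made up for the sketch, not AMD's or VMware's actual figures.

```python
# Toy TCO model: keep 27 depreciated servers vs. consolidate onto 5 new ones.
# Every number below is an illustrative assumption, not a vendor figure.

def annual_opex(num_servers, watts_per_server, dollars_per_kwh, admin_cost_per_server):
    """Yearly power plus administration cost for a fleet of servers."""
    hours_per_year = 24 * 365
    power_cost = num_servers * watts_per_server / 1000 * hours_per_year * dollars_per_kwh
    return power_cost + num_servers * admin_cost_per_server

# Legacy fleet: 27 five-year-old servers, fully depreciated (no acquisition cost left).
legacy = annual_opex(27, watts_per_server=400, dollars_per_kwh=0.15,
                     admin_cost_per_server=1500)

# Consolidated fleet: 5 new high-core-count servers, capex amortized over 3 years.
new_opex = annual_opex(5, watts_per_server=500, dollars_per_kwh=0.15,
                       admin_cost_per_server=1500)
new_capex_per_year = 5 * 18000 / 3  # assumed $18k per server, 3-year amortization
consolidated = new_opex + new_capex_per_year

print(f"legacy annual cost:       ${legacy:,.0f}")
print(f"consolidated annual cost: ${consolidated:,.0f}")
```

With these assumed inputs the consolidated fleet comes out cheaper per year even though the legacy hardware is "free", which is the shape of the argument being made: power plus administration on an old fleet can exceed the amortized cost of a much smaller new one.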
From energy consumption to just the cost of having the infrastructure there, it's a whole lot better. >>Talk about the sustainability front. How is AMD helping its customers achieve their sustainability goals? And are you seeing more and more customers coming to you saying, we want to understand what AMD is doing for sustainability, because it's important for us to work with vendors who have a core focus on it? >>Yeah, absolutely. Look, I'll be perfectly honest: when we first designed our CPU, we were just trying to build the biggest, baddest thing out there, in terms of having the largest number of cores and the best TCO for our customers. But it turns out that TCO involves energy consumption, and it involves the whole process of bringing down a whole bunch of nodes, a whole bunch of servers. For example, we have one calculation where we showed that 27 five-year-old servers can be consolidated down into five AMD servers. From that ratio alone you can see huge gains in terms of sustainability. Now, you asked about the sustainability conversation: I'd say not a week goes by where I'm not having a conversation with a CTO or CIO who has that as part of their corporate brand, and they want to find out how to make their infrastructure, their data center, more green. That's where we come in. >>And it's interesting, because at least in the US, money is also green. So when you talk about the cost of power, especially in places like California >>That's right. >>there's a natural incentive to drive in that direction. >>Let's talk about security.
The threat landscape has changed so dramatically in the last couple of years. Ransomware is a household word; ransomware attacks happen about once every 11 seconds, and older technology is a bit more vulnerable to internal and external threats. How is AMD helping customers address the security front, which is the board-level conversation? >>That's a great question. Look, I look at security as a layered thing. If you talk to any security expert, security isn't one component; we are an ingredient within the greater scheme of things. A few things. One is that we have partnered very closely with VMware: they have enabled our SEV technology, Secure Encrypted Virtualization, in vSphere, such that all of the memory transactions are protected. So you have security when you store on disk, you have security over the network, and you also have security in the compute: when you go out to memory, that's what this SEV technology gives you. It gives you that security in your actual virtual machine as it's running. We take security extremely seriously. With every generation you see from AMD, and you have seen us hit our cadence, we upgrade all of the security features and address the known threats that are out there. Obviously new threats keep coming at us all the time, but our CPUs just get better and better from a security stance. >>So, shifting gears for a minute: obviously we know about the pending acquisition, the announced acquisition of VMware by Broadcom. AMD's got a relationship with Broadcom independently, right? >>Of course. >>How's that relationship? >>Oh, it's a great relationship.
We work with them very closely. They have certified their NIC products and their HBA products, which are utilized in storage systems, SAN systems, those types of architectures, the hardcore storage architectures. They've been a great partner of ours for years. >>And I know we're talking about the current generation, available off the shelf, Milan-based architecture, is that right? >>That's right. Yeah. >>But if I understand correctly, maybe sometime this year you're going to be rolling out the new stuff. >>Yeah, absolutely. Later this year, and we've already talked about this publicly, we have a next-generation platform with up to 96 cores. So we're taking that TCO value to the next level: increasing performance, DDR5, CXL with memory expansion capability. Very neat, leading-edge technology. So that's going to be available. >>Is that next-gen PCIe, or has that shift already been made? >>It's next-gen: PCIe Gen 5. So we'll have that capability, and that'll be out by the end of this year. >>Okay. So in the Broadcom-VMware universe you talk about, those components going into those new slots are also factors in performance. >>Yeah, absolutely. You need the balance, right? You need to have networking, storage, and the CPU, and we're very cognizant of how to make sure these cores are fed appropriately. Because if you just put out a lot of cores but you don't have enough memory and you don't have enough I/O, that's a problem. The key to our approach to enabling performance in the enterprise is making sure the systems are balanced.
So you get the experience you've had with, let's say, your 12-core or 16-core systems, but now in a 96-core socket, maybe 192 cores total in a two-socket node. You can have that same experience in a much denser server. Or, using Milan technology today, 128 cores in a two-socket system with super good performance. It's a great experience, and it's designed to scale, especially with VMware as the infrastructure. It works great. >>Lisa's got a question to ask, I know, but bear with me one second. >>Yes, sir. >>We've actually initiated coverage of this question of whether hardware even matters anymore. So I put the question to you: do you think hardware still matters? >>Oh, I think it's going to matter even more going forward. >>But it's all cloud. Who cares? >>Just in this conversation today, right? Who cares, it's all cloud. So, there are definitely workloads moving to the cloud, and we love our cloud partners, don't get me wrong. But I've had so many conversations at this show this week about customers who cannot move to the cloud, for regulatory reasons among others. The other thing, which was new to me, is that people have depreciated their data centers, so the cost for them to just put in new AMD servers is actually very low compared to the cost of buying public cloud services. They still buy public cloud services too, and by the way, we have great AMD instances on AWS, on Google, on Azure, on Oracle; all of the major cloud providers support AMD and have good-TCO instances out there with good performance.
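The core and thread counts traded back and forth above reduce to simple arithmetic, since these parts run two threads per core with SMT. A quick sketch, using only the figures quoted in the conversation:

```python
# Core/thread math for the parts discussed (2 threads per core with SMT).
def total_threads(cores_per_socket, sockets=1, threads_per_core=2):
    return cores_per_socket * sockets * threads_per_core

milan_threads_1p = total_threads(64)   # Milan, one socket: 128 threads
milan_cores_2p = 64 * 2                # dual-socket Milan: 128 cores total
next_gen_cores_2p = 96 * 2             # 96-core next-gen part, two sockets: 192 cores

print(milan_threads_1p, milan_cores_2p, next_gen_cores_2p)  # 128 128 192
```

This is just the scaling Kumaran describes: the same software experience on a far denser node, because the per-socket core count, not the per-core behavior, is what changed.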
>>What are some of the key use cases customers are coming to AMD for? And what have you seen change in the last couple of years, with every customer needing to become a data company, needing to be data-driven? >>That's also a great question. You know, I used to get this question a lot. >>She only asks great questions. I go down into the weeds and get excited about the bits and the bytes; she asks the big ones. >>No, look, a few years ago I used to get this question all the time: what workloads run best on AMD? My answer today is unequivocally: all the workloads. We have processors that run at the highest performance per thread, per core, that you can get, and we have processors that have the highest throughput, and sometimes they're one and the same. A Milan 64-core, configured the right way with VMware vSphere, can actually give you extremely good per-core performance and extremely good throughput performance. It works well across, just as you said, databases, data management, all of those kinds of capabilities, DevOps, ERP; there's just been a whole slew of applications and use cases. We have design wins with major customers in every single industry, and these are the big guys. And they're using AMD, successfully moving over their workloads without issue, for the most part. In some cases, customers tell us they just move the workload over, turn it on, and it runs great, and they're fully happy with it. There are other cases where we've actually gotten involved and figured out this configuration or that configuration, but it's typically not a huge lift to move to AMD, and that, I think, is a key point.
And we're working together with almost all of the major ISV partners to make sure they have run, tested, and certified their software on AMD. I think we have over 250 world-record benchmarks, running across things like Oracle Database and SAP Business Suite; those types of applications run extremely well on AMD. >>Is there a particular customer story that you think really articulates the value of running on AMD in terms of enabling big business outcomes, say for a financial services organization or a healthcare organization? >>Yeah, there certainly have been, across the board. In healthcare, we've seen customers do server consolidation very effectively and then take advantage of the lower cost of operation; in some cases they're trying to run servers on each floor of a hospital, for example. We've had use cases where customers have been able to do that because of the density we provide, taking their compute closer to the edge rather than keeping it centralized. Another interesting case is FSI, financial services. We have customers that use us for general-purpose IT, and we have customers that use us for high-performance work, what we call grid computing. You have traders doing all this trading during the day, collecting tons and tons of data, and then they use our CPUs to crunch that data overnight. It's like one big supercomputer that just crunches; it's pretty incredible. That's where the density of the CPUs and the value we bring really shine, but it shows in their general-purpose fleets as well. They're able to use VMware; there are a lot of VMware customers in that space.
We love our VMware customers, and they're able to use us with HCI, hyperconverged infrastructure, with vSAN, and that works extremely well. Our enterprise customers are extremely happy with that. >>As we wrap things up here, talk about what's next for AMD, especially for AMD with VMware as VMware undergoes its potential change. >>Yeah, there's a lot we have going on. I've got to say, VMware is one of the premier companies in terms of being innovative and being able to drive new, interesting pieces of technology, and they're very experimental. We have a ton of things going with them, but certainly driving Pensando is very important to us. I believe we're just on the cusp of server consolidation becoming a big thing for us, so we're driving that together with VMware into some of these enterprises, where we can help save the earth by reducing power, and save money in terms of TCO, but also enable new capabilities. The other part of it is that this new infrastructure enables new workloads: machine learning, more data analytics, more sophisticated processing; all of that is enabled by this new infrastructure. So we're excited. We think we're on the precipice of a lot of industries moving forward to the next level of IT. It's no longer just about payroll or enterprise business management; it's about how you make your knowledge workers more productive and how you give them more capabilities. That is really what's exciting for us. >>Awesome. Kumaran,
thank you so much for joining Dave and me on the program today, talking about what AMD is doing to supercharge customers, your partnership with VMware, and what's on the forefront, the frontier. We appreciate your time and your insights. >>Great. Thank you very much for having me. >>Thank you to our guest and Dave Nicholson. I'm Lisa Martin. You're watching theCUBE, live from VMware Explore 2022 in San Francisco. Don't go anywhere; Dave and I will be right back with our next guest.
Kumaran Siva, AMD | IBM Think 2021
>>From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >>Welcome back to theCUBE's coverage of IBM Think 2021. I'm John Furrier, host of theCUBE, here for the virtual event. Kumaran Siva is with us, corporate vice president of business development at AMD. Great to see you. Thanks for coming on theCUBE. >>Nice to be here. It's an honor to be here. >>You know, love AMD, love the growth, love the processors. The EPYC 7003 series was just launched; it's out in the field. Give us a quick overview of the processor, how it's doing, and how it's going to help us in the data center and the edge. >>For sure. This is an exciting time for AMD, probably one of the most exciting times, to be honest, in my 20-plus years of working in this industry. I think I've never been as excited about a new product as I am about the third-generation EPYC processor that we just announced. The EPYC 7003 series processor is just a fantastic product. We not only have the fastest server processor in the world, the AMD EPYC 7763, but we also have the fastest CPU core, so the processor is the complete package, the complete socket. We also have the fastest core in the world with the EPYC 72F3, optimized for frequency; that one runs super fast on each core. And we also have 64 cores in the CPU. So it's addressing both what we call scale-up and scale-out. Overall it's just an enormous product line that I think will be amazing within IBM Cloud. The processor itself includes 256 megabytes of L3 cache. Cache is super important for a variety of workloads, and with the large cache size we have seen scaling in particular cloud applications, but across the board: databases, Java, all sorts of things.
This processor is also based on the Zen 3 core, which delivers about 19% more instructions per cycle relative to Zen 2, the prior generation used in the second-generation EPYC processor, which was called Rome. So this new CPU is quite a bit more capable, and it runs at a higher frequency in both the 64-core and the frequency-optimized devices. And finally, we have what we call all-in features. Rather than segmenting our product line and charging you for every little thing you turn on or off, we include everything. That includes, really importantly, security, which is becoming a big theme and something we're partnering with IBM very closely on, and then things like 128 lanes of PCIe Gen 4 and memory interfaces supporting up to four terabytes, so you can run these big, large in-memory databases. The PCIe interfaces give you lots and lots of storage capability. So all in all, a super product, and we're super excited to be working with IBM, honestly. >>Well, let's get into some of the details of this impact, because obviously it's not just one place where these processors are going to live. You're seeing a distributed surface area, core to edge. Cloud and hybrid are now in play, pretty much standard now, with multi-cloud on the horizon. Companies are going to start realizing, okay, I've got to put this to work, and I want to get more insights out of the data and the applications evolving on it. But you guys have seen some growth in the cloud with the EPYC processors. What can customers expect, and why are cloud providers choosing EPYC processors? >>You know, a big part of this is actually the fact that AMD delivers on our roadmap. We kind of do what we say and say what we do, and we deliver on time. We announced, I think it was back in August of 2019, the second-generation EPYC part, and now in March we've announced the third generation.
Very much on schedule, very much in line with expectations, and meeting the performance that we had promised the industry and our customers back then. That's a really important piece: our customers are now learning to expect performance, generation on generation, delivered on time from AMD, which is a big part of our success. The second thing is that we are a leader in terms of the core density we provide, and cloud in particular really values high density. The 64 cores is absolutely unique in the industry today, and it has the ability to be offered both in bare metal, as we have deployed in IBM Cloud, and also in virtualized environments. So it can span a lot of different use cases. You can run each core really fast, but then also scale out and take advantage of all 64 cores. Each core has two threads, up to 128 threads per socket. It's a super powerful CPU, and it has a lot of value for the cloud provider. There are actually over 400 total instance types, by the way, on AMD processors out there, and that's across all the generations, not just the third. It's starting to really proliferate; we're starting to see AMD all across the cloud. >>More cores, more threads, all goodness. I've got to ask you: I interviewed Arvind, the CEO of IBM, at a conference before he was CEO, and, you know, I know him; he's always loved cloud. But he sees it a little bit differently than just copying the clouds. He sees it as we see it unfolding here: hybrid. And so I can almost see the playbook evolving. Red Hat has an operating system; cloud and edge form a distributed system. It's got that vibe of a system architecture, with processors everywhere.
Could you give us a sense of the over an overview of the work you're doing with IBM Cloud and what a M. D s role is there? And I'm curious, could you share for the folks watching too? >>For sure. For sure. By the way, IBM cloud is a fantastic partner to work with. So, so, first off you talked about about the hybrid, hybrid cloud is a really important thing for us and that's um that's an area that we are definitely focused in on. Uh but in terms of our specific joint partnerships and we do have an announcement last year. Um so it's it's it's somewhat public, but we are working together on Ai where IBM is a is an undisputed leader with Watson and some of the technologies that you guys bring there. So we're bringing together, you know, it's kind of this real hard work goodness with IBM problems and know how on the AI side. In addition, IBM is also known for um you know, really enterprise grade, yeah, security and working with some of the key sectors that need and value, reliability, security, availability, um in those areas. Uh and so I think that partnership, we have quite a bit of uh quite a strong relationship and partnership around working together on security and doing confidential computer. >>Tell us more about the confidential computing. This is a joint development agreement, is a joint venture joint development agreement. Give us more detail on this. Tell us more about this announcement with IBM cloud, an AMG confidential computing. >>So that's right. So so what uh you know, there's some key pillars to this. One of this is being able to to work together, define open standards, open architecture. Um so jointly with an IBM and also pulling in something assets in terms of red hat to be able to work together and pull together a confidential computer that can so some some key ideas here, we can work with work within a hybrid cloud. 
We can work within the IBM cloud and to be able to provide you with, provide, provide our joint customers are and customers with uh with unprecedented security and reliability uh in the cloud, >>what's the future of processors, I mean, what should people think when they expect to see innovation? Um Certainly data centers are evolving with core core features to work with hybrid operating model in the cloud. People are getting that edge relationship basically the data centers a large edge, but now you've got the other edges, we got industrial edges, you got consumers, people wearables, you're gonna have more and more devices big and small. Um what's the what's the road map look like? How do you describe the future of a. M. D. In in the IBM world? >>I think I think R I B M M D partnership is bright, future is bright for sure, and I think there's there's a lot of key pieces there. Uh you know, I think IBM brings a lot of value in terms of being able to take on those up earlier, upper uh layers of software and that and the full stack um so IBM strength has really been, you know, as a systems company and as a software company. Right, So combining that with the Andes Silicon, uh divided and see few devices really really is is it's a great combination, I see, you know, I see um growth in uh you know, obviously in in deploying kind of this, this scale out model where we have these very large uh large core count Cpus I see that trend continuing for sure. Uh you know, I think that that is gonna, that is sort of the way of the future that you want cloud data applications that can scale across multi multiple cores within the socket and then across clusters of Cpus with within the data center um and IBM is in a really good position to take advantage of that to go to, to to drive that within the cloud. 
That income combination with IBM s presence on prem uh and so that's that's where the hybrid hybrid cloud value proposition comes in um and so we actually see ourselves uh you know, playing in both sides, so we do have a very strong presence now and increasingly so on premises as well. And we we partner we were very interested in working with IBM on the on on premises uh with some of some of the key customers and then offering that hybrid connectivity onto, onto the the IBM cloud as well. >>I B M and M. D. Great partnership, great for clarifying and and sharing that insight come, I appreciate it. Thanks for for coming on the cube, I do want to ask you while I got you here. Um kind of a curveball question if you don't mind. As you see hybrid cloud developing one of the big trends is this ecosystem play right? So you're seeing connections between IBM and their and their partners being much more integrated. So cloud has been a big KPI kind of model. You connect people through a. P. I. S. There's a big trend that we're seeing and we're seeing this really in our reporting on silicon angle the rise of a cloud service provider within these ecosystems where hey, I could build on top of IBM cloud and build a great business. Um and as I do that, I might want to look at an architecture like an AMG, how does that fit into to your view as a doing business development over at A. M. D. I mean because because people are building on top of these ecosystems are building their own clouds on top of cloud, you're seeing data. Cloud, just seeing these kinds of clouds, specialty clouds. So I mean we could have a cute cloud on top of IBM maybe someday. So, so I might want to build out a whole, I might be a cloud. So that's more processors needed for you. So how do you see this enablement? Because IBM is going to want to do that, it's kind of like, I'm kind of connecting the dots here in real time, but what's your, what's your take on that? What's your reaction? 
>>I think, I think that's I think that's right and I think m d isn't, it isn't a pretty good position with IBM to be able to, to enable that. Um we do have some very significant osD partnerships, a lot of which that are leveraged into IBM um such as Red hat of course, but also like VM ware and Nutanix. Um this provide these always V partners provide kind of the base level infrastructure that we can then build upon and then have that have that A P I. And be able to build build um uh the the multi cloud environments that you're talking about. Um and I think that, I think that's right. I think that is that is one of the uh you know, kind of future trends that that we will see uh you know, services that are offered on top of IBM cloud that take advantage of the the capabilities of the platform that come with it. Um and you know, the bare metal offerings that that IBM offer on their cloud is also quite unique um and hyper very performance. Um and so this actually gives um I think uh the the kind of uh call the medic cloud that unique ability to kind of go in and take advantage of the M. D. Hardware at a performance level and at a um uh to take advantage of that infrastructure better than they could in another cloud environments. I think that's that's that's actually very key and very uh one of the one of the features of the IBM problems that differentiates it >>so much headroom there corns really appreciate you sharing that. I think it's a great opportunity. As I say, if you're you want to build and compete. Finally, there's no with the white space with no competition or be better than the competition. So as they say in business, thank you for coming on sharing. Great great future ahead for all builders out there. Thanks for coming on the cube. >>Thanks thank you very much. >>Okay. IBM think cube coverage here. I'm john for your host. Thanks for watching. Mm
SUMMARY :
It's the With digital coverage of IBM think 2021 brought to you by IBM. It's an honor to be here. You know, love A. M. D. Love the growth, love the processors. so that the process of being the complete package to complete socket and then we also the fastest poor some growth in the cloud with the Epic processors, what can customers expect Um and you can, you know, you can run each core uh Um, and so I can almost see the playbook evolving. So we're bringing together, you know, it's kind of this real hard work goodness with IBM problems and know with IBM cloud, an AMG confidential computing. So so what uh you know, there's some key pillars to this. In in the IBM world? in um and so we actually see ourselves uh you know, playing in both sides, Thanks for for coming on the cube, I do want to ask you while I got you here. I think that is that is one of the uh you know, So as they say in business, thank you for coming on sharing. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Arvin | PERSON | 0.99+ |
Cameron Siva | PERSON | 0.99+ |
March | DATE | 0.99+ |
19% | QUANTITY | 0.99+ |
64 cores | QUANTITY | 0.99+ |
each core | QUANTITY | 0.99+ |
Each core | QUANTITY | 0.99+ |
august of 2019 | DATE | 0.99+ |
628 lanes | QUANTITY | 0.99+ |
256 megabytes | QUANTITY | 0.99+ |
last year | DATE | 0.99+ |
2020 | DATE | 0.99+ |
64 cores | QUANTITY | 0.99+ |
Nutanix | ORGANIZATION | 0.99+ |
second thing | QUANTITY | 0.99+ |
2021 | DATE | 0.99+ |
two threads | QUANTITY | 0.99+ |
second generation | QUANTITY | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
both sides | QUANTITY | 0.98+ |
One | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
third generation | QUANTITY | 0.98+ |
AMG | ORGANIZATION | 0.98+ |
Epic 7003 | COMMERCIAL_ITEM | 0.97+ |
Jenin | PERSON | 0.97+ |
Andes Silicon | ORGANIZATION | 0.97+ |
Zen three | COMMERCIAL_ITEM | 0.97+ |
third generation | QUANTITY | 0.97+ |
M. D. | PERSON | 0.94+ |
four terabytes | QUANTITY | 0.94+ |
first | QUANTITY | 0.94+ |
today | DATE | 0.94+ |
one place | QUANTITY | 0.94+ |
Epic | ORGANIZATION | 0.93+ |
Think 2021 | COMMERCIAL_ITEM | 0.92+ |
IBM cloud | ORGANIZATION | 0.92+ |
Epic 7763 | COMMERCIAL_ITEM | 0.91+ |
one | QUANTITY | 0.9+ |
jenin | PERSON | 0.9+ |
three series | QUANTITY | 0.89+ |
Epic | COMMERCIAL_ITEM | 0.88+ |
A. M. | ORGANIZATION | 0.85+ |
A. M. | PERSON | 0.85+ |
Red | PERSON | 0.83+ |
Ceo | PERSON | 0.82+ |
Mm Kumaran Siva | PERSON | 0.8+ |
about over 400 total instances | QUANTITY | 0.79+ |
64 4 | QUANTITY | 0.78+ |
john | PERSON | 0.77+ |
up to 128 threads | QUANTITY | 0.72+ |
Epic um 72 F three | COMMERCIAL_ITEM | 0.71+ |
java | TITLE | 0.7+ |
7000 | COMMERCIAL_ITEM | 0.7+ |
Epic Force | COMMERCIAL_ITEM | 0.69+ |
E gen four | COMMERCIAL_ITEM | 0.67+ |
M. D | PERSON | 0.67+ |
Kumaran Siva, AMD | IBM Think 2021
>>From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. Welcome back to theCUBE's coverage of IBM Think 2021. I'm John Furrier, host of theCUBE, here for the virtual event with Kumaran Siva, corporate vice president of business development at AMD. Great to see you. Thanks for coming on theCUBE. >>Nice to be here. It's an honor to be here. >>You know, love AMD, love the growth, love the processors. The EPYC 7003 series was just launched and is out in the field. Give us a quick overview of the processor, how it's doing, and how it's going to help us in the data center and on the edge. >>For sure. This is an exciting time for AMD, probably one of the most exciting times, to be honest, in my 20-plus years of working in this industry. I don't think I've ever been as excited about a new product as I am about the third generation EPYC processor that we just announced. The EPYC 7003 series processor is just a fantastic product. We not only have the fastest server processor in the world with the AMD EPYC 7763, but we also have the fastest CPU core, so the processor is the complete package, the complete socket. We also have the fastest core in the world with the EPYC 72F3, for frequency; that one runs super fast on each core. And then we also have 64 cores in the CPU. So it's addressing both what we call scale-up and scale-out. Overall, it's just an enormous product line that I think will be amazing within IBM Cloud. The processor itself includes 256 megabytes of L3 cache. Cache is super important for a variety of workloads, and with the large cache size we have seen scaling in particular cloud applications, but across the board: databases, Java, all sorts of things.
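The socket-level numbers just quoted (64 cores and 256 megabytes of L3 per socket) lend themselves to some quick back-of-the-envelope math. A minimal sketch; the two-threads-per-core SMT figure is assumed from the EPYC spec rather than taken from this sentence:

```python
# Back-of-the-envelope socket math for the EPYC 7003-series figures
# quoted above. Only the three constants below are inputs; the rest
# is arithmetic.

CORES_PER_SOCKET = 64
THREADS_PER_CORE = 2      # SMT, assumed from the EPYC spec
L3_MB = 256

threads_per_socket = CORES_PER_SOCKET * THREADS_PER_CORE
l3_per_core_mb = L3_MB / CORES_PER_SOCKET

print(threads_per_socket)        # 128 hardware threads per socket
print(l3_per_core_mb)            # 4.0 MB of L3 per core on average
print(2 * threads_per_socket)    # a two-socket server doubles it: 256
```

Nothing here is vendor data beyond the input constants; it just makes the per-core and per-server arithmetic explicit.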
This processor is also based on the Zen 3 core, which delivers basically 19% more instructions per cycle relative to our Zen 2; that was the prior generation core, in the second generation EPYC processor, which is called Rome. So this new CPU is actually quite a bit more capable. It also runs at a higher frequency, with both the 64-core and the frequency-optimized devices. And finally, we have what we call all-in features. Rather than segmenting our product line and charging you for every little thing you turn on or off, we include everything. That includes, really importantly, security, which is becoming a big, big theme and something that we're partnering with IBM very closely on, and then also things like 128 lanes of PCIe Gen 4 and memory interfaces that go up to four terabytes, so you can do these big, large in-memory databases. The PCIe interfaces give you lots and lots of storage capability. So all in all, super products, and we're super excited to be working with IBM on this. >>Well, let's get into some of the details on this impact, because obviously it's not just one place where these processors are going to live. You're seeing a distributed surface area, core to edge; cloud and hybrid are now in play, pretty much standard now, and multi-cloud is on the horizon. Companies are going to start realizing, okay, I've got to put this to work, and I want to get more insights out of the data and the applications that are evolving on this. But you guys have seen some growth in the cloud with the EPYC processors. What can customers expect, and why are cloud providers choosing EPYC processors? >>You know, a big part of this is actually the fact that AMD delivers upon our roadmap. We kind of do what we say and say what we do, and we deliver on time. We actually announced, I think it was back in August of 2019, the second generation EPYC part.
And then now in March, we are in the third generation: very much on schedule, very much in line with expectations, and meeting the performance that we had told the industry and told our customers we were going to meet back then. So a really super important piece is that our customers are now learning to expect performance, gen on gen and on time, from AMD, which is, I think, really a big part of our success. The second thing is that we are a leader in terms of the core density that we provide, and cloud in particular really values high density. The 64 cores is absolutely unique today in the industry, and it has the ability to be offered both in bare metal, as we have been deployed in IBM Cloud, and also in virtualized environments. So it has the ability to span a lot of different use cases. You can run each core really fast, but then also scale out and take advantage of all 64 cores. Each core has two threads, up to 128 threads per socket. It's a super powerful CPU, and it has a lot of value for the cloud provider. There are actually over 400 total instances, by the way, of AMD processors out there, and that's all the flavors, of course, not just the third generation; still, it's starting to really proliferate. We're starting to see AMD, I think, all across the cloud. >>More cores, more threads, all goodness. I've got to ask you: I interviewed Arvind, the CEO of IBM, before he was CEO, at a conference, and you know, I know him, he's always loved cloud, right? But he sees it a little differently than just copying the clouds. He sees it as we see it unfolding here, I think: hybrid. And so I can almost see the playbook evolving. You know, Red Hat has an operating system. Cloud and edge is a distributed system.
It's got that vibe of a system architecture; you've got processors everywhere. Could you give us an overview of the work you're doing with IBM Cloud and what AMD's role is there? And I'm curious, could you share it for the folks watching too? >>For sure, for sure. By the way, IBM Cloud is a fantastic partner to work with. First off, you talked about hybrid: hybrid cloud is a really important thing for us, and that's an area we are definitely focused in on. But in terms of our specific joint partnerships, we did an announcement last year, so it's somewhat public: we are working together on AI, where IBM is an undisputed leader with Watson and some of the technologies you guys bring there. So we're bringing together this real hardware goodness with IBM's progress and know-how on the AI side. In addition, IBM is also known for really enterprise-grade security, and for working with some of the key sectors that need and value reliability, security, and availability. And so within that partnership, we have quite a strong relationship around working together on security and doing confidential computing. >>Tell us more about the confidential computing. Is this a joint development agreement, a joint venture? Give us more detail on this announcement with IBM Cloud and AMD on confidential computing. >>That's right. There are some key pillars to this. One of them is being able to work together to define open standards and open architecture, jointly with IBM, and also pulling in some of the assets in terms of Red Hat, to work together and pull together confidential computing. Some key ideas here: we can work within a hybrid cloud.
We can work within the IBM Cloud, and be able to provide our joint customers, and their customers, with unprecedented security and reliability in the cloud. >>What's the future of processors? What should people expect in terms of innovation? Certainly data centers are evolving with core features to work with a hybrid operating model in the cloud. People are getting that edge relationship: basically the data center is a large edge, but now you've got the other edges, the industrial edges, the consumers, the wearables. You're going to have more and more devices, big and small. What does the roadmap look like? How do you describe the future of AMD in the IBM world? >>I think our IBM and AMD partnership is bright; the future is bright, for sure, and there are a lot of key pieces there. I think IBM brings a lot of value in terms of being able to take on those upper layers of software and the full stack; IBM's strength has really been as a systems company and as a software company. So combining that with the AMD silicon and CPU devices really is a great combination. I see growth in deploying this scale-out model, where we have these very large core-count CPUs, and I see that trend continuing for sure. I think that is the way of the future: you want cloud-native applications that can scale across multiple cores within the socket, and then across clusters of CPUs within the data center. IBM is in a really good position to take advantage of that and to drive that within the cloud, in combination with IBM's presence on-prem. And so that's where the hybrid cloud value proposition comes in.
And so we actually see ourselves playing on both sides. We have a very strong presence now, and increasingly so, on premises as well. We're very interested in partnering with IBM on premises with some of the key customers, and then offering that hybrid connectivity onto the IBM cloud as well. >>IBM and AMD: great partnership. Thanks for clarifying and sharing that insight, Kumaran, I appreciate it. Thanks for coming on theCUBE. I do want to ask you while I've got you here, kind of a curveball question if you don't mind. As you see hybrid cloud developing, one of the big trends is this ecosystem play, right? You're seeing connections between IBM and their partners becoming much more integrated. Cloud has been a big API kind of model: you connect people through APIs. There's a big trend that we're seeing, and we're seeing this really in our reporting on SiliconANGLE, of the rise of cloud service providers within these ecosystems. Hey, I could build on top of IBM Cloud and build a great business. And as I do that, I might want to look at an architecture like AMD's. How does that fit into your view, doing business development over at AMD? Because people are building on top of these ecosystems, building their own clouds on top of clouds. You're seeing data clouds, these kinds of specialty clouds. We could have a CUBE cloud on top of IBM maybe someday. So I might want to build out a whole cloud, and that's more processors needed for you. How do you see this enablement? Because IBM is going to want to do that. I'm kind of connecting the dots here in real time, but what's your take on that? What's your reaction? >>I think that's right, and I think AMD is in a pretty good position with IBM to be able to enable that.
We do have some very significant OSV partnerships, a lot of which are leveraged into IBM, such as Red Hat of course, but also VMware and Nutanix. These OSV partners provide the base-level infrastructure that we can then build upon, and then have that API and be able to build the multi-cloud environments you're talking about. And I think that's right; I think that is one of the future trends we will see, services offered on top of IBM Cloud that take advantage of the capabilities of the platform that come with it. The bare metal offerings that IBM offers on their cloud are also quite unique and very performant. So this actually gives what has been called the meta cloud the unique ability to go in and take advantage of the AMD hardware at a performance level, and to take advantage of that infrastructure better than they could in other cloud environments. I think that's actually very key, and one of the features of the IBM Cloud that differentiates it. >>So much headroom there, Kumaran; really appreciate you sharing that. I think it's a great opportunity. As I say, if you want to build and compete, find the white space with no competition, or be better than the competition. So, as they say in business, thank you for coming on and sharing. Great future ahead for all the builders out there. Thanks for coming on theCUBE. >>Thanks, thank you very much. >>Okay, theCUBE's coverage of IBM Think here. I'm John Furrier, your host. Thanks for watching.
Digging into HeatWave ML Performance
(upbeat music) >> Hello everyone. This is Dave Vellante. We're diving into the deep end with AMD and Oracle on the topic of MySQL HeatWave performance, and we want to explore the important issues around machine learning. As applications become more data-intensive and machine intelligence continues to evolve, workloads increasingly are seeing a major shift where data and AI are being infused into applications. Having a database that simplifies the convergence of transaction and analytics data, without the need to context-switch and move data out of and into different data stores, and that eliminates the need to perform extensive ETL operations, is becoming an industry trend that customers are demanding. At the same time, workloads are becoming more automated and intelligent. To explore these issues further, we're happy to have back in theCUBE Nipun Agarwal, who's the Senior Vice President of MySQL HeatWave, and Kumaran Siva, who's the Corporate Vice President of Strategic Business Development at AMD. Gents, hello again. Welcome back. >> Hello. Hi Dave. >> Thank you, Dave. >> Okay. Nipun, obviously machine learning has become a must-have for analytics offerings, and it's integrated into MySQL HeatWave. Why did you take this approach, and not the specialized-database approach that many competitors take: right tool for the right job?
But in addition to that, when we run the machine learning inside the database customers are able to leverage the same service the same hardware, which has been provisioned for OTP analytics and use machine learning capabilities at no additional charge. So from a customer's perspective, they get the benefits that it is a single database. They don't need to manage multiple services. And it is offered at no additional charge. And then as another aspect, which is kind of hard to learn which is based on the IP, the work we have done it is also significantly faster than what customers would get by having a separate service. >> Just to follow up on that. How are you seeing customers use HeatWaves machine learning capabilities today? How is that evolving? >> Right. So one of the things which, you know customers very often want to do is to train their models based on the data. Now, one of the things is that data in a database or in a transaction database changes quite rapidly. So we have introduced support for auto machine learning as a part of HeatWave ML. And what it does is that it fully automates the process of training. And this is something which is very important to database users, very important to mySQL users that they don't really want to hire or data scientists or specialists for doing training. So that's the first part that training in HeatWave ML is fully automated. Doesn't require the user to provide any like specific parameters, just the source data and the task which they want to train. The second aspect is the training is really fast. So the training is really fast. The benefit is that customers can retrain quite often. They can make sure that the model is up to date with any changes which have been made to their transaction database. And as a result of the models being up to date, the accuracy of the prediction is high. Right? So that's the first aspect, which is training. 
The second aspect is inference, which customers run once they have the models trained. And the third thing, which has perhaps been the most sought-after request from MySQL customers, is the ability to provide explanations. HeatWave ML provides explanations for any model which has been generated or trained by HeatWave ML. So these are the three capabilities: training, inference, and explanations. And this whole process is completely automated; it doesn't require a specialist or a data scientist. >> Yeah, that's nice. I mean, training is obviously very popular today. I've said inference, I think, is going to explode in the coming decade. And then, of course, explainable AI is a very important issue. Kumaran, what are the relevant capabilities of the AMD chips that are used in OCI to support HeatWave ML? Are they different from, say, the specs for HeatWave in general? >> So, actually, they aren't. And this is one of the key features of this architecture, of this implementation, that is really exciting. With HeatWave ML, you're using the same CPU; and by the way, it's a CPU, not a GPU, for all three of the functions that Nipun just talked about: inference, training, and explanation, all done on the CPU. Bigger picture, with the capabilities we bring here, we're really providing a balance between the CPU cores, memory, and the networking, and what that allows you to do is feed the CPU cores appropriately. Within the cores, we have the AVX vector extensions: with the Zen 2 and Zen 3 cores we had AVX2, and then with the Zen 4 core coming out, we're going to have AVX-512. With that balance, being able to bring in the data, utilize the high memory bandwidth, and then use the computation to its maximum, we're able to provide enough AI processing to get the job done.
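As a rough illustration of why those vector extensions matter for CPU-based inference, here is a toy sketch (not HeatWave code; the model and data are made up): scoring a batch with a linear model written as one matrix-vector product, the kind of kernel that NumPy hands off to SIMD-optimized routines, which is where AVX2/AVX-512 units do their work:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "model": logistic scorer over 8 features; entirely invented.
weights = rng.normal(size=8)
bias = 0.1
batch = rng.normal(size=(10_000, 8))   # 10k rows to score

# Scalar reference implementation: one multiply-add at a time.
def score_row(row):
    z = bias
    for w, x in zip(weights, row):
        z += w * x
    return 1.0 / (1.0 + np.exp(-z))

# Vectorized implementation: one matrix-vector product for the whole
# batch. This is the shape of work SIMD units chew through.
def score_batch(rows):
    z = rows @ weights + bias
    return 1.0 / (1.0 + np.exp(-z))

vec = score_batch(batch)
ref = np.array([score_row(r) for r in batch[:100]])
assert np.allclose(vec[:100], ref)     # same math, vectorized
print(vec.shape)  # (10000,)
```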
And then we're able to fit into that larger pipeline that we build out here with HeatWave. >> Got it. Nipun, you know, you and I, every time we have a conversation, we've got to talk benchmarks. So you've done machine learning benchmarks with HeatWave. You might even be the first in the industry to publish, you know, transparent, open ML benchmarks on GitHub. I mean, I wouldn't know for sure, but I've not seen that as common. Can you describe the benchmarks and the data sets that you used here? >> Sure. So what we did was we took a bunch of open data sets for two categories of tasks: classification and regression. We took about a dozen data sets for classification and about six for regression. To give an example, the kind of data sets we used for classification are like the airlines data set, census, bank, right? So these are open data sets. And what we did was, on these data sets, a comparison of what it would take to train using HeatWave ML. And the other service we compared with is Redshift ML. So, there were two observations. One is that with HeatWave ML, the user does not need to provide any tuning parameters, right? HeatWave ML fully generates a trained model; it figures out what are the right algorithms, what are the right features, what are the right hyperparameters, and so on. So no need for any manual intervention. Not so the case with Redshift ML. The second thing is the performance, right? So consider the performance of HeatWave ML in aggregate on these 12 data sets for classification and the six data sets for regression. On average, it is 25 times faster than Redshift ML. And note that Redshift ML in turn invokes SageMaker, right? So on average, HeatWave ML provides 25 times better performance for training. And the other point to note is that there is no need for any human intervention. It's fully automated.
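For readers who want to reproduce an aggregate "N times faster" figure from per-dataset runs, the usual tool is a geometric mean of the individual speedups; the timings below are invented placeholders, not the published results:

```python
import math

def geo_mean_speedup(baseline_secs, contender_secs):
    """Geometric mean of per-dataset speedups (baseline time / contender time)."""
    ratios = [b / c for b, c in zip(baseline_secs, contender_secs)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

# Hypothetical per-dataset training times in seconds (NOT the published numbers).
service_a = [1000.0, 800.0, 1200.0, 600.0]
service_b = [40.0, 32.0, 48.0, 24.0]   # every ratio here works out to 25x
```

A geometric mean is preferred over an arithmetic mean for ratios because one outlier dataset cannot dominate the aggregate.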
But in the case of Redshift ML, many of these data sets did not even complete in the set duration. If you look at price performance, one of the things I again want to highlight is that, because AMD does pretty well on all kinds of workloads, users are able to use the same cluster for analytics, for OLTP, or for machine learning. So there is no additional cost for customers to run HeatWave ML if they have provisioned HeatWave. But assume a user is provisioning a HeatWave cluster only to run HeatWave ML. Even in that case, the price-performance advantage of HeatWave ML over Redshift ML is 97 times, right? So 25 times faster at 1% of the cost compared to Redshift ML. And all these scripts and all this information are available on GitHub for customers to try, to modify, and to see what advantages they would get on their own workloads. >> Every time I hear these numbers, I shake my head. I mean, they're just so overwhelming. And so we'll see how the competition responds, when and if they respond. So, but thank you for sharing those results. Kumaran, can you elaborate on how the specs that you talked about earlier contribute to HeatWave ML's, you know, benchmark results? I'm particularly interested in scalability. You know, typically things degrade as you push the system harder. What are you seeing? >> No, I think it's good. Look, those numbers just blow my head too. That's crazy good performance. So look, from an AMD perspective, we have really built an architecture, like, if you think about the chiplet architecture to begin with, that is fundamentally, you know, kind of scaling by design, right? And one of the things that we've done here is been able to work with the HeatWave team and the HeatWave ML team, and then, within the CPU package itself, be able to scale up to make very efficient use of all of the cores.
And then of course, work with them on how you go between nodes, so you can have these very large systems that can run ML very, very efficiently. So it's really, you know, building on the building blocks of the chiplet architecture and how scaling happens there. >> Yeah. So you're saying it's near-linear scaling, essentially? >> So, let Nipun comment on that. >> Yeah. >> So, how about as cluster sizes grow, Nipun? >> Right. >> What happens there? >> So one of the design points for HeatWave is a scale-out architecture, right? So as you said, as we increase the size of the data, or add more nodes to the cluster, we want the performance to scale. We show that we have near-linear scalability for SQL workloads, and it is nearly linear in the case of HeatWave ML as well. As users add more nodes to the cluster, the performance of HeatWave ML improves. So I was giving you this example that HeatWave ML is 25 times faster compared to Redshift ML. Well, that was on a cluster size of two. If you increase the cluster size of HeatWave ML to a larger number, I think the number is 16, the performance advantage over Redshift ML increases from 25 times faster to 45 times faster. So what that means is that on a cluster size of 16 nodes, HeatWave ML is 45 times faster for training these, again, dozen data sets. So this shows that HeatWave ML scales better than the competition. >> So you're saying adding nodes offsets any management complexity that you would think of as getting in the way. Is that right? >> Right. So one is the management complexity, which is why, with features like elasticity, customers can scale up or scale down, you know, very easily. The second aspect is, okay, what gives us this advantage of scalability? Or how are we able to scale? Now, the techniques which we use for HeatWave ML scalability are a bit different from what we use for SQL processing.
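The cluster-size numbers quoted above (25x at 2 nodes, 45x at 16 nodes, both against the same baseline) can be summarized with a quick calculation:

```python
def advantage_growth(adv_small, adv_large, nodes_small, nodes_large):
    """Compare how the advantage over a baseline grows as the cluster grows."""
    return adv_large / adv_small, nodes_large / nodes_small

# 25x faster at 2 nodes, 45x faster at 16 nodes, per the quoted figures.
adv_growth, node_growth = advantage_growth(25, 45, 2, 16)
# An 8x larger cluster widens the gap by 1.8x: the service loses less
# efficiency to scale-out than the baseline does over the same range.
```

Note this is a relative comparison only; it says the advantage widens with cluster size, not that either system scales perfectly linearly.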
So in the case of HeatWave ML, there are really, you know, two or three trade-offs which we have to be careful about. One is the accuracy, because we want to provide better performance for machine learning without compromising on accuracy. Accuracy would require more synchronization if you have multiple threads, but if you have too much synchronization, that can bring down the degree of parallelism that we get, right? So we have to strike a fine balance. What we do is that in HeatWave ML there are different phases of training, like algorithm selection, feature selection, hyperparameter tuning, and each of these phases is analyzed. For instance, one of the techniques we use when trying to figure out the optimal hyperparameters is that we start with a search space, and then each of the VMs gets a part of the search space, and we synchronize only when needed, right? So these are some of the techniques which we have developed over the years, and there are actually research publications filed on this. This is what we do to achieve good scalability. And what that means for the customer is that if they have some training time and they want to make it better, they can just provision a larger cluster and they will get better performance. >> Got it. Thank you. Kumaran, when I think of machine learning, machine intelligence, AI, I think GPU, but you're not using GPUs. So how are you able to get this type of performance, or price performance, without using GPUs? >> Yeah, definitely. So yeah, that's a good point. Think about what is going on here, and consider the whole pipeline that Nipun has just described in terms of how you get, you know, your training, your algorithms, and using the MySQL pieces of it to get to the point where the AI can be effective. In that process, what happens is you have a lot of memory transactions; a lot of memory bandwidth comes into play.
And then bringing all that data together and feeding the actual complex that does the AI calculations, that in itself could be the bottleneck, right? And you can have multiple bottlenecks along the way. I think what you see in the AMD architecture for EPYC, for this use case, is the balance. The fact that you are able to do the pre-processing, the AI, and then the post-processing all kind of seamlessly together has a huge value. And that goes back to what Nipun was saying about using the same infrastructure: it gets you the better TCO, but it also gets you better performance. That's because you're bringing the data to the computation. So the computation in this case is not strictly the bottleneck; it's really about how you pull together what you need to do the AI computation. And that's probably, you know, a common case. And so I think you're going to start to see this, especially for inference applications. But in this case we're doing inference, explanation and training, all using the CPU on the same OCI infrastructure. >> Interesting. Now Nipun, is the secret sauce for HeatWave ML performance different than what you and I have discussed before with HeatWave generally? Is there some, you know, additive engine that you're putting in? >> Right. Yes. The secret sauce is indeed different. Just the way I was saying that for SQL processing, the reason we get very good performance and price performance is because we have come up with new algorithms which help the SQL processing scale out, similarly for HeatWave ML, we have come up with new IP, new algorithms. One example is that we use meta-learned proxy models, right? That's the technique we use for automating the training process. So think of these meta-learned proxy models as, you know, using machine learning for machine learning training. And this is IP which we developed.
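The proxy-model idea, using a cheap stand-in score to rank candidates before paying for full training, can be illustrated generically. This toy is not Oracle's meta-learned IP, just the general shape of the technique:

```python
# Hypothetical candidate configurations, each with a cheap proxy score
# (say, accuracy on a tiny sample) standing in for an expensive full fit.
CANDIDATES = [
    {"name": "gbm",    "proxy_score": 0.81},
    {"name": "linear", "proxy_score": 0.62},
    {"name": "forest", "proxy_score": 0.78},
]

def select_with_proxy(candidates, keep=1):
    """Rank everything by the cheap proxy, then spend the full training
    budget only on the top `keep` candidates."""
    ranked = sorted(candidates, key=lambda c: c["proxy_score"], reverse=True)
    return [c["name"] for c in ranked[:keep]]
```

The saving comes from the asymmetry: the proxy evaluation is cheap enough to run on every candidate, while full training runs only on the survivors.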
And again, we have published the results and the techniques. Having these kinds of techniques is what gives us better performance. Similarly, another thing which we use is adaptive sampling: you can have a large data set, but we intelligently sample it to figure out how we can train on a small subset without compromising on accuracy. So, yes, there are many techniques that we have developed specifically for machine learning, which is what gives us better performance, better price performance, and also better scalability. >> What about MySQL Autopilot? Is there anything that differs from HeatWave ML that is relevant? >> Okay, interesting you should ask. So think of MySQL Autopilot as an application using machine learning. MySQL Autopilot uses machine learning to automate various aspects of the database service. So for instance, if you want to figure out the right scheme to partition the data in memory, we use machine learning techniques to figure out the best column, based on the user's workload, to partition the data in memory. Or given a workload, if you want to figure out what is the right cluster size to provision, that's something we use MySQL Autopilot for. And I want to highlight that we aren't aware of any other database service which provides this level of machine-learning-based automation, which customers get with MySQL Autopilot. >> Hmm. Interesting. Okay. Last question for both of you. What are you guys working on next? What can customers expect from this collaboration, specifically in this space? Maybe Nipun, you can start, and then Kumaran can bring us home. >> Sure. So there are two things we are working on. One is, based on the feedback we have gotten from customers, we are going to keep making the machine learning capabilities richer in HeatWave ML. That's one dimension.
And the second thing, which Kumaran was alluding to earlier: we are looking at the next generation of processors coming from AMD, and we will be seeing how we can benefit more from these processors, whether it's the size of the L3 cache, the memory bandwidth, the network bandwidth, and such, or the newer features, and make sure that we leverage all the greatness which the new generation of processors will offer. >> It's like an engineering playground. Kumaran, let's give you the final word. >> No, that's great. Look, with the Zen 4 CPU cores, we're also bringing in AVX-512 instruction capability. Now, our implementation is a little different from what it was in Rome and Milan: we use a double-pumped implementation. What that means is, you know, we take two cycles to do these instructions. But the key thing there is we don't lower the speed of the CPU, so there are no noisy neighbor effects. And it's something that OCI and HeatWave have taken full advantage of. And so as we go out in time and we see the Zen 4 core, we see up to 96 CPU cores, and that's going to work really well. So we're collaborating closely with OCI and with the HeatWave team here to make sure that we can take advantage of that. We're also going to upgrade the memory subsystem to get to 12 channels of DDR5. So there should be a fairly significant boost in absolute performance, but just as importantly, in TCO value for the end customers who are going to adopt this great service. >> I love the relentless innovation, guys. Thanks so much for your time. We're going to have to leave it there. Appreciate it. >> Thank you, David. >> Thank you, David. >> Okay. Thank you for watching this special presentation on theCUBE, your leader in enterprise and emerging tech coverage.
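The double-pump trade-off Kumaran describes can be put into a toy throughput model: a 512-bit operation split across two cycles at full clock, versus a hypothetical native 512-bit unit that forces the whole core to throttle. All numbers below are illustrative, not AMD specifications:

```python
CLOCK_GHZ = 3.5       # illustrative core clock
THROTTLE = 0.85       # hypothetical clock penalty of a native 512-bit unit

# Vector bits retired per nanosecond.
vec_double_pumped = 256 * CLOCK_GHZ             # 512-bit op spread over 2 cycles
vec_full_width = 512 * CLOCK_GHZ * THROTTLE     # wider unit, slower clock

# Everything else on the core tracks the clock directly, which is the
# "no noisy neighbor" point: with double-pumping, neighbors keep full speed.
other_work_double_pumped = CLOCK_GHZ
other_work_full_width = CLOCK_GHZ * THROTTLE
```

The model makes the trade visible: double-pumping gives up some peak vector throughput in exchange for keeping the clock, and everything else sharing the core, at full speed.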
AMD Oracle Partnership Elevates MySQL HeatWave
(upbeat music) >> For those of you who've been following the cloud database space, you know that MySQL HeatWave has been on a technology tear over the last 24 months, with Oracle claiming record-breaking benchmarks relative to other database platforms. So far, those benchmarks remain industry-leading, as competitors have chosen not to respond, perhaps because they don't feel the need to, or maybe they don't feel that doing so would serve their interest. Regardless, the HeatWave team at Oracle has been very aggressive about its performance claims, making lots of noise, challenging the competition to respond, publishing their scripts to GitHub. But so far, there are no takers. Customers, though, seem to be picking up on these moves by Oracle, and it's likely the performance numbers resonate with them. Now, the other area we want to explore, which we haven't thus far, is the engine behind HeatWave, and that is AMD. AMD's EPYC processors have been the powerhouse on OCI, running MySQL HeatWave since day one. And today we're going to explore how these two technology companies are working together to deliver these performance gains and some compelling TCO metrics. In fact, a recent Wikibon analysis from senior analyst Marc Staimer made some TCO comparisons on OLAP workloads relative to AWS, Snowflake, GCP, and Azure databases; you can find that research on wikibon.com. And with that, let me introduce today's guests: Nipun Agarwal, senior vice president of MySQL HeatWave, and Kumaran Siva, who's the corporate vice president for strategic business development at AMD. Welcome to theCUBE, gentlemen. >> Welcome. Thank you. >> Thank you, Dave. >> Hey Nipun, you and I have talked a lot about this. You've been on theCUBE a number of times talking about MySQL HeatWave. But for viewers who may not have seen those episodes, maybe you could give us an overview of HeatWave and how it's different from competitive cloud database offerings. >> Sure.
So MySQL HeatWave is a fully managed MySQL database service offering from Oracle. It's a single database which can be used to run transaction processing, analytics, and machine learning workloads. In the past, MySQL has been designed and optimized for transaction processing, so customers of MySQL, when they had to run analytics or machine learning, would need to extract the data out of MySQL into some other database or service to run analytics or machine learning. MySQL HeatWave offers a single database for running all kinds of workloads, so customers don't need to extract data into some other database. In addition to being a single database, MySQL HeatWave is also very performant compared to other databases, and it is very price competitive. So the advantages are: a single database, very performant, and very good price performance. >> Yes. And you've published some pretty impressive price-performance numbers against competitors. Maybe you could describe those benchmarks and highlight some of the results, please. >> Sure. So one thing to note is that the performance of any database is going to vary; the performance advantage is going to vary based on the size of the data and the specific workloads, so the mileage varies. That's the first thing to know. So what we have done is publish multiple benchmarks. We have benchmarks on TPC-H and TPC-DS, and we have benchmarks on different data sizes, because based on the customer's workload the mileage is going to vary, so we want to give customers a broad range of comparisons so that they can decide for themselves. So in a specific case, where we are running a 30 terabyte TPC-H workload, HeatWave is about 18 times better price performance compared to Redshift. 18 times better compared to Redshift, about 33 times better price performance compared to Snowflake, and 42 times better price performance compared to Google BigQuery. So, this is on 30 terabyte TPC-H.
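Price performance, as used throughout these comparisons, folds runtime and cost into a single ratio. A sketch with invented inputs, chosen only to make the arithmetic obvious (these are not the published benchmark figures):

```python
def price_performance(their_rate, their_hours, our_rate, our_hours):
    """Total dollars the competitor spends on the job divided by ours."""
    return (their_rate * their_hours) / (our_rate * our_hours)

# Hypothetical: competitor runs the workload in 9 hours at $40/hr,
# the contender in 1 hour at $20/hr.
advantage = price_performance(40.0, 9.0, 20.0, 1.0)   # 360 / 20 = 18x
```

Because speed and price multiply, a service that is both faster and cheaper compounds its advantage, which is why these ratios get large quickly.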
Now, if the data size is different, or the workload is different, the characteristics may vary slightly, but this is just to give a flavor of the kind of performance advantage MySQL HeatWave offers. >> And then my last question before we bring in Kumaran. We've talked about the secret sauce being the tight integration between hardware and software, but would you add anything to that? What is that secret sauce in HeatWave that enables you to achieve these performance results, and what does it mean for customers? >> So there are three parts to this. One is that HeatWave has been designed with a scale-out architecture in mind, so we have invented and implemented new algorithms for scale-out query processing for analytics. The second aspect is that HeatWave has been really optimized for commodity cloud, and that's where AMD comes in. So for instance, many of the partitioning schemes we have for processing in HeatWave, we optimize for the L3 cache of the AMD processor. The thing which is very important to our customers is not just the sheer performance but the price performance, and that's where we have had a very good partnership with AMD, because not only does AMD help us provide very good performance, but very good price performance, right? All these numbers which I was showing, a big part of them is because we are running on AMD, which provides very good price performance. So that's the second aspect. And the third aspect is MySQL Autopilot, which provides machine-learning-based automation. So it's really these three things: a combination of new algorithms designed for scale-out query processing, optimization for commodity cloud hardware, specifically AMD processors, and third, MySQL Autopilot, which gives us this performance advantage. >> Great, thank you. So that's a good segue to AMD and Kumaran. So Kumaran, what is AMD bringing to the table?
What are, like, for instance, the relevant specs of the chips that are used in Oracle Cloud Infrastructure, and what makes them unique? >> Yeah, thanks Dave. That's a good question. So, OCI is a great customer of ours. They use what we call top-of-stack devices, meaning that they have the highest core count and also very, very fast cores. These are currently Zen 3 cores; I think the HeatWave product is right now deployed on Zen 2, but will shortly be on the Zen 3 core as well. We provide, in the case of OCI, 64 cores. So those are the largest devices that we build. What actually happens is, because of this large number of CPUs in a single package, and therefore the increased density of the node, you end up with this fantastic TCO equation, and the cost per performance for deployed services like HeatWave ends up being extraordinarily competitive. That's a big part of the contribution that we're bringing in here. >> So Zen is the AMD microarchitecture which you introduced, I think in 2017, and it's the basis for EPYC, the enterprise-grade line with which you really attacked the enterprise market. Maybe you could elaborate a little bit, double-click on how your chips contribute specifically to HeatWave's price-performance results. >> Yeah, absolutely. So in the case of HeatWave, as Nipun alluded to, we have very large L3 caches, right? In our very, very top-end parts, like the Milan-X devices, we can go all the way up to 768 megabytes of L3 cache, and that gives you just enormous performance gains. That's part of what we're seeing with HeatWave today, and note that they're currently on the second generation, Rome-based product, the EPYC 7002 series, running with 64 cores. But as time goes on, they'll be adopting the next generation Milan as well.
And the other part of it, too, is how our chiplet architecture has evolved. From the first generation Naples, way back in 2017, we went from having multiple memory domains and a sort of NUMA architecture; today we've really optimized that architecture. We use a common I/O die that has all of the memory channels attached to it. What that means is that these scale-out applications like HeatWave are able to scale very efficiently as they go from a small domain of CPUs to, for example, the entire chip, all 64 cores. That scaling has been a key focus for AMD, and being able to design and build architectures that can take advantage of it, and then have applications like HeatWave that scale so well on it, has been a key aim of ours. >> And Gen 3, moving up the Italian countryside. Nipun, you've taken the somewhat unusual step of posting the benchmark parameters, making them public on GitHub. Now, HeatWave is relatively new. Some people felt that when Oracle gained ownership of MySQL, it would let it wilt on the vine in favor of Oracle Database, so you lost some ground, and now you're getting very aggressive with HeatWave. What's the reason for publishing those benchmark parameters on GitHub? >> So, the main reason for us to publish price-performance numbers for HeatWave is to communicate to our customers a sense of the benefits they're going to get when they use HeatWave. But we want to be very transparent, because, as I said, the performance advantages for customers may vary based on the data size and the specific workloads. So one of the reasons for us to publish all these scripts on GitHub is transparency. We want customers to take a look at the scripts, know what we have done, and be confident that we stand by the numbers which we are publishing. And they're very welcome to try these numbers themselves.
In fact, we have had customers who have downloaded the scripts from GitHub and run them on our service to validate. The second aspect is that in some cases there may be deviations between what we are publishing and what the customer would like to run in their production deployment, so this provides an easy way for customers to take the scripts, modify them in ways which suit their real-world scenario, and run them to see what the performance advantages are. So that's the main reason: first, transparency, so customers can see what we are doing for the comparison, and second, if they want to modify the scripts to suit their needs and then see what the performance of HeatWave is, they're very welcome to do so. >> So have customers done that? Have they taken the benchmarks? I mean, if I were a competitor, honestly, I wouldn't get into that food fight because of the impressive performance, unless I had to. I mean, have customers picked up on that, Nipun? >> Absolutely. In fact, we have had many customers who have benchmarked the performance of MySQL HeatWave against other services. And the fact that the scripts are available gives them a very good starting point, and they've also tweaked those queries in some cases to see what the delta would be. In some cases, customers got back to us saying, hey, the performance advantage of HeatWave is actually slightly higher than what was published; what is the reason? And the reason was that the customers were trying the latest version of the service, while our benchmark results were posted, let's say, two months back. The service had improved in those two to three months, and customers actually saw better performance. So yes, absolutely, we have seen customers download the scripts, try them, also modify them to some extent, and then do the comparison of HeatWave with other services. >> Interesting. Maybe a question for both of you: how is the competition responding to this?
They haven't said, "Hey, we're going to come up with our own benchmarks," which is very common; you oftentimes see that. Although, for instance, Snowflake hasn't responded to Databricks, so that's not their game. But if customers are actually putting a lot of faith in the benchmarks and using them for buying decisions, then it's inevitable. How have you seen the competition respond to the MySQL HeatWave and AMD combo? >> So maybe I can take the first crack at that, from the database service standpoint. When customers have more choice, it is invariably advantageous for the customer, because the competition is going to react, right? The way we have seen the reaction is that we do believe the other database services are going to take a closer look at price performance, because if you're offering such good price performance, the vendors are already looking at it. And, you know, there have been instances where they have offered, let's say, discounts to customers to at least close the gap to some extent. The second thing would be in terms of capability. One of the things which I should have mentioned even earlier is that not only does MySQL HeatWave on AMD provide very good price performance on, say, a small cluster, but all the way up to a cluster size of 64 nodes, which has about 1,000 cores. So the point is that HeatWave performs very well both on a small system as well as at huge scale-out. This is again one of those things which is a differentiation compared to other services, so we expect that other database services will have to improve their offerings to provide the same good scale factor, which customers are now starting to expect with MySQL HeatWave. >> Kumaran, anything you'd add to that? I mean, you guys are an arms dealer; you love all your OEMs. But at the same time, you've got chip competitors, silicon competitors.
How do you see the competitive... >> I'd say the broader answer, the big picture for AMD, is that we're very maniacally focused on our customers, right? And OCI and Oracle are huge and important customers for us, and this particular use case is extremely interesting, both in that it takes advantage of our architecture very well and in that it pulls out some of the value that AMD brings. I think from a big-picture standpoint, our aim is to execute, to bring out generations of CPUs, to, you know, say what we do and do what we say. And from that point of view, we're hitting the schedules that we say, and being able to bring out the latest technology and bring it in a TCO value proposition that generationally keeps OCI and HeatWave ahead. That's the crux of our partnership here. >> Yeah, the execution's been obvious for the last several years. Kumaran, staying with you: how would you characterize the collaboration between the AMD engineers and the HeatWave engineering team? How do you guys work together? >> No, I'd say we're in a very, very deep collaboration. There are a few aspects where we've actually been working together very closely on the code, to be able to optimize for the large L3 cache that AMD has and take advantage of that, and then also to be able to take advantage of the scaling. Our architecture is chiplet-based, so we have the CPU cores on what we call CCDs, and for the inter-CCD communication there are opportunities to optimize at the application level; that's something we've been engaged with. In the broader engagement, we go back multiple generations with OCI, and there's a lot of input that now kind of resonates in the product line itself. So we value this very close collaboration with HeatWave and OCI. >> Yeah, and the cadence, Nipun, you and I have talked about this quite a bit. The cadence has been quite rapid.
It's like this constant cycle; every couple of months I turn around and there's something new on HeatWave. But a question again for both of you: what new things do you think organizations, customers, are going to be able to do with MySQL HeatWave? If you could look out the next 12 to 18 months, is there anything you can share at this time about future collaborations? >> Right. Look, 12 to 18 months is a long time. There's going to be a lot of innovation, a lot of new capabilities coming out in MySQL HeatWave. But even based on what we are currently offering, the trend we are seeing is that customers are bringing more classes of workloads. We started off with OLTP for MySQL, then it went to analytics, then we increased it to mixed workloads, and now we offer machine learning as well. So one trend is that more and more classes of workloads are coming to MySQL HeatWave. And the second is scale: the kind of data volumes people are using HeatWave for, to process these mixed workloads, analytics, machine learning, OLTP, is increasing. Now, along the way we are making it simpler to use and more cost-effective to use. So for instance, last time we talked, we had introduced real-time elasticity, and that's something which is a very, very popular feature, because customers want the ability to scale out or scale down very efficiently. That's something we provided. We provided support for compression. All of these capabilities are making it more efficient for customers to run a larger part of their workloads on MySQL HeatWave, and we will continue to make it richer in the next 12 to 18 months. >> Thank you. Kumaran, anything you'd add to that? We'll give you the last word, as we've got to wrap it. >> No, absolutely. So, you know, in the next 12 to 18 months we will have our Zen 4 CPUs out, so these could potentially go into the next generation of the OCI infrastructure.
That would be with the Genoa and then Bergamo CPUs, taking us to 96 and 128 cores with 12 channels of DDR5. When that capability is applied to an application like HeatWave, you can see that it could open up potentially another order of magnitude of use cases, and we're excited to see what customers can do with it. It will certainly make this service, and this cloud migration in general, even more attractive. So we're pretty excited to see how things evolve in this period of time. >> Yeah, the innovations are coming together. Guys, thanks so much, we've got to leave it there. Really appreciate your time. >> Thank you. >> All right, and thank you for watching this special Cube conversation. This is Dave Vellante, and we'll see you next time. (soft calm music)