Brad Smith, AMD & Rahul Subramaniam, Aurea CloudFix | AWS re:Invent 2022
(calming music) >> Hello and welcome back to fabulous Las Vegas, Nevada. We're here at AWS re:Invent, day three of our scintillating coverage here on theCUBE. I'm Savannah Peterson, joined by John Furrier. John, day three energy's high. How are you feeling? >> I dunno, it's day two, day three, day four. It feels like day four, but again, we're back. >> Who's counting? >> Pre-pandemic levels in terms of 50,000 plus people? Hallways are packed. I got pictures. People don't believe it. It's actually happening. The people are back. So, you know, and then the economy is a big question too, and still, people are here, they're still building on the cloud, and cost is a big thing. This next segment's going to be really important. I'm looking forward to this next segment. >> Yeah, me too. Without further ado, let's welcome our guests for this segment. We have Brad from AMD, and we have Rahul from, well, you do a variety of different things. We'll start with CloudFix for this segment, but we could talk about your multiple hats all day long. Welcome to the show, gentlemen. How are you doing? Brad, how does it feel? We love seeing your logo above our stage here. >> Oh look, we love this. And talking about re:Invent, the energy this year compared to last year is so much bigger. We love it. We're excited to be here. >> Yeah, that's awesome. Rahul, how are you feeling? >> Excellent. I mean, I think this is my eighth or ninth re:Invent at this point, and it's been fabulous. I think the crowd, the engagement, it's awesome. >> You wouldn't know there's a looming recession if you look at the activity, but still, the reality is here. We had an analyst on yesterday; we were talking about spend more in the cloud, save more. So that you can still use the cloud, and there's a lot of right sizing. I call it, you've got to turn the lights off before you go to bed. Kind of being more efficient with your infrastructure is a theme. This re:Invent is a lot more about that now. Before, it was about the glory days. Oh yeah, keep building. Now there's a little bit of pressure. This is the conversation. >> Exactly, and I think most companies are looking to figure out how to innovate their way out of this uncertainty that's kind of on everyone's head. And the only way to do it is to be more efficient with whatever your existing spend is, take those savings, and then apply them to innovating on new stuff. And that's the way to go about it at this point. >> I think it's such a hot topic for everyone that we're talking to. I mean, total cost optimization, figuring out ways to be more efficient. I know that that's a big part of your mission at CloudFix. So just in case the audience isn't versed, give us the pitch. >> Okay, so a little bit of background on this. So the other hat I wear is CTO of ESW Capital. We have over 150 enterprise software companies within the portfolio. And one of my jobs is also to manage and run about 40 to 45,000 AWS accounts of our own. >> Casual number, just a few, just a couple, pocket change, no big deal. >> And like everyone else here in the audience, yeah, we had a problem with our costs just going out of control. And as we were looking at a lot of the tools to help us get more efficient, one of the biggest issues was that while people give you a lot of recommendations, recommendations are way too far from realized savings. And we were running through the challenge of how do you take recommendations and turn them into real savings, through multiple different hurdles.
The short story being, we had to create CloudFix to actually realize those savings. So we took AWS recommendations around cost, filtered them down to the ones that are completely non-disruptive in nature, implemented those as simple automations that everyone could just run, and realized those savings right away. We then took those savings and started applying them to innovating and doing new, interesting things with that money. >> Is there a best practice in your mind that you see emerging at this time? People are getting more focused on it. Is there a method, a process, a kind of best practice of how to approach cost optimization? >> I think one of the things that most people don't realize is that cost optimization is not a one and done thing. It is literally nonstop. Which means that, on one hand, AWS is constantly creating new services. There are over a hundred thousand APIs at this point in time. How to use them right, how to use them efficiently- you also have a problem of choice. Developers are constantly discovering new services, discovering new ways to utilize them. And they are behaving in ways that you had not anticipated before. So you have to stay on top of things all the time. And really the only way to stay on top is to have automation that helps you stay on top of all of these things. So yeah, finding efficiencies, standardizing your practices for how you leverage these AWS services, and then automating the governance and hygiene around how you utilize them is really the key. >> Brad, tell me what this means for AMD and what working with CloudFix and Rahul does for your customers. >> Well, the idea of efficiency and cost optimization is near and dear to our heart. We have the leading- >> It's near and dear to everyone's heart right now. (group laughs) >> But we are the leaders in x86 price performance and density and power efficiency. So this is something that's actually part of our core culture. We've been doing this a long time, and what's interesting is most companies don't understand how much more efficiency they can get out of their applications aside from just the choices they make in cloud. But that's the one thing, the message we're giving to everybody: choice matters very much when it comes to your cloud solutions, and just deciding what type of instance types you choose can have a massive impact on your bottom line. And so we are excited to partner with CloudFix. They've got a great model for this, and they make it very easy for our customers to help identify those areas. And then AMD can come in as well and help provide additional insight into those applications, what else they can squeeze out of it. So it's a great relationship. >> If I hear you correctly, then there's more choice for the customers, faster selection, and no bad choices, because bad choices mean bad performance if they have a workload or an app that needs to run. Is that where you kind of get into it, or is there more? >> Well, I mean, from the AMD side right now, one of the things they do very quickly is identify where the low hanging fruit is. So it's the thing about x86 compatibility: you can shift instance types instantly, in most cases without any change to your environment at all. And CloudFix has an automated tool to do that. And that's one way you can immediately have an impact on your cost without having to do any work at all. And customers love that.
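To make the kind of automation Brad describes concrete, here is a minimal sketch in Python with boto3 of shifting an instance over to its x86-compatible AMD equivalent. This is an illustration only, not CloudFix's actual implementation; the family mapping and the stop/modify/start flow are the assumptions baked into the example.

```python
# Illustrative sketch only -- not CloudFix's implementation.
# Shift an EC2 instance to the AMD "a" variant of its family
# (e.g. m5.xlarge -> m5a.xlarge), which is x86-compatible and cheaper.
import boto3

# Assumed mapping of Intel-based families to their AMD equivalents.
AMD_EQUIVALENT = {"m5": "m5a", "c5": "c5a", "r5": "r5a", "t3": "t3a"}

ec2 = boto3.client("ec2")

def migrate_to_amd(instance_id: str) -> None:
    """Stop the instance, switch its type to the AMD variant, start it again."""
    desc = ec2.describe_instances(InstanceIds=[instance_id])
    itype = desc["Reservations"][0]["Instances"][0]["InstanceType"]  # e.g. "m5.xlarge"
    family, size = itype.split(".", 1)
    if family not in AMD_EQUIVALENT:
        return  # no drop-in AMD equivalent for this family
    # The instance type can only be changed while the instance is stopped.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": f"{AMD_EQUIVALENT[family]}.{size}"})
    ec2.start_instances(InstanceIds=[instance_id])
```

A production tool would layer scheduling, approvals, and rollback on top of a core like this; the point is that the type change itself is only a couple of API calls.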
>> What's the alternative if this doesn't exist? They have to go manually figure it out, or it hits them in the face, or they see the numbers don't work. If you don't have the tool to automate, what's the customer's experience? >> The alternative is that you actually have people look at every single instance of usage of resources and try to figure out how to do this. At cloud scale, that just doesn't make sense. You just can't. >> It's too many different options. >> Correct. The reality is that your resources, your human resources, are literally the most expensive part of your budget. You want to leverage all the amazing people you have to do the amazing work. This is not amazing work. This is mundane. >> So you free up all the people time. >> Correct, you free up wasting their time and resources on doing something that's mundane, simple, and should be automated, because that's the only way you scale. >> I think of you as like a little helper in the background, helping me save money while I'm not thinking about it. It's like a good financial planner making you money, since we're talking about the economy. >> Pretty much. The other analogy that I give to all the technologists is, this is like garbage collection. Like, for most languages, when you are coding, you have these new languages that do garbage collection for you. You don't do memory management and stuff, where developers back in the day used to do that. Why do that when you can have technology do it in an automated manner for you, in an optimal way? So it's kind of freeing up your developers' time from doing this stuff that's mundane, and it's a standard best practice. One of the things that we leverage AMD for is they've helped us define the process of seamlessly migrating folks over to AMD based instances without any major disruptions, or trying to minimize every aspect of disruption. So all the best practices are kind of borrowed from them, borrowed from AWS in most other cases. And we basically put them in the automation so that you don't ever have to worry about that stuff. >> Well, you're getting so much data, you have the opportunity to really streamline. I mean, I love this, because you can look across industry, across verticals, at the behavior of what other folks are doing, learn from that, and apply that in the background to all your different customers. >> So how big is the company? How big is the team? >> So we have people in about 130 different countries. So we've been completely remote and global, and actually the cloud has been one of the big enablers of that. >> That's awesome, 130 countries. >> And that's the best part of it. I was just telling Brad a short while ago, that's allowed us to hire the best talent from across the world, and they spend their time building new, amazing products and new solutions instead of doing all this other mundane stuff. So we are big believers in automation, not only for our own world. And once our customers started asking us about, or telling us about, the same problem that they were having, that's when we actually took what we had internally for our own purpose, packaged it up as CloudFix, and launched it last year at re:Invent. >> If the customers aren't thinking about automation, then they're probably going to struggle. I mean, with more data coming in, you see the data story here: more data's coming in, more automation.
And this year, Brad, price performance. I've heard the word price performance more this year at re:Invent than any other year I've heard it before. But this year, price performance, not performance, price performance. So you're starting to hear that dialogue of squeeze: understand the use cases, use the right specialized processor instance. We're starting to see that evolve. >> Yeah, and there's so much to it. I mean, with AMD, right out of the box, any instance is 10% less expensive than the equivalent in the market right now on AWS. They do a great job of maximizing those products. We've got our Zen 4 core general purpose processor family just released in November, and it's going to be a beast. Yeah, we're very excited about it, and AWS announced support for it, so we're excited to see what they deliver there too. But price performance is so critical, and again, it's going back to the complexity of these environments. Giving some of these enterprises some help, to help them understand where they can get additional value. It goes well beyond the retail price. There's a lot more money to be shaved off the top just by spending time thinking about those applications. >> Yeah, absolutely. I love that you talked about collaboration; we've been talking about community. I want to acknowledge the AWS super fans here, standing behind the stage. Rahul, I know that you are an AWS super fan. Can you tell us about that community and the program? >> Yeah, so I have been involved with AWS and building products with AWS since 2007. So it's kind of 15 years back, when literally there were just a handful of APIs for launching EC2 instances and S3. >> Not the hundred thousand that you mentioned earlier. My goodness, the scale. >> So I feel very privileged and honored that I have been part of that journey and have had the opportunity to learn both from successes and failures. And it's just my way of contributing back to that community. So we are part of the FinOps Foundation as well, contributing through that. I run a podcast called AWS Insiders and a livestream called AWS Made Easy. So we are trying to make sure that people out there are able to understand how to leverage AWS in the best possible way. And yeah, we are there to help and hold their hand through it. >> Talk about the community. Take a minute to explain to the audience watching the community around this cost optimization area. It's evolving; you mentioned FinOps. There's a whole large community developing, of practitioners and technologists coming together to look at this. What does this all mean? Talk about this community. >> So cost management within organizations has evolved so drastically that organizations haven't really coped with it. Historically, you've had finance teams basically buy a lot of infrastructure, which is CapEx, and the engineering teams had kind of an upper bound on what they would spend and where they would spend it. Suddenly, cloud has enabled so much innovation, and all of a sudden everyone's realized it. Five years were spent figuring out whether people should be on the cloud or not. That's no longer a question, right? Everyone needs to be in the cloud, and I think that's a no-brainer. The problem there is that suddenly your operating model has moved from CapEx to OpEx. And organizations haven't really figured out how to deal with it. Finance no longer has the controls to control and manage and forecast costs.
Engineering has never had to deal with it in the past, and suddenly now they have to figure out how to do all this finance stuff. And procurement finds itself in a very awkward position, because they are no longer doing these negotiations like they were doing in the past, where it was all right up front: before you engage, you do these negotiations. Now it's an ongoing thing, and it's constantly changing. Like, every day is different. >> And you've got marketplace. >> And you've got marketplace. So it's a very complex situation, and I think what we are trying to do with the FinOps Foundation is take a lot of the best practices across organizations that have been doing this, at least for the last 10, 15 years, take all the learnings and failures, and turn them into hopefully opinionated approaches that people and organizations can take to navigate through this faster, rather than kind of falter and then decide that, oh, this is not for us. >> Yeah. It's a great model, it's a great model. >> I know it's time, John, go ahead. >> All right so, we've got a little bumper sticker exercise. We used to say, what's the bumper sticker for the show? Now we're modernizing: if you had to do an Instagram reel right now, a short hot take of what's going on at re:Invent this year with AMD or CloudFix or just in general, what would be the sizzle reel that would be on Instagram or TikTok? Go. >> Look, when you're at re:Invent right now, number one, the energy is fantastic. '23 is going to be a building year. We've got a lot of difficult times ahead financially, but now's the time; the ones that come out of '23 stronger, more efficient, and cost optimized are going to survive the long run. So now's the time to build. >> Well done. Rahul, let's go for it. >> Yeah, so like Brad said, cost and efficiency are at the top of everyone's mind. Stuff that's the low hanging fruit- easy, use automation. Apply your resources to do most of the innovation. Take the easiest path to realizing savings, and operate as efficiently as you possibly can. I think that's got to be key. >> I think they nailed it. They both nailed it. Wow, that was really good. >> I'll put you on our talent list of- >> Alright, so we'll repeat them. Are you part of our host team? I love this, I absolutely love this. Rahul, we wish you the best at CloudFix and your 17 other jobs. And I am genuinely impressed. Do you sleep, actually? Last question. >> I do, I do. I have an amazing team that really helps me with all of this. So yeah, thanks to them, and thank you for having us here. >> It's been fantastic. >> It's our pleasure. And Brad, I'm delighted we get you both now and again on our next segment. Thank you for being here with us. >> Thank you very much. >> And thank you all for tuning in to our live coverage here at AWS re:Invent, in fabulous Sin City, with John Furrier. My name's Savannah Peterson. You're watching theCUBE, the leader in high tech coverage. (calm music)
Next Gen Servers Ready to Hit the Market
(upbeat music) >> The market for enterprise servers is large, generating well north of $100 billion in annual revenue, and it's growing consistently in the mid to high single digit range. Right now, like many segments, the market for servers is slingshotting, right? Organizations, they've been replenishing their install bases and upgrading, especially at HQs, coming out of the isolation economy. But the macro headwinds, as we've reported, are impacting all segments of the market. CIOs, you know, they're tapping the brakes a little bit, sometimes quite a bit, and being cautious with both capital expenditures and discretionary opex, particularly in the cloud. They're dialing it down and just being a little bit more, you know, cautious. The market for enterprise servers is dominated, as you know, by x86 based systems, with an increasingly large contribution coming from alternatives like ARM and NVIDIA. Intel, of course, is the largest supplier, but AMD has been incredibly successful competing with Intel because of its focus, its outsourced manufacturing model, and its innovation and very solid execution. Intel's frequent delays with its next generation Sapphire Rapids CPUs, now slated for January 2023, have created an opportunity for AMD. Specifically, AMD's next generation EPYC CPUs, codenamed Genoa, will offer as many as 96 Zen 4 cores per CPU when it launches later on this month. Observers can expect really three classes of Genoa. There's a standard Zen 4 compute platform for general purpose workloads, there's a compute density optimized Zen 4 package, and then a cache optimized version for data intensive workloads. Indeed, the makers of enterprise servers are responding to customer requirements for more diversity in server platforms to handle different workloads, especially those high performance, data-oriented workloads that are being driven by AI and machine learning and high performance computing, HPC, needs. OEMs like Dell are going to be tapping these innovations and trying to get to the market early. Dell, in particular, will be using these systems as the basis for its next generation Gen 16 servers, which are going to bring new capabilities to the market. Now, of course, Dell is not alone. There are other OEMs: you've got HPE, Lenovo, you've got ODMs, you've got the cloud players; they're all going to be looking to keep pace with the market. Now, the other big trend that we've seen in the market is the way customers are thinking about, or should be thinking about, performance. No longer is the clock speed of the CPU the sole and most indicative performance metric. There's much more emphasis and innovation around all those supporting components in a system, specifically the parts of the system that take advantage, for example, of faster bus speeds. We're talking about things like network interface cards and RAID controllers and memories and other peripheral devices that, in combination with microprocessors, determine how well systems can perform around compute operations, IO, and other critical tasks. Now, these combinatorial factors ultimately determine the overall performance of the system and how well suited a particular server is to handling different workloads. So we're seeing OEMs like Dell building flexibility into their offerings and putting out products in their portfolios that can meet the changing needs of their customers. Welcome to our ongoing series where we investigate the critical question, does hardware matter?
My name is Dave Vellante, and with me today to discuss these trends and the things that you should know about the next generation of server architectures is former CTO at Oracle and EMC and adjunct faculty at Wharton CTO Academy, David Nicholson. Dave, always great to have you on "theCUBE." Thanks for making some time with me. >> Yeah, of course, Dave, great to be here. >> All right, so you heard my little spiel in the intro, that summary. >> Yeah. >> Was it accurate? What would you add? What do people need to know? >> Yeah, no, no, no, 100% accurate, but you know, I'm a resident nerd, so just, you know, some kind of clarification. If we think of things like microprocessor release cycles, it's always going to be characterized as rolling thunder. I think 2023 in particular is going to be this constant release cycle that we're going to see. You mentioned the, (clears throat) excuse me, general processors with 96 cores; shortly after the 96 core release, we'll see that 128 core release that you referenced in terms of compute density. And then we can talk about what it means in terms of, you know, nanometers and performance per core and everything else. But yeah, no, that's the main thing I would say: people shouldn't look at this like a new car's being released on Saturday. This is going to happen over the next 18 months, really. >> All right, so to that point, you think about Dell's next generation systems, they're going to be featuring these new AMD processors, but to your point, when you think about performance claims, in this industry, it's a moving target. It's, as you call it, rolling thunder. So what does that game of hopscotch, if you will, look like? How do you see it unfolding over the next 12 to 18 months? >> So out of the gate, you know, slated as of right now for a November 10th release, AMD's going to be first to market with, you know, everyone will argue, but first to market with five nanometer technology in production systems, 96 cores. What's important, though, is those microprocessors are going to be resident on motherboards from Dell that feature things like PCIe 5.0 technology. So everything surrounding the microprocessor complex is faster. Again, going back to this idea of rolling thunder, we expect the Gen 16 PowerEdge servers from Dell to similarly be rolled out in stages, with initial releases that will address certain specific kinds of workloads, and follow-on releases with a variety of systems configured in a variety of ways. >> So I appreciate you painting a picture. Let's kind of stay inside, under the hood, if we can, >> Sure. >> And share with us what we should know about these kind of next generation CPUs. How are companies like Dell going to be configuring them? How important are clock speeds and core counts in these new systems? And what about, you mentioned motherboards, what about next gen motherboards? You mentioned PCIe Gen 5, where does that fit in? So take us inside, deeper into the system, please. >> Yeah, so if you will, you know, if you will join me for a moment, let's crack open the box and look inside. It's not just microprocessors. Like I said, they're plugged into a bus architecture that interconnects them. How quickly that interconnect performs is critical. Now, I'm going to give you a statistic that doesn't require a PhD to understand. When we go from PCIe Gen 4 to Gen 5, which is going to be featured in all of these systems, we double the performance. So just, you can write that down, two, 2X.
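That 2X is easy to verify from the spec. As a back-of-the-envelope sketch: PCIe 5.0 signals at 32 gigatransfers per second per lane with 128b/130b encoding, versus 16 GT/s for PCIe 4.0, so a full x16 link works out to roughly the 128 gigabytes per second aggregate figure cited below (and it is gigabytes, not gigabits).

```python
# Back-of-the-envelope PCIe bandwidth check for an x16 link.
GT_PER_S = {"PCIe 4.0": 16e9, "PCIe 5.0": 32e9}  # transfers/sec per lane
ENCODING = 128 / 130   # 128b/130b line-coding overhead
LANES = 16

for gen, rate in GT_PER_S.items():
    per_dir = rate * ENCODING / 8 * LANES  # bytes/sec, one direction
    aggregate = 2 * per_dir                # full duplex, both directions
    print(f"{gen}: {per_dir/1e9:.0f} GB/s per direction, "
          f"~{aggregate/1e9:.0f} GB/s aggregate")

# Output:
# PCIe 4.0: 32 GB/s per direction, ~63 GB/s aggregate
# PCIe 5.0: 63 GB/s per direction, ~126 GB/s aggregate (the ~128 GB/s figure)
```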
The performance is doubled, but the numbers are pretty staggering in terms of gigatransfers per second: 128 gigabytes per second of aggregate bandwidth on the motherboard. Again, doubling when going from 4th Gen to 5th Gen. But the reality is, most users of these systems are still on PCIe Gen 3 based systems. So for them, just from a bus architecture perspective, you're doing a 4X or 8X leap in performance, and then all of the peripherals that plug into that faster bus are faster, whether it's RAID controller cards or storage controllers or network interface cards. Companies like Broadcom come to mind. All of their components are leapfrogging their prior generation to fit into this ecosystem. >> So I wonder if we could stay with PCIe for a moment and, you know, just understand what Gen 5 brings. You said, you know, 2X, I think we're talking bandwidth here. Is there a latency impact? You know, why does this matter? And just, you know, this premise that these other components increasingly matter more: which components of the system are we talking about that can actually take advantage of PCIe Gen 5? >> Pretty much all of them, Dave. So whether it's memory plugged in, or network interface cards, so communication to the outside world, which computer servers tend to want to do in 2022, or controllers that are attached to internal and external storage devices, all of them benefit from this enhancement in performance. And, you know, PCI Express performance is measured essentially in bandwidth and throughput, in the sense of the number of transfers per second that you can do. It's mind numbing. I want to say it's 32 gigatransfers per second. And then in terms of bandwidth, again, across the lanes that are available, 128 gigabytes per second. I'm going to have to check if it's gigabits or gigabytes. It's a massive number. And again, it's double what PCIe 4 was before. So what does that mean? Just like the advances in microprocessor technology, you can consolidate massive amounts of work into a much smaller footprint. That's critical, because everything in that server is consuming power. So when you look at next generation hardware that's driven by things like AMD Genoa, or you know, the EPYC processors with the Zen 4 microprocessors, for every dollar that you're spending on power and equipment and everything else, you're getting far greater return on your investment. Now, I need to say that we anticipate that these individual servers, if you're out shopping for a server, and that's a very nebulous term because they come in all sorts of shapes and sizes, I think there's going to be a little bit of sticker shock at first, until you run the numbers. People will look at an individual server and they'll say, wow, this is expensive, and the peripherals, the things that are going into those slots, are more expensive, but you're getting more bang for your buck. You're getting much more consolidation, lower power usage, and for every dollar you're getting a greater amount of performance and transactions, which translates up the stack through the application layer and, you know, out to the end user's desire to get work done. >> So I want to come back to that, but let me stay on performance for a minute. You know, we all used to be, when you'd go buy a new PC, you'd be like, what's the clock speed of that? And so, when you think about performance of a system today and how measurements are changing, how should customers think about performance in these next gen systems?
And where does that, again, where does that supporting ecosystem play? >> So if you are really into the speeds and feeds and what's under the covers, from an academic perspective, you can go in and you can look at the die size that was used to create the microprocessors, the clock speeds, how many cores there are. But really, the answer is to look at the benchmarks that are created through testing, especially from third party organizations that test these things for the workloads that you intend to use these servers for. So if you are looking to support something like a high performance environment for artificial intelligence or machine learning, look at the benchmarks as they're recorded, as they're delivered, by the entire system. So it's not just about the core. So yeah, it's interesting to look at clock speeds to kind of compare where we are with regard to Moore's Law. Have we been able to continue to track along that path? We know there are physical limitations to Moore's Law from an individual microprocessor perspective, but none of that really matters. What really matters is, what can this system that I'm buying deliver in terms of application performance and user requirement performance? So that's what I'd say you want to look for. >> So I presume we're going to see these benchmarks at some point. I'm hoping we can have you back on to talk about them. Is that something that we can expect in the future? >> Yeah, 100%, 100%. Dell, and I'm sure other companies, are furiously working away to demonstrate the advantages of this next gen architecture. If I had to guess, I would say that we are going to see quite a few world records set, because of the combination of things like faster network interface cards, faster storage cards, faster memory, more memory, faster cache, more cache, along with the enhanced microprocessors that are going to be delivered. And you mentioned this is, you know, AMD is sort of starting off this season of rolling thunder, and in a few months we'll start getting the initial entries from Intel also, and we'll be able to compare where they fit in with what AMD is offering. I'd expect OEMs like Dell to have, you know, a portfolio of products that highlight the advantages of each processor set. >> Yeah, I talked in my open, Dave, about the diversity of workloads. What are some of those emerging workloads, and how will companies like Dell address them, in your view? >> So a lot of the applications that are going to be supported are what we think of as legacy application environments. A lot of Oracle databases, workloads associated with ERP; all of those things are just going to get better bang for their buck from a compute perspective. But what we're going to be hearing a lot about, and what the future really holds for us that's exciting, is this arena of artificial intelligence and machine learning. These next gen platforms offer performance that allows us to do things in areas like natural language processing that we just couldn't do before cost effectively. So I think the next few years are going to see a lot of advances in AI and ML that will be debated in the larger culture and that will excite a lot of computer scientists. So that's it: AI and ML are going to be the big buzzwords moving forward. >> So Dave, you talked earlier about this; some people might have sticker shock. So some of the infrastructure pros that are watching this might be thinking, oh, okay, I'm going to have to pitch this, especially in this, you know, tough macro environment.
I'm going to have to sell this to my CIO, my CFO. So what does this all mean? You know, if they're going to have to pay more, how is it going to affect TCO? How would you pitch that to your management? >> As long as you stay away from per unit cost, you're fine. And again, we don't necessarily have, or I don't necessarily have, insider access to street pricing on next gen servers yet, but what I do know from examining what the component suppliers tell us is that these systems are going to be significantly more expensive on a per unit basis. But what does that mean? If the server that you're used to buying for five bucks is now 10 bucks, but it's doing five times as much work, it's a great deal. And anyone who looks at it and says, 10 bucks? It used to only be five bucks! Well, the ROI and the TCO, that's where all of this really needs to be measured, and a huge part of that is going to be power consumption. And along with the performance tests that we expect to see coming out imminently, we should also be expecting to see some of those ROI metrics, especially around power consumption. So I don't think it's going to be a problem moving forward, but there will be some sticker shock. I imagine you're going to be able to go in and configure a very, very expensive, fully loaded system on some of these configurators online over the next year. >> So it's consolidation, which means you can do more with less, or more with the same. It's going to be lower power, less cooling, less floor space, and lower management overhead, which is kind of, now you get into staff, so you're going to have to identify how the staff can be productive in other areas. You're probably not going to fire people, hopefully. But yeah, it sounds like it's going to be a real consolidation play. I talked at the open about Intel and AMD, and Intel coming out with Sapphire Rapids, you know, of course it's been well documented, it's late, but they're now scheduled for January. Pat Gelsinger's talked about this, and of course they're going to try to leapfrog AMD, and then AMD is going to respond, you talked about this earlier, so that game is going to continue. How long do you think this cycle will last? >> Forever. (laughs) It's just that there will be periods of excitement, like we're going to experience over at least the next year, and then there will be a lull, and then there will be a period of excitement. But along the way, we've got lurkers who are trying to disrupt this market completely. You know, specifically you think about ARM, where the original design point was, okay, you're powered by a battery, you have to fit in someone's pocket. You can't catch on fire and burn their leg. That's sort of the requirement, as opposed to the, you know, the x86 model, which is, okay, you have a data center with a raised floor and you have a nuclear power plant down the street, so don't worry about it. As long as an 18-wheeler can get it to where it needs to be, we'll be okay. And so, you would think that over time, ARM is going to creep up, as all disruptive technologies do, and we've seen that, we've definitely seen that. But I would argue that we haven't seen it happen as quickly as maybe some of us expected. And then you've got NVIDIA kind of off to the side, starting out, you know, heavy in the GPU space, saying, hey, you know what, you can use the stuff we build for a whole lot of really cool new stuff. So they're running in a different direction, sort of gnawing at the traditional x86 vendors, certainly.
>> Yes, so I'm glad- >> That's going to be forever. >> I'm glad you brought up ARM and NVIDIA. I think, you know, maybe it hasn't happened as quickly as many thought, although there are clearly pockets and examples where it is taking shape. But this to me, Dave, talks to the supporting cast. It's not just about the microprocessor unit anymore, specifically, you know, generally, but specifically the x86. It's the supporting, it's the CPU, the NPU, the XPU, if you will, but also all those surrounding components that, to your earlier point, are taking advantage of the faster bus speeds. >> Yeah, no, 100%. You know, look at it this way. A server used to be measured, well, they still are, you know, by how many U of rack space it takes up. You had pizza box servers with a physical enclosure. Increasingly, you have the concept of a server, in quotes, being the aggregation of components that are all plugged together, that share maybe a bus architecture. But those things are all connected internally and externally, especially externally, whether it's external storage, certainly networks. You talk about HPC, it's just not one server. It's hundreds or thousands of servers. So you could argue that we are in the era of connectivity, and the real critical changes that we're going to see with these next generation server platforms are really centered on the bus architecture, PCIe 5, and the things that get plugged into those slots. So if you're looking at 25 gig or 100 gig NICs and what that means from a performance and/or consolidation perspective, or things like RDMA over Converged Ethernet and what that means for connecting systems, those factors will be at least as important as the microprocessor complexes. I imagine IT professionals going out and making the decision, okay, we're going to buy these systems with these microprocessors, with this number of cores and memory. Okay, great. But the real work starts when you start talking about connecting all of them together. What does that look like? So yeah, the definition of what constitutes a server, and what's critically important, I think has definitely changed. >> Dave, let's wrap. What can our audience expect in the future? You talked earlier about being able to get benchmarks, so that we can quantify these innovations that we've been talking about. Bring us home. >> Yeah, I'm looking forward to taking a solid look at some of the performance benchmarking that's going to come out, these legitimate attempts to set world records, and those questions about ROI and TCO. I want solid information about what my dollar is getting me. I think it helps the server vendors to be able to express that in a concrete way, because our understanding is these things on a per unit basis are going to be more expensive, and you're going to have to justify them. So that's really what, it's the details that are going to come the day of the launch and in subsequent weeks. So I think we're going to be busy for the next year focusing on a lot of hardware that, yes, does matter. So, you know, hang on, it's going to be a fun ride. >> All right, Dave, we're going to leave it there. Thank you so much, my friend. Appreciate you coming on. >> Thanks, Dave. >> Okay, and don't forget to check out the special website that we've set up for this ongoing series. Go to doeshardwarematter.com and you'll see commentary from industry leaders; we've got analysts on there, technical experts from all over the world. Thanks for watching, and we'll see you next time. (upbeat music)
Digging into HeatWave ML Performance
(upbeat music) >> Hello everyone. This is Dave Vellante. We're diving into the deep end with AMD and Oracle on the topic of mySQL HeatWave performance. And we want to explore the important issues around machine learning. As applications become more data intensive and machine intelligence continues to evolve, workloads increasingly are seeing a major shift where data and AI are being infused into applications. And having a database that simplifies the convergence of transaction and analytics data, without the need to context switch and move data out of and into different data stores, and eliminating the need to perform extensive ETL operations, is becoming an industry trend that customers are demanding. At the same time, workloads are becoming more automated and intelligent. And to explore these issues further, we're happy to have back in theCUBE Nipun Agarwal, who's the Senior Vice President of mySQL HeatWave, and Kumaran Siva, who's the Corporate Vice President, Strategic Business Development at AMD. Gents, hello again. Welcome back. >> Hello. Hi Dave. >> Thank you, Dave. >> Okay. Nipun, obviously machine learning has become a must have for analytics offerings. It's integrated into mySQL HeatWave. Why did you take this approach, and not the specialized database approach, as many competitors do- right tool for the right job? >> Right. So, there are a lot of customers of mySQL who have the need to run machine learning on the data which is stored in the mySQL database. So in the past, customers would need to extract the data out of mySQL, and they would take it to a specialized service for running machine learning. Now, the reason we decided to incorporate machine learning inside the database- there are multiple reasons. One, customers don't need to move the data. And if they don't need to move the data, it is more secure, because it's protected by the same access control mechanisms as the rest of the data. There is no need for customers to manage multiple services. But in addition to that, when we run the machine learning inside the database, customers are able to leverage the same service, the same hardware, which has been provisioned for OLTP and analytics, and use machine learning capabilities at no additional charge. So from a customer's perspective, they get the benefit that it is a single database. They don't need to manage multiple services. And it is offered at no additional charge. And then there's another aspect, which is based on the IP, the work we have done: it is also significantly faster than what customers would get by having a separate service. >> Just to follow up on that. How are you seeing customers use HeatWave's machine learning capabilities today? How is that evolving? >> Right. So one of the things which, you know, customers very often want to do is to train their models based on the data. Now, one of the things is that data in a database, or in a transaction database, changes quite rapidly. So we have introduced support for auto machine learning as a part of HeatWave ML. And what it does is that it fully automates the process of training. And this is something which is very important to database users, very important to mySQL users: they don't really want to hire data scientists or specialists for doing training. So that's the first part, that training in HeatWave ML is fully automated. It doesn't require the user to provide any specific parameters, just the source data and the task for which they want to train. The second aspect is the training is really fast.
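To make the "fully automated" point concrete, training in HeatWave ML is driven through a single stored procedure call. The sketch below assumes the documented sys.ML_TRAIN interface and the mysql-connector-python driver; the endpoint, schema, table, and column names are hypothetical placeholders.

```python
# Minimal sketch of HeatWave ML's automated training, driven from Python.
# Assumes the documented sys.ML_TRAIN stored procedure; the endpoint,
# schema, table, and column names are hypothetical placeholders.
import mysql.connector

cnx = mysql.connector.connect(
    host="heatwave.example.oraclecloud.com",  # placeholder DB system endpoint
    user="admin", password="...", database="ml_data")
cur = cnx.cursor()

# The user supplies only the training table, the target column, and the
# task; algorithm selection, feature selection, and hyperparameter tuning
# happen automatically inside the HeatWave cluster.
cur.execute(
    "CALL sys.ML_TRAIN('ml_data.churn_train', 'churned', "
    "JSON_OBJECT('task', 'classification'), @model)")

cur.close()
cnx.close()
```

Inference and explanations follow the same pattern, through companion procedures such as sys.ML_PREDICT_TABLE and sys.ML_EXPLAIN_TABLE.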
Because the training is really fast, the benefit is that customers can retrain quite often. They can make sure that the model is up to date with any changes which have been made to their transaction database. And as a result of the models being up to date, the accuracy of the prediction is high, right? So that's the first aspect, which is training. The second aspect is inference, which customers run once they have the models trained. And the third thing, which has perhaps been the most sought after request from mySQL customers, is the ability to provide explanations. So HeatWave ML provides explanations for any model which has been generated or trained by HeatWave ML. So these are the three capabilities: training, inference, and explanations. And this whole process is completely automated; it doesn't require a specialist or a data scientist. >> Yeah, that's nice. I mean, training is obviously very popular today. Inference, I've said, I think is going to explode in the coming decade. And then of course, explainable AI is a very important issue. Kumaran, what are the relevant capabilities of the AMD chips that are used in OCI to support HeatWave ML? Are they different from, say, the specs for HeatWave in general? >> So, actually they aren't. And this is one of the key features of this architecture, or this implementation, that is really exciting. With HeatWave ML, you're using the same CPU. And by the way, it's not a GPU, it's a CPU, for all three of the functions that Nipun just talked about- inference, training, and explanation, all done on CPU. You know, bigger picture, with the capabilities we bring here, we're really providing a balance, you know, between the CPU cores, memory, and the networking. And what that allows you to do here is be able to feed the CPU cores appropriately. And within the cores, we have these AVX instruction extensions. With the Zen 2 and Zen 3 cores we had AVX2, and then with the Zen 4 core coming out, we're going to have AVX-512. But with that balance of being able to bring in the data, utilize the high memory bandwidth, and then use the computation to its maximum, we're able to provide, you know, enough AI processing that we are able to get the job done, and then fit into that larger pipeline that we've built out here with HeatWave. >> Got it. Nipun, you know, you and I, every time we have a conversation, we've got to talk benchmarks. So you've done machine learning benchmarks with HeatWave. You might even be the first in the industry to publish, you know, transparent, open ML benchmarks on GitHub. I mean, I wouldn't know for sure, but I've not seen that as common. Can you describe the benchmarks and the data sets that you used here? >> Sure. So what we did was we took a bunch of open data sets for two categories of tasks: classification and regression. So we took about a dozen data sets for classification and about six for regression. To give an example, the kind of data sets we used for classification are like the airlines data set, hex, sensors, bank, right? So these are open data sets. And what we did was, on these data sets, we did a comparison of what it would take to train using HeatWave ML. And the other service we compared with is Redshift ML. So, there were two observations. One is that with HeatWave ML, the user does not need to provide any tuning parameters, right? HeatWave ML, using AutoML, fully generates a trained model, figures out what are the right algorithms,
what are the right features, what are the right hyperparameters, and so on, right? So, no need for any manual intervention- not so the case with Redshift ML. The second thing is the performance, right? So, taking the aggregate performance of HeatWave ML on these 12 data sets for classification and the six data sets for regression: on average, it is 25 times faster than Redshift ML. And note that Redshift ML in turn involves SageMaker, right? So on average, HeatWave ML provides 25 times better performance for training. And the other point to note is that there is no need for any human intervention. It's fully automated. But in the case of Redshift ML, many of these data sets did not even complete in the set duration. If you look at price performance, one of the things I again want to highlight is, because of the fact that AMD does pretty well on all kinds of workloads, users are able to use the same cluster for analytics, for OLTP, or for machine learning. So there is no additional cost for customers to run HeatWave ML if they have provisioned HeatWave. But assume a user is provisioning a HeatWave cluster only to run HeatWave ML. Even in that case, the price performance advantage of HeatWave ML over Redshift ML is 97 times, right? So, 25 times faster at 1% of the cost compared to Redshift ML. And all these scripts and all this information is available on GitHub for customers to try, to modify, and to see what advantages they would get on their workloads. >> Every time I hear these numbers, I shake my head. I mean, they're just so overwhelming. And so we'll see how the competition responds when, and if, they respond. So, but thank you for sharing those results. Kumaran, can you elaborate on how the specs that you talked about earlier contribute to HeatWave ML's, you know, benchmark results? I'm particularly interested in scalability. You know, typically things degrade as you push the system harder. What are you seeing? >> No, I think, I think it's good. Look, yeah, those numbers just blow my head too. That's crazy good performance. So look, from an AMD perspective, we have really built an architecture. Like, if you think about the chiplet architecture to begin with, it is fundamentally, you know, it's kind of scaling by design, right? And one of the things that we've done here is been able to work with the HeatWave team and the HeatWave ML team, and then been able to, within the CPU package itself, be able to scale up to make very efficient use of all of the cores. And then, of course, work with them on how you go between nodes. So you can have these very large systems that can run ML very, very efficiently. So it's really, you know, building on the building blocks of the chiplet architecture and how scaling happens there. >> Yeah. So you're saying it's near linear scaling, essentially. >> So, let Nipun comment on that. >> Yeah. >> Is it... So, how about as cluster sizes grow, Nipun? >> Right. >> What happens there? >> So one of the design points for HeatWave is scale out architecture, right? So as you said, as we add more data sets or increase the size of the data, or we add to the number of nodes in the cluster, we want the performance to scale. So we show that we have a near linear scale factor, near linear scalability, for SQL workloads, and in the case of HeatWave ML as well.
As users add more nodes to the cluster, as the size of the cluster grows, the performance of HeatWave ML improves. So I was giving you this example that HeatWave ML is 25 times faster compared to Redshift ML- well, that was on a cluster size of two. If you increase the cluster size of HeatWave ML to a larger number, and I think the number is 16, the performance advantage over Redshift ML increases from 25 times faster to 45 times faster. So what that means is that on a cluster size of 16 nodes, HeatWave ML is 45 times faster for training these, again, dozen data sets. So this shows that HeatWave ML scales better than the competition. >> So you're saying adding nodes offsets any management complexity that you would think of as getting in the way. Is that right? >> Right. So one part is the management complexity, and that is why, with features like this, customers can scale up or scale down, you know, very easily. The second aspect is, okay, what gives us this advantage of scalability? Or how are we able to scale? Now, the techniques which we use for HeatWave ML scalability are a bit different from what we use for SQL processing. So in the case of HeatWave ML, there are really, you know, a couple of trade-offs which we have to be careful about. One is the accuracy, because we want to provide better performance for machine learning without compromising on the accuracy. Accuracy would require more synchronization if you have multiple threads, but if you have too much synchronization, that can slow down the degree of parallelism that you get. Right? So we have to strike a fine balance. So what we do is that in HeatWave ML, there are different phases of training, like algorithm selection, feature selection, hyperparameter tuning. Each of these phases is analyzed. And for instance, one of the techniques we use is that, if you're trying to figure out the optimal hyperparameters to be used, we start with a search space, and then each of the VMs gets a part of the search space, and then we synchronize only when needed, right? So these are some of the techniques which we have developed over the years, and there are actually papers filed, research publications, on this. And this is what we do to achieve good scalability. And what that translates to for the customer is that if they have some amount of training time and they want to make it better, they can just provision a larger cluster and they will get better performance. >> Got it. Thank you. Kumaran, when I think of machine learning, machine intelligence, AI, I think GPU, but you're not using GPUs. So how are you able to get this type of performance, or price performance, without using GPUs? >> Yeah, definitely. So yeah, that's a good point. And you think about what is going on here, and you consider the whole pipeline that Nipun has just described in terms of how you get, you know, your training, your algorithms, and using the mySQL pieces of it to get to the point where the AI can be effective. In that process, what happens is you have a lot of memory transactions. A lot of memory bandwidth comes into play. And then, bringing all that data together, feeding the actual complex that does the AI calculations- that in itself could be the bottleneck, right? And you can have multiple bottlenecks along the way. And I think what you see in the AMD architecture, in EPYC, for this use case, is the balance.
And the fact that you are able to do the pre-processing, the AI, and then the post-processing all seamlessly together has huge value. That goes back to what Nipun was saying about using the same infrastructure: it gets you better TCO, but it also gets you better performance, because you're bringing the data to the computation. The computation in this case is not strictly the bottleneck; it's really about how you pull together what you need to do the AI computation, and that is the common case. So I think you're going to start to see this, especially for inference applications, but in this case we're doing inference, explanation, and training, all using the CPU on the same OCI infrastructure. >> Interesting. Now Nipun, is the secret sauce for HeatWave ML performance different from what you and I have discussed before about HeatWave generally? Is there some additive you're putting in the engine? >> Yes, the secret sauce is indeed different. Just as with SQL processing, where the reason we get very good performance and price performance is that we came up with new algorithms that help SQL processing scale out, for HeatWave ML we have come up with new IP, new algorithms. One example is that we use meta-learned proxy models; that's the technique we use for automating the training process. Think of these meta-learned proxy models as using machine learning for machine learning training. This is IP we developed, and again, we have published the results and the techniques. Having these kinds of techniques is what gives us better performance. Similarly, another thing we use is adaptive sampling: you can have a large dataset, but we intelligently sample it to figure out how we can train on a small subset without compromising accuracy. So yes, there are many techniques we have developed specifically for machine learning, and they are what give us better performance, better price performance, and also better scalability. >> What about MySQL Autopilot? Is there anything that differs from HeatWave ML that is relevant? >> Interesting you should ask. Think of MySQL Autopilot as an application that uses machine learning: it automates various aspects of the database service. For instance, if you want to figure out the right scheme for partitioning data in memory, we use machine learning to determine the best column to partition on, based on the user's workload. Or, given a workload, if you want to figure out the right cluster size to provision, that's something we use MySQL Autopilot for. And I want to highlight that we are not aware of any other database service that provides this level of machine-learning-based automation, which customers get with MySQL Autopilot. >> Interesting. Okay, last question for both of you: what are you working on next? What can customers expect from this collaboration, specifically in this space? Maybe Nipun, you can start, and then Kumaran can bring us home.
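The adaptive sampling Nipun mentions also lends itself to a short sketch. Again, this illustrates the general idea rather than the HeatWave ML algorithm: grow the training sample geometrically and stop as soon as the validation score stops improving by a meaningful margin.

```python
# Illustrative adaptive-sampling sketch (the general idea, not Oracle's
# algorithm): train on growing subsets, stop once accuracy plateaus.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def adaptive_fit(X, y, start=1_000, growth=2, tol=0.002):
    X_tr, X_val, y_tr, y_val = train_test_split(
        X, y, test_size=0.2, random_state=0)
    rng = np.random.default_rng(0)
    n, prev = start, -1.0
    while True:
        n = min(n, len(X_tr))
        idx = rng.choice(len(X_tr), size=n, replace=False)
        model = RandomForestClassifier(random_state=0)
        model.fit(X_tr[idx], y_tr[idx])
        score = model.score(X_val, y_val)
        # Stop when extra data no longer moves accuracy, or it's all used.
        if score - prev < tol or n == len(X_tr):
            return model, n, score
        prev, n = score, n * growth
```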
>> Sure. There are two things we are working on. One is that, based on the feedback we have gotten from customers, we are going to keep making the machine learning capabilities in HeatWave ML richer. That's one dimension. The second, which Kumaran was alluding to earlier, is that we are looking at the next generation of processors coming from AMD. We will be seeing how we can benefit further from those processors, whether it's the size of the L3 cache, the memory bandwidth, the network bandwidth, and so on, and make sure that we leverage all the greatness the new generation of processors will offer. >> It's like an engineering playground. Kumaran, let's give you the final word. >> That's great. With the Zen 4 CPU cores, we're also bringing in AVX-512 instruction capability. Our implementation is a little different: as we did in Rome and Milan, we use a double-pumped implementation, which means we take two cycles to execute these instructions. The key thing is that we don't lower the clock speed of the CPU, so there are no noisy-neighbor effects, and that's something OCI and the HeatWave team have taken full advantage of. As we go out in time with the Zen 4 core, we see up to 96 cores per CPU, and that's going to work really well. We're collaborating closely with OCI and with the HeatWave team to make sure we can take advantage of that. We're also going to upgrade the memory subsystem to 12 channels of DDR5. So there should be a fairly significant boost in absolute performance, but just as importantly, in TCO value for the end customers who are going to adopt this great service. >> I love the relentless innovation, guys. Thanks so much for your time. We're going to have to leave it there. Appreciate it. >> Thank you, David. >> Thank you, David. >> Okay. Thank you for watching this special presentation on theCUBE, your leader in enterprise and emerging tech coverage.
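A quick technical footnote on the AVX-512 capability in Kumaran's closing remarks: whether a given instance actually exposes it is easy to verify from userspace. A minimal, Linux-only sketch follows; it assumes /proc/cpuinfo is present, so it will not work on other operating systems.

```python
# Minimal Linux-only sketch: check whether the CPU advertises the
# AVX-512 foundation flag that Zen 4 brings to the EPYC line.
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("AVX2 supported:    ", "avx2" in flags)
print("AVX-512F supported:", "avx512f" in flags)
```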
Kumaran Siva, AMD | IBM Think 2021
>> From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome back to theCUBE's coverage of IBM Think 2021. I'm John Furrier, host of theCUBE, here for the virtual event with Kumaran Siva, corporate vice president of business development at AMD. Great to see you. Thanks for coming on theCUBE. >> It's an honor to be here. >> You know, love AMD: love the growth, love the processors. The EPYC 7003 series was just launched and is out in the field. Give us a quick overview of the processor, how it's doing, and how it's going to help us in the data center and at the edge. >> For sure. This is an exciting time for AMD, probably one of the most exciting times, to be honest, in my 20-plus years working in this industry. I've never been as excited about a new product as I am about the third-generation EPYC processor we just announced. The EPYC 7003 series processor is a fantastic product. We not only have the fastest server processor in the world with the AMD EPYC 7763, we also have the fastest CPU core, so the processor is the complete package, the complete socket. And we have the fastest core in the world with the frequency-optimized EPYC 72F3, where each core runs super fast, and then we also have 64 cores in the CPU. So it addresses both what we call scale-up and scale-out. Overall it's just an enormous product line that I think will be amazing within IBM Cloud. The processor itself includes 256 megabytes of L3 cache, and cache is super important for a variety of workloads; with the large cache size we have seen scaling in particular cloud applications, but across the board: database, Java, all sorts of things. This processor is also based on the Zen 3 core, which delivers roughly 19% more instructions per cycle relative to our Zen 2, the prior generation found in the second-generation EPYC, which was called Rome. So this new CPU is quite a bit more capable, and it runs at a higher frequency with both the 64-core and the frequency-optimized devices. And finally, we have what we call all-in features: rather than segmenting our product line and charging you for every little thing you turn on or off, everything is included. That means, really importantly, security, which is becoming a big theme and something we're partnering with IBM very closely on, and then things like 128 lanes of PCIe Gen 4 and memory interfaces that support up to four terabytes, so you can run these big, large in-memory databases, while the PCIe interfaces give you lots and lots of storage capability. So all in all, a super product, and we're super excited to be working with IBM on it. >> Well, let's get into some of the details of this impact, because it's not just one place where these processors are going to live. You're seeing a distributed surface area, core to edge. Cloud and hybrid are now in play, pretty much standard, with multicloud on the horizon. Companies are starting to realize: okay, I've got to put this to work, and I want to get more insights out of the data and the applications that are evolving on this.
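Those per-socket figures (64 cores, two threads per core, 256 MB of L3) are straightforward to sanity-check on a Linux instance. Here is a small sketch assuming the standard sysfs layout; note that the L3 size it reports is for one cache slice as seen by cpu0, not necessarily the socket total.

```python
# Small Linux sketch: read core/thread topology and L3 size from sysfs
# to sanity-check the per-socket EPYC figures quoted above.
import os
from pathlib import Path

logical = os.cpu_count()  # hardware threads visible to the OS

# Physical cores = unique (package, core) pairs across logical CPUs.
cores = set()
for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
    topo = cpu / "topology"
    if topo.exists():
        cores.add(((topo / "physical_package_id").read_text().strip(),
                   (topo / "core_id").read_text().strip()))

l3 = Path("/sys/devices/system/cpu/cpu0/cache/index3/size")
print(f"logical CPUs: {logical}, physical cores: {len(cores)}")
print("L3 (one slice):", l3.read_text().strip() if l3.exists() else "n/a")
```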
But you guys have seen some growth in the cloud with the EPYC processors. What can customers expect, and why are cloud providers choosing EPYC processors? >> You know, a big part of this is the fact that AMD delivers on our roadmap. We do what we say and say what we do, and we deliver on time. We announced the second-generation EPYC part back in August of 2019, and now in March we announced the third generation, very much on schedule, very much in line with expectations, and meeting the performance we had told the industry and our customers we would deliver back then. So a really important piece is that our customers are now learning to expect performance, generation on generation, and on time from AMD, which I think is a big part of our success. The second thing is that we are a leader in terms of the core density we provide, and cloud in particular really values high density. The 64-core part is absolutely unique in the industry today, and it has the ability to be offered both in bare metal, as it has been deployed in IBM Cloud, and in virtualized environments, so it can span a lot of different use cases. You can run each core really fast, but then also scale out and take advantage of all 64 cores. Each core has two threads, so up to 128 threads per socket. It's a super powerful CPU and it has a lot of value for the cloud provider. There are actually over 400 total instance types with AMD processors out there, across all the generations, not just the third, so it's really starting to proliferate. You're starting to see AMD all across the cloud. >> More cores, more threads, all goodness. I've got to ask you: I interviewed Arvind, the CEO of IBM, before he was CEO, at a conference, and I know he's always loved cloud. But he sees it a little differently than just copying the clouds; he sees it as we see it unfolding here: hybrid. And so I can almost see the playbook evolving. Red Hat has an operating system; cloud and edge form a distributed system. It's got the vibe of a system architecture, with processors everywhere. Could you give us an overview of the work you're doing with IBM Cloud and what AMD's role is there? And I'm curious if you could share that for the folks watching, too. >> For sure. By the way, IBM Cloud is a fantastic partner to work with. First off, you talked about hybrid: hybrid cloud is a really important thing for us and an area we are definitely focused on. In terms of our specific joint partnerships, we did an announcement last year, so it's somewhat public: we are working together on AI, where IBM is an undisputed leader with Watson and some of the technologies IBM brings there. So we're bringing together our hardware goodness with IBM's prowess and know-how on the AI side. In addition, IBM is known for really enterprise-grade security, and for working with key sectors that need and value reliability, security, and availability in those areas.
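The scale-up versus scale-out trade-off Kumaran describes, a few very fast cores versus 64 cores and 128 threads, is essentially Amdahl's law. A quick illustrative calculation with made-up serial fractions (not AMD benchmark data) shows why both a frequency-optimized part and a high-core-count part have a home.

```python
# Quick Amdahl's-law sketch with illustrative numbers, not benchmarks:
# speedup on n cores for a workload whose serial fraction is s.
def amdahl(s, n):
    return 1.0 / (s + (1.0 - s) / n)

for s in (0.01, 0.10, 0.30):  # hypothetical serial fractions
    print(f"serial={s:.0%}: "
          f"8 cores -> {amdahl(s, 8):5.2f}x, "
          f"64 cores -> {amdahl(s, 64):5.2f}x")
# Mostly-parallel work keeps scaling out to 64 cores; serial-heavy work
# gains more from fewer, faster cores (a frequency-optimized SKU).
```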
And so within that partnership we have quite a strong relationship around working together on security and on confidential computing. >> Tell us more about the confidential computing. Is this a joint development agreement, a joint venture? Give us more detail on this announcement with IBM Cloud and AMD on confidential computing. >> That's right. There are some key pillars to this. One is working together to define open standards and an open architecture, jointly with IBM and also pulling in some of the assets in terms of Red Hat, to pull together confidential computing that works within a hybrid cloud and within IBM Cloud, and to provide our joint customers and end customers with unprecedented security and reliability in the cloud. >> What's the future of processors? What should people expect in terms of innovation? Data centers are evolving with new core features to work with a hybrid operating model in the cloud. People are getting the edge relationship: the data center is basically a large edge, but now you've got the other edges, industrial edges, consumers, wearables. You're going to have more and more devices, big and small. What does the roadmap look like? How do you describe the future of AMD in the IBM world? >> I think the future of our IBM-AMD partnership is bright, for sure, and there are a lot of key pieces there. IBM brings a lot of value in terms of taking on those upper layers of software and the full stack; IBM's strength has really been as a systems company and as a software company. Combining that with the AMD silicon and CPU devices is a great combination. I see growth in deploying this scale-out model where we have these very large core-count CPUs, and I see that trend continuing for sure. That is the way of the future: cloud-native applications that can scale across multiple cores within the socket, and then across clusters of CPUs within the data center. IBM is in a really good position to take advantage of that and to drive it within the cloud, in combination with IBM's presence on premises; that's where the hybrid cloud value proposition comes in. So we actually see ourselves playing on both sides. We have a very strong presence now, and increasingly so, on premises as well, and we are very interested in working with IBM on premises with some of the key customers, and then offering that hybrid connectivity onto the IBM cloud as well. >> IBM and AMD, great partnership; thanks for clarifying and sharing that insight, Kumaran, I appreciate it. Thanks for coming on theCUBE. I do want to ask you, while I've got you here, kind of a curveball question if you don't mind. As you see hybrid cloud developing, one of the big trends is this ecosystem play, right?
So you're seeing connections between IBM and their partners becoming much more integrated. Cloud has been a big API kind of model: you connect people through APIs. There's a big trend we're seeing, and reporting on at SiliconANGLE, of the rise of cloud service providers within these ecosystems: hey, I can build on top of IBM Cloud and build a great business. And as I do that, I might want to look at an architecture like AMD's. How does that fit into your view, doing business development at AMD? Because people building on top of these ecosystems are building their own clouds on top of clouds; you're seeing data clouds, all kinds of specialty clouds. We could have a CUBE cloud on top of IBM someday. If I'm a cloud, that's more processors needed, for you. So how do you see this enablement? Because IBM is going to want to do that. I'm kind of connecting the dots here in real time, but what's your take on that? What's your reaction? >> I think that's right, and I think AMD is in a pretty good position with IBM to enable that. We do have some very significant OSV partnerships, a lot of which are leveraged into IBM, such as Red Hat of course, but also the likes of VMware and Nutanix. These OSV partners provide the base-level infrastructure that we can build upon, and then have the APIs to build the multi-cloud environments you're talking about. And I think that's right: that is one of the future trends we will see, services offered on top of IBM Cloud that take advantage of the capabilities of the platform underneath. The bare-metal offerings IBM provides on their cloud are also quite unique and very performant. That gives this kind of meta-cloud the unique ability to take advantage of the AMD hardware at a performance level, and to take advantage of that infrastructure better than it could in other cloud environments. I think that's very key, and one of the features of the IBM platform that differentiates it. >> So much headroom there, Kumaran; I really appreciate you sharing that. I think it's a great opportunity: as I say, if you want to build and compete, find the white space with no competition, or be better than the competition. So, as they say in business, thank you for coming on and sharing. A great future ahead for all the builders out there. Thanks for coming on theCUBE. >> Thanks, thank you very much. >> Okay, IBM Think CUBE coverage here. I'm John Furrier, your host. Thanks for watching.
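A technical footnote on the confidential-computing collaboration discussed above: on the EPYC side, the hardware foundation is AMD's SEV family of memory-encryption features. The sketch below probes a Linux KVM host for SEV support; the sysfs paths are kernel-dependent assumptions, so treat a missing path as "unknown" rather than "unsupported".

```python
# Minimal sketch: probe a Linux KVM host for AMD SEV support, the
# hardware feature underneath EPYC-based confidential computing.
# Paths vary by kernel; absence means "unknown", not "no support".
from pathlib import Path

def kvm_amd_param(name):
    p = Path(f"/sys/module/kvm_amd/parameters/{name}")
    return p.read_text().strip() if p.exists() else None

for param in ("sev", "sev_es"):
    val = kvm_amd_param(param)
    status = {"1": "enabled", "Y": "enabled",
              "0": "disabled", "N": "disabled"}.get(val, "unknown")
    print(f"{param}: {status}")
print("/dev/sev present:", Path("/dev/sev").exists())
```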