HPE Compute Security - Kevin Depew, HPE & David Chang, AMD


 

>>Hey everyone, welcome to this event, HPE Compute Security. I'm your host, Lisa Martin. Kevin Depew joins me next, Senior Director of Future Server Architecture at HPE. Kevin, it's great to have you back on the program. >>Thanks, Lisa. I'm glad to be here. >>One of the topics that we're going to unpack in this segment is all about cybersecurity. If we think of how dramatically the landscape has changed in the last couple of years, I was looking at some numbers that HPE had provided: cybercrime will reach $10.5 trillion by 2025, just a couple of years away. The average total cost of a data breach is now over $4 million, with 15% year-over-year cybercrime growth predicted over the next five years. It's no longer if we get hit, it's when, how often, and what's the severity. Talk to me about the current situation with the cybersecurity landscape that you're seeing. >>Yeah, the numbers you're talking about are just staggering, and that's exactly what we're seeing and exactly what we're hearing from our customers. Customers have too much to lose. The dollar cost is, like I said, staggering. Here at HPE we know we have a huge part to play, but we also know that we need partnerships across the industry to solve these problems. So we have partnered with our various partners to deliver these Gen 11 products, whether we're talking about partners like AMD or partners like our NIC vendors and storage card vendors. We know we can't solve the problem alone, and we know the issue is huge. Like you said, the numbers are staggering. So we're really partnering with all the right players to ensure we have a secure solution, so we can stay ahead of the bad guys and limit the attacks on our customers. >>Right, limit the damage. What are some of the things that you've seen particularly change in the last 18 months or so? Anything you can share with us that's more eye-opening than some of the stats we already shared? >>Well, there's been a massive number of attacks just in the last 12 months, but I wouldn't really say it's so much changed, because the number of attacks has been increasing dramatically for many, many years. It's just a very lucrative area for the bad guys, whether it's ransomware or stealing personal data, whatever it is. There's unfortunately a lot of money to be made from it, and a lot of money to be lost by the good guys, the good guys being our customers. So it's not so much that it's changed; it's that it's accelerating even faster because it's becoming even more lucrative. So we have to stay ahead of these bad guys. One statistic on Microsoft operating environments: the number of attacks in the last year was up 50% year over year. That's a huge acceleration, and we've got to stay ahead of that. We have to make sure our customers don't get impacted to the level that these staggering numbers of attacks suggest. The bad guys are out there, and we've got to protect our customers from them. >>Absolutely. The acceleration that you talked about is kind of frightening, very eye-opening. We do know that security, we've talked about it for so long, is a C-suite priority, a board-level priority.
We know from some of the data that HPE also sent over that organizations are listing cyber risk as a top-five concern, and IT budget spend is going up where security is concerned. So security is on everyone's mind. In fact, theCUBE did a series in the middle part of last year really focusing on cybersecurity as a board issue, looking at how companies are structuring security teams and changing their assumptions about the right security model, offense versus defense. But security has gone beyond the board; it's top of mind and an integral part of every conversation. So my question for you is, when you're talking to customers, what are some of the key challenges, the key pain points, that they're coming to you to help solve? >>Yeah, at the highest level it's simply that security is incredibly important to them. We talked about the numbers. There's so much money to be lost that they come to us and say: security is important for us, what can you do to protect us? What can you do to prevent us from being one of those statistics? That's what we're seeing at a high level. With a little more detail, we know there are customers doing digital transformations and customers going hybrid cloud. They've got a lot of initiatives of their own, and they have to spend a lot of time and bandwidth tackling things that are important to their business. They just don't have the bandwidth to worry about yet another thing, which is security. So we are doing everything we can, and partnering with everyone we can, to help solve those problems for customers. >>Because we're hearing: hey, this is huge, this is too big of a risk, how do you protect us? And by the way, we only have limited bandwidth, so what can we do? What we can do is assure them that the platform is secure, that we are creating a foundation for a very secure platform and that we've worked with our partners to secure all the pieces. So yes, they still have to worry about security, but there are pieces we've taken care of that they don't have to worry about, and there are capabilities we've provided that they can use, and we've made that easy so they can build secure solutions on top of it. >>What are some of the things you talk about in customer conversations, Kevin, in terms of what makes HPE's approach to security really unique? >>Well, I think a big thing is that security is part of our DNA. It's part of everything we do, whether we're designing our own ASICs for our BMC, the iLO 6 ASIC used on Gen 11, or whether it's our firmware stack, the iLO firmware, our system UEFI firmware, all those pieces. In everything we do, we're thinking about security. When we're building products in our factory, we're thinking about security. When we're designing our supply chain, we're thinking about security. When we make requirements on our suppliers, we're driving security to be a key part of those components. So security is in our DNA, security is top of mind, security is something we think about in everything we do. We have to think like the bad guys: what could the bad guy take advantage of, what could the bad guy exploit? We try to think like them so that we can protect our customers.
>>And so security is something that really is pervasive across all of our development organizations, our supply chain organizations, our factories, and our partners. That's what we think is unique about HPE: because security is so important, there are a whole lot of pieces of our ProLiant servers that we do ourselves that many others don't do themselves. And since we do it ourselves, we can make sure that security is in the design from the start and that those pieces work together in a secure manner. So we think that gives us an advantage from a security standpoint. >>Security is very much intentional at HPE. I was reading in some notes, and you just did a great job of talking about this, about that fundamental security approach: security is fundamental to defend against threats that are increasingly complex, through what you also call an uncompromising focus on state-of-the-art security and innovations built into your DNA, so that organizations can protect their infrastructure, their workloads, and their data from the bad guys. Talk to us briefly in our final few minutes here, Kevin, about fundamental, uncompromising, and protected, and the value in it for me as an HPE customer. >>Yeah, when we talk about fundamental, we're talking about those fundamental technologies that are part of our platform. We've integrated TPMs and soldered them down in our platforms. We now have platform certificates as a standard part of the platform. We have IDevID, and probably most importantly, our platforms continue to support what we really believe was a groundbreaking technology, Silicon Root of Trust, and what that's able to do. We have millions of lines of firmware code in our platforms, and with Silicon Root of Trust we can authenticate all of that firmware, whether we're talking about the iLO 6 firmware, our UEFI firmware, or the CPLD in the system; there are other pieces of firmware as well. We authenticate all of those to make sure that not a single line of code, not a single bit, has been changed by a bad guy, even if the bad guy has physical access to the platform. >>So that Silicon Root of Trust technology is making sure that when the system boots and hands off to the operating system, and then eventually to the customer's application stack, it's starting with a solid foundation, a system that hasn't been compromised. And then we build other things into that Silicon Root of Trust, such as the ability to do scans and authentications at runtime, and the ability to automatically recover if we detect something has been compromised: we can automatically update that compromised piece of firmware to a good piece before we've run it, because we never want to run firmware that's been compromised. So that's all part of the Silicon Root of Trust solution, and that's a fundamental piece of the platform. And then when we talk about uncompromising, what we're really talking about there is how we don't compromise security. >>One of the ways we do that is through an extension of our Silicon Root of Trust with a capability called SPDM. This is a technology where we saw the need to authenticate our option cards and the firmware in those option cards. Silicon Root of Trust protects against many attacks, but one piece it didn't cover was verifying the actual option card firmware and the option cards themselves.
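Conceptually, the Silicon Root of Trust flow Kevin describes is a chain of measurements: each piece of firmware is hashed and checked against known-good values anchored in silicon before it is allowed to run, and a failed check can trigger recovery instead of execution. A rough Python sketch of that idea follows. The component names, images, and digests are hypothetical placeholders, not HPE's implementation, which anchors the first check in immutable silicon and verifies cryptographic signatures rather than bare hashes.

```python
# Minimal sketch of a root-of-trust style check: every firmware image is
# hashed and compared against known-good values before it is allowed to run.
# Names and blobs are hypothetical; real platforms verify signatures and
# anchor the first measurement in hardware.
import hashlib

def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

# "Golden" images as they left the factory (placeholders, not real firmware).
golden = {
    "bmc_firmware":  b"management controller firmware image",
    "uefi_firmware": b"system UEFI image",
    "cpld_image":    b"CPLD bitstream",
}
manifest = {name: digest(blob) for name, blob in golden.items()}  # trusted reference values

def verify_boot_chain(measured: dict[str, bytes], manifest: dict[str, str]) -> bool:
    """Verify each firmware image in boot order; refuse to hand off on any mismatch."""
    for name in ("bmc_firmware", "uefi_firmware", "cpld_image"):
        if digest(measured[name]) != manifest[name]:
            print(f"FAIL: {name} does not match its known-good measurement")
            return False
        print(f"ok:   {name} verified")
    return True

# Simulate a platform where the UEFI image was tampered with in transit.
measured = dict(golden)
measured["uefi_firmware"] = b"system UEFI image + implant"
if not verify_boot_chain(measured, manifest):
    print("Boot halted; recover firmware from a known-good copy before handing off to the OS.")
```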
So we knew that to solve that option-card problem we would have to partner with others in the industry: our NIC vendors, our storage controller vendors, our GPU vendors. We worked with industry standards bodies and those other partners to design a capability that allows us to authenticate all of those devices, and we worked with those vendors to get the support both on their side and on our platform side, so that now Silicon Root of Trust has been extended to where we protect and trust those option cards as well. >>So when we talk about uncompromising and about protect, what we're talking about there are our capabilities around protecting against, for example, supply chain attacks. We have our trusted supply chain solution, which allows us to guarantee that what our server is when it leaves our factory will be what it is when it arrives at the customer. If a bad guy does anything in that transit from our factory to the customer, the customer will be able to detect it. So we enable certain capabilities by default, such as a capability called Server Configuration Lock, which can ensure that nothing in the server changed, whether it's firmware, hardware, configurations, swapping out processors, whatever it is. We'll detect if a bad guy did any of that, and the customer will know it before they deploy the system. That gets enabled by default. >>We have an intrusion detection technology option that is included by default when you use the trusted supply chain. That lets you know whether anybody opened that system up, even if the system's not plugged in: did somebody take the hood off and potentially do something malicious to it? We also enable a capability called UEFI Secure Boot, which can authenticate some of the drivers that are located on the option card itself. And iLO high security mode gets enabled by default. So all these things are enabled in the platform to ensure that if it's attacked going from our factory to the customer, it will be detected, and the customer won't deploy a system that's been maliciously attacked. So that's- >>Got it. >>-how we protect the customer through those capabilities. >>Outstanding. You mentioned partners. My last question for you, and we've got about a minute left, Kevin: bring AMD into the conversation. Where do they fit in this? >>AMD is an absolutely crucial partner. No one company, even HPE, can do it all themselves. There are a lot of partnerships and a lot of synergies working with AMD. We've been working with AMD for almost 20 years, since we delivered our first AMD-based ProLiant back in 2004, the HP ProLiant DL585. So we've been working with them a long time. We work with them years ahead of when a processor is announced, and we benefit each other: we look at their designs and help them make their designs better, and they let us know about their technology so we can take advantage of it in our designs. They have a lot of security capabilities, like their memory encryption technologies, their AMD Secure Processor, and their Secure Encrypted Virtualization, which is an absolutely unique and breakthrough technology to protect virtual machines in hypervisor environments and protect them from malicious hypervisors. So they have some really great capabilities built into their processor, and we take advantage of those capabilities and ensure they're used in our solutions and in securing the platform.
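Going back to the option-card authentication Kevin described: SPDM, the DMTF Security Protocol and Data Model, is at its core a certificate plus challenge-response exchange between the platform and each device. Below is a simplified, hypothetical sketch of that challenge-response idea using the pyca/cryptography package; it is not the actual SPDM wire protocol or HPE's implementation, and the RSA key here merely stands in for the certified identity a real card would present.

```python
# Simplified challenge-response sketch in the spirit of SPDM device
# authentication: the platform sends a random nonce, the option card signs it
# with a key tied to its (hypothetical) device certificate, and the platform
# verifies the signature before trusting the card. Real SPDM also exchanges
# certificate chains and firmware measurements over the bus.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the identity key provisioned into a NIC/storage/GPU option card.
device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
device_pub = device_key.public_key()  # the platform would learn this from the card's certificate

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH)

# Platform side: issue a fresh random challenge.
nonce = os.urandom(32)

# Device side: prove possession of the certified key by signing the challenge.
signature = device_key.sign(nonce, pss, hashes.SHA256())

# Platform side: verify before enabling the device.
try:
    device_pub.verify(signature, nonce, pss, hashes.SHA256())
    print("Option card authenticated; device is trusted.")
except InvalidSignature:
    print("Authentication failed; quarantine the device.")
```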
>>It really is such a- >>A great, great partnership, great synergies there. Kevin, thank you so much for joining me on the program and talking about compute security and what HPE is doing to ensure that security is fundamental, that it is uncompromised, and that your customers are protected end to end. We appreciate your insights, we appreciate your time. >>Thank you very much, Lisa. >>We've just had a great conversation with Kevin Depew. Now I get to talk with David Chang, data center solutions marketing lead at AMD. David, welcome to the program. >>Thank you, and thank you for having me. >>So one of the hot topics of conversation that we can't avoid is security. Talk to me about some of the things that AMD is seeing from the customer's perspective, and why security is so important for businesses across industries. >>Yeah, sure. Security is top of mind for almost every customer I'm talking to right now. There are several key market drivers and trends out there today that really call for a better, more innovative solution for security. The high cost of data breaches, for example, will cost enterprises in data center downtime, and that is time when you're not making money, potentially even leading to the loss of customer confidence in your company's offerings. So there are real costs that our customers face every day by not being prepared and not having proper security measures set up in the data center. In fact, according to one report, over 400 high-tech threats are being introduced every minute. So every day numerous new threats are popping up, and the bad guys are just getting more and more sophisticated. You have to take measures today and protect yourself end to end with solutions like what AMD and HPE have to offer. >>Yeah, you talked about some of the costs there; they're exorbitant. I've seen recent figures that the average cost of a data breach or ransomware attack is over $4 million, plus the cost to brand reputation that you brought up. That's a great point, because nobody wants to be the next headline, and security, I'm sure in your experience, is a board-level conversation. It's absolutely table stakes for every organization. Let's talk a little bit about some of the specific things that AMD and HPE are doing. I know that you have a really solid focus on building security features into the EPYC processors. Talk to me a little bit about that focus and some of the great things that you're doing there. >>Yeah, so we've partnered with HPE for a long time now; I think it's almost 20 years that we've been in business together, and we work together to design in security features even before the silicon is born. So we have a great relationship with all our partners, including HPE. HPE has a really great end-to-end security story, and AMD fits really well into that. If you think about how security all started in the data center, you had strategies around encryption of the data in flight, network security and VPNs, and security of the data at rest on the hard drives.
>>Encryption has been part of that strategy for a long time, but for ages nobody really thought about the actual data in use, which is the information being passed from the CPU to the memory, and even, in virtualized environments, to the virtual machines that everybody uses now. For a long time nobody really thought about that third leg of encryption. So AMD comes in and says: hey, as the bad guys are getting more sophisticated, you have to start worrying about that. For example, people think about memory as being non-persistent: after a certain time, the data in the memory kind of goes away, right? >>But that's not true anymore, because there are a lot of memory modules that can still retain data up to 90 minutes after power loss. And with something as simple as compressed air or liquid nitrogen, you can actually freeze memory DIMMs long enough to extract the data from that memory module for up to two or three hours, more than enough time to read valuable data and even encryption keys off of that memory module. So our world is getting more complex, and with more data out there and an insatiable need for compute and storage, data management is becoming all the more important to keep everything running and secure, and creating security against those threats becomes more and more important. And again, especially in virtualized environments like hyperconverged infrastructure or virtual desktops, it's really hard to keep up with all those different attack surfaces. >>It sounds like what AMD has been able to do is identify yet another vulnerability, another attack surface in memory, and plug that hole for organizations that weren't able to do that before. >>Yeah. We started out with the belief that security needed to be scalable and able to adapt to changing environments. So we came up with the design philosophy that we're going to continue to build on those security features generation over generation and stay ahead of those evolving attacks. A great example is in the third-gen EPYC CPU family, where we created a feature called SEV-SNP, which stands for Secure Encrypted Virtualization with Secure Nested Paging. It's really all about hypervisor-based attacks, where bad actors write into the memory, basically writing in bad data to corrupt the data in the memory. So SEV-SNP was put in place to help secure against that before it became a problem. And you've heard in the news just recently that this is becoming more and more of an issue. The great news is that we had that feature built in before it became a big problem.
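On the host side, one quick way to see whether a Linux server exposes these AMD memory-encryption features is to look for the corresponding CPU flags. A small sketch along those lines follows; note that the exact flag names ("sme", "sev", "sev_es", "sev_snp") and whether they appear at all depend on the CPU, BIOS settings, and kernel version, so treat this as illustrative rather than authoritative.

```python
# Rough check for AMD memory-encryption capabilities as reported by the Linux
# kernel in /proc/cpuinfo. Flag names vary by kernel version, and the features
# must also be enabled in BIOS/firmware, so the absence of a flag here is not
# proof that the hardware lacks the feature.
FEATURES = {
    "sme": "Secure Memory Encryption (host memory encryption)",
    "sev": "Secure Encrypted Virtualization (per-VM encryption keys)",
    "sev_es": "SEV Encrypted State (protects guest register state)",
    "sev_snp": "SEV Secure Nested Paging (memory integrity protection)",
}

def cpu_flags(path: str = "/proc/cpuinfo") -> set[str]:
    flags: set[str] = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

if __name__ == "__main__":
    present = cpu_flags()
    for flag, description in FEATURES.items():
        status = "present" if flag in present else "not reported"
        print(f"{flag:8s} {status:12s} {description}")
```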
>>And now you're on the fourth gen of those EPYC processors. Talk to me a little bit about some of the innovations that are now in fourth gen. >>Yeah, so in fourth gen we actually added on top of that. The base of what we call Infinity Guard is all around secure boot, the secure root of trust that we work with HPE on, strong memory encryption, and SEV, the Secure Encrypted Virtualization. And remember those SEV-SNP capabilities I talked about earlier: in the fourth gen we've added two times the number of SEV-SNP guests, for an even higher number of confidential VMs, to support even more customers than before. We've also added more guest protection from simultaneous multithreading, or SMT, side-channel attacks. And while it's not officially part of Infinity Guard, we've added more AVIC acceleration, which greatly benefits the security of those confidential VMs with larger numbers of vCPUs, which basically means you can build larger VMs and still be secured. And then lastly, we added even stronger AES encryption: we went from 128-bit to 256-bit, which is military-grade encryption, on top of that. That's really the de facto cryptography used for most applications by customers like the US federal government, and it is an essential element for memory security and HPC applications. And I always say, if it's good enough for the US government, it's good enough for you. >>Exactly. Well, it's got to be. Talk a little bit about how AMD is doing this together with HPE, a little bit about the partnership, as we round out our conversation. >>Sure, absolutely. Security is only as strong as the layer below it, right? That's why modern security must be built in rather than bolted on or added after the fact. So HPE and AMD developed this layered approach for protecting critical data together. Through our leadership in security features and innovations, we deliver a set of hardware-based features that help decrease potential attack surfaces, with a holistic approach that safeguards critical information across the entire system lifecycle. We provide the confidence of built-in silicon authentication on the world's most secure industry-standard servers, with a 360-degree approach that brings high availability to critical workloads while helping to defend against internal and external threats. So things like HPE's Silicon Root of Trust and the trusted supply chain, which AMD is obviously part of, combined with AMD's Infinity Guard technology, really help provide that end-to-end data protection in today's businesses. >>And that is so critical for businesses in every industry. As you mentioned, the attackers are getting more and more sophisticated and the vulnerabilities are increasing. The ability to have a partnership like HPE and AMD to deliver that end-to-end data protection is table stakes for businesses.
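For context on the 128-bit to 256-bit jump David mentions: AES-256 simply uses a 256-bit key, which is why it satisfies requirements that mandate that key length. The processor applies this transparently to memory, but the key-length difference is easy to see with ordinary application-level crypto. A quick illustration using the pyca/cryptography AES-GCM primitive follows; this is not the memory-encryption engine itself, just a demonstration of the two key sizes.

```python
# Illustration of 128-bit vs. 256-bit AES keys using an application-level AEAD
# cipher. EPYC's memory encryption happens transparently in the memory
# controller; this only shows what the key-length difference means.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

for bits in (128, 256):
    key = AESGCM.generate_key(bit_length=bits)      # 16 bytes vs. 32 bytes
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, b"data in use", None)
    print(f"AES-{bits}: key={len(key) * 8} bits, ciphertext={len(ciphertext)} bytes")
```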
David, thank you so much for joining me on the program and really walking us through what AMD is doing with the fourth-gen EPYC processors and how you're working together with HPE to enable security to be successfully accomplished by businesses across industries. We appreciate your insights. >>Well, thank you again for having me, and we appreciate the partnership with HPE. >>And we want to thank you for watching our special program, HPE Compute Security. I do have a call to action for you: go ahead and visit hpe.com/security/compute. Thanks for watching.

Published Date : Dec 14 2022

Mohan Rokkam & Greg Gibby | 4th Gen AMD EPYC on Dell PowerEdge: Virtualization


 

(cheerful music) >> Welcome to theCUBE's continuing coverage of AMD's 4th Generation EPYC launch. I'm Dave Nicholson, and I'm here in our Palo Alto studios talking to Greg Gibby, senior product manager, data center products from AMD, and Mohan Rokkam, technical marketing engineer at Dell. Welcome, gentlemen. >> Mohan: Hello, hello. >> Greg: Thank you. Glad to be here. >> Good to see each of you. Just really quickly, I want to start out. Let us know a little bit about yourselves. Mohan, let's start with you. What do you do at Dell exactly? >> So I'm a technical marketing engineer at Dell. I've been with Dell for around 15 years now, and my goal is to really look at the Dell PowerEdge servers and see how customers can take advantage of some of the features we have, especially with the AMD EPYC processors that have just come out. >> Greg, and what do you do at AMD? >> Yeah, so I manage our software-defined infrastructure solutions team, and really it's cradle to grave, where we work with the ISVs in the market, so VMware, Nutanix, Microsoft, et cetera, to integrate the features that we're putting into our processors and make sure they're ready to go and enabled. And then we work with our valued partners like Dell on putting those into actual solutions that customers can buy, and then we work with them to sell those solutions into the market. >> Before we get into the details on the 4th Generation EPYC launch and what that means and why people should care, Mohan, maybe you can tell us a little about the relationship between Dell and AMD, how that works, and then Greg, if you've got commentary on that afterwards, that'd be great. Yeah, Mohan. >> Absolutely. Dell and AMD have a long-standing partnership, right? Especially now with the EPYC series. We have had products since the EPYC first generation. We have been doing solutions across the whole range of the Dell ecosystem. We have integrated AMD quite thoroughly and effectively, and we really love how performant these systems are. So, yeah. >> Dave: Greg, what are your thoughts? >> Yeah, the other thing we need to point out is that we both have really strong relationships across the entire ecosystem. So memory vendors, the software providers, et cetera, we have technical relationships, and we're working with them to optimize solutions so that ultimately, when the customer buys, they get a great user experience right out of the box. >> So, Mohan, I know that you and your team do a lot of performance validation testing as time goes by. I suspect that you had early releases of the 4th Gen EPYC processor technology. What have you been seeing so far? What can you tell us? >> AMD has definitely knocked it out of the park. Time and again, in the past four generations, in the past five years alone, we have done some database work where in five years we have seen 5x the performance. And across the board, AMD is the leader in benchmarks. We have done virtualization where we would consolidate from five systems into one. We have world records in AI, we have world records in databases, we have world records in virtualization. The AMD EPYC solutions have been absolutely performant. I'll leave you with one number here: when we went from the top of the Milan stack to the top of the Genoa stack, we saw a performance bump of 120%. And that number just blew my mind. >> So that prompts a question for Greg. Often we industry insiders think in terms of performance gains over the last generation or the current generation.
A lot of customers in the real world, however, are N minus 2; they're a ways back. So I guess two points on that. First of all, the kinds of increases the average person is going to see when they move to this architecture, correct me if I'm wrong, are even more significant than a lot of the headline numbers, because they're moving two generations, number one. Correct me if I'm wrong on that. But then the other thing is the question to you, Greg, and I like very long, complicated questions, as you can tell: is it okay for people to skip generations, or make the case for upgrades, I guess, is the question. >> Well, yeah, so a couple of thoughts on that. Mohan talked about that 5x improvement over the generations that we've seen. The other key point is that we've made significant process improvements along the way, moving from seven nanometer to now five nanometer, and that's really reducing the total amount of power, or improving the performance per watt, that customers can realize as well. And when we look at why a customer would want to upgrade, I want to rephrase that as: why aren't you? There is a real cost of not upgrading. When you look at infrastructure, the average age of a server in the data center is over five years old, and if you look at the most popular processors that were sold in that timeframe, it's 8, 10, 12 cores. So now you've got a bunch of servers that you need in order to deliver the applications and meet your SLAs to your end users, and all those servers pull power, they require maintenance, they have the opportunity to go down, et cetera. You've got to pay licensing and service and support costs and all of that. And when you look at all the costs that roll up, even though the hardware is paid for, just keeping the lights on, not even talking about the soft costs of unplanned downtime and not meeting your SLAs, it's very expensive to keep those servers running. Now, if you refresh, and you have processors with 32, 64, 96 cores, you can consolidate that infrastructure and reduce your total power bill. You can reduce your CapEx, you reduce your ongoing OpEx, you improve your performance, and you improve your security profile. So it really is more cost effective to refresh than not to refresh. >> So, Mohan, what has your experience been, double-clicking on this topic of consolidation? I know that we're going to talk about virtualization in some of the results that you've seen. What have you seen in that regard? Does this favor better consolidation in virtualized environments? And are you both assuring us that the ROI and TCO pencil out on these new big, bad machines? >> Greg definitely hit the nail on the head, right? We are seeing tremendous savings, really, if you're consolidating from two generations old. We went from, as I said, five to one. You're going from five full servers, probably paid off, down to one single server. And if you look at licensing costs, which, with things like VMware, do get pretty expensive: yes, we are at 32, 64, 96 cores, but if you compare that to the licensing costs of five 10-core, two-socket systems, the savings are still pretty significant, right? That's one huge thing. Another thing which really drives upgrades is security, and in today's environment security becomes a major driving factor for upgrades.
Dell has its own security setup, the cyber-resilient architecture as we call it, and that really is integrated from the processor all the way up into the OS. Those are some of the features which customers really can take advantage of to help protect their ecosystems. >> So what kinds of virtualized environments did you test? >> We have done virtualization across the primary platforms: with VMware, with Azure Stack, and we have looked at Nutanix. PowerFlex is another one within Dell, and we have vSAN Ready Nodes and OpenShift. We have a broad variety of solutions from Dell, and AMD really fits into almost every one of them very well. >> So where does hyper-converged infrastructure fit into this puzzle? We can think of a server as something that contains not only AMD's latest architecture but also the latest PCIe bus technology and all of the faster memory, faster storage cards, faster NICs; all of that comes together. But how does that play out in Dell's hyper-converged infrastructure, or HCI, strategy? >> Dell is a leader in hyper-converged infrastructure. We have the very popular VxRail line, we have PowerFlex, which is now going into the AWS ecosystem as well, Nutanix, and of course Azure Stack. With all of these, when you look at AMD, we have up to 96 cores coming in. We have PCIe Gen 5, which means you can now connect dual-port 100 and 200 gig NICs and get line rate on those, so you can connect to your ecosystem. And I don't know if you've seen the news: 200 and 400 gig routers and switches are selling out. That's not slowing down; the network infrastructure is booming. If you look at the AI/ML side of things and the VDI side of things, accelerator cards are becoming more and more powerful, more and more popular, and of course they need the higher-end data path that PCIe Gen 5 brings to the table. DDR5 is another huge improvement in terms of performance and latencies. So when we take all of this together and you talk about hyper-converged, all of these add up to making sure that, A, with hyper-converged you get ease of management, but B, just because you have ease of management doesn't mean you need to compromise on anything. The AMD servers effectively are a no-compromise offering that we at Dell are able to offer to our customers. >> So Greg, I've got a question a little bit from left field for you. We covered Supercompute Conference 2022; we were in Dallas a couple of weeks ago, and there was a lot of discussion of the current processor manufacturer battles and a lot of buzz around 4th Gen EPYC being launched and what's coming over the next year. Do you have any thoughts on what this architecture can deliver for us in terms of things like AI? We talk about virtualization, but if you look out over the next year, do you see this kind of architecture driving significant change in the world? >> Yeah, yeah. It has the real potential to do that just from the building blocks. We have what we call our chiplet architecture: you have an IO die, and then you have your core complexes that go around it, and we integrate it all with our Infinity Fabric. That architecture would allow us, if we wanted to, to replace some of those CCDs with specific accelerators. And so when we look two, three, four years down the road, that architecture and that capability are already built into what we're delivering and can easily be brought in.
We just need to make sure that, when you look at doing that, the power required, the software, et cetera, and those accelerators actually deliver better performance as a dedicated engine versus just using standard CPUs. The other thing I would say is to look at emerging workloads. Data center modernization is one of the buzzwords in cloud native, right? And for these container environments, AMD's architecture really just screams support for those types of environments, especially when you get into these larger core counts and the consolidation that Mohan talked about. A lot of customers have concerns about the blast radius: "Hey, having a single point of failure and having more than X number of cores concerns me." If I'm in containers, that becomes less of a concern. And so when you look at cloud native, containerized applications, and data center modernization, AMD is extremely well positioned to take advantage of those use cases as well. >> Yeah, Mohan, and when we talk about virtualization, I think sometimes we have to remind everyone that we're talking not only about virtualization that has a full-blown operating system in the bucket, but also virtualization where the containers have microservices and things like that. I think you had something to add, Mohan. >> I did, and I think going back to the accelerator side of the business: when we are looking at the current technology and looking at accelerators, AMD has done a fantastic job of adding in features like AVX-512, bfloat16, and INT8. Some of what these do is act effectively as built-in accelerators for certain workloads, especially in the AI and media spaces. One of the use cases we look at, for example, is inference. Traditionally we have used external accelerator cards, but for some of the entry-level and mid-level use cases the CPU is going to work just fine, especially with the newer CPUs that we are seeing this fantastic performance from. The built-in accelerators just help get us to the point where, if I'm at the edge or in certain use cases, I don't need to have an accelerator card in there; I can run most of my inference workloads right on the CPU. >> Yeah, yeah. You know the game. It's an endless chase to find the bottleneck, and once we've solved the puzzle, we've created a bottleneck somewhere else. Back to the supercompute conversations we had, specifically about some of the AMD EPYC processor technology and the way that Dell is packaging it up and leveraging things like connectivity: that was one of the things that was also highlighted, this idea that increasingly connectivity is critically important, not just for supercomputing, but for high-performance computing that's finding its way out of the realms of Los Alamos and down to the enterprise level. Gentlemen, any more thoughts about the partnership, or maybe a hint at what's coming in the future? I know that the original AMD announcement was announcing and previewing some things that are rolling out over the next several months. So let me just toss it to Greg. What are we going to see in 2023 in terms of rollouts that you can share with us? >> That I can share with you? Yeah, so look forward to seeing more advancements in the technology at the core level. We've already announced our product code-named Bergamo, where we'll have up to 128 cores per socket.
And then, as we look at how we continually address this demand for data and this demand for "I need actionable insights immediately," look for us to continue to drive performance leadership in the products that are coming out and to address specific workloads with accelerators where appropriate and where we see a growing market. >> Mohan, final thoughts. >> On the Dell side, of course, we have four very rich and configurable options with AMD EPYC servers. But beyond that, you'll see a lot more solutions. Some of what Greg has been talking about around the next generation of processors, or the next updated processors, you'll start seeing some of those, and you'll definitely see more use cases from us and how customers can implement them and take advantage of the features. It's just exciting stuff. >> Exciting stuff indeed. Gentlemen, we have a great year ahead of us. As we approach the holiday season, I wish both of you well. Thank you for joining us. From here in the Palo Alto studios, again, Dave Nicholson here. Stay tuned for our continuing coverage of AMD's 4th Generation EPYC launch. Thanks for joining us. (cheerful music)
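To make the consolidation argument from this conversation concrete, here is a back-of-the-envelope sketch of the kind of math involved. Every number below (server counts, wattage, energy price, licensing cost) is a hypothetical placeholder for illustration, not a Dell, AMD, or VMware figure, and real TCO models include far more inputs (maintenance, support, downtime risk, facilities).

```python
# Back-of-the-envelope consolidation math: replace several old two-socket,
# low-core-count servers with one high-core-count server. All numbers are
# made-up placeholders; plug in your own measurements and quotes.
OLD_SERVERS = 5           # paid-off servers being consolidated
OLD_WATTS = 450           # average draw per old server (hypothetical)
NEW_SERVERS = 1
NEW_WATTS = 800           # average draw of the replacement server (hypothetical)
KWH_PRICE = 0.15          # $ per kWh (hypothetical)
HOURS_PER_YEAR = 24 * 365

LICENSE_PER_SOCKET = 4000  # $ per socket per year (hypothetical licensing model)
OLD_SOCKETS = OLD_SERVERS * 2
NEW_SOCKETS = NEW_SERVERS * 2

def annual_power_cost(servers: int, watts: int) -> float:
    return servers * watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

old_power = annual_power_cost(OLD_SERVERS, OLD_WATTS)
new_power = annual_power_cost(NEW_SERVERS, NEW_WATTS)
old_lic = OLD_SOCKETS * LICENSE_PER_SOCKET
new_lic = NEW_SOCKETS * LICENSE_PER_SOCKET

print(f"Annual power:    ${old_power:,.0f} -> ${new_power:,.0f}")
print(f"Annual licenses: ${old_lic:,.0f} -> ${new_lic:,.0f}")
print(f"Annual savings:  ${(old_power + old_lic) - (new_power + new_lic):,.0f}"
      " (before maintenance, support, and downtime costs)")
```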

Published Date : Dec 14 2022


Evan Touger, Prowess | Prowess Benchmark Testing Results for AMD EPYC Genoa on Dell Servers


 

(upbeat music) >> Welcome to theCUBE's continuing coverage of AMD's fourth generation EPYC launch. I've got a special guest with me today from Prowess Consulting. His name is Evan Touger; he's a senior technical writer with Prowess. Evan, welcome. >> Hi, great to be here. Thanks. >> So tell us a little bit about Prowess. What does Prowess do? >> Yeah, we're a consulting firm. We've been around for quite a few years, based in Bellevue, Washington. We do quite a few projects with folks from Dell to a lot of other companies, and dive in. We have engineers, writers, production folks, so it's pretty much end-to-end work, doing research, testing, and writing, and diving into different technical topics. >> So in this case, what we're going to be talking about is some validation studies that you've done, looking at Dell PowerEdge servers that happen to be integrating fourth-gen EPYC processors from AMD. What were the specific workloads that you were focused on in this study? >> Yeah, this particular one was honing in on virtualization, right? Obviously it's pretty much ubiquitous in the industry; everybody works with virtualization in one way or another. So getting optimal performance for virtualization was critical, or is critical, for most businesses. We wanted to look a little deeper into how companies evaluate that: what are they going to use to make the determination for virtualization performance as it relates to their workloads? That led us to this study, where we looked at some benchmarks and then went a little deeper under the hood to see what led to the results that we saw from those benchmarks. >> So when you say virtualization, does that include virtual desktop infrastructure, or are we just talking about virtual machines in general? >> No, it can include both. We looked at VMs, thinking in terms of database performance when you're working in VMs, all the way through to VDI and companies like healthcare organizations, where it's common to roll out lots of virtual desktops and performance is critical there as well. >> Okay, you alluded to looking under the covers to see where these performance results were coming from. I assume what you're referencing is the idea that it's not just all about the CPU when you talk about a system. Am I correct in that assumption? >> Yeah, absolutely. >> What can you tell us? >> Well, for companies evaluating, there's quite a bit to consider, obviously. They're looking at not just raw performance but power performance, so that was part of it, and then what makes up those factors, right? Certainly the CPU is critical, but other things come into play, like the RAID controllers, so we looked a little bit there. And then networking, of course, can be critical for configurations that are relying on good performance from their networks, both in terms of bandwidth and reducing latency overall. So interconnects as well would be a big part of that. >> So with PCIe gen 5, or 5.0, pick your moniker: in the infrastructure game we're often playing a game of whack-a-mole, chasing the bottlenecks. PCIe 5 opens up a lot of bandwidth for memory and things like RAID controllers and NICs. I mean, is the bottleneck now just our imagination, Evan? Have we reached a point where there are no bottlenecks? What did you see when you ran these tests?
What were you able to stress to the point where it was saturated, if anything? >> Yeah. Well, first of all, these particular tests were ones where we looked at industry benchmarks, and we were examining in particular to see where world records were set. We uncovered a few specific PowerEdge servers that were pretty key there, that were leading the category in a lot of areas. So that's what led us to ask: okay, well, why is that? What's in these servers, and what's responsible for that? In a lot of cases we saw these results even with gen 4, PCIe gen 4. So there were situations where clearly there was benefit from faster interconnects, and especially NVMe for RAID, you know, for supporting NVMe SSDs. But all of that just leads you to the understanding that it can only get better, right? So if you're seeing great results on gen 4, then gen 5 is probably going to blow that away. >> And in this case- >> It'll be even better. >> In this case, gen 5, you're referencing PCIe? >> PCIe, right. Yeah, that's right. >> (indistinct) >> And then the same thing actually holds true with EPYC: we saw records set for both 3rd and 4th gen, so the same thing there. Anywhere there's a record set on the 3rd gen, we're really looking forward to going back and seeing, over the next few months, which of those records fall and are broken by newer generation versions of these servers once they actually ramp to the newer generation processors, based on what we're seeing for what those processors can do, not only in- >> (indistinct) Go ahead. >> Sorry, just want to say, not only in terms of raw performance but, as I mentioned before, power performance, because they're very efficient, and that's a really critical consideration. I don't think you can overstate that for companies who have to consider expenditures, power and cooling, and meeting sustainability goals and so forth. So that was really an important category in terms of what we looked at: power performance, not just raw performance. >> Yeah, I want to get back to that, that's a really good point. We should probably give credit where credit is due. Which Dell PowerEdge servers are we talking about that were tested, and what did those interconnect components look like from a (indistinct) perspective? >> Yeah, so we focused primarily on a couple of benchmarks that seemed most important for real-world performance results for virtualization: TPCx-V and VMmark 3.x. With TPCx-V, that's where we saw the PowerEdge R7525 and R7515; they both had top scores in different categories there. That benchmark is great for looking at database workloads in particular, running in virtualization settings. And then VMmark 3.x was critical. We saw good results there for the R7525 and the R7515, as well as the R6525, in that one, and that included, sorry, just checking notes to see what- >> Yeah, no, no, (indistinct) >> Included results for power performance, as I mentioned earlier; that's where we could see that. So we saw this in a range of servers that included both 3rd gen AMD EPYC and the newer 4th gen, as I mentioned. The RAID controllers were critical in TPCx-V. I don't think they came into play in the VMmark test, but they were definitely part of the TPCx-V benchmarks.
So that's where the RAID controllers would make a difference, right? And in those tests, I think they were using the PERC 11. So with the newer PERC 12 controllers, again, we'd expect- >> (indistinct) >> To see continued gains in newer benchmarks. That's what we'll be looking for over the next several months. >> Yeah. So I think, if I've got my Dell nomenclature down, that's performance, no, no, PowerEdge RAID Controller, is that right? >> Exactly, yeah, there you go. Right? >> With Broadcom, you know, powered by Broadcom. >> That's right. There you go. Yeah. Isn't the Dell naming scheme there PERC? >> Yeah, exactly. Back to your comment about power. So you've had a chance to take a pretty deep look at the latest stuff coming out. Some of these servers are going to be more expensive than the previous generation. Now, a server is not a server is not a server, but some are awakening to the idea that there might be some sticker shock. You're confident that the bang for your buck, the bang for your kilowatt hour, is actually going to be beneficial? That we're actually making things better, faster, stronger, cheaper, more energy efficient, and we're continuing on that curve? >> That's what I would expect to see, right. I mean, of course, I can't speak to pricing without knowing where the dollars are going to land on the servers. But I would expect to see that, because you're getting gains in a couple of ways. One, if the performance increases to the point where you can run more VMs, or get more performance out of your VMs and run more total VMs or more VDIs, then there's obviously a good payback on your investment there. And then, as we were discussing earlier, there's the power performance ratio. If you're bringing down your power and cooling costs, if these machines are just more efficient overall, then you should see some gains there as well. So I think the key is looking at what the total cost of ownership is over a standard period, like three years, and what you're going to get out of it for your number of sessions, the performance for those sessions, and the overall efficiency of the machines. >> So just to be clear, with these Dell PowerEdge servers you were able to validate world-record performance. But if you look at CPU architecture, PCIe bus architecture, memory, the class of memory, the class of RAID controller, the class of NIC, those were not all state of the art in terms of at least what has been recently announced. Correct? >> Right. >> Because (indistinct) the PCIe 4.0, so to your point, world records with that, and you've got next-gen RAID controllers coming out and NICs coming out. If the motherboard is PCIe 5, with commensurate memory, all of those things are getting better. >> Exactly, right. You're just eliminating bandwidth constraints and latency constraints; all of that should be improved. NVMe, you know, collectively all these things just open the doors, letting more bandwidth through and reducing latency. Those are all pieces of the puzzle that come together, and it's all about finding the weakest link and eliminating it. I think we're reaching the point where we're removing the biggest constraints from the systems. >> Okay. So I guess is it fair to summarize that with this infrastructure that you tested, you were able to set world records?
This year, I mean, over the next several months, things are just going to get faster and faster and faster. >> That's what I would anticipate, exactly, right. If they're setting world records with these machines before some of the components are the absolute latest, it seems to me we're going to see a continuing trend there, and more and more records should fall. So I'm really looking forward to seeing how that goes, because it's already good, and I think the return on investment is pretty good there. So I think it's only going to get better as these roll out. >> So let me ask you a question that's a little bit off topic. >> Okay. >> We see these gains; we're all familiar with Moore's Law, and we're familiar with the advancements in memory and bus architecture and everything else. We just covered SuperCompute 2022 in Dallas a couple of weeks ago, and it was fascinating talking to people about advances in AI that will be possible with new architectures. You know, most of the supercomputers that are running right now are n minus 1 or n minus 2 infrastructure; they're PCIe 3, and maybe two generations of processors old, because you don't just throw out a 100,000-CPU supercomputing environment every 18 months. It doesn't work that way. >> Exactly. >> Do you have an opinion on this question of the qualitative versus quantitative increase in computing moving forward? I mean, do you think that this new stuff that you're starting to do tests on is going to power a fundamental shift in computing, or is it just going to be more consolidation and better power consumption? Do you think there's an inflection point coming? What do you think? >> That's a great question, and a hard one to answer. It's probably a little bit of both, because certainly there will be better consolidation. But it works both ways: it just allows you to do more with less, and you can go in either direction. You can do what you're doing now on fewer machines and get better value for it, or reduce your footprint. Or you can go the other way and say, wow, this lets us add more machines into the mix and take our level of performance from here to here. So it just depends on what your focus is. Certainly with areas like HPC and AI and ML, having the ability to expand what you're already capable of by adding more machines that can do more is going to be your main concern. But if you're more like a small to medium-sized business, then the opportunity to do what you were doing on a much smaller footprint and for lower cost is really your goal. So I think you can use this in either direction, and it should pay back in a lot of dividends. >> Yeah. Thanks for your thoughts. It's an interesting subject moving forward. Sometimes it's easy to get lost in the minutiae of the bits and bytes and bobs of all the components we're studying, but they're powering something that's going to affect effectively all of humanity as we move forward. So what else do we need to consider when it comes to what you've just validated in the virtualization testing? Anything else, anything we left out?
>> I think we hit all the key points, or most of them it's, you know, really, it's just keeping in mind that it's all about the full system, the components not- you know, the processor is a obviously a key, but just removing blockages, right? Freeing up, getting rid of latency, improving bandwidth, all these things come to play. And then the power performance, as I said, I know I keep coming back to that but you know, we just, and a lot of what we work on, we just see that businesses, that's a really big concern for businesses and finding efficiency, right? And especially in an age of constrained budgets, that's a big deal. So, it's really important to have that power performance ratio. And that's one of the key things we saw that stood out to us in, in some of these benchmarks, so. >> Well, it's a big deal for me. >> It's all good. >> Yeah, I live in California and I know exactly how much I pay for a kilowatt hour of electricity. >> I bet, yeah. >> My friends in other places don't even know. So I totally understand the power constraint question. >> Yeah, it's not going to get better, so, anything you can do there, right? >> Yeah. Well Evan, this has been great. Thanks for sharing the results that Prowess has come up with, third party validation that, you know, even without the latest and greatest components in all categories, Dell PowerEdge servers are able to set world records. And I anticipate that those world records will be broken in 2023 and I expect that Prowess will be part of that process, So Thanks for that. For the rest of us- >> (indistinct) >> Here at theCUBE, I want to thank you for joining us. Stay tuned for continuing coverage of AMD's fourth generation EPYC launch, for myself and for Evan Touger. Thanks so much for joining us. (upbeat music)
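
To make the power-performance and three-year TCO reasoning discussed above concrete, here is a minimal Python sketch of that math. Every price, wattage, electricity rate, and benchmark score below is a hypothetical placeholder, not a measured PowerEdge or benchmark result.

```python
# A minimal sketch of the "power performance ratio" and three-year TCO math
# discussed above. All numbers are hypothetical placeholders.

def perf_per_watt(benchmark_score: float, avg_watts: float) -> float:
    """Higher is better: benchmark units delivered per watt consumed."""
    return benchmark_score / avg_watts

def three_year_tco(server_price: float, avg_watts: float,
                   dollars_per_kwh: float, pue: float = 1.5) -> float:
    """Purchase price plus three years of energy, including cooling overhead (PUE)."""
    hours = 3 * 365 * 24
    energy_cost = (avg_watts / 1000) * hours * dollars_per_kwh * pue
    return server_price + energy_cost

# Hypothetical previous-generation vs. newer-generation comparison.
old = {"price": 18_000, "watts": 550, "score": 10_000}
new = {"price": 24_000, "watts": 500, "score": 17_000}

for name, cfg in (("previous gen", old), ("newer gen", new)):
    ppw = perf_per_watt(cfg["score"], cfg["watts"])
    tco = three_year_tco(cfg["price"], cfg["watts"], dollars_per_kwh=0.30)
    print(f"{name}: {ppw:.1f} score/W, ${tco:,.0f} 3-yr TCO, "
          f"${tco / cfg['score']:.2f} per unit of performance")
```

Even with a higher sticker price, the newer configuration can come out ahead once performance per watt and cost per unit of performance are factored in, which is the point being made above.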

Published Date : Dec 8 2022


Brad Smith, AMD & Rahul Subramaniam, Aurea CloudFix | AWS re:Invent 2022


 

(calming music) >> Hello and welcome back to fabulous Las Vegas, Nevada. We're here at AWS re:Invent day three of our scintillating coverage here on theCUBE. I'm Savannah Peterson, joined by John Furrier. John Day three energy's high. How you feeling? >> I dunno, it's day two, day three, day four. It feels like day four, but again, we're back. >> Who's counting? >> Three pandemic levels in terms of 50,000 plus people? Hallways are packed. I got pictures. People don't believe it. It's actually happening. Then people are back. So, you know, and then the economy is a big question too and it's still, people are here, they're still building on the cloud and cost is a big thing. This next segment's going to be really important. I'm looking forward to this next segment. >> Yeah, me too. Without further ado let's welcome our guests for this segment. We have Brad from AMD and we have Rahul from you are, well you do a variety of different things. We'll start with CloudFix for this segment, but we could we could talk about your multiple hats all day long. Welcome to the show, gentlemen. How you doing? Brad how does it feel? We love seeing your logo above our stage here. >> Oh look, we love this. And talking about re:Invent last year, the energy this year compared to last year is so much bigger. We love it. We're excited to be here. >> Yeah, that's awesome. Rahul, how are you feeling? >> Excellent, I mean, I think this is my eighth or ninth re:Invent at this point and it's been fabulous. I think the, the crowd, the engagement, it's awesome. >> You wouldn't know there's a looming recession if you look at the activity but yet still the reality is here we had an analyst on yesterday, we were talking about spend more in the cloud, save more. So that you can still use the cloud and there's a lot of right sizing, I call you got to turn the lights off before you go to bed. Kind of be more efficient with your infrastructure as a theme. This re:Invent is a lot more about that now. Before it's about the glory days. Oh yeah, keep building, now with a little bit of pressure. This is the conversation. >> Exactly and I think most companies are looking to figure out how to innovate their way out of this uncertainty that's kind of on everyone's head. And the only way to do it is to be able to be more efficient with whatever your existing spend is, take those savings and then apply them to innovating on new stuff. And that's the way to go about it at this point. >> I think it's such a hot topic, for everyone that we're talking about. I mean, total cost optimization figuring out ways to be more efficient. I know that that's a big part of your mission at CloudFix. So just in case the audience isn't versed, give us the pitch. >> Okay, so a little bit of background on this. So the other hat I wear is CTO of ESW Capital. We have over 150 enterprise software companies within the portfolio. And one of my jobs is also to manage and run about 40 to 45,000 AWS accounts of our own. >> Casual number, just a few, just a couple pocket change, no big deal. >> And like everyone else here in the audience, yeah we had a problem with our costs, just going out of control and as we were looking at a lot of the tools to help us kind of get more efficient one of the biggest issues was that while people give you a lot of recommendations recommendations are way too far from realized savings. And we were running through the challenge of how do you take recommendation and turn them into real savings and multiple different hurdles. 
The short story being, we had to create CloudFix to actually realize those savings. So we took AWS recommendations around cost, filtered them down to the ones that are completely non-disruptive in nature, implemented those as simple automations that everyone could just run and realize those savings right away. We then took those savings and then started applying them to innovating and doing new interesting things with that money. >> Is there a best practice in your mind that you see emerging at this time? People are starting to focus more on it. Is there a method or a purpose, kind of a best practice of how to approach cost optimization? >> I think one of the things that most people don't realize is that cost optimization is not a one-and-done thing. It is literally nonstop. Which means that, on one hand, AWS is constantly creating new services. There are over a hundred thousand APIs at this point in time. How do you use them right, how do you use them efficiently? You also have a problem of choice. Developers are constantly discovering new services, discovering new ways to utilize them. And they are behaving in ways that you had not anticipated before. So you have to stay on top of things all the time. And really the only way to kind of stay on top is to have automation that helps you stay on top of all of these things. So yeah, finding efficiencies, standardizing your practices about how you leverage these AWS services, and then automating the governance and hygiene around how you utilize them is really the key. >> Brad, tell me what this means for AMD and what working with CloudFix and Rahul does for your customers. >> Well, the idea of efficiency and cost optimization is near and dear to our heart. We have the leading... >> It's near and dear to everyone's heart, right now. (group laughs) >> But we are the leaders in x86 price performance and density and power efficiency. So this is something that's actually part of our core culture. We've been doing this a long time, and what's interesting is most companies don't understand how much more efficiency they can get out of their applications aside from just the choices they make in cloud. But that's the one thing, the message we're giving to everybody is choice matters very much when it comes to your cloud solutions, and just deciding what type of instance types you choose can have a massive impact on your bottom line. And so we are excited to partner with CloudFix, they've got a great model for this and they make it very easy for our customers to help identify those areas. And then AMD can come in as well and then help provide additional insight into those applications, what else they can squeeze out of it. So it's a great relationship. >> If I hear you correctly, then there's more choice for the customers, faster selection, so no bad choices, because bad choices mean bad performance if they have a workload or an app that needs to run. Is that where you kind of get into it, is that where it is, or more? >> Well, I mean from the AMD side right now, one of the things they do very quickly is they identify where the low-hanging fruit is. So it's the thing about x86 compatibility, you can shift instance types instantly in most cases without any change to your environment at all. And CloudFix has an automated tool to do that. And that's one thing you can immediately have an impact on your cost without having to do any work at all. And customers love that.
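
The pattern Rahul describes, pull the AWS recommendations, keep only the non-disruptive ones, and automate the rest, could be sketched roughly with boto3 and the Cost Explorer rightsizing API. This is an illustrative outline, not CloudFix's actual code; the is_non_disruptive policy and the apply_change stub are placeholders.

```python
# Illustrative sketch of the recommendation-to-automation flow described
# above. The policy and apply step are placeholders for a real workflow.
import boto3

ce = boto3.client("ce")

def fetch_recommendations():
    # Cost Explorer rightsizing recommendations for EC2.
    resp = ce.get_rightsizing_recommendation(Service="AmazonEC2")
    return resp.get("RightsizingRecommendations", [])

def is_non_disruptive(rec) -> bool:
    # Placeholder policy: treat "MODIFY" (resize) recommendations as safe and
    # skip anything that would terminate a resource. A real policy would be
    # far more conservative (tags, maintenance windows, approvals, etc.).
    return rec.get("RightsizingType") == "MODIFY"

def apply_change(rec):
    # Stub: a real automation would resize during an approved window and
    # verify application health afterwards.
    resource = rec.get("CurrentInstance", {}).get("ResourceId", "unknown")
    print(f"Would resize {resource}")

for rec in fetch_recommendations():
    if is_non_disruptive(rec):
        apply_change(rec)
```
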
>> What's the alternative if this doesn't exist they have to go manually figure it out or it gets them in the face or they see the numbers don't work or what's the, if you don't have the tool to automate what's the customer's experience >> The alternative is that you actually have people look at every single instance of usage of resources and try and figure out how to do this. At cloud scale, that just doesn't make sense. You just can't. >> It's too many different options. >> Correct The reality is that your resources your human resources are literally your most expensive part of your budget. You want to leverage all the amazing people you have to do the amazing work. This is not amazing work. This is mundane. >> So you free up all the people time. >> Correct, you free up wasting their time and resources on doing something that's mundane, simple and should be automated, because that's the only way you scale. >> I think of you is like a little helper in the background helping me save money while I'm not thinking about it. It's like a good financial planner making you money since we're talking about the economy >> Pretty much, the other analogy that I give to all the technologists is this is like garbage collection. Like for most languages when you are coding, you have these new languages that do garbage collection for you. You don't do memory management and stuff where developers back in the day used to do that. Why do that when you can have technology do that in an automated manner for you in an optimal way. So just kind of freeing up your developer's time from doing this stuff that's mundane and it's a standard best practice. One of the things that we leverage AMD for, is they've helped us define the process of seamlessly migrating folks over to AMD based instances without any major disruptions or trying to minimize every aspect of disruption. So all the best practices are kind of borrowed from them, borrowed from AWS in most other cases. And we basically put them in the automation so that you don't ever have to worry about that stuff. >> Well you're getting so much data you have the opportunity to really streamline, I mean I love this, because you can look across industry, across verticals and behavior of what other folks are doing. Learn from that and apply that in the background to all your different customers. >> So how big is the company? How big is the team? >> So we have people in about 130 different countries. So we've completely been remote and global and actually the cloud has been one of the big enablers of that. >> That's awesome, 130 countries. >> And that's the best part of it. I was just telling Brad a short while ago that's allowed us to hire the best talent from across the world and they spend their time building new amazing products and new solutions instead of doing all this other mundane stuff. So we are big believers in automation not only for our world. And once our customers started asking us about or telling us about the same problem that they were having that's when we actually took what we had internally for our own purpose. We packaged it up as CloudFix and launched it last year at re:Invent. >> If the customers aren't thinking about automation then they're going to probably have struggle. They're going to probably struggle. I mean with more data coming in you see the data story here more data's coming in, more automation. 
And this year Brad price performance, I've heard the word price performance more this year at re:Invent than any other year I've heard it before, but this year, price performance not performance, price performance. So you're starting to hear that dialogue of squeeze, understand the use cases use the right specialized processor instance starting to see that evolve. >> Yeah and and there's so much to it. I mean, AMD right out of the box is any instance is 10% less expensive than the equivalent in the market right now on AWS. They do a great job of maximizing those products. We've got our Zen four core general processor family just released in November and it's going to be a beast. Yeah, we're very excited about it and AWS announced support for it so we're excited to see what they deliver there too. But price performance is so critical and again it's going back to the complexity of these environments. Giving some of these enterprises some help, to help them understand where they can get additional value. It goes well beyond the retail price. There's a lot more money to be shaved off the top just by spending time thinking about those applications. >> Yeah, absolutely. I love that you talked about collaboration we've been talking about community. I want to acknowledge the AWS super fans here, standing behind the stage. Rahul, I know that you are an AWS super fan. Can you tell us about that community and the program? >> Yeah, so I have been involved with AWS and building products with AWS since 2007. So it's kind of 15 years back when literally there were just a handful of API for launching EC2 instances and S3. >> Not the a hundred thousand that you mentioned earlier, my goodness, the scale. >> So I think I feel very privileged and honored that I have been part of that journey and have had to learn or have had the opportunity to learn both from successes and failures. And it's just my way of contributing back to that community. So we are part of the FinOps foundation as well, contributing through that. I run a podcast called AWS Insiders and a livestream called AWS Made Easy. So we are trying to make sure that people out there are able to understand how to leverage AWS in the best possible way. And yeah, we are there to help and hold their hand through it. >> Talk about the community, take a minute to explain to the audience watching the community around this cost optimization area. It's evolving, you mentioned FinOps. There's a whole large community developing, of practitioners and technologists coming together to look at this. What does this all mean? Talk about this community. >> So cost management within organizations is has evolved so drastically that organizations haven't really coped with it. Historically, you've had finance teams basically buy a lot of infrastructure, which is CapEx and the engineering teams had kind of an upper bound on what they would spend and where they would spend. Suddenly with cloud, that's kind of enabled so much innovation all of a sudden, everyone's realized it, five years was spent figuring out whether people should be on the cloud or not. That's no longer a question, right. Everyone needs to be in the cloud and I think that's a no-brainer. The problem there is that suddenly your operating model has moved from CapEx to OpEx. And organizations haven't really figured out how to deal with it. Finance now no longer has the controls to control and manage and forecast costs. 
Engineering has never had to deal with it in the past and suddenly now they have to figure out how to do all this finance stuff. And procurement finds itself in a very awkward way position because they are no longer doing these negotiations like they were doing in the past where it was okay right up front before you engage, you do these negotiations. Now it's kind of an ongoing thing and it's constantly changing. Like every day is different. >> And you got marketplace >> And you got marketplace. So it's a very complex situation and I think what we are trying to do with the FinOps foundation is try and take a lot of the best practices across organizations that have been doing this at least for the last 10, 15 years. Take all the learnings and failures and turn them into hopefully opinionated approaches that people can take organizations can take to navigate through this faster rather than kind of falter and then decide that oh, this is not for us. >> Yeah. It's a great model, it's a great model. >> I know it's time John, go ahead. >> All right so, we got a little bumper sticker exercise we used to say what's the bumper sticker for the show? We used to say that, now we're modernizing, we're saying if you had to do an Instagram reel right now, short hot take of what's going on at re:Invent this year with AMD or CloudFix or just in general what would be the sizzle reel, that would be on Instagram or TikTok, go. >> Look, I think when you're at re:Invent right now and number one the energy is fantastic. 23 is going to be a building year. We've got a lot of difficult times ahead financially but it's the time, the ones that come out of 23 stronger and more efficient, and cost optimize are going to survive the long run. So now's the time to build. >> Well done, Rahul let's go for it. >> Yeah, so like Brad said, cost and efficiencies at the top of everyone's mind. Stuff that's the low hanging fruit, easy, use automation. Apply your sources to do most of the innovation. Take the easiest part to realizing savings and operate as efficiently as you possibly can. I think that's got to be key. >> I think they nailed it. They both nailed it. Wow, well it was really good. >> I put you on our talent list of >> And alright, so we repeat them. Are you part of our host team? I love this, I absolutely love this Rahul we wish you the best at CloudFix and your 17 other jobs. And I am genuinely impressed. Do you sleep actually? Last question. >> I do, I do. I have an amazing team that really helps me with all of this. So yeah, thanks to them and thank you for having us here. >> It's been fantastic. >> It's our pleasure. And Brad, I'm delighted we get you both now and again on our next segment. Thank you for being here with us. >> Thank you very much. >> And thank you all for tuning in to our live coverage here at AWS re:Invent, in fabulous Sin City with John Furrier, my name's Savannah Peterson. You're watching theCUBE, the leader in high tech coverage. (calm music)

Published Date : Nov 30 2022


Brad Smith, AMD & Mark Williams, CloudSaver | AWS re:Invent 2022


 

(bright upbeat music) >> Hello everyone and welcome back to Las Vegas, Nevada. We're live from the show floor here at AWS re:Invent on theCUBE. My name is Savannah Peterson joined by my VIP co-host John Furrier. John, what's your hot take? >> We get wall-to-wall coverage day three of theCUBE (laughing loudly) shows popping, another day tomorrow. >> How many interviews have we done so far? >> I think we're over a hundred I think, (laughing loudly) we might be pushing a hundred. >> We've had a really fantastic line up of guests on theCUBE so far. We are in the meat of the sandwich right now. We've got a full line up of programming all day long and tomorrow. We are lucky to be joined by two fantastic gentlemen on our next segment. Brad, who's a familiar face. We just got to see you in that last one. Thank you for being here, you still doing good? >> Still good. >> Okay, great, glad nothing's changed in the last 14 minutes. >> 'no, we're good. >> Would've been tragic. And welcome, Mark, the CEO of Cloud Saver. Mark, how you doing this morning? >> I'm doing great, thanks so much. >> Savannah: How's the show going for ya'? >> It's going amazing. The turnout's just fantastic. It's record turnouts here. It's been lots of activity, it's great to be part of. >> So I suspect most people know about AMD, but Mark, I'm going to let you give us just a little intro to Cloud Saver so the audience is prepped... >> 'yeah, absolutely. So at Cloud Saver we help companies manage their Cloud spin. And the way that we do it is a little bit unique. Most people try and solve Cloud cost management just through a software only solution but we have a different perspective. There's so many complexities and nuances to managing your Cloud spin, that we don't think that software's enough. So our solution is a full managed service so we can plan our own proprietary technology with a full service delivery team, so that we come in and provide project management, Cloud engineering, FinOps analysts, and we come in and basically do all the cost authorization for the company. And so it's been a fantastic solution for us and something that's really resonated well within our customer base. >> I love your slogan. "Clean up the Cloud with the Cloud Saver Tag Manager'. >> Mark: That's right. >> So yesterday in the Keynote, Adams Lesky said, "Hey if you want to tighten your belt, come to the Cloud." So, big focus right now on right sizing. >> That's right. >> I won't say repatriation 'cause that's not kind of of happening, but like people are looking at it like they're not going to, it's not the glory days where you leave all your lights on in your house and you go to bed, you don't worry about the electricity bill. Now people are like, "Okay, what am I doing? Why am I doing it?" A lot more policy, a lot more focus. What are you guys seeing as the low hanging fruit, best practices, the use cases that people are implementing right now? >> Yeah, if you think about where things are at now from a Cloud cost management perspective, there's a lot of frustration in the marketplace because everybody sees their cost continually going up. And what typically happens is they'll say, okay we need to figure out what's going on with this cost and figure out where we can make some changes. And so they go out and get a cost visibility tool and then they're a little bit disappointed because all that visibility tool is completely dependent upon properly tagging your resources. 
So what a lot of people don't understand is that a lot of their pain that they're experiencing, the root cause is actually they've got a data problem which is why we built a entire solution to help companies clean up their Cloud, clean up their tags. It really is a foundational piece to help them understand how to manage their costs. >> I just.. >> Data is back in the data problem again >> Shocking, right? Not a theme we've heard on the show. Not a theme we've heard on the show at all. I mean, I think with tags it matters more than people realize and it can get very messy very quick. I know that this partnership is relatively new, six months, you told us before this show. Brad what does this partnership mean for AMD customers? >> Yeah, it's critical, they have a fantastic approach to this kind of a full service approach to cost optimization, compete optimization. AMD we're very, extremely focused on providing most cost efficient, most performance, and most energy efficient products on the market. And as Adam talked about, come to the Cloud to tighten your belt. I'll follow up. When you come to the Cloud, your choice matters, right? Your choice matters on what you use and what the downstream impact and cost is. And it also matters in sustainability and other other factors with our products. >> You know, yesterday Zeyess Karvellos one of our analysts on theCUBE, he used his own independent shop. We were talking about this focus and he actually made a comment I want to get your both reaction to, he said "Spend more in the Cloud, save more." Meaning there are ways to spend more on the Cloud and save more at the same time. >> Right. >> It's not just cut and eliminate, it's right side. I don't know what the right word is. Can you guys.. >> No, I think what you're saying is, is that there are areas where you need to spend more so you can be more efficient and get value that way, but there's also plenty of areas where you're spending money unnecessarily. Either you have resources that nobody's using. Let's find those and pull them to the front and center and turn them off, right? Or if you've over provisioned certain areas let's pull those back. So I think having the right balance of where you spend your money to get the value makes total sense. >> John: Yeah >> I like that holistic approach too. I like that you're not just looking at one thing. I mean, people, you're kind of, I'm thinking of you as like the McKinsey or like the dream team that just comes in tidies everything up. Makes sure that people are being, getting that total cost optimization. It's exciting. So who, I imagine, I mean obviously the entire organization benefits, but who benefits most? What types of roles? Who's using you? >> Right, so, Cloud cost management really benefits the entire organization, especially when times get tougher and everybody's looking to tighten their belt with cost. You know.. >> Wait every time when you say that, I'm like conscious, (laughing loudly) of my abdomen. we're in Vegas, there's great food, (laughing loudly) and we got, (laughing loudly) thanks a lot Adam, thanks a lot. (all laughing loudly). >> No, but it really does benefit everybody across the organization and it also helps people to keep cost management kind of front and center, right? No company allows people to have a complete blank check to go out there for infrastructure and as a way to make sure you've got proper checks and balances in place so that you're responsibly managing your IT organization. 
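
A rough sketch of the tag-hygiene audit Mark describes, finding resources that are missing required cost-allocation tags, could be built on the AWS Resource Groups Tagging API. The required-tag list here is a hypothetical policy for illustration, not CloudSaver's product logic.

```python
# Illustrative audit: list resources missing a set of required
# cost-allocation tags. The REQUIRED_TAGS policy is a made-up example.
import boto3

REQUIRED_TAGS = {"CostCenter", "Owner", "Environment"}

tagging = boto3.client("resourcegroupstaggingapi")

def find_untagged_resources():
    missing = []
    paginator = tagging.get_paginator("get_resources")
    for page in paginator.paginate():
        for mapping in page["ResourceTagMappingList"]:
            keys = {t["Key"] for t in mapping.get("Tags", [])}
            absent = REQUIRED_TAGS - keys
            if absent:
                missing.append((mapping["ResourceARN"], sorted(absent)))
    return missing

if __name__ == "__main__":
    for arn, absent in find_untagged_resources():
        print(f"{arn} is missing tags: {', '.join(absent)}")
```
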
>> Yeah, and going back to the spend comment, spend more, you know, to save money. You know, look, we're going to be facing a very difficult situation in 23. I think there's going to be a lot of headwinds for a lot of companies. And the way to look at this is it's if you can provide yourself additional operating capital to work, there's other aspects to working with the business. Time to market, right? You're talking about addressing your top line. There's other ways to use applications and the services from AWS to help enable your business to grow even faster in '23 right? So '23 is a time to build, not necessarily a time to hang back and hope everything turns out okay. >> Yeah we can't go over it, (chuckles) We can't go under it, we got to go through it... >> Got to make it work >> Got to make our way through it. I think it's, yeah, it's so important. So as the partnership grows, what's next for you two companies? Brad will go to you first. >> Yeah sure you know, we're very excited to partner with Cloud Saver. It's fantastic company, have great team. And for us it's AMB is entering into the partnership space of this now. So now we've got a great position with AWS. We love their products, and now we're going to try to enable as many partners as we can in some specific areas. And for us cost optimization is priority number one. So you'll see a lot of programs that come out in '23 around this area. We're going to dedicate a lot of sales resources to help as many enterprise customers as we can, working with our close partners like Cloud Saver. >> Next ecosystem developing for you guys. >> Absolutely, absolutely, and you know AMD's they're still fairly new in the Cloud space, right? And this is a journey that takes a long time, and this is the next leg in our growth in the environment. >> Well, certainly the trend is more horsepower, more under the hood, more capabilities, customized >> Oh that's coming. >> Workloads. You're starting to see the specialized instances, you can see what's happening and soon it's going to be like a, it's own like computer in the Cloud >> Right. >> More horsepower. >> You think about this, I mean more than 400 instance types, more than 400 types of services out there in that range. And you think about all the potential interactions and applications. It's incredibly complex, right? >> Yeah that decision matrix just went like this in my brain when you said that. That is wild. And everyone wants to do more, faster, easier but also with the comfort of that cost savings, in terms of your customers priorities, I mean, you're talking to a lot of different people across a lot of different industries both of you are, I'm sure is cost optimization the number one priority as we're going into 2023? >> Yeah. Matter of fact, I have a chance to obviously speak with AWS leadership on a regular basis. Every single, they keep telling me for the past two months, every single CEO they're speaking to right now, it's the very first things out of the mouth. It's top of mind for every major corporation right now. And I think the message is also the same. It's like, great, let's help you do that but at the same time, is it not a bad time to re:Invest with some of those additional savings, right? And I think that's where the value of else comes into play. >> Yeah, and I think what you guys are demonstrating to also is another tell sign of this what I call NextGen Cloud evolution, which is as the end-to-end messaging and positioning expands and as you see more solutions. 
You know, let's face it, it's going to be more complex. So the complexity will be abstracted away by new opportunities like what you guys are doing, what you're enabling. So you're starting to see kind of platforms emerging across the board as well as more ISVs. So ISVs, people building software, starting to see now more symbiotic relationship, for developers and entrepreneurship. >> Yeah, so the complexity of the Cloud is certainly something that's not going to get any less as time goes on, right? And I think as companies realize that, they see it, they acknowledge it and I think they're going to lean on partners to help them navigate those waters. So that's where I think the combination of AMD and Cloud Saver, we can really partner very well because I think we're both very passionate about creating customer value, and I think there's a tremendous number of ways that we can collaborate together to bring that to the customers. >> And you know what's interesting too you guys are both hitting on this is that this next partner channel whatever you want to call it is very joint engineering and development. It's not just relationships and selling, there's integration and the new products that can come out is a phenomenal, we're going to watch. I think I predict that the ecosystem's going to explode big time in terms of value, just new things, joint engineering, API... >> 'it's so collaborative too. >> Yeah, it's going to be... >> 'well, the innovation in the marketplace right now is absolutely on fire. I mean, it's so exciting to see all the new technologies have on board. And to be able to see that kind of permeate throughout the marketplace is something that's just really fun and excited to be part of. >> Oh, when you think about the doom and gloom that we hear every day and you look around right now, everybody's building, right? And... >> this and smiling. >> And smiling, right? >> Paul: Today, (laughing loudly) >> Until Thursday when the legs start to get out. >> Yeah. >> Yeah, what recession? I mean, it's so crowded here. And again, this is the point that the Amazon is now a big player in this economy in 2008 that last recession, they weren't a factor. Now you got be tightening new solutions. I think you're going to see, I think more agility. I think Amazon and the ecosystem might propel us out the recession faster if you get the tailwind that might be a big thing we're watching. >> I agree. Cloud computing is inevitable. >> Yeah. >> It's inevitable. >> Yeah, it's no longer a conversation, it's a commitment. And I think we all certainly agree with that. So, Brad is versed in this challenge because we did it in our last segment. But Mark, we have a new tradition I should say, at re:Invent here, where we're looking for your 32nd Instagram reel, your sizzle your thought leadership hot take on the most important story or theme of the show this year. >> For the show as a whole. Wow, well, I think innovation is absolutely front and center today. I think, of the new technologies that we're seeing out there are absolutely phenomenal. I think they're taking the whole Cloud computing to the next level, and I think it's going to have a dramatic impact on how people develop applications and run workloads in the Cloud. >> Well done. What do you think John? I think you nailed it. >> Nailed it. Yeah, want to go for round two? >> Sure. >> Sure, I'll give a shot, (laughing loudly) So... >> 'get it, Brad. >> So, when in public Cloud choice matters? >> It matters. 
Think about the instance types you use think about the configurations you use and think about the applications you're layering in there and why they're there, right? Optimize those environments. Take advantage of all the tools you have. >> Yeah, you're going to start tuning your Cloud now. I mean, as it gets bigger and better, stronger you're going to start to see just fine tuning more craft, I guess. >> Mark: Yeah. >> In there, great stuff. >> Paul, and in these interesting times, I'm not committed to calling it a recession yet. I still have a chart of hope. I think that the services and the value that you provide to your customers are going to be one of those painkillers that will survive through this. I mean we're seeing a little bit of the trimming of the fat, of extraneous spending in the tech sector as a whole. But I can't imagine folks not wanting to leverage AMD and Cloud Saver, it's exciting, yeah. >> Saving money never goes out of style right? (laughing loudly) >> Saving money is always sexy. I love that, yeah, (laughing loudly) It's actually really... That's a great line goes on. Mark, thank you so much for being here and sharing your story with us. We really appreciate it, Brad. It's been a fabulous thing. You're just going to stay here all day, right? >> I'll just hang out, yeah. >> All right. >> I'm yours. >> I love that. And thank you all for tuning to us live here from the show floor at AWS re:Invent in fabulous sunny Las Vegas Nevada with John Furrier, I'm Savannah Peterson you're watching theCUBE, the leader in high tech coverage. (bright upbeat music)
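
To see the scale Brad points at above (more than 400 instance types), one can simply enumerate the instance types visible in a region and group them by family. Counts vary by region and grow over time, so the output is indicative only.

```python
# Count EC2 instance types visible in the current region, grouped by family.
import boto3
from collections import Counter

ec2 = boto3.client("ec2")

families = Counter()
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate():
    for itype in page["InstanceTypes"]:
        families[itype["InstanceType"].split(".")[0]] += 1

print(f"{sum(families.values())} instance types across {len(families)} families")
for family, count in families.most_common(10):
    print(f"  {family}: {count}")
```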

Published Date : Nov 30 2022


Brian Payne, Dell Technologies and Raghu Nambiar, AMD | SuperComputing 22


 

(upbeat music) >> We're back at SC22 SuperComputing Conference in Dallas. My name's Paul Gillan, my co-host, John Furrier, SiliconANGLE founder. And huge exhibit floor here. So much activity, so much going on in HPC, and much of it around the chips from AMD, which has been on a roll lately. And in partnership with Dell, our guests are Brian Payne, Dell Technologies, VP of Product Management for ISG mid-range technical solutions, and Raghu Nambiar, corporate vice president of data system, data center ecosystem, and application engineering, that's quite a mouthful, at AMD, And gentlemen, welcome. Thank you. >> Thanks for having us. >> This has been an evolving relationship between you two companies, obviously a growing one, and something Dell was part of the big general rollout, AMD's new chip set last week. Talk about how that relationship has evolved over the last five years. >> Yeah, sure. Well, so it goes back to the advent of the EPIC architecture. So we were there from the beginning, partnering well before the launch five years ago, thinking about, "Hey how can we come up with a way to solve customer problems? address workloads in unique ways?" And that was kind of the origin of the relationship. We came out with some really disruptive and capable platforms. And then it continues, it's continued till then, all the way to the launch of last week, where we've introduced four of the most capable platforms we've ever had in the PowerEdge portfolio. >> Yeah, I'm really excited about the partnership with the Dell. As Brian said, we have been partnering very closely for last five years since we introduced the first generation of EPIC. So we collaborate on, you know, system design, validation, performance benchmarks, and more importantly on software optimizations and solutions to offer out of the box experience to our customers. Whether it is HPC or databases, big data analytics or AI. >> You know, you guys have been on theCUBE, you guys are veterans 2012, 2014 back in the day. So much has changed over the years. Raghu, you were on the founding chair of the TPC for AI. We've talked about the different iterations of power service. So much has changed. Why the focus on these workloads now? What's the inflection point that we're seeing here at SuperComputing? It feels like we've been in this, you know run the ball, get, gain a yard, move the chains, you know, but we feel, I feel like there's a moment where the there's going to be an unleashing of innovation around new use cases. Where's the workloads? Why the performance? What are some of those use cases right now that are front and center? >> Yeah, I mean if you look at today, the enterprise ecosystem has become extremely complex, okay? People are running traditional workloads like Relational Database Management Systems, also new generation of workloads with the AI and HPC and actually like AI actually HPC augmented with some of the AI technologies. So what customers are looking for is, as I said, out of the box experience, or time to value is extremely critical. Unlike in the past, you know, people, the customers don't have the time and resources to run months long of POCs, okay? So that's one idea that we are focusing, you know, working closely with Dell to give out of the box experience. 
Again, you know, the enterprise applicate ecosystem is, you know, really becoming complex and the, you know, as you mentioned, some of the industry standard benchmark is designed to give the fair comparison of performance, and price performance for the, our end customers. And you know, Brian and my team has been working closely to demonstrate our joint capabilities in the AI space with, in a set of TPCx-AI benchmark cards last week it was the major highlight of our launch last week. >> Brian, you got showing the demo in the booth at Dell here. Not demo, the product, it's available. What are you seeing for your use cases that customers are kind of rallying around now, and what are they doubling down on. >> Yeah, you know, I, so Raghu I think teed it up well. The really data is the currency of business and all organizations today. And that's what's pushing people to figure out, hey, both traditional workloads as well as new workloads. So we've got in the traditional workload space, you still have ERP systems like SAP, et cetera, and we've announced world records there, a hundred plus percent improvements in our single socket system, 70% and dual. We actually posted a 40% advantage over the best Genoa result just this week. So, I mean, we're excited about that in the traditional space. But what's exciting, like why are we here? Why, why are people thinking about HPC and AI? It's about how do we make use of that data, that data being the currency and how do we push in that space? So Raghu mentioned the TPC AI benchmark. We launched, or we announced in collaboration you talk about how do we work together, nine world records in that space. In one case it's a 3x improvement over prior generations. So the workloads that people care about is like how can I process this data more effectively? How can I store it and secure it more effectively? And ultimately, how do I make decisions about where we're going, whether it's a scientific breakthrough, or a commercial application. That's what's really driving the use cases and the demand from our customers today. >> I think one of the interesting trends we've seen over the last couple of years is a resurgence in interest in task specific hardware around AI. In fact venture capital companies invested a $1.8 billion last year in AI hardware startups. I wonder, and these companies are not doing CPUs necessarily, or GPUs, they're doing accelerators, FPGAs, ASICs. But you have to be looking at that activity and what these companies are doing. What are you taking away from that? How does that affect your own product development plans? Both on the chip side and on the system side? >> I think the future of computing is going to be heterogeneous. Okay. I mean a CPU solving certain type of problems like general purpose computing databases big data analytics, GPU solving, you know, problems in AI and visualization and DPUs and FPGA's accelerators solving you know, offloading, you know, some of the tasks from the CPU and providing realtime performance. And of course, you know, the, the software optimizes are going to be critical to stitch everything together, whether it is HPC or AI or other workloads. You know, again, as I said, heterogeneous computing is going to be the future. >> And, and for us as a platform provider, the heterogeneous, you know, solutions mean we have to design systems that are capable of supporting that. 
So if as you think about the compute power whether it's a GPU or a CPU, continuing to push the envelope in terms of, you know, to do the computations, power consumption, things like that. How do we design a system that can be, you know, incredibly efficient, and also be able to support the scaling, you know, to solve those complex problems. So that gets into challenges around, you know, both liquid cooling, but also making the most out of air cooling. And so we're seeing not only are we we driving up you know, the capability of these systems, we're actually improving the energy efficiency. And those, the most recent systems that we launched around the CPU, which is still kind of at the heart of everything today, you know, are seeing 50% improvement, you know, gen to gen in terms of performance per watt capabilities. So it's, it's about like how do we package these systems in effective ways and make sure that our customers can get, you know, the advertised benefits, so to speak, of the new chip technologies. >> Yeah. To add to that, you know, performance, scalability total cost of ownership, these are the key considerations, but now energy efficiency has become more important than ever, you know, our commitment to sustainability. This is one of the thing that we have demonstrated last week was with our new generation of EPIC Genoa based systems, we can do a one five to one consolidation, significantly reducing the energy requirement. >> Power's huge costs are going up. It's a global issue. >> Raghu: Yeah, it is. >> How do you squeeze more performance too out of it at the same time, I mean, smaller, faster, cheaper. Paul, you wrote a story about, you know, this weekend about hardware and AI making hardware so much more important. You got more power requirements, you got the sustainability, but you need more horsepower, more compute. What's different in the architecture if you guys could share like today versus years ago, what's different in as these generations step function value increases? >> So one of the major drivers from the processor perspective is if you look at the latest generation of processors, the five nanometer technology, bringing efficiency and density. So we are able to pack 96 processor cores, you know, in a two socket system, we are talking about 196 processor cores. And of course, you know, other enhancements like IPC uplift, bringing DDR5 to the market PC (indistinct) for the market, offering overall, you know, performance uplift of more than 2.5x for certain workloads. And of course, you know, significantly reducing the power footprint. >> Also, I was just going to cut, I mean, architecturally speaking, you know, then how do we take the 96 cores and surround it, deliver a balanced ecosystem to make sure that we can get the, the IO out of the system, and make sure we've got the right data storage. So I mean, you'll see 60% improvements and total storage in the system. I think in 2012 we're talking about 10 gig ethernet. Well, you know, now we're on to 100 and 400 on the forefront. So it's like how do we keep up with this increased power, by having, or computing capabilities both offload and core computing and make sure we've got a system that can deliver the desired (indistinct). >> So the little things like the bus, the PCI cards, the NICs, the connectors have to be rethought through. Is that what you're getting at? >> Yeah, absolutely. >> Paul: And the GPUs, which are huge power consumers. >> Yeah, absolutely. 
So I mean, cooling, we introduce, and we call it smart cooling is a part of our latest generation of servers. I mean, the thermal design inside of a server is a is a complex, you know, complex system, right? And doing that efficiently because of course fans consume power. So I mean, yeah, those are the kind of considerations that we have to put through to make sure that you're not either throttling performance because you don't have you know, keeping the chips at the right temperature. And, and you know, ultimately when you do that, you're hurting the productivity of the investment. So I mean, it's, it's our responsibility to put our thoughts and deliver those systems that are (indistinct) >> You mention data too, if you bring in the data, one of the big discussions going into the big Amazon show coming up, re:Invent is egress costs. Right, So now you've got compute and how you design data latency you know, processing. It's not just contained in a machine. You got to think about outside that machine talking to other machines. Is there an intelligent (chuckles) network developing? I mean, what's the future look like? >> Well, I mean, this is a, is an area that, that's, you know, it's fun and, you know, Dell's in a unique position to work on this problem, right? We have 70% of the mission housed, 70% of the mission critical data that exists in the world. How do we bring that closer to compute? How do we deliver system level solutions? So server compute, so recently we announced innovations around NVMe over Fabrics. So now you've got the NVMe technology and the SAN. How do we connect that more efficiently across the servers? Those are the kinds, and then guide our customers to make use of that. Those are the kinds of challenges that we're trying to unlock the value of the data by making sure we're (indistinct). >> There are a lot of lessons learned from, you know, classic HPC and some of the, you know big data analytics. Like, you know, Hadoops of the world, you know, you know distributor processing for crunching a large amount of amount of data. >> With the growth of the cloud, you see, you know, some pundits saying that data centers will become obsolete in five years, and everything's going to move to the cloud. Obviously data center market that's still growing, and is projected to continue to grow. But what's the argument for captive hardware, for owning a data center these days when the cloud offers such convenience and allegedly cost benefit? >> I would say the reality is that we're, and I think the industry at large has acknowledged this, that we're living in a multicloud world and multicloud methods are going to be necessary to you know, to solve problems and compete. And so, I mean, you know, in some cases, whether it's security or latency, you know, there's a push to have things in your own data center. And then of course growth at the edge, right? I mean, that's, that's really turning, you know, things on their head, if you will, getting data closer to where it's being generated. And so I would say we're going to live in this edge cloud, you know, and core data center environment with multi, you know, different cloud providers providing solutions and services where it makes sense, and it's incumbent on us to figure out how do we stitch together that data platform, that data layer, and help customers, you know, synthesize this data to, to generate, you know, the results they need. 
>> You know, one of the things I want to get into on the cloud you mentioned that Paul, is that we see the rise of graph databases. And so is that on the radar for the AI? Because a lot of more graph data is being brought in, the database market's incredibly robust. It's one of the key areas that people want performance out of. And as cloud native becomes the modern application development, a lot more infrastructure as code's happening, which means that the internet and the networks and the process should be programmable. So graph database has been one of those things. Have you guys done any work there? What's some data there you can share on that? >> Yeah, actually, you know, we have worked closely with a company called TigerGraph, there in the graph database space. And we have done a couple of case studies, one on the healthcare side, and the other one on the financial side for fraud detection. Yeah, I think they have a, this is an emerging area, and we are able to demonstrate industry leading performance for graph databases. Very excited about it. >> Yeah, it's interesting. It brings up the vertical versus horizontal applications. Where is the AI HPC kind of shining? Is it like horizontal and vertical solutions or what's, what's your vision there. >> Yeah, well, I mean, so this is a case where I'm also a user. So I own our analytics platform internally. We actually, we have a chat box for our product development organization to figure out, hey, what trends are going on with the systems that we sell, whether it's how they're being consumed or what we've sold. And we actually use graph database technology in order to power that chat box. So I'm actually in a position where I'm like, I want to get these new systems into our environment so we can deliver. >> Paul: Graphs under underlie most machine learning models. >> Yeah, Yeah. >> So we could talk about, so much to talk about in this space, so little time. And unfortunately we're out of that. So fascinating discussion. Brian Payne, Dell Technologies, Raghu Nambiar, AMD. Congratulations on the successful launch of your new chip set and the growth of, in your relationship over these past years. Thanks so much for being with us here on theCUBE. >> Super. >> Thank you much. >> It's great to be back. >> We'll be right back from SuperComputing 22 in Dallas. (upbeat music)
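
As a toy illustration of why fraud detection maps so naturally onto graph traversal, the sketch below flags accounts within a few hops of a known-bad account, where edges represent shared attributes such as a device, card, or address. It is plain Python with made-up data, not TigerGraph or GSQL.

```python
# Toy graph-traversal example for fraud-style analysis. Data is fabricated.
from collections import deque

edges = {
    "acct_A": ["device_1", "card_9"],
    "acct_B": ["device_1"],          # shares a device with acct_A
    "acct_C": ["card_9", "addr_7"],  # shares a card with acct_A
    "acct_D": ["addr_7"],            # two account-hops from acct_A via acct_C
    "acct_E": ["device_3"],          # unconnected
}

# Build an undirected adjacency list over accounts and shared attributes.
graph: dict[str, set[str]] = {}
for acct, attrs in edges.items():
    for attr in attrs:
        graph.setdefault(acct, set()).add(attr)
        graph.setdefault(attr, set()).add(acct)

def accounts_within(start: str, max_hops: int) -> set[str]:
    """Breadth-first search out to max_hops, returning connected accounts."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == max_hops:
            continue
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, dist + 1))
    return {n for n in seen if n.startswith("acct_") and n != start}

print(accounts_within("acct_A", max_hops=4))  # flags B, C and D, not E
```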

Published Date : Nov 16 2022


AMD & Oracle Partner to Power Exadata X9M


 

(upbeat jingle) >> The history of Exadata, the platform, is really unique. And from my vantage point, it started earlier this century as a skunkworks inside of Oracle called Project Sage back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not Eastern Pacific Yacht Club for all you sailing buffs, rather it stands for Extreme Performance Yield Computing, the enterprise grade version of AMD's Zen architecture which has been a linchpin of AMD's success in terms of penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today, Juan Loaiza, who's executive vice president of mission critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on The Cube in your first appearance, thanks for coming on. Juan, let's start with you. You've been on The Cube a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle database. We've covered that extensively. What's different and unique from your point of view about Exadata Cloud Infrastructure X9M on OCI? >> So as you know, Exadata, it's designed top down to be the best possible platform for database. It has a lot of unique capabilities, like we make extensive use of RDMA, smart storage. We take advantage of everything we can in the leading hardware platforms. X9M is our next generation platform and it does exactly that. We're always wanting to get all the best that we can from the available hardware that our partners like AMD produce. And so that's what X9M is, it's faster, more capacity, lower latency, more IOPS, pushing the limits of the hardware technology. So we don't want to be the limit, the software, the database software, should not be the limit, it should be the actual physical limits of the hardware. That's what X9M's all about. >> Why, Juan, AMD chips in X9M? >> We're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads. And it's really that simple, we just think the performance is outstanding in the product. >> Mark, your career is quite amazing. I could riff on history for hours but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud? >> Well, thanks. It's really the basis of the great partnership that we have with Oracle on Exadata X9M and that is that the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to X86, a very strong roadmap that we've executed on schedule to our commitments. And this third generation does all of that, it uses a seven nanometer CPU core that was designed to really bring throughput, bring really high efficiency to computing and just deliver raw capabilities. And so for Exadata X9M, it's really leveraging all of that. It's really a balanced processor and it's implemented in a way to really optimize high performance. That is the whole focus of AMD. It's where we reset the company focus years ago.
And again, great to see the super smart database team at Oracle really partner with us, understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor. >> Yeah. It's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes to play. You think about a processor and I'll say, when Juan's team first looked at it, there's general benchmarks and the benchmarks are impressive but they're general benchmarks. And they showed the base processing capability but the partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. And that's where we dive down and as we learn from the Oracle team, we learn to understand where bottlenecks could be, where is there tuning that we could in fact really boost the performance above that baseline that you get in the generic benchmarks. And that's what the teams have done, so for instance, you look at optimizing latency to RDMA, you look at optimizing throughput on OLTP and database processing. When you go through the workloads and you take the traces and you break it down and you find the areas that are bottlenecking and then you can adjust, we have thousands of parameters that can be adjusted for a given workload. And that's the beauty of the partnership. So we have the expertise on the CPU engineering, Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20% to 50% gains on specific workloads, it is really exciting to see. >> Mark, last question for you is how do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. First off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. So in our current third generation EPYC, that is really what we call our EPYC server offerings, it's the 7003 series third gen in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway, ready for the future, but it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, it's going to have expanded memory capabilities because there's CXL, Compute Express Link, that'll expand even more memory opportunities. And I could go on. So that's the beauty of a deep partnership as it enables us to really take that learning going forward. It pays forward and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward. >> Yeah, you guys have been obviously very forthcoming. You have to be with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah. I would say that in the processor market there's been a real acceleration in innovation in the last few years, there was a big move 10, 15 years ago when multicore processors came out.
And then we were on that for a while and then things started stagnating, but in the last two or three years, AMD has been leading this, there's been a dramatic acceleration in innovation so it's very exciting to be part of this and customers are getting a big benefit from this. >> All right. Hey, thanks for coming back on The Cube today. Really appreciate your time. >> Thanks. Glad to be here. >> All right and thank you for watching this exclusive Cube conversation. This is Dave Vellante from The Cube and we'll see you next time. (upbeat jingle)
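Mark's description of the joint tuning work, replaying traced workloads, finding bottlenecks, and adjusting parameters until performance rises above the generic-benchmark baseline, can be illustrated with a minimal parameter-sweep harness. The parameter names, values, and placeholder workload below are hypothetical stand-ins, not Exadata's actual tunables; a real effort would replay captured traces against the thousands of parameters he mentions.

import itertools
import time

# Hypothetical tunables standing in for a few of the "thousands of parameters".
PARAM_GRID = {
    "io_batch_size": [32, 128, 512],
    "worker_threads": [8, 16, 32],
}

def run_workload(io_batch_size, worker_threads):
    """Placeholder for replaying a traced workload under one configuration.
    It only burns a little CPU so the harness runs end to end as written."""
    start = time.perf_counter()
    _ = sum(i * io_batch_size % worker_threads for i in range(200_000))
    return time.perf_counter() - start

best_cfg, best_time = None, None
for combo in itertools.product(*PARAM_GRID.values()):
    cfg = dict(zip(PARAM_GRID.keys(), combo))
    elapsed = run_workload(**cfg)
    if best_time is None or elapsed < best_time:
        best_cfg, best_time = cfg, elapsed

print(f"fastest configuration: {best_cfg} ({best_time:.4f}s)")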

Published Date : Sep 22 2022


Dan Molina, NTH Generation, Terry Richardson, AMD, & John Frey, HPE | Better Together with SHI


 

(futuristic music) >> Hey everyone. Lisa Martin here for theCUBE, back with you, three guests join me. Dan Molina is here, the co-president and chief technology officer at NTH Generation. And I'm joined once again by Terry Richardson, North American channel chief for AMD, and Dr. John Frey, chief technologist, sustainable transformation, at HPE. Gentlemen, it's a pleasure to have you on theCUBE. Thank you for joining me. >> Thank you, Lisa. >> Dan, let's have you kick things off. Talk to us about how NTH Generation is addressing the environmental challenges that your customers are having while meeting the technology demands of the future that those same customers are no doubt having. >> It's quite an interesting question, Lisa. In our case, we have been in business since 1991, and we started by providing highly available computing solutions. So this is great for me to be partnered here with HPE and AMD because we want to provide quality computing solutions. And back in the day, since 1991, saving energy, reducing footprint in the data center, saving on cooling costs was very important. Over time those became even more critical components of our solutions design. As you know, as a society we started becoming more aware of the benefits, and the fact that we have a responsibility back to society to basically contribute with our social and environmental responsibility. So one of the things that we continue to do, and we started back in 1991, is to make sure that we're designing compute solutions based on clients' actual needs. We go out of our way to collect real performance data, real IT resource consumption data. And then we architect solutions using best in the industry components like AMD and HPE to make sure that they're going to be meeting those goals in energy savings, like cooling savings, footprint reduction, knowing that instead of maybe requiring 30 servers, just to mention an example, maybe we're going to go down to 14, and that's going to result in great energy savings. Our commitment to making sure that we're providing optimized solutions goes all the way to achieving the top level certifications from our great partner, Hewlett Packard Enterprise. We also go deep into microprocessing technologies like AMD's, but we want to make sure that the designs that we're putting together actually meet those goals. >> You talked about why sustainability is important to NTH from back in the day. I love how you said that. Dan, talk to us a little bit about what you're hearing from customers, as we are seeing sustainability as a corporate initiative horizontally across industries and really rising up through the C-suite to the board. >> Right, it is quite interesting, Lisa. We do service pretty much horizontally just about any vertical, including the public sector and the private sector, from retail to healthcare, to biotech, to manufacturing, and of course cities and counties. So we have a lot of experience with many different verticals. And across the board, we do see an increased interest in being socially responsible. And that includes not just being responsible on recycling, as an example. In most of our conversations or engagements that conversation happens: 'What's going to happen with the old equipment?' as we're replacing it with more modern, more powerful, more efficient equipment. And we do a number of different things that go along with social responsibility and environmental protection. And that's basically e-waste programs.
As an example, we also have a program where we actually donate some of that older equipment to schools and that is quite quite something because we're helping an organization save energy, footprint. Basically the things that we've been talking about but at the same time, the older equipment even though it's not saving that much energy it still serves a purpose in society where maybe the unprivileged or not as able to afford computing equipment in certain schools and things of that nature. Now they can benefit and being productive to society. So it's not just about energy savings there's so many other factors around social corporate responsibility. >> So sounds like Dan, a very comprehensive end to end vision that NTH has around sustainability. Let's bring John and Terry into the conversation. John, we're going to start with you. Talk to us a little bit about how HPE and NTH are partnering together. What are some of the key aspects of the relationship from HPE's perspective that enable you both to meet not just your corporate sustainable IT objectives, but those of your customers. >> Yeah, it's a great question. And one of the things that HPE brings to bear is 20 years experience on sustainable IT, white papers, executive workbooks and a lot of expertise for how do we bring optimized solutions to market. If the customer doesn't want to manage those pieces himself we have our 'As a service solutions, HPE GreenLake. But our sales force won't get to every customer across the globe that wants to take advantage of this expertise. So we partner with companies like NTH to know the customer better, to develop the right solution for that customer and with NTH's relationships with the customers, they can constantly help the customer optimize those solutions and see where there perhaps areas of opportunity that may be outside of HPE's own portfolio, such as client devices where they can bring that expertise to bear, to help the customer have a better total customer experience. >> And that is critical, that better overall comprehensive total customer experience. As we know on the other end, all customers are demanding customers like us who want data in real time, we want access. We also want the corporate and the social responsibility of the companies that we work with. Terry, bringing you into the conversation. Talk to us a little about AMD. How are you helping customers to create what really is a sustainable IT strategy from what often starts out as sustainability tactics? >> Exactly. And to pick up on what John and and Dan were saying, we're really energized about the opportunity to allow customers to accelerate their ability to attain some of their more strategic sustainability goals. You know, since we started on our current data center, CPU and GPU offerings, each generation we continue to focus on increasing the performance capability with great sensitivity to the efficiency, right? So as customers are modernizing their data center and achieving their own digital transformation initiatives we are able to deliver solutions through HPE that really address a greater performance per watt which is a a core element in allowing customers to achieve the goals that John and Dan talked about. So, you know, as a company, we're fully on board with some very public positions around our own sustainability goals, but working with terrific partners like NTH and HPE allows us to together bring those enabling technologies directly to customers >> Enabling and accelerating technologies. Dan, let's go back to you. 
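Terry's performance-per-watt point lends itself to simple arithmetic. The throughput and wattage figures in this short sketch are placeholder assumptions, not measured or published numbers for any specific AMD-based HPE system; the point is only how the metric is computed and compared across generations.

# Illustrative performance-per-watt comparison with made-up numbers.
servers = {
    "older 2-socket node":   {"relative_throughput": 1.0, "watts": 500},
    "current 2-socket node": {"relative_throughput": 2.6, "watts": 560},
}

for name, spec in servers.items():
    perf_per_watt = spec["relative_throughput"] / spec["watts"]
    print(f"{name}: {perf_per_watt:.4f} relative throughput per watt")

# Even with a modestly higher power draw, the newer node delivers far more
# work per watt, which is the basis for consolidation and efficiency goals.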
You mentioned some of the things that NTH is doing from a sustainability approach, the social and the community concern, energy use savings, recycling but this goes all the way from NTH's perspective to things like outreach and fairness in the workplace. Talk to us a little bit about some of those other initiatives that NTH has fired up. >> Absolutely, well at NTH , since the early days, we have invested heavily on modern equipment and we have placed that at NTH labs, just like HPE labs we have NTH labs, and that's what we do a great deal of testing to make sure that our clients, right our joint clients are going to have high quality solutions that we're not just talking about it and we actually test them. So that is definitely an investment by being conscious about energy conservation. We have programs and scripts to shut down equipment that is not needed at the time, right. So we're definitely conscious about it. So I wanted to mention that example. Another one is, we all went through a pandemic and this is still ongoing from some perspectives. And that forced pretty much all of our employees, at least for some time to work from home. Being an IT company, we're very proud that we made that transition almost seamlessly. And we're very proud that you know people who continue to work from home, they're saving of course, gasoline, time, traffic, all those benefits that go with reducing transportation, and don't get me wrong, I mean, sometimes it is important to still have face to face meetings, especially with new organizations that you want to establish trust. But for the most part we have become a hybrid workforce type of organization. At the same time, we're also implementing our own hybrid IT approach which is what we talk to our clients about. So there's certain workloads, there are certain applications that truly belong in in public cloud or Software as a Service. And there's other workloads that truly belong, to stay in your data center. So a combination and doing it correctly can result in significant savings, not just money, but also again energy, consumption. Other things that we're doing, I mentioned trading programs, again, very proud that you know, we use a e-waste programs to make sure that those IT equipment is properly disposed of and it's not going to end in a landfill somewhere but also again, donating to schools, right? And very proud about that one. We have other outreach programs. Normally at the end of the year we do some substantial donations and we encourage our employees, my coworkers to donate. And we match those donations to organizations like Operation USA, they provide health and education programs to recover from disasters. Another one is Salvation Army, where basically they fund rehabilitation programs that heal addictions change lives and restore families. We also donate to the San Diego Zoo. We also believe in the whole ecosystem, of course and we're very proud to be part of that. They are supporting more than 140 conservation projects and partnerships in 70 countries. And we're part of that donation. And our owner has been part of the board or he was for a number of years. Mercy House down in San Diego, we have our headquarters. They have programs for the homeless. And basically that they're servicing. Also Save a Life Foundation for the youth to be educated to help prevent sudden cardiac arrest for the youth. So programs like that. We're very proud to be part of the donations. 
Again, it's not just about energy savings but it's so many other things as part of our corporate social responsibility program. Other things that I wanted to mention. Everything in our buildings, in our offices, multiple locations. Now we turn into LED. So again, we're eating our own dog food as they say but that is definitely some significant energy savings. And then lastly, I wanted to mention, this is more what we do for our customers, but the whole HPE GreenLake program we have a growing number of clients especially in Southern California. And some of those are quite large like school districts, like counties. And we feel very proud that in the old days customers would buy IT equipment for the next three to five years. Right? And they would buy extra because obviously they're expecting some growth while that equipment must consume energy from day one. With a GreenLake type of program, the solution is sized properly. Maybe a little bit of a buffer for unexpected growth needs. And anyway, but with a GreenLake program as customers need more IT resources to continue to expand their workloads for their organizations. Then we go in with 'just in time' type of resources. Saving energy and footprint and everything else that we've been talking about along the way. So very proud to be one of the go-tos for Hewlett Packard Enterprise on the GreenLake program which is now a platform, so. >> That's great. Dan, it sounds like NTH generation has such a comprehensive focus and strategy on sustainability where you're pulling multiple levers it's almost like sustainability to the NTH degree ? See what I did there ? >> (laughing) >> I'd like to talk with all three of you now. And John, I want to start with you about employees. Dan, you talked about the hybrid work environment and some of the silver linings from the pandemic but I'd love to know, John, Terry and then Dan, in that order how educated and engaged are your employees where sustainability is concerned? Talk to me about that from their engagement perspective and also from the ability to retain them and make them proud as Dan was saying to work for these companies, John ? >> Yeah, absolutely. One of the things that we see in technology, and we hear it from our customers every day when we're meeting with them is we all have a challenge attracting and retaining new employees. And one of the ways that you can succeed in that challenge is by connecting the work that the employee does to both the purpose of your company and broader than that global purpose. So environmental and social types of activities. So for us, we actually do a tremendous amount of education for our employees. At the moment, all of our vice presidents and above are taking climate training as part of our own climate aspirations to really drive those goals into action. But we're opening that training to any employee in the company. We have a variety of employee resource groups that are focused on sustainability and carbon reduction. And in many cases, they're very loud advocates for why aren't we pushing a roadmap further? Why aren't we doing things in a particular industry segment where they think we're not moving quite as quickly as we should be. But part of the recognition around all of that as well is customers often ask us when we suggest a sustainability or sustainable IT solution to them. Their first question back is, are you doing this yourselves? 
So for all of those reasons, we invest a lot of time and effort in educating our employees, listening to our employees on that topic and really using them to help drive our programs forward. >> That sounds like it's critical, John for customers to understand, are you doing this as well? Are you using your own technology ? Terry, talk to us about from the AMD side the education of your employees, the engagement of them where sustainability is concerned. >> Yeah. So similar to what John said, I would characterize AMD is a very socially responsible company. We kind of share that alignment in point of view along with NTH. Corporate responsibility is something that you know, most companies have started to see become a lot more prominent, a lot more talked about internally. We've been very public with four key sustainability goals that we've set as an organization. And we regularly provide updates on where we are along the way. Some of those goals extend out to 2025 and in one case 2030 so not too far away, but we're providing milestone updates against some pretty aggressive and important goals. I think, you know, as a technology company, regardless of the role that you're in there's a way that you can connect to what the company's doing that I think is kind of a feel good. I spend more of my time with the customer facing or partner facing resources and being able to deliver a tool to partners like NTH and strategic partners like HPE that really helps quantify the benefit, you know in a bare metal, in terms of greenhouse gas emissions and a TCO tool to really quantify what an implementation of a new and modern solution will mean to a customer. And for the first time they have choice. So I think employees, they can really feel good about being able to to do something that is for a greater good than just the traditional corporate goals. And of course the engineers that are designing the next generation of products that have these as core competencies clearly can connect to the impact that we're able to make on the broader global ecosystem. >> And that is so important. Terry, you know, employee productivity and satisfaction directly translates to customer satisfaction, customer retention. So, I always think of them as inextricably linked. So great to hear what you're all doing in terms of the employee engagement. Dan, talk to me about some of the outcomes that NTH is enabling customers to achieve, from an outcomes perspective those business outcomes, maybe even at a high level or a generic level, love to dig into some of those. >> Of course. Yes. So again, our mission is really to deliver awesome in everything we do. And we're very proud about that mission, very crispy clear, short and sweet and that includes, we don't cut corners. We go through the extent of, again, learning the technology getting those certifications, testing those in the lab so that when we're working with our end user organizations they know they're going to have a quality solution. And part of our vision has been to provide industry leading transformational technologies and solutions for example, HPE and AMD for organizations to go through their own digital transformation. Those two words have been used extensively over the last decade, but this is a multi decade type of trend, super trend or mega trend. 
And we're very proud that by offering and architecting and implementing, and in many cases supporting, with our partners, those, you know, best in class IT cyber security solutions were helping those organizations with those business outcomes, their own digital transformation. If you extend that Lisa , a Little bit further, by helping our clients, both public and private sector become more efficient, more scalable we're also helping, you know organizations become more productive, if you scale that out to the entire society in the US that also helps with the GDP. So it's all interrelated and we're very proud through our, again, optimized solutions. We're not just going to sell a box we're going to understand what the organization truly needs and adapt and architect our solutions accordingly. And we have, again, access to amazing technology, micro processes. Is just amazing what they can do today even compared to five years ago. And that enables new initiatives like artificial intelligence through machine learning and things of that nature. You need GPU technology , that specialized microprocessors and companies like AMD, like I said that are enabling organizations to go down that path faster, right? While saving energy, footprint and everything that we've been talking about. So those are some of the outcomes that I see >> Hey, Dan, listening to you talk, I can't help but think this is not a stretch for NTH right? Although, you know, terms like sustainability and reducing carbon footprint might be, you know more in vogue, the type of solutions that you've been architecting for customers your approach, dates back decades, and you don't have to change a lot. You just have new kind of toys to play with and new compelling offerings from great vendors like HPE to position to your customers. But it's not a big change in what you need to do. >> We're blessed from that perspective that's how our founders started the company. And we only, I think we go through a very extensive interview process to make sure that there will be a fit both ways. We want our new team members to get to know the the rest of the team before they actually make the decision. We are very proud as well, Terry, Lisa and John, that our tenure here at NTH is probably well over a decade. People get here and they really value how we help organizations through our dedicated work, providing again, leading edge technology solutions and the results that they see in our own organizations where we have made many friends in the industry because they had a problem, right? Or they had a very challenging initiative for their organization and we work together and the outcome there is something that they're very, very proud of. So you're right, Terry, we've been doing this for a long time. We're also very happy again with programs like the HPE GreenLake. We were already doing optimized solutions but with something like GreenLake is helping us save more energy consumption from the very beginning by allowing organizations to pay for only what they need with a little bit of buffer that we talked about. So what we've been doing since 1991 combined with a program like GreenLake I think is going to help us even better with our social corporate responsibility. >> I think what you guys have all articulated beautifully in the last 20 minutes is how strategic and interwoven the partnerships between HP, AMD and NTH is what your enabling customers to achieve those outcomes. 
What you're also doing internally to do things like reduce waste, reduce carbon emissions, and ensure that your employees are proud of who they're working for. Those are all fantastic guys. I wish we had more time cause I know we are just scratching the surface here. We appreciate everything that you shared with respect to sustainable IT and what you're enabling the end user customer to achieve. >> Thank you, Lisa. >> Thanks. >> Thank you. >> My pleasure. From my guests, I'm Lisa Martin. In a moment, Dave Vellante will return to give you some closing thoughts on sustainable IT You're watching theCUBE. the leader in high tech enterprise coverage.
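Dan's earlier example of consolidating roughly 30 servers down to 14 translates into energy and cost figures that are easy to estimate. The wattage, PUE, and electricity rate in the sketch below are assumptions chosen only for illustration; actual savings depend on the specific systems and facility.

# Back-of-the-envelope savings for consolidating 30 servers down to 14.
AVG_WATTS_PER_SERVER = 450   # assumed average draw under typical load
PUE = 1.6                    # assumed data center power usage effectiveness
RATE_PER_KWH = 0.15          # assumed electricity cost in $/kWh
HOURS_PER_YEAR = 24 * 365

def annual_energy_and_cost(server_count):
    kwh = server_count * AVG_WATTS_PER_SERVER / 1000 * PUE * HOURS_PER_YEAR
    return kwh, kwh * RATE_PER_KWH

before_kwh, before_cost = annual_energy_and_cost(30)
after_kwh, after_cost = annual_energy_and_cost(14)
print(f"energy saved: {before_kwh - after_kwh:,.0f} kWh/year")
print(f"cost saved:   ${before_cost - after_cost:,.0f}/year")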

Published Date : Sep 15 2022


AMD Oracle Partnership Elevates MySQL HeatWave


 

(upbeat music) >> For those of you who've been following the cloud database space, you know that MySQL HeatWave has been on a technology tear over the last 24 months with Oracle claiming record breaking benchmarks relative to other database platforms. So far, those benchmarks remain industry leading as competitors have chosen not to respond, perhaps because they don't feel the need to, or maybe they don't feel that doing so would serve their interest. Regardless, the HeatWave team at Oracle has been very aggressive about its performance claims, making lots of noise, challenging the competition to respond, publishing their scripts to GitHub. But so far, there are no takers, but customers seem to be picking up on these moves by Oracle and it's likely the performance numbers resonate with them. Now, the other area we want to explore, which we haven't thus far, is the engine behind HeatWave and that is AMD. AMD's EPYC processors have been the powerhouse on OCI, running MySQL HeatWave since day one. And today we're going to explore how these two technology companies are working together to deliver these performance gains and some compelling TCO metrics. In fact, a recent Wikibon analysis from senior analyst Marc Staimer made some TCO comparisons in OLAP workloads relative to AWS, Snowflake, GCP, and Azure databases; you can find that research on wikibon.com. And with that, let me introduce today's guests, Nipun Agarwal, senior vice president of MySQL HeatWave, and Kumaran Siva, who's the corporate vice president for strategic business development at AMD. Welcome to theCUBE, gentlemen. >> Welcome. Thank you. >> Thank you, Dave. >> Hey Nipun, you and I have talked a lot about this. You've been on theCUBE a number of times talking about MySQL HeatWave. But for viewers who may not have seen those episodes, maybe you could give us an overview of HeatWave and how it's different from competitive cloud database offerings. >> Sure. So MySQL HeatWave is a fully managed MySQL database service offering from Oracle. It's a single database, which can be used to run transactional processing, analytics and machine learning workloads. So, in the past, MySQL has been designed and optimized for transaction processing. So customers of MySQL, when they had to run analytics or machine learning, would need to extract the data out of MySQL, into some other database or service, to run analytics or machine learning. MySQL HeatWave offers a single database for running all kinds of workloads, so customers don't need to extract data into some other database. In addition to having a single database, MySQL HeatWave is also very performant compared to other databases and it is also very price competitive. So the advantages are: single database, very performant, and very good price performance. >> Yes. And you've published some pretty impressive price performance numbers against competitors. Maybe you could describe those benchmarks and highlight some of the results, please. >> Sure. So one thing to note is that the performance of any database is going to, like, vary; the performance advantage is going to vary based on the size of the data and the specific workloads, so the mileage varies, that's the first thing to know. So what we have done is, we have published multiple benchmarks. So we have benchmarks on TPC-H or TPC-DS and we have benchmarks on different data sizes, because based on the customer's workload, the mileage is going to vary, so we want to give customers a broad range of comparisons so that they can decide for themselves.
So in a specific case, where we are running on a 30 terabyte TPC-H workload, HeatWave is about 18 times better price performance compared to Redshift. 18 times better compared to Redshift, about 33 times better price performance compared to Snowflake, and 42 times better price performance compared to Google BigQuery. So, this is on 30 terabyte TPC-H. Now, if the data size is different, or the workload is different, the characteristics may vary slightly, but this is just to give a flavor of the kind of performance advantage MySQL HeatWave offers. >> And then my last question before we bring in Kumaran. We've talked about the secret sauce being the tight integration between hardware and software, but would you add anything to that? What is that secret sauce in HeatWave that enables you to achieve these performance results and what does it mean for customers? >> So there are three parts to this. One is HeatWave has been designed with a scale out architecture in mind. So we have invented and implemented new algorithms for scale out query processing for analytics. The second aspect is that HeatWave has been really optimized for cloud, commodity cloud, and that's where AMD comes in. So for instance, many of the partitioning schemes we have for processing in HeatWave, we optimize them for the L3 cache of the AMD processor. The thing which is very important to our customers is not just the sheer performance but the price performance, and that's where we have had a very good partnership with AMD, because not only does AMD help us provide very good performance, but the price performance, right? And all these numbers which I was showing, a big part of it is because we are running on AMD, which provides very good price performance. So that's the second aspect. And the third aspect is MySQL Autopilot, which provides machine learning based automation. So it's really these three things, a combination of new algorithms designed for scale out query processing, optimized for commodity cloud hardware, specifically AMD processors, and third, MySQL Autopilot, which gives us this performance advantage. >> Great, thank you. So that's a good segue for AMD and Kumaran. So Kumaran, what is AMD bringing to the table? What are, like, for instance, the relevant specs of the chips that are used in Oracle Cloud Infrastructure and what makes them unique? >> Yeah, thanks Dave. That's a good question. So, OCI is a great customer of ours. They use what we call the top of stack devices, meaning that they have the highest core count and they are also very, very fast cores. So these are currently Zen 3 cores. I think the HeatWave product is right now deployed on Zen 2 but will shortly also be on the Zen 3 core as well. But we provide, in the case of OCI, 64 cores. So those are the largest devices that we build. What actually happens is, because of this large number of CPUs in a single package, and therefore increasing the density of the node, you end up with this fantastic TCO equation, and the cost per performance for deployed services like HeatWave actually ends up being extraordinarily competitive, and that's a big part of the contribution that we're bringing in here. >> So Zen is the AMD micro architecture which you introduced, I think in 2017, and it's the basis for EPYC, which is sort of the enterprise grade that you really attacked the enterprise with. Maybe you could elaborate a little bit, double click on how your chips contribute specifically to HeatWave's price performance results. >> Yeah, absolutely.
So in the case of HeatWave, as Nipun alluded to, we have very large L3 caches, right? So in our very, very top end parts, like the Milan-X devices, we can go all the way up to, like, 768 megabytes of L3 cache. And that gives you just enormous performance and performance gains. And that's part of what we're seeing with HeatWave today, and note that they're currently on the second generation Rome based product, the 7002 series product line running with the 64 cores. But as time goes on, they'll be adopting the next generation Milan as well. And the other part of it too is, as our chiplet architecture has evolved, you know, from the first generation Naples way back in 2017, we went from having multiple memory domains and a sort of NUMA architecture at the time; today we've really optimized that architecture. We use a common I/O die that has all of the memory channels attached to it. And what that means is that these scale out applications like HeatWave are able to really scale very efficiently as they go from a small domain of CPUs to, for example, the entire chip, all 64 cores. That scaling has been a key focus for AMD, and being able to design and build architectures that can take advantage of that, and then have applications like HeatWave that scale so well on it, has been a key aim of ours. >> And Gen 3 moving up the Italian countryside. Nipun, you've taken the somewhat unusual step of posting the benchmark parameters, making them public on GitHub. Now, HeatWave is relatively new. So people felt that when Oracle gained ownership of MySQL it would let it wilt on the vine in favor of Oracle database, so you lost some ground, and now you're getting very aggressive with HeatWave. What's the reason for publishing those benchmark parameters on GitHub? >> So, the main reason for us to publish price performance numbers for HeatWave is to communicate to our customers a sense of what are the benefits they're going to get when they use HeatWave. But we want to be very transparent, because as I said the performance advantages for the customers may vary, based on the data size, based on the specific workloads. So one of the reasons for us to publish all these scripts on GitHub is for transparency. So we want customers to take a look at the scripts, know what we have done, and be confident that we stand by the numbers which we are publishing, and they're very welcome to try these numbers themselves. In fact, we have had customers who have downloaded the scripts from GitHub and run them on our service to kind of validate. The second aspect is, in some cases there may be some deviations from what we are publishing versus what the customer would like to run in their production deployments, so it provides an easy way for customers to take the scripts, modify them in some ways which may suit their real world scenario, and run them to see what the performance advantages are. So that's the main reason: first is transparency, so the customers can see what we are doing for the comparison, and B, if they want to modify it to suit their needs and then see what the performance of HeatWave is, they're very welcome to do so. >> So have customers done that? Have they taken the benchmarks? And I mean, if I were a competitor, honestly, I wouldn't get into that food fight because of the impressive performance, but unless I had to, I mean, have customers picked up on that, Nipun? >> Absolutely.
In fact, we have had many customers who have benchmarked the performance of MySQL HeatWave with other services. And the fact that the scripts are available gives them a very good starting point, and then they've also tweaked those queries in some cases, to see what the delta would be. And in some cases, customers got back to us saying, hey, the performance advantage of HeatWave is actually slightly higher than what was published, and what is the reason. And the reason was, when the customers were trying, they were trying on the latest version of the service, and our benchmark results were posted, let's say, two months back. So the service had improved in those two to three months and customers actually saw better performance. So yes, absolutely. We have seen customers download the scripts, try them and also modify them to some extent and then do the comparison of HeatWave with other services. >> Interesting. Maybe a question for both of you: how is the competition responding to this? They haven't said, "Hey, we're going to come up with our own benchmarks," which is very common, you oftentimes see that. Although, for instance, Snowflake hasn't responded to Databricks, so that's not their game, but if the customers are actually putting a lot of faith in the benchmarks and actually using that for buying decisions, then it's inevitable. But how have you seen the competition respond to the MySQL HeatWave and AMD combo? >> So maybe I can take the first crack at that from the database service standpoint. When customers have more choice, it is invariably advantageous for the customer, because then the competition is going to react, right? So the way we have seen the reaction is that we do believe that the other database services are going to keep a closer eye on the price performance, right? Because if you're offering such good price performance, the vendors are already looking at it. And, you know, there are instances where they have offered, let's say, discounts to the customers, to kind of at least, like, close the gap to some extent. And the second thing would be in terms of the capability. So like one of the things which I should have mentioned even early on is that not only does MySQL HeatWave on AMD provide very good price performance, say on like a small cluster, but it's all the way up to a cluster size of 64 nodes, which has about 1000 cores. So the point is that HeatWave performs very well, both on a small system as well as a huge scale out. And this is, again, one of those things which is a differentiation compared to other services, so we expect that even other database services will have to improve their offerings to provide the same good scale factor, which customers are now starting to expect with MySQL HeatWave. >> Kumaran, anything you'd add to that? I mean, you guys are an arms dealer, you love all your OEMs, but at the same time, you've got chip competitors, silicon competitors. How do you see the competitive-- >> I'd say the broader answer and the big picture for AMD is, we're very maniacally focused on our customers, right? And OCI and Oracle are huge and important customers for us, and this particular use case is extremely interesting, both in that it takes advantage very well of our architecture and it pulls out some of the value that AMD brings. I think from a big picture standpoint, our aim is to execute, to build and bring out generations of CPUs, kind of, you know, do what we say and say... sorry, say what we do and do what we say.
And from that point of view, we're hitting the schedules that we say, and being able to bring out the latest technology and bring it in a TCO value proposition that generationally keeps OCI and HeatWave ahead. That's the crux of our partnership here. >> Yeah, the execution's been obvious for the last several years. Kumaran, staying with you, how would you characterize the collaboration between the AMD engineers and the HeatWave engineering team? How do you guys work together? >> No, I'd say we're in a very, very deep collaboration. So, there are a few aspects where we've actually been working together very closely on the code, being able to optimize for the large L3 cache that AMD has, to be able to take advantage of that, and then also to be able to take advantage of the scaling. So, you know, our architecture is chiplet based, so we have the CPU cores on what we call CCDs, and with the inter-CCD communication there are opportunities to optimize at an application level, and that's something we've been engaged with. In the broader engagement, we are going back now for multiple generations with OCI, and there's a lot of input that now kind of resonates in the product line itself. And so we value this very close collaboration with HeatWave and OCI. >> Yeah, and the cadence, Nipun, you and I have talked about this quite a bit. The cadence has been quite rapid. It's like this constant cycle; every couple of months I turn around, there is something new on HeatWave. But a question again for both of you: what new things do you think that organizations, customers, are going to be able to do with MySQL HeatWave if you could look out over the next 12 to 18 months? Is there anything you can share at this time about future collaborations? >> Right, look, 12 to 18 months is a long time. There's going to be a lot of innovation, a lot of new capabilities coming out in MySQL HeatWave. But even based on what we are currently offering, the trend we are seeing is that customers are bringing more classes of workloads. So we started off with OLTP for MySQL, then it went to analytics. Then we increased it to mixed workloads, and now we offer machine learning as well. So one is, we are seeing more and more classes of workloads come to MySQL HeatWave. And the second is the scale, the kind of data volumes people are using HeatWave for to process these mixed workloads, analytics, machine learning, OLTP, that's increasing. Now, along the way we are making it simpler to use, we are making it more cost effective to use. So for instance, last time when we talked, we had introduced this real time elasticity, and that's something which is a very, very popular feature because customers want the ability to be able to scale out or scale down very efficiently. That's something we provided. We provided support for compression. So all of these capabilities are making it more efficient for customers to run a larger part of their workloads on MySQL HeatWave, and we will continue to make it richer in the next 12 to 18 months. >> Thank you. Kumaran, anything you'd add to that? We'll give you the last word as we've got to wrap it. >> No, absolutely. So, you know, in the next 12 to 18 months we will have our Zen 4 CPUs out. So this could potentially go into the next generation of the OCI infrastructure. This would be with the Genoa and then Bergamo CPUs, taking us to 96 and 128 cores with 12 channels of DDR5.
This capability, you know, when applied to an application like HeatWave, you can see that it'll open up another order of magnitude potentially of use cases, right? And we're excited to see what customers can do with that. It certainly will make, kind of, this service, and the cloud in general, this cloud migration, I think, even more attractive. So we're pretty excited to see how things evolve in this period of time. >> Yeah, the innovations are coming together. Guys, thanks so much, we've got to leave it there, really appreciate your time. >> Thank you. >> All right, and thank you for watching this special Cube conversation. This is Dave Vellante, and we'll see you next time. (soft calm music)
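The cache-aware partitioning that Nipun and Kumaran describe, sizing HeatWave's data partitions to the AMD processor's large L3 cache, can be illustrated conceptually. The sketch below is not HeatWave's implementation; the cache-size constant, row width, and toy scan are assumptions used only to show why chunking a columnar scan into cache-resident pieces helps.

import array

# Assumed L3 budget per compute domain; real parts differ (Milan-X offers up
# to 768 MB per socket), and a real engine would not claim the whole cache.
L3_BUDGET_BYTES = 256 * 1024 * 1024
ROW_BYTES = 8                                   # one 64-bit value per row here
ROWS_PER_PARTITION = L3_BUDGET_BYTES // ROW_BYTES

def scan_partitioned(column, predicate):
    """Scan the column partition by partition so each chunk stays cache resident."""
    hits = 0
    for start in range(0, len(column), ROWS_PER_PARTITION):
        chunk = column[start:start + ROWS_PER_PARTITION]
        hits += sum(1 for value in chunk if predicate(value))
    return hits

col = array.array("q", range(1_000_000))        # small stand-in column
print(scan_partitioned(col, lambda value: value % 97 == 0))

In a toy example this size the whole column already fits in cache; the partitioning only pays off once the data is much larger than the L3, which is exactly the regime the two engineering teams are tuning for, and it is one ingredient behind the price-performance ratios quoted in the interview.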

Published Date : Sep 14 2022


Oracle & AMD Partner to Power Exadata X9M


 

[Music]
>> The history of Exadata and the platform is really unique, and from my vantage point it started earlier this century as a skunkworks inside of Oracle called Project Sage, back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. I remember the Oracle HP Database Machine, which was announced at Oracle OpenWorld almost 15 years ago, and then Exadata kept evolving. After the Sun acquisition it became a platform that had tightly integrated hardware and software, and today Exadata keeps evolving, almost like a chameleon, to address more workloads and reach new performance levels. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure, and introduced the ability to run the Autonomous Database Service or the Exadata Database Service. Oracle often talks about what it calls stock exchange performance levels, kind of no description needed, and related capabilities. The company, as we know, is fond of putting out benchmarks and comparisons with previous generations of product, and sometimes competitive products, that underscore the progress being made with Exadata, such as 87 percent more IOPS, with metrics for latency measured in microseconds instead of milliseconds, and many other numbers that are industry-leading and compelling, especially for mission-critical workloads. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not the Eastern Pacific Yacht Club, for all you sailing buffs; rather, it stands for extreme performance yield computing, the enterprise-grade version of AMD's Zen architecture, which has been a linchpin of AMD's success in terms of penetrating enterprise markets. To focus on the innovations that AMD and Oracle are bringing to market, we have with us today Juan Loaiza, who's executive vice president of mission-critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on theCUBE in your first appearance. Thanks for coming on.
>> Yep, happy to be here. Thank you.
>> All right, Juan, let's start with you. You've been on theCUBE a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle Database; we've covered that extensively. What's different and unique, from your point of view, about Exadata Cloud Infrastructure X9M on OCI?
>> Yeah, so as you know, Exadata is designed top-down to be the best possible platform for database. It has a lot of unique capabilities: we make extensive use of RDMA, smart storage; we take advantage of everything we can in the leading hardware platforms. X9M is our next-generation platform, and it does exactly that. We always want to get all the best that we can from the available hardware that our partners like AMD produce. That's what X9M is: it's faster, more capacity, lower latency, more I/Os, pushing the limits of the hardware technology. We don't want the database software to be the limit; it should be the actual physical limits of the hardware, and that's what X9M is all about.
>> And why AMD chips in X9M?
>> Yeah, so we're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads, and it's really that simple. We just think the performance is outstanding in the product.
>> Yeah. Mark, your career is quite amazing. I've been around long enough to remember the transition to CMOS from emitter-coupled logic in the mainframe era, back when you were at IBM; that was an epic technology call at the time. I was of course steeped as an analyst at IDC in the PC era, and like many witnessed the tectonic shift that Apple's iPod and iPhone caused. And the timing of you joining AMD is quite important in my view, because it coincided with the year that PC volumes peaked and marked the beginning of what I call a stagflation period for x86. I could riff on history for hours, but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud?
>> Well, thanks, and it's really the basis of, I think, the great partnership that we have with Oracle on Exadata X9M: the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to x86, with a very strong roadmap that we've executed on schedule to our commitments, and this third generation does all of that. It uses a seven-nanometer CPU core that was designed to really bring throughput and really high efficiency to computing, and just deliver raw capabilities. So Exadata X9M is really leveraging all of that. It's implemented in up to 64 cores per socket, it's got anywhere from 128 to 168 PCIe Gen 4 I/O connectivity, so you can really attach all of the necessary infrastructure and storage that's needed for Exadata performance. And also memory: you have to feed the beast for those analytics and for the OLTP that Juan was talking about, so it has eight lanes of memory for high-performance DDR4. It's really a balanced processor, and it's implemented in a way to really optimize high performance. That is our whole focus at AMD; it's where we reset the company focus years ago. And again, it's great to see the super smart database team at Oracle really partner with us and understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor.
>> Yeah, it's been a pretty amazing 10 or 11 years for both companies. But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration?
>> Well, here's where the collaboration really comes to play. You think about a processor, and, I'll say, when Juan's team first looked at it, there are general benchmarks, and the benchmarks are impressive, but they're general benchmarks. They show, I'll say, the base processing capability. The partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. That's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, where there is tuning that we could do to really boost the performance above, I'll say, that baseline that you get in the generic benchmarks. And that's what the teams have done. So for instance, you look at optimizing latency to RDMA, you look at throughput, optimizing throughput on OLTP and database processing. You go through the workloads, you take the traces, you break it down, and you find the areas that are bottlenecking, and then you can adjust. We have thousands of parameters that can be adjusted for a given workload, and that, again, is the beauty of the partnership. We have the expertise in CPU engineering; the Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20 percent to 50 percent gains on specific workloads, which is really exciting to see.
>> Okay, so I want to follow up on that. Is that different from the competition? How are you driving customer value? You mentioned some percentage improvements; are you measuring primarily with latency? How do you look at that?
>> Well, we are differentiated in a number of factors. We bring a higher core density; we bring the highest core density certainly in x86. And moreover, where we've led the industry is how to scale those cores. We have a very high performance fabric that connects those cores together, so as a customer needs more cores (again, we scale anywhere from 8 to 64 cores), the trick is that as you add more cores, you want the scaling to be as close to linear as possible. That's a differentiation we have, and we enable that, again, with the balanced computer of CPU, I/O, and memory that we design. But the key is, we pride ourselves at AMD on being able to partner in a very deep fashion with our customers. We listen very well, and I think that's what we've had the opportunity to do with Juan and his team. We appreciate that, and that is how we got the kind of performance benefits that I described earlier. It's working together almost like one team and bringing that best possible capability to the end customers.
>> Great, thank you for that. Juan, I want to come back to you. Can both the Exadata Database Service and the Autonomous Database Service take advantage of the Exadata Cloud X9M capabilities that are in that platform?
>> Yeah, absolutely. Autonomous is basically our self-driving version of the Oracle Database, but fundamentally it is the same database core, so both of them will take advantage of the tremendous performance that we're getting now. When Mark talks about 64 cores, that's per chip; we have two chips, it's a two-socket server, so it's 128 cores, a 128-way processor. And then from our point of view there are two threads per core, so from the database point of view it's a 256-way processor. So there's a lot of raw performance there, and we've done a lot of work with the AMD team to make sure that we deliver that to our customers for all the different kinds of workloads, including OLTP and analytics, but also including our Autonomous Database. So yes, it absolutely takes advantage of it.
>> Now, Juan, you know I can't let you go without asking about the competition. I've written extensively about the big four hyperscale clouds, specifically AWS, Azure, Google, and Alibaba, and I know (don't hate me) it sometimes angers some of my friends at Oracle, IBM too, that I don't include you in that list. But I see Oracle specifically as different, and really the cloud for the most demanding applications and top-performance databases, and not the commodity cloud, which of course angers all my friends at those four companies. So I'm ticking everybody off. So how does Exadata Cloud Infrastructure X9M compare to the likes of AWS, Azure, Google, and other database cloud services in terms of OLTP and analytics: value, performance, cost, however you want to frame it?
>> Yeah, so our architecture is fundamentally different. We've architected our database for the scale-out environment. So for example, we've moved intelligence into the storage, we've put in remote direct memory access, we've put persistent memory into our product. We've done a lot of architectural changes that they haven't, and you're starting to see a little bit of that. If you look at some of the things that Amazon and Google are doing, they're starting to realize that, hey, if you're going to achieve good results, you really need to push some database processing into the storage. So they're taking baby steps toward that, roughly 15 years after we've had a product, and at some point they're going to realize you really need RDMA, you really need more direct access to those capabilities. So they're slowly getting there, but we're well ahead. And the way this is delivered is better availability, better performance, lower latency, higher IOPS. This is why our customers love our product: if you look at the global Fortune 100, over 90 percent of them are running Exadata today, and even in our cloud over 60 of the global 100 are running Exadata in the Oracle Cloud, because of all the differentiated benefits that they get from the product. So yeah, we're well ahead in the database space.
>> Mark, last question for you: how do you see this relationship evolving in the future? Can you share a little roadmap for the audience?
>> You bet. Well, first off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. Our current third-generation EPYC, which is really what we call our EPYC server offering, the 7003 third gen, is in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway and ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, it's going to have expanded memory capabilities, because there's CXL, Compute Express Link, that will expand even more memory opportunities, and I could go on. So that's the beauty of a deep partnership: it enables us to really take that learning going forward, it pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward.
>> Yeah, you guys have been obviously very forthcoming; you have to be, with Zen and EPYC. Juan, anything you'd like to add as closing comments?
>> Yeah, I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10, 15 years ago when multi-core processors came out, and then we were on that for a while, and then things started stagnating. But in the last two or three years, and AMD has been leading this, there's been a dramatic acceleration in innovation in this space. So it's very exciting to be part of this, and customers are getting a big benefit from it.
>> All right, gents, hey, thanks for coming back in theCUBE today. Really appreciate your time.
>> Thanks, glad to be here.
>> All right, thank you for watching this exclusive CUBE conversation. This is Dave Vellante from theCUBE, and we'll see you next time. [Music]
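The core-count arithmetic Juan walks through is worth making explicit. Below is a minimal illustrative sketch (Python) of the figures quoted in the conversation: two sockets, 64 cores per socket, and two SMT threads per core yield the 256-way processor the database sees. The helper name is ours, not Oracle's or AMD's.

```python
# Illustrative only: reproduces the core-count arithmetic quoted above.
def logical_cpus(sockets: int, cores_per_socket: int, threads_per_core: int) -> int:
    """Hardware threads visible to the database scheduler."""
    return sockets * cores_per_socket * threads_per_core

per_server = logical_cpus(sockets=2, cores_per_socket=64, threads_per_core=2)
print(per_server)  # 256, the "256-way processor" Juan refers to
```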

Published Date : Sep 13 2022

**Summary and Sentiment Analysis are not shown because of an improper transcript**

ENTITIES

Entity | Category | Confidence
20 percent | QUANTITY | 0.99+
juan loyza | PERSON | 0.99+
amd | ORGANIZATION | 0.99+
amazon | ORGANIZATION | 0.99+
8 | QUANTITY | 0.99+
256-way | QUANTITY | 0.99+
10 | QUANTITY | 0.99+
Oracle | ORGANIZATION | 0.99+
alibaba | ORGANIZATION | 0.99+
87 percent | QUANTITY | 0.99+
128 | QUANTITY | 0.99+
oracle | ORGANIZATION | 0.99+
two threads | QUANTITY | 0.99+
google | ORGANIZATION | 0.99+
11 years | QUANTITY | 0.99+
today | DATE | 0.99+
50 | QUANTITY | 0.99+
200 | QUANTITY | 0.99+
ipod | COMMERCIAL_ITEM | 0.99+
both | QUANTITY | 0.99+
two chips | QUANTITY | 0.99+
both companies | QUANTITY | 0.99+
10 | DATE | 0.98+
iphone | COMMERCIAL_ITEM | 0.98+
earlier this century | DATE | 0.98+
last april | DATE | 0.98+
third generation | QUANTITY | 0.98+
juan | PERSON | 0.98+
64 cores | QUANTITY | 0.98+
128-way | QUANTITY | 0.98+
two socket | QUANTITY | 0.98+
eight lanes | QUANTITY | 0.98+
aws | ORGANIZATION | 0.97+
AMD | ORGANIZATION | 0.97+
ios | TITLE | 0.97+
fourth gen | QUANTITY | 0.96+
168 pcie | QUANTITY | 0.96+
dave vellante | PERSON | 0.95+
third gen | QUANTITY | 0.94+
aws azure | ORGANIZATION | 0.94+
apple | ORGANIZATION | 0.94+
thousands of parameters | QUANTITY | 0.92+
years | DATE | 0.91+
15 years | QUANTITY | 0.9+
Power Exadata | ORGANIZATION | 0.9+
over 90 percent | QUANTITY | 0.89+
four companies | QUANTITY | 0.89+
first | QUANTITY | 0.88+
oci | ORGANIZATION | 0.87+
first appearance | QUANTITY | 0.85+
one team | QUANTITY | 0.84+
almost 15 years ago | DATE | 0.83+
seven nanometer | QUANTITY | 0.83+
last few years | DATE | 0.82+
one thing | QUANTITY | 0.82+
15 years ago | DATE | 0.82+
epyc | TITLE | 0.8+
over 60 | QUANTITY | 0.79+
amd produce | ORGANIZATION | 0.79+

Jason Collier, AMD | VMware Explore 2022


 

(upbeat music) >> Welcome back to San Francisco, "theCUBE" is live, our day two coverage of VMware Explore 2022 continues. Lisa Martin with Dave Nicholson. Dave and I are pleased to welcome Jason Collier, principal member of technical staff at AMD to the program. Jason, it's great to have you. >> Thank you, it's great to be here. >> So what's going on at AMD? I hear you have some juicy stuff to talk about. >> Oh, we've got a ton of juicy stuff to talk about. Clearly the Project Monterey announcement was big for us, so we've got that to talk about. Another thing that I really wanted to talk about was a tool that we created and we call it, it's the VMware Architecture Migration Tool, call it VAMT for short. It's a tool that we created and we worked together with VMware and some of their professional services crew to actually develop this tool. And it is also an open source based tool. And really the primary purpose is to easily enable you to move from one CPU architecture to another CPU architecture, and do that in a cold migration fashion. >> So we're probably not talking about CPUs from Tandy, Radio Shack systems, likely this would be what we might refer to as other X86 systems. >> Other X86 systems is a good way to refer to it. >> So it's interesting timing for the development and the release of a tool like this, because in this sort of X86 universe, there are players who have been delayed in terms of delivering their next gen stuff. My understanding is AMD has been public with the idea that they're on track for by the end of the year, Genoa, next gen architecture. So can you imagine a situation where someone has an existing set of infrastructure and they're like, hey, you know what I want to get on board, the AMD train, is this something they can use from the VMware environment? >> Absolutely, and when you think about- >> Tell us exactly what that would look like, walk us through 100 servers, VMware, 1000 VMs, just to make the math easy. What do you do? How does it work? >> So one, there's several things that the tool can do, we actually went through, the design process was quite extensive on this. And we went through all of the planning phases that you need to go through to do these VM migrations. Now this has to be a cold migration, it's not a live migration. You can't do that between the CPU architectures. But what we do is you create a list of all of the virtual machines that you want to migrate. So we take this CSV file, we import this CSV file, and we ask for things like, okay, what's the name? Where do you want to migrate it to? So from one cluster to another, what do you want to migrate it to? What are the networks that you want to move it to? And then the storage platform. So we can move storage, it could either be shared storage, or we could move say from VSAN to VSAN, however you want to set it up. So it will do those storage migrations as well. And then what happens is it's actually going to go through, it's going to shut down the VM, it's going to take a snapshot, it is going to then basically move the compute and/or storage resources over. And once it does that, it's going to power 'em back up. And it's going to check, we've got some validation tools, where it's going to make sure VM Tools comes back up where everything is copacetic, it didn't blue screen or anything like that. And once it comes back up, then everything's good, it moves onto the next one. Now a couple of things that we've got feature wise, we built into it. You can parallelize these tasks. 
So you can say, how many of these machines do you want to do at any given time? So it could be, say 10 machines, 50 machines, 100 machines at a time, that you want to go through and do this move. Now, if it did blue screen, it will actually roll it back to that snapshot on the origin cluster. So that there is some protection on that. A couple other things that are actually in there are things like audit tracking. So we do full audit logging on this stuff, we take a snapshot, there's basically kind of an audit trail of what happens. There's also full logging, SYS logging, and then also we'll do email reporting. So you can say, run this and then shoot me a report when this is over. Now, one other cool thing is you can also actually define a change window. So I don't want to do this in the middle of the afternoon on a Tuesday. So I want to do this later at night, over the weekend, you can actually just queue this up, set it, schedule it, it'll run. You can also define how long you want that change window to be. And what it'll do, it'll do as many as it can, then it'll effectively stop, finish up, clean up the tasks and then send you a report on what all was successfully moved. >> Okay, I'm going to go down the rabbit hole a little bit on this, 'cause I think it's important. And if I say something incorrect, you correct me. >> No problem. >> In terms of my technical understanding. >> I got you. >> So you've got a VM, essentially a virtual machine typically will consist of an entire operating system within that virtual machine. So there's a construct that containerizes, if you will, the operating system, what is the difference, where is the difference in the instruction set? Where does it lie? Is it in the OS' interaction with the CPU or is it between the construct that is the sort of wrapper around the VM that is the difference? >> It's really primarily the OS, right? And we've not really had too many issues doing this and most of the time, what is going to happen, that OS is going to boot up, it's going to recognize the architecture that it's on, it's going to see the underlying architecture, and boot up. All the major operating systems that we test worked fine. I mean, typically they're going to work on all the X86 platforms. But there might be instruction sets that are kind of enabled in one architecture that may not be in another architecture. >> And you're looking for that during this process. >> Well usually the OS itself is going to kind of detect that. So if it pops up, the one thing that is kind of a caution that you need to look for. If you've got an application that's explicitly using an instruction set that's on one CPU vendor and not the other CPU vendor. That's the one thing where you're probably going to see some application differences. That said, it'll probably be compatible, but you may not get that instruction set advantage in it. >> But this tool remediates against that. >> Yeah, and what we do, we're actually using VM Tools itself to go through and validate a lot of those components. So we'll look and make sure VM Tools is enabled in the first place, on the source system. And then when it gets to the destination system, we also look at VM Tools to see what is and what is not enabled. >> Okay, I'm going to put you on the spot here. What's the zinger, where doesn't it work? You already said cold, we understand, you can schedule for cold migrations, that's not a zinger. What's the zinger, where doesn't it work? >> It doesn't work like, live migrations just don't work. 
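To make the cold-migration loop Jason has just described concrete (power the VM off, snapshot it, relocate compute and storage, power it back on, then validate), here is a minimal hypothetical sketch using the pyVmomi SDK. It is not the actual VAMT code, which is PowerShell/PowerCLI based and published in VMware's samples repo; the CSV column names, vCenter host, and credentials are illustrative assumptions.

```python
# Hypothetical sketch of the cold-migration loop described above: power off,
# snapshot, relocate compute/storage, power back on. This is NOT the actual
# VAMT tool (which is PowerShell/PowerCLI based); CSV columns, host names, and
# credentials are illustrative assumptions, and SSL/error handling is omitted.
import csv
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def wait_for(task):
    # Poll a vSphere task until it finishes; raise if it failed.
    while task.info.state not in (vim.TaskInfo.State.success, vim.TaskInfo.State.error):
        time.sleep(2)
    if task.info.state == vim.TaskInfo.State.error:
        raise task.info.error

def find_by_name(content, vim_type, name):
    # First inventory object of the given type with a matching name
    # (container view cleanup omitted for brevity).
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    return next(obj for obj in view.view if obj.name == name)

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local", pwd="***")
content = si.RetrieveContent()

with open("migration_list.csv") as f:  # assumed columns: vm_name, target_host, target_datastore
    for row in csv.DictReader(f):
        vm = find_by_name(content, vim.VirtualMachine, row["vm_name"])
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn:
            wait_for(vm.PowerOffVM_Task())            # cold migration: guest goes down
        wait_for(vm.CreateSnapshot_Task("pre-migration", "rollback point", False, False))
        spec = vim.vm.RelocateSpec(
            host=find_by_name(content, vim.HostSystem, row["target_host"]),
            datastore=find_by_name(content, vim.Datastore, row["target_datastore"]))
        wait_for(vm.RelocateVM_Task(spec))            # move compute and/or storage
        wait_for(vm.PowerOnVM_Task())                 # bring it back; then validate VM Tools health

Disconnect(si)
```

A real tool, as Jason notes, would add parallelism, change windows, snapshot rollback on failure, and audit logging on top of this basic loop.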
>> No live, okay, okay, no live. What about something else? What's the, oh, you've got that version, you've got that version of X86 architecture, it won't work, anything? >> A majority of those cases work, where it would fail, where it's going to kick back and say, hey, VM Tools is not installed. So where you would see this is if you're running a virtual appliance from some vendor, like insert vendor here that say, got a firewall, or got something like that, and they don't have VM Tools enabled. It's going to fail it out of the gate, and say, hey, VM Tools is not on this, you might want to manually do it. >> But you can figure out how to fix that? >> You can figure out how to do that. You can also, and there's a flag in there, so in kind of the options that you give it, you say, ignore VM Tools, don't care, move it anyway. So if you've got, let's say, some VMs that are in there, but they're not a priority VM, then it's going to migrate just fine. >> Got it. >> Can you elaborate a little bit on the joint development work that AMD and VMware are doing together and the value in it for customers? >> Yeah, so it's one of those things we worked with VMware to basically produce this open source tool. So we did a lot of the core component and design and we actually engaged VMware Professional Services. And a big shout out to Austin Browder. He helped us a ton in this project specifically. And we basically worked, we created this, kind of co-designed, what it was going to look like. And then jointly worked together on the coding, of pulling this thing together. And then after that, and this is actually posted up on VMware's public repos now in GitHub. So you can go to GitHub, you can go to the VMware samples code, and you can download this thing that we've created. And it's really built to help ease migrations from one architecture to another. So if you're looking for a big data center move and you got a bunch of VMs to move. I mean, even if it's same architecture to same architecture, it's definitely going to ease the pain of going through and doing a migration of, it's one thing when you're doing 10 machines, but when you're doing 10,000 virtual machines, that's a different story. It gets to be quite operationally inefficient. >> I lose track after three. >> Yeah. >> So I'm good for three, not four. >> I was going to ask you what your target market segment is here. Expand on that a little bit and talk to me about who you're working with and those organizations. >> So really this is targeted toward organizations that have large deployments in enterprise, but also I think this is a big play with channel partners as well. So folks out there in the channel that are doing these migrations and they do a lot of these, when you're thinking about the small and mid-size organizations, it's a great fit for that. Especially if they're kind of doing that upgrade, the lift and shift upgrade, from here's where you've been five to seven years on an architecture and you want to move to a new architecture. This is really going to help. And this is not a point and click GUI kind of thing. It's command line driven, it's using PowerShell, we're using PowerCLI to do the majority of this work. And for channel partners, this is an excellent opportunity to put the value and the value add in as a VAR. And there's a lot of opportunity for, I think, channel partners to really go and take this. And once again, being open source. 
We expect this to be extensible, we want the community to contribute and put back into this to basically help grow it and make it a more useful tool for doing these cold migrations between CPU architectures. >> Have you seen any in the last couple of years of dynamics, obviously across the world, any industries in particular that are really leading edge for what you guys are doing? >> Yeah, that's really, really interesting. I mean, we've seen it, it's honestly been a very horizontal problem, pretty much across all vertical markets. I mean, we've seen it in financial services, we've seen it in, honestly, pretty much across the board. Manufacturing, financial services, healthcare, we have seen kind of a strong interest in that. And then also we we've actually taken this and presented this to some of our channel partners as well. And there's been a lot of interest in it. I think we presented it to about 30 different channel partners, a couple of weeks back about this. And I got contact from 30 different channel partners that said they're interested in basically helping us work on it. >> Tagging on to Lisa's question, do you have visibility into the AMD thought process around the timing of your next gen release versus others that are competitors in the marketplace? How you might leverage that in terms of programs where partners are going out and saying, hey, perfect time, you need a refresh, perfect time to look at AMD, if you haven't looked at them recently. Do you have any insight into that in what's going on? I know you're focused on this area. But what are your thoughts on, well, what's the buzz? What's the buzz inside AMD on that? >> Well, when you look overall, if you look at the Gartner Hype Cycle, when VMware was being broadly adopted, when VMware was being broadly adopted, I'm going to be blunt, and I'm going to be honest right here, AMD didn't have a horse in the race. And the majority of those VMware deployments we see are not running on AMD. Now that said, there's an extreme interest in the fact that we've got these very cored in systems that are now coming up on, now you're at that five to seven year refresh window of pulling in new hardware. And we have extremely attractive hardware when it comes to running virtualized workloads. The test cluster that I'm running at home, I've got that five to seven year old gear, and I've got some of the, even just the Milan systems that we've got. And I've got three nodes of another architecture going onto AMD. And when I got these three nodes completely maxed to the number of VMs that I can run on 'em, I'm at a quarter of the capacity of what I'm putting on the new stuff. So what you get is, I mean, we worked the numbers, and it's definitely, it's like a 30% decrease in the amount of resources that you need. >> That's a compelling number. >> It's a compelling number. >> 5%, 10%, nobody's going to do anything for that. You talk 30%. >> 30%. It's meaningful, it's meaningful. Now you you're out of Austin, right? >> Yes. >> So first thing I thought of when you talk about running clusters in your home is the cost of electricity, but you're okay. >> I'm okay. >> You don't live here, you don't live here, you don't need to worry about that. >> I'm okay. >> Do you have a favorite customer example that you think really articulates the value of AMD when you're in customer conversations and they go, why AMD and you hit back with this? >> Yeah. 
Actually it's funny because I had a conversation like that last night, kind of random person I met later on in the evening. We were going through this discussion and they were facing exactly this problem. They had that five to seven year infrastructure. It's funny, because the guy was a gamer too, and he's like, man, I've always been a big AMD fan, I love the CPUs all the way since back in basically the Opterons and Athlons right. He's like, I've always loved the AMD systems, loved the graphics cards. And now with what we're doing with Ryzen and all that stuff. He's always been a big AMD fan. He's like, and I'm going through doing my infrastructure refresh. And I told him, I'm just like, well, hey, talk to your VAR and have 'em plug some AMD SKUs in there from the Dells, HPs and Lenovos. And then we've got this tool to basically help make that migration easier on you. And so once we had that discussion and it was great, then he swung by the booth today and I was able to just go over, hey, this is the tool, this is how you use it, here's all the info. Call me if you need any help. >> Yeah, when we were talking earlier, we learned that you were at Scale. So what are you liking about AMD? How does that relate? >> The funny thing is this is actually the first time in my career that I've actually had a job where I didn't work for myself. I've been doing venture backed startups the last 25 years and we've raised couple hundred million dollars worth of investment over the years. And so one, I figured, here I am going to AMD, a larger corporation. I'm just like, am I going to be able to make it a year? And I have been here longer than a year and I absolutely love it. The culture at AMD is amazing. We still have that really, I mean, almost it's like that underdog mentality within the organization. And the team that I'm working with is a phenomenal team. And it's actually, our EVP and our Corp VP, were actually my executive sponsors, we were at a prior company. They were one of my executive sponsors when I was at Scale. And so my now VP boss calls me up and says, hey, I'm putting a band together, are you interested? And I was kind of enjoying a semi-retirement lifestyle. And then I'm just like, man, because it's you, yes, I am interested. And the group that we're in, the work that we're doing, the way that we're really focusing on forward looking things that are affecting the data center, what's going to be the data center like three to five years from now. It's exciting, and I am having a blast, I'm having the time of my life. I absolutely love it. >> Well, that relationship and the trust that you will have with each other, that bleeds into the customer conversations, the partner conversations, the employee conversations, it's all inextricably linked. >> Yes it is. >> And we want to know, you said three to five years out, like what? Like what? Just general futurist stuff, where do you think this is going. >> Well, it's interesting. >> So moon collides with the earth in 2025, we already know that. >> So we dialed this back to the Pensando acquisition. When you look at the Pensando acquisition and you look at basically where data centers are today, but then you look at where basically the big hyperscalers are. You look at an AWS, you look at their architecture, you specifically wrap Nitro around that, that's a very different architecture than what's being run in the data center. 
And when you look at what Pensando does, that's a lot of starting to bring what these real clouds out there, what these big hyperscalers are running into the grasps of the data center. And so I think you're going to see a fundamental shift. The next 10 years are going to be exciting because the way you look at a data center now, when you think of what CPUs do, what shared storage, how the networking is all set up, it ain't going to look the same. >> Okay, so the competing vision with that, to play devil's advocate, would be DPUs are kind of expensive. Why don't we just use NICs, give 'em some more bandwidth, and use the cheapest stuff. That's the competing vision. >> That could be. >> Or the alternative vision, and I imagine everything else we've experienced in our careers, they will run in parallel paths, fit for function. >> Well, parallel paths always exist, right? Otherwise, 'cause you know how many times you've heard mainframe's dead, tape's dead, spinning disk is dead. None of 'em dead, right? The reality is you get to a point within an industry where it basically goes from instead of a growth curve like that, it goes to a growth curve of like that, it's pretty flat. So from a revenue growth perspective, I don't think you're going to see the revenue growth there. I think you're going to see the revenue growth in DPUs. And when you actually take, they may be expensive now, but you look at what Monterey's doing and you look at the way that those DPUs are getting integrated in at the OEM level. It's going to be a part of it. You're going to order your VxRail and VSAN style boxes, they're going to come with them. It's going to be an integrated component. Because when you start to offload things off the CPU, you've driven your overall utilization up. When you don't have to process NSX on basically the X86, you've just freed up cores and a considerable amount of them. And you've also moved that to where there's a more intelligent place for that pack to be processed right, out here on this edge. 'Cause you know what, that might not need to go into the host bus at all. So you have just alleviated any transfers over a PCI bus, over the PCI lanes, into DRAM, all of these components, when you're like, but all to come with, oh, that bit needs to be on this other machine. So now it's coming in and it's making that decision there. And then you take and integrate that into things like the Aruba Smart Switch, that's running the Pensando technology. So now you got top of rack that is already making those intelligent routing decisions on where packets really need to go. >> Jason, thank you so much for joining us. I know you guys could keep talking. >> No, I was going to say, you're going to have to come back. You're going to have to come back. >> We've just started to peel the layers of the onion, but we really appreciate you coming by the show, talking about what AMD and VMware are doing, what you're enabling customers to achieve. Sounds like there's a lot of tailwind behind you. That's awesome. >> Yeah. >> Great stuff, thank you. >> It's a great time to be at AMD, I can tell you that. >> Oh, that's good to hear, we like it. Well, thank you again for joining us, we appreciate it. For our guest and Dave Nicholson, I'm Lisa Martin. You're watching "theCUBE Live" from San Francisco, VMware Explore 2022. We'll be back with our next guest in just a minute. (upbeat music)

Published Date : Aug 31 2022


ENTITIES

Entity | Category | Confidence
Dave Nicholson | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Jason Collier | PERSON | 0.99+
Dave Nicholson | PERSON | 0.99+
Lisa | PERSON | 0.99+
50 machines | QUANTITY | 0.99+
10 machines | QUANTITY | 0.99+
Jason | PERSON | 0.99+
10 machines | QUANTITY | 0.99+
100 machines | QUANTITY | 0.99+
Dave | PERSON | 0.99+
AMD | ORGANIZATION | 0.99+
Austin | LOCATION | 0.99+
San Francisco | LOCATION | 0.99+
San Francisco | LOCATION | 0.99+
five | QUANTITY | 0.99+
three | QUANTITY | 0.99+
100 servers | QUANTITY | 0.99+
seven year | QUANTITY | 0.99+
theCUBE Live | TITLE | 0.99+
10,000 virtual machines | QUANTITY | 0.99+
Lenovos | ORGANIZATION | 0.99+
30% | QUANTITY | 0.99+
2025 | DATE | 0.99+
AWS | ORGANIZATION | 0.99+
four | QUANTITY | 0.99+
one | QUANTITY | 0.99+
10% | QUANTITY | 0.99+
30 different channel partners | QUANTITY | 0.99+
five years | QUANTITY | 0.99+
earth | LOCATION | 0.99+
5% | QUANTITY | 0.99+
1000 VMs | QUANTITY | 0.99+
Dells | ORGANIZATION | 0.99+
GitHub | ORGANIZATION | 0.99+
seven years | QUANTITY | 0.98+
Austin Browder | PERSON | 0.98+
a year | QUANTITY | 0.98+
Tandy | ORGANIZATION | 0.98+
Radio Shack | ORGANIZATION | 0.98+
VMware | ORGANIZATION | 0.98+
Monterey | ORGANIZATION | 0.98+
today | DATE | 0.97+
HPs | ORGANIZATION | 0.97+
first time | QUANTITY | 0.97+
Tuesday | DATE | 0.97+
Scale | ORGANIZATION | 0.97+
VM Tools | TITLE | 0.97+
one thing | QUANTITY | 0.96+
last night | DATE | 0.96+
about 30 different channel partners | QUANTITY | 0.95+
first | QUANTITY | 0.95+
Athlons | COMMERCIAL_ITEM | 0.95+
VxRail | COMMERCIAL_ITEM | 0.95+
X86 | TITLE | 0.94+
Pensando | ORGANIZATION | 0.94+
VMware Explore 2022 | TITLE | 0.94+
Ryzen | COMMERCIAL_ITEM | 0.94+
five years | QUANTITY | 0.93+

Kumaran Siva, AMD | VMware Explore 2022


 

>> Good morning, everyone. Welcome to theCUBE's day two coverage of VMware Explore 2022, live from San Francisco. Lisa Martin here with Dave Nicholson. We're excited to kick off day two of great conversations with VMware partners, customers, its ecosystem. We've got a CUBE alumni back with us: Kumaran Siva, corporate VP of business development from AMD, joins us. Great to have you on the program in person.
>> Great to be here. Yes, in person indeed.
>> Welcome. So the great thing, yesterday, a lot of announcements, and AMD had an announcement with VMware, which we will unpack, but there's about 7,000 to 10,000 people here. People are excited, ready to be back, ready to be hearing from this community, which is so nice. Yesterday AMD announced it is optimizing the AMD Pensando distributed services card to run on VMware vSphere 8; vSphere 8 was announced yesterday. Tell us a little bit about that.
>> Yeah, no, absolutely. The Pensando SmartNIC DPU, what it allows you to do is it provides a whole bunch of capabilities, including offloads, including encryption, decryption; we can even do functions like compression. But with the combination of VMware Project Monterey and Pensando, what we're able to do is even do some of the vSphere actual offloads, integration of the hypervisor into the DPU card. It's pretty interesting and pretty powerful technology. We're pretty excited about it. I think this could potentially bring some of the cloud value, in terms of manageability, in terms of being able to take care of bare metal servers, and also better-secured infrastructure, cloud-like techniques, into the mainstream on-premises enterprise.
>> Okay. Talk a little bit about the DPU, the data processing unit. They talked about it on stage yesterday, but help me understand that versus the CPU, GPU.
>> Yeah, so it's a different point, right? Normally you'd have the CPU and what we call a dumb networking card. And I say dumb, but it's just designed to go process packets, put them onto PCI, and have the CPU do all of the packet processing, the virtual switching, all of those functions inside the CPU. What the DPU allows you to do is actually offload a bunch of those functions directly onto the DPU card. So it has a combination of these special-purpose processors that are programmable with the language called P4, which is one of the key things that Pensando brings. It's a real easy-to-program, easy-to-use kind of language, so some of our larger enterprise customers can actually go in and do some custom coding depending on what their network infrastructure looks like. But you can do things like the vSwitch in the DPU, not having to have all of that done on the CPU. So you free up some of the CPU cores, you make your infrastructure run more efficiently, but probably even more importantly, it provides you with greater security, greater separation between the networking side and the CPU side.
>> So that's a key point, because a lot of us remember the era of the TOE NIC, the TCP/IP offload engine NIC. This isn't simply offloading CPU cycles; this is actually providing a sort of isolation, so that the network has intelligence that is separate from the server. Is that absolutely key?
>> Yeah, that's a good way of looking at it. And if you look at some of the techniques used in the cloud, this in fact brings some of those technologies into the enterprise. So where you are wanting to have that level of separation and management, you're able to now utilize the DPU card. So that's a really big part of the value proposition: the manageability, not just offload, but kind of a better network for enterprise.
>> Right. Can you expand on that value proposition? If I'm a customer, what's in this for me? How does this help power my multi-cloud organization?
>> Yeah. So I think we actually have a number of these in real customer use cases today. Folks will use, for example, the compression and decompression; that's definitely an application on the storage side. But also, just as a DPU card in the mainstream general-purpose server infrastructure fleet, they're able to use the encryption and decryption to make sure that their infrastructure is kind of safe, from point to point within the network. So every connection there is actually encrypted, and managing those policies and orchestrating all of that, that's done on the DPU card.
>> So what you're saying is, if you have a DPU involved, then the server itself and the CPUs become completely irrelevant, and basically it's just a box of sheet metal at that point.
>> That's a good way of looking at that.
>> That's my segue into talking about the value proposition of the actual AMD CPUs.
>> No, absolutely. I think the CPUs are always going to be central in this. Having the DPU is extremely powerful, and it does allow you to have better infrastructure, but the key to having better infrastructure is to have the best CPU.
>> Well, tell us about that.
>> So this is where a lot of the great value proposition between VMware and AMD comes together. VMware really allows enterprises to take advantage of these high-core-count, really modern CPUs: our EPYC, especially our Milan, our 7003 product line. To be able to take advantage of 64 cores, VMware is critical for that. And so what they've been able to do is, for example, if you have workloads running on legacy, like five-year-old servers, you're able to take a whole bunch of those servers and consolidate down into a single node. And the power that VMware gives you is the manageability, the reliability; it brings all of those factors and allows you to take advantage of the latest-generation CPUs. We've actually done some TCO modeling where we can show that even if you have fully depreciated hardware, so it's like five years old plus, the actual cost, it's already been written off, but just the cost of running it in terms of the power and the administration, the OPEX costs that are associated with it, are greater than the cost of acquiring a new, smaller set of AMD servers. And being able to consolidate those workloads and run VMware to provide you with that great user experience, especially with vSphere 8.0 and the hooks that VMware has built in for AMD processors, you actually see a really good, just a great user experience. It's also more efficient; it's just better for the planet, and it's also better on the pocketbook, which is a really cool thing these days, cuz our value in TCO translates directly into a value in terms of sustainability. From energy consumption, just the cost of having that there, it's just a whole lot better.
>> Talk about, on the sustainability front, how AMD is helping its customers achieve their sustainability goals. And are you seeing more and more customers coming to you saying, we want to understand what AMD is doing for sustainability, because it's important for us to work with vendors who have a core focus on it?
>> Yeah, absolutely. Look, I'll be perfectly honest: when we first designed our CPU, we were just trying to build the biggest, baddest thing that comes out, in terms of having the largest number of cores and the best TCO for our customers. But what it's actually turned out is that TCO involves energy consumption, and it involves the whole process of bringing down a whole bunch of nodes, a whole bunch of servers. For example, we have one calculation where we showed that 27, I think, five-year-old servers can be consolidated down into five AMD servers. From that ratio you can see already huge gains in terms of sustainability. Now, you asked about the sustainability conversation: I'd say not a week goes by where I'm not having a conversation with a CTO or CIO who's got that as part of their corporate brand, and they want to find out how to make their infrastructure, their data center, more green. And so that's where we come in.
>> Yeah, and it's interesting, because at least in the US, money is also green. So when you talk about the cost of power, especially in places like California, there's a natural incentive to...
>> Drive in that direction.
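As a rough illustration of the consolidation math Kumaran just cited (27 roughly five-year-old servers collapsing into five current EPYC servers), the sketch below computes the ratio and the kind of power delta that drives the TCO and sustainability argument he and Dave are making. The wattage figures are placeholder assumptions for the example, not AMD or VMware data.

```python
# Rough consolidation/TCO illustration using the 27-to-5 ratio quoted above.
# Wattage figures are placeholder assumptions, not AMD data.
old_servers, new_servers = 27, 5
old_watts, new_watts = 400, 700          # assumed average draw per server

consolidation_ratio = old_servers / new_servers
old_power_kw = old_servers * old_watts / 1000
new_power_kw = new_servers * new_watts / 1000

print(f"consolidation ratio: {consolidation_ratio:.1f}:1")   # 5.4:1
print(f"estimated power: {old_power_kw:.1f} kW -> {new_power_kw:.1f} kW "
      f"({1 - new_power_kw / old_power_kw:.0%} reduction)")   # about a two-thirds cut
```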
>> Let's talk about security. The threat landscape has changed so dramatically in the last couple of years; ransomware is a household word. Ransomware attacks happen like one every 11 seconds, and older technology is a little bit more vulnerable to internal threats, external threats. How is AMD helping customers address the security front, which is the board-level conversation?
>> That's a great question. Look, I look at security as being a layered thing, right? If you talk to any security expert, there's not one component, and we are an ingredient within the greater scheme of things. A few things. One is we have partnered very closely with VMware. They have enabled our SEV technology, Secure Encrypted Virtualization, in vSphere, such that all of the memory transactions are protected. So you have security when you store on disks, you have security over the network, and you also have security in the compute; and when you go out to memory, that's what this SEV technology gives you. It gives you that security in your actual virtual machine as it's running. So we take security extremely seriously. One of the things, every generation that you see from AMD, and you have seen us hit our cadence, is we upgrade all of the security features and we address all of the known threats that are out there. Obviously the threats keep coming at us all the time, but our CPUs just get better and better from a security stance.
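For readers who want to verify the SEV capability Kumaran describes on their own AMD hosts, a quick host-side check on a generic Linux/KVM box looks like the sketch below. This is not a vSphere interface; the sysfs path is the conventional location of the kvm_amd module parameter on recent kernels, so treat it as an assumption for your particular distribution.

```python
# Minimal host-side check for AMD SEV support on a generic Linux/KVM machine.
# Not a vSphere API call; the sysfs path is the usual kvm_amd module parameter
# location on recent kernels and may differ on your distribution.
from pathlib import Path

def sev_enabled(param: Path = Path("/sys/module/kvm_amd/parameters/sev")) -> bool:
    try:
        # The parameter reads "1" on older kernels and "Y" on newer ones when enabled.
        return param.read_text().strip() in ("1", "Y", "y")
    except FileNotFoundError:
        return False  # kvm_amd not loaded, or not an AMD host

print("SEV enabled:", sev_enabled())
```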
You know, its super good experience it's, it's designed to scale. Right. And especially with VMware as, as our infrastructure, it works >>Great. I'm gonna, Lisa, Lisa's got a question to ask. I know, but bear with me one bear >>With me. Yes, sir. >>We've actually initiated coverage of this question of, you know, just hardware matter right anymore. Does it matter anymore? Yeah. So I put to you the question, do you think hardware still matters? >>Oh, I think, I think it's gonna matter even more and more going forward. I mean just, but it's all cloud who cares just in this conversation today. Right? >>Who cares? It's all cloud. Yeah. >>So, so, so definitely their workloads moving to the cloud and we love our cloud partners don't get me wrong. Right. But there are, you know, just, I've had so many conversations at this show this week about customers who cannot move to the cloud because of regulatory reasons. Yeah. You know, the other thing that you don't realize too, that's new to me is that people have depreciated their data centers. So the cost for them to just go put in new AMD servers is actually very low compared to the cost of having to go buy, buy public cloud service. They still want to go buy public cloud services and that, by the way, we have great, great, great AMD instances on, on AWS, on Google, on Azure, Oracle, like all of our major, all of the major cloud providers, support AMD and have, have great, you know, TCO instances that they've, they've put out there with good performance. Yeah. >>What >>Are some of the key use cases that customers are coming to AMD for? And, and what have you seen change in the last couple of years with respect to every customer needing to become a data company needing to really be data driven? >>No, that's, that's also great question. So, you know, I used to get this question a lot. >>She only asks great questions. Yeah. Yeah. I go down and like all around in the weeds and get excited about the bits and the bites she asks. >>But no, I think, look, I think the, you know, a few years ago and I, I think I, I used to get this question all the time. What workloads run best on AMD? My answer today is unequivocally all the workloads. Okay. Cuz we have processors that run, you know, run at the highest performance per thread per per core that you can get. And then we have processors that have the highest throughput and, and sometimes they're one in the same, right. And Ilan 64 configured the right way using using VMware vSphere, you can actually get extremely good per core performance and extremely good throughput performance. It works well across, just as you said, like a database to data management, all of those kinds of capabilities, DevOps, you know, E R P like there's just been a whole slew slew of applications use cases. We have design wins in, in major customers, in every single industry in every, and these, these are big, you know, the big guys, right? >>And they're, they're, they're using AMD they're successfully moving over their workloads without, without issue. For the most part. In some cases, customers tell us they just, they just move the workload on, turn it on. It runs great. Right. And, and they're, they're fully happy with it. You know, there are other cases where, where we've actually gotten involved and we figured out, you know, there's this configuration of that configuration, but it's typically not a, not a huge lift to move to AMD. And that's that I think is a, is a key, it's a key point. 
And we're working together with almost all of the major ISV partners. Right. And so just to make sure that, that, that they have run tested certified, I think we have over 250 world record benchmarks, you know, running in all sorts of, you know, like Oracle database, SAP business suite, all of those, those types of applications run, run extremely well on AMD. >>Is there a particular customer story that you think really articulates the value of running on AMD in terms of enabling bus, big business outcome, safer a financial services organization or healthcare organization? Yeah. >>I mean we, yeah, there's certainly been, I mean, across the board. So in, in healthcare we've seen customers actually do the, the server consolidation very effectively and then, you know, take advantage of the, the lower cost of operation because in some cases they're, they're trying to run servers on each floor of a hospital. For example, we've had use cases where customers have been able to do that because of the density that we provide and to be able to, to actually, you know, take, take their compute more even to the edge than, than actually have it in the, in those use cases in, in a centralized matter. The another, another interesting case FSI in financial services, we have customers that use us for general purpose. It, we have customers that use this for kind of the, the high performance we call it grid computing. So, you know, you have guys that, you know, do all this trading during the day, they collect tons and tons of data, and then they use our computers to, or our CPUs to just crunch to that data overnight. >>And it's just like this big, super computer that just crunches it's, it's pretty incredible. They're the, the, the density of the CPUs, the value that we bring really shines, but in, in their general purpose fleet as well. Right? So they're able to use VMware, a lot of VMware customers in that space. We love our, we love our VMware customers and they're able to, to, to utilize this, they use use us with HCI. So hyperconverge infrastructure with V VSAN and that's that that's, that's worked works extremely well. And, and, and our, our enterprise customers are extremely happy with that. >>Talk about, as we wrap things up here, what's next for AMD, especially AMD with VMwares VMware undergoes its potential change. >>Yeah. So there there's a lot that we have going on. I mean, I gotta say VMware is one of the, let's say premier companies in terms of, you know, being innovative and being, being able to drive new, new, interesting pieces of technology and, and they're very experimentive right. So they, we have, we have a ton of things going with them, but certainly, you know, driving pin Sando is, is very, it is very, very important to us. Yeah. I think that the whole, we're just in the, the cusp, I believe of, you know, server consolidation becoming a big thing for us. So driving that together with VMware and, you know, into some of these enterprises where we can show, you know, save the earth while we, you know, in terms of reducing power, reducing and, and saving money in terms of TCO, but also being able to enable new capabilities. >>You know, the other part of it too, is this new infrastructure enables new workloads. So things like machine learning, you know, more data analytics, more sophisticated processing, you know, that, that is all enabled by this new infrastructure. So we, we were excited. 
We think that we're on the cusp of a lot of industries moving forward to, you know, the next level of IT. It's no longer about just payroll or enterprise business management. It's about, you know, how do you make your knowledge workers more productive, right? And how do you give them more capabilities? And that is really what's exciting for us. >>Awesome, Kumaran. And thank you so much for joining Dave and me on the program today, talking about what AMD is doing to supercharge customers, your partnership with VMware, and what's exciting on the forefront, the frontier. We appreciate your time and your insights. >>Great. Thank you very much for having me. >>Thank you to our guest and Dave Nicholson. I'm Lisa Martin. You're watching theCUBE live from VMware Explore '22 in San Francisco, but don't go anywhere, Dave and I will be right back with our next guest.
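As a rough illustration of the consolidation math behind the TCO claims in this segment, the sketch below works through a hypothetical fleet refresh. Every figure in it (fleet size, core counts, wattages) is an assumption made up for the example, not a number from the interview.

```python
# Hypothetical consolidation sketch for the kind of refresh discussed above.
# All figures below are illustrative assumptions, not AMD or VMware data.

legacy_servers = 100                 # assumed aging 2-socket, 16-core hosts
legacy_cores_per_server = 2 * 16
legacy_watts_per_server = 500

new_cores_per_server = 2 * 64        # assumed 2-socket, 64-core "Milan"-class host
new_watts_per_server = 700

total_cores = legacy_servers * legacy_cores_per_server
new_servers = -(-total_cores // new_cores_per_server)   # ceiling division

print(f"Legacy fleet : {legacy_servers} servers, "
      f"{legacy_servers * legacy_watts_per_server / 1000:.1f} kW")
print(f"Consolidated : {new_servers} servers, "
      f"{new_servers * new_watts_per_server / 1000:.1f} kW")
```

Run as-is, the example consolidates 100 hypothetical hosts onto 25 higher core count hosts at a fraction of the power draw; real sizing would of course also weigh memory, I/O, and licensing.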

Published Date : Aug 31 2022


Justin Murrill, AMD & John Frey, HPE | HPE Discover 2022


 

>> Announcer: theCUBE presents HPE Discover 2022. Brought to you by HPE. >> Okay, we're back here at HPE Discover 2022, theCUBE's continuous coverage. This is day two, Dave Vellante with John Furrier. John Frey's here. He is the chief technologist for sustainable transformation at Hewlett Packard Enterprise and Justin Murrill who's the director of corporate responsibility for AMD. Guys, welcome to theCUBE. Good to see you. >> Thank you. >> Thank you. It's great to be here. >> So again, I remember the days where, you know, CIOs didn't really care about the power budget. They didn't pay the power budget. You had, you know, facilities over here, IT over here and they didn't talk to each other. That's changed. Why is there so much discussion around sustainable IT today? >> It's exciting to see how much it's up leveled, as you say. I think there are a couple different trends happening but mainly, you know, the IT teams and IT leaders that are making decisions are seeing to your point how their decisions are affecting enterprise level, greenhouse gas emission reduction goals. So that connection is becoming very clear. Everything from the server processor to beyond it, those decisions have a key role. And importantly we're seeing, you know, 60% of the Fortune 500 now have climate or energy efficiency related goals. So there's a perfect storm of sorts happening where more companies setting goals, IT decision makers looking particularly at the data center because as the computational heart of an organization, it has a wealth of opportunity from an energy and a mission savings perspective. >> I'm surprised it's only 60%. I mean, that number really shocked me. So it's got to be a 100% within the next couple of years here. I would think, I mean, it's not trivial, right? You've got responsibilities in terms of reporting and you can't just mail it in, right? >> Yeah, absolutely. So there's a lot more disclosure happening but the goal setting is really upleveling as well. >> And the metrics involved too. Can you just scope the scale and challenge of like getting the right metrics, not when you have the goals. Does that factor in, how do you see there? What's your commentary on that? >> Yeah, I think there's, the aperture is continuing to open as metrics go, so to speak. So from an operations perspective, companies are reporting on what's referred to as scope one and scope two. And scope two is the big one from electricity, right? And then scope three is everything else. That's the supply chain and the outside of that. So a lot of implications there as well from IT decision making. >> Is there a business case for sustainable IT? I mean, you're probably not going to lower the power budget, right? But is it just, hey, it's the right thing to do. We have to do it, it's good for the brand. It'll allow us to attract people or is there a a more of a rich business case? >> So there really is a business case even just within inside the data center walls, for example. There's inefficiencies that are inherent in many of these data centers. There's really low utilization levels as well. And by reducing over provisioning and increasing utilization, there's real money to be saved in terms of equipment costs, maintenance agreement costs, software licensing costs. So actually the power consumption and the environmental piece is an added benefit but it's not the main reason. 
So we actually had IDC do a survey for us last year and we asked IT executives, 500 senior IT executives, were you implementing sustainable IT programs and why? My guess initially was about 40% of them would say yes. Actually the number was 96% of them. And when we asked them why they fell into three categories. The digital leaders, those that are the early adopters moving the quickest. They said we do it to attract and retain institutional investors. They've been hearing from their boards. They've been hearing from their investor relations teams and investors are starting to ask and even in a couple cases board seats are becoming contentious based on the environmental perspective of the person being nominated. This digital mainstream, the folks in the middle about 80% of the total pie, they're doing it to attract and retain customers because customers are asking them about their sustainable IT programs. If they're a non-manufacturing customer, their data center consumption is probably the largest part of their company. It's also by the way usually the most expensive real estate the company owns. So customers are asking and customers are not only asking, do you have basic programs in place? But they're asking, what are your goals to Justin's point? The customers are starting to realize that carbon goals have been vaguely defined historically. So they're asking for specificity, they're asking for transparency and by the way the science-based target initiative recently released their requirements for net zero science-based targets. And that requires significant reduction to your point before you start considering renewable energy in that balance. The third reason those digital followers, that slowest group or folks that are in industries that move the slowest, they said they were doing this to attract and retain employees. Because they recognize the data scientists, the computer science, computer engineering students that they're trying to attract want to work at a company where they can see how what they do directly contributes to purpose. And they vote with their feet. If they come on and they can't make that connection pretty quickly or if they spend a lot of their time chasing down inefficiencies in a technology infrastructure, they're not going to stay there very long. >> I mean, the mission-driven organization is definitely an employee factor. People are interested in that. The work for company is responsible, doing the right thing but that business case is interesting because I think there's recognition now more than ever before. You think you're right on. It used to be kind of like mailed it in before. Okay, we're doing some stuff. Now it's like, we all have to do it. And it's a board issue. It's a financing issue. It might be a filing issue as you guys mentioned. So that's all great. So I got to ask how you guys specifically are working together, AMD and HPE. What are you guys doing to make it more efficient? And then I'll see with Cloud and Cloud scale, there's more servers being shipped now than ever before. And more devices at the edge. What are you guys doing together specifically? >> Yeah, we've been working together, AMD and HPE on advancing sustainability for many years. I've had the opportunity to working directly with John for many years and I've learned a lot from him and your team. It's fantastic to see all the developments here. I mean, so most recently the top 500 and the green 500 list of supercomputers came out. And at the top of that list is AMD HPE systems. 
And it shows kind of the pinnacle of what can be possible for other data centers looking to modernize and scale. So the number one system, the fastest system in the world and the most energy efficient system in the world, the Frontier supercomputer, has AMD HPE technology in it. And it just passed the exascale barrier. I mean, I'm still just blown away by this. A billion billion calculations per second. It's just amazing. And the research it's doing around clean energy, alternative energy sources, scientific research, is really exciting. So there's that. The other system that really jumps out is the LUMI system, the number three system, because it's 100% powered by renewable energy. So not only that, it takes the heat and it channels it to a nearby town and covers 20% of that town's heating needs, thereby avoiding 12,400 metric tons of carbon emissions. So this system is carbon negative, right? And you just go down the list. I mean, AMD is in the top eight out of 10 most green... >> Rewind that a second. So you have the heat and the power shifting to a town? >> Yes, the LUMI supercomputer channels the heat from the system to a nearby town. It's like a closed loop, the idea of circular economy but with energy. And it takes that waste and it makes it an input, a resource. >> But this is the kind of innovation that's going on, right? This is the scale, this is where scale and efficiency kind of come together. That's huge. Where's that going to go? What's your perspective on where that goes next, because that's a blueprint that could be replicated. >> You bet. So I think we're going to continue to see overall power consumption go up at the system level. But performance per watt is climbing much more dramatically. So I think that's going to continue to scale. It's going to require new cooling technology. So direct liquid cooling is becoming more and more in use, and customers are really interested in that. There's a shift from industry standard architectures to lower-end high performance computing architectures to get direct liquid cooling and higher core count processors, and get the performance they want in a smaller footprint. And at the same time, they're really thinking about how do we operate the infrastructure as a system, not as individual piece parts. And one of the things that Frontier and LUMI do so well is they were designed from the start as a system, not as piece parts making up the system. So I think that happens. The other thing that's really critical is no one company is going to solve these challenges by itself. So one of the things I love about our partnership with AMD is we look at each other's sustainability goals before we launch 'em. We say, well, how can we help? One of AMD's goals, which I'll let Justin talk about, came about because HPE at the time of separation laid out a really aggressive product energy efficiency goal and said, but we're not sure how we're going to make this. And AMD said, we can help. So that collaboration, we critique each other's programs, we push each other, but we work together. I like to say partnership is leadership in this. >> Well, that's a nuanced point. Before you get to that solution there, Justin, this systems thinking is really important. You're seeing that now with cloud. Some of the things that GreenLake and the systems are pointing out, this holistic systems thinking is applied to partnerships, not just the company. >> Yep. >> This is a really nuanced point, but we're seeing that more and more. >> Yeah, absolutely.
In fact, Justin mentioned the heat reuse, same way with the national renewable energy lab. They actually did snow removal and building heating with the heat reuse. So if you're designing for example, a liquid cold system from the start, how do you make it a symbiotic relationship? There's more and more interest in co-locating data centers and greenhouses in colder environments for example. Because the principle of the circular economy is nothing is waste. So if you think it's waste or you think it's a byproduct, think about how can that be an input to something else. >> Right, so you might put a data center so you can use ambient cooling or in somewhere in the Columbia River so you can, you know, take advantage of, you know, renewable energy. What are some goals that you guys can share with us? >> So we've got some great momentum and a track record coming off of, going back to 2014, we set a 25 by 20 goal to improve the energy efficiency for our mobile processors and mobile devices, right? So laptops. And we were able to achieve a 31.7x in that timeframe. So which was twice the industry trend to that. And then moving on, we've doubled down on data center and we've set a new goal of a 30x increase in energy efficiency for our server processors and accelerators to really focused on HPC and AI training. So that's a 30x goal over 2020 to 2025 focused on these really important workloads 'cause they're fast growing. We heard yesterday 150 billion devices connected by 2025 generating a lot of data, right? So that's one of the reasons why we focused on that. 'Cause these are demanding workloads. And this represents a 2.5x increase over the historical trend, right? And fundamentally speaking, that's a 97% reduction in energy use per computation in five years. So we're very pleased. We announced an update recently. We're at 6.8x. We're on track for this goal and making great progress and showing how these, you know, solutions at a processor level and an accelerator level can be amplified, taken into HPE technology. >> Generally tech companies, you know, that compete want to rip each other's faces off. And is that the case in this space or do you guys collaborate with your competitors to share best practice? Is that beginning? Is it already there? >> There's much more collaboration in this space. This is one of the safe places I think where collaboration does occur more. >> Yeah. And we've all got to work together. A great example that was in the supply chain. When HPE first set our supply chain expectations for our suppliers around things like worker rights and environment and worker protection from a health and safety perspective. We initially had our code of conduct asked their suppliers to comply with it. Started auditing in event. And we quickly got into the factories and saw they were doing it for our workloads. But if you looked around the factory, they weren't doing in other places. And we took a step back and said, well, wait a minute. Why is that? And they said that vendor doesn't require it. So we took a step back and said let's get the industry together. We share a common supply chain. How do we have a common set of expectations and push them out to our supply chain? How to now do third party audits so the same supplier doesn't get audited by each of the major vendors and then share those audit results. And what we found was that really had a large lever effect of moving the electronic supply chain much more rapidly towards our expectations in all those areas. 
Well then other industries looked and said, well, wait a minute, if that worked for electronics, it'll probably work broader. And so now, the output of that is what's called the responsible business alliance across many industries taking that same approach. So that's a pre-competitive. We all have the same challenge. In many cases we share a common supply chain. So that's a great example of electronic companies coming together, design standards for things. There's a green grid group at the moment looking at liquid cooling connects. You know, we don't want every vendor to have a different connection point for liquid cooling for example. So how do we standardize that to make our customers have a easier time about looking at the technologies they want from any vendor and having common connection points. >> Right. Okay. So a lot of collaboration. Last question. How much of a difference do you think it can make? In other words, what percent of the blame pie goes to information technology? And I think regardless, you got to do your part. Will it make a dent? >> I think the sector has done a really good job of keeping that increase from going up while exponentially increasing performance. So it's been a really amazing industry effort. And moving forward, I think this is more important than ever, right? And with the slowdown of Moore's law we're seeing more gains that need to come from beyond process architecture to include packaging innovations, to power management, to just the architecture here. So the challenge of mitigating and minimizing energy growth is important. And we believe like with that 30x energy efficiency goal that it is doable but it does take a lot of collaboration and focus. >> That's a great point. I mean, if you didn't pay attention to this, IT could really become a big piece of the pie. Guys thanks so much for coming on theCUBE. Really appreciate. >> People are watching. They're paying attention at all levels. Congratulations. >> Absolutely. >> All right, Dave Vellante, John Furrier and our guests. Don't forget to go to SiliconANGLE.com for all the news. Our YouTube channel, actually go to CUBE.net. You'll get all these videos in our YouTube channel, youtube.com/SiliconANGLE. You can check out everything on demand. Keep it right there. We'll be right back. HPE Discover 2022 from Las Vegas. You're watching theCUBE. (soft music)
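For reference, the 30x goal and the 97% figure quoted in this segment are consistent with straightforward arithmetic: if energy efficiency (work per unit of energy) improves 30-fold, each computation uses roughly 1/30 of the original energy. A quick check, assuming only that definition of efficiency:

```python
# Energy per computation after an N-fold efficiency gain, where
# efficiency is taken to mean work done per unit of energy.
gain = 30
energy_fraction = 1 / gain
reduction = 1 - energy_fraction
print(f"Energy per computation: {energy_fraction:.1%} of baseline")  # ~3.3%
print(f"Reduction: {reduction:.1%}")                                 # ~96.7%, i.e. roughly 97%
```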

Published Date : Jun 29 2022


Mike Beltrano, AMD & Phil Soper, HPE | HPE Discover 2022


 

(soft upbeat music) >> Narrator: theCUBE presents HPE Discover 2022 brought to you by HPE. >> Hey everyone. Welcome back to Las Vegas. theCUBE is live. We love saying that. theCUBE is live at HPE Discover '22. It's about 8,000 HP folks here, customers, partners, leadership. It's been an awesome day one. We're looking forward to a great conversation next. Lisa Martin, Dave Vellante, two guests join us. We're going to be talking about the power of the channel. Mike Beltrano joins us, Worldwide Channel Sales Leader at AMD, and Phil Soper is here, the North America Head of Channel Sales at HPE. Guys, great to have you. >> Thanks for having us. >> Great to be here. >> So we're talking a lot today about the ecosystem. It's evolved tremendously. Talk to us about the partnership. Mike, we'll start with you. Phil, we'll go to you. What's new with HPE and AMD Better Together? >> It's more than a partnership. It's actually a relationship. We are really tied at the hip, not just in X86 servers but we're really starting to get more diverse in HP's portfolio. We're in their hyper-converged solutions, we're in their storage solutions, we're in GreenLake. It's pretty hard to get away from AMD within the HP portfolio so the relationship is really good. It's gone beyond just a partnership so starting to transition now down into the channel, and we're really excited about it. >> Phil, talk about that more. Talk about the evolution of the partnership and that kind of really that pull-down. >> I think there's an impression sometimes that AMD is kind of the processor that's in our computers and it's so much more, the relationship is so much more than the inclusion of the technology. We co-develop solutions. Interesting news today at Antonio's presentation of the first Exascale supercomputer. We're solving health problems with the supercomputer that was co-developed between AMD and HPE. The other thing I would add is from a channel perspective, it's way more than just what's in the technology. It's how we engage and how we go to market together. And we're very active in working together to offer our solutions to customers and to be competitive and to win. >> Describe that go-to-market model that you guys have, specifically in the channel. >> So, there is a, his organization and mine, we develop joint go-to-market channel programs. We work through the same channel ecosystem of partners. We engage on specific opportunities. We work together to make sure we have the right creative solution pricing to be aggressive in the marketplace and to compete. >> It's a great question because we're in a supply chain crisis right now, right? And you look at the different ways that HP can go to market through the channel. There's probably about four or five ways that channel partners can provide solutions, but it's also route to purchase for the customers. So, we're in a supply chain crisis right now, but we have HP AMD servers in stock in distribution right now. That's a real big competitive advantage, okay? And if those aren't exactly what you need, HP can do custom solutions with AMD platforms all day, across the board. And if you want to go ahead and do it through the cloud, you've got AMD technology in GreenLake. 
So, it's pretty much have it your way for the customers through the channel, and it's really great for the customers too, because there's multiple ways for them to procure the equipment through the channel. So we really love the way that HP allows us to integrate into their products, but then integrate into their procurement model down through the channel for the end user to make the right choice. So, it's fantastic. >> You mentioned that AMD's in HCI, in storage, in GreenLake and in the channel. What are the different requirements within those areas? How does the channel influence those requirements and what you guys actually go to market with? >> Well, it comes down to awareness. Awareness is our biggest enemy, and the channel's just huge for us because AMD's competitive advantage in our technology is much different. And when you think about price and performance and security and sustainability, that's what we're delivering. And really the channel kind of plugs that in and educates their customers through their marketing and demand gen. When they hear from their customers, or if they're proactively touching them, they influence the route to purchase based on the customer's situation: if they want to pay for it as a service, if they want to finance it, if it does happen to be in stock and speed of delivery is important to them, the channel partner influences that through the relationships and distribution, or they can go ahead and place it as a custom-to-order. So, it's just really based on where they're at in their purchasing cycle. And also, it's not about the hardware as much as it's about the software and the applications and the high-value workloads that they're running, and that kind of just dictates the platform. >> Does hardware matter? >> Yes, it sure does. It does, man. It's kind of like the vessel at this point, and our processors and our GPUs are in the HP vessel, but it is about the application. >> I love that analogy. I would say, absolutely it does. Workloads matter more, and then what's the hardware to run those workloads is really critical. >> And to your point though, it's not just about the CPU anymore. It's about, you guys have made some acquisitions to sort of diversify. It's about all the other supporting sort of actors, if you will, that support those new workloads. >> Let me give you an example that's being showcased at this show, okay? Our extreme search solution, which is being driven by Splunk, okay? It's a cybersecurity solution for something the industry is going to have to be able to handle: the response to any sort of breach, when you think about how they have to search through the data, how they have to get through it, and do it in a timely fashion. What we've done is developed a DL385 solution where we have an EPYC processor from AMD, we have a Xilinx FPGA, Xilinx who we own now, and Samsung SSDs, which are four terabytes per drive, packed in a DL385. Now you add the Splunk solution on top of that, and if there ever is a breach, it would normally take days to go ahead and assess that breach. Now it can be done in 25 minutes, and we have that solution here right now. So it's not like we acquired Xilinx and we're waiting to integrate it.
We hit the ground running and it's fantastic 'cause the solution's being driven by one of our top partners, WWT, and it's live in their booth here today so we're kind of showing that integration of what AMD is doing with our acquisitions in HP servers and being able to show that today with a workload on top of it is real deal. >> Purpose-built to scan through all those log files and actually surface the inside. >> Exactly what it is, and it's on public sector right now, that's a requirement to be able to do that and to not have it take weeks and be able to do it in 25 minutes is pretty impressive. >> Those are the outcomes customers are demanding? >> That's it. People are, if you're purchasing an outcome, HP can deliver it with AMD and if you're looking to build your own, we can give it to you that way too so, it's flexibility. >> Absolutely critical. Mike, from your perspective on the partnership we've seen and obviously a lot of transformation at HPE over the last couple of years, Antonio stood on this stage three years ago and said, "By 2022, we're going to deliver the entire portfolio as a service." How influential has AMD been from a relationship perspective on what he said three years ago and where they are today? >> Oh my gosh! We've been with them all the way through. I mean, HP is just such a great partner, and right now, we're the VDI solution on GreenLake so it's HP GreenLake, VDI solutions powered by AMD. We love that brand recognition as a service, okay? Same with high-performance computing powered by AMD, offered on HP GreenLake so it's really changed it a lot because as a service, it's just a different way for a customer to procure it and they don't have to worry about that hardware and the stack and anything like that. It's more about them going into that GreenLake portal and being able to understand that they're paying it just like they pay their phone bill or anything else so it's really Antonio's been spot-on with that because that's a reality today and it's being delivered through the channel and AMD's proud to be a part of it and it's much different 'cause we don't need to be as evolved as we have to be from a hardware sale perspective when it's going through GreenLake and it makes it much easier for us. >> Phil, you talked about workloads, really kind of what matter, how are they evolving? How is that affecting? What are customers grabbing you and saying, "We need this." What do you and from a workload standpoint and how are you delivering that? >> Well, the edge to the cloud platform or GreenLake is very much as a service offering, aimed at workloads. And so, if HPE is building and focusing its solutions on addressing specific workload needs, it's not about necessarily the performance you mentioned, or you're asking the question about hardware. It's not necessarily about that. It's, what is the workload, should the workload be, or could the workload be in public cloud or is it a workload that needs to be on premise and customers are making those choices and we're working with those customers to help them drive those strategies and then we adapt depending on where the customer wants the workload. >> Well, it's interesting, because Antonio in his keynote today said, "That's the wrong question," and my reaction was that's the question everybody's asking. It may be the wrong question, but that's what so, your challenge is to, I guess, get them to stop asking that question and just run the right tool for the right job kind of thing. 
>> That's exactly what it's about because you take high-value workloads, okay? And that can mean a lot of different things and if you just pick one of them, let's say like VDI or hyper-converged. HP's the only game in town where they can kind of go into a gun, a battle with four different guns. They give you a lot of choices and they offer them on an AMD platform and they're not locking you in. They give you a lot of flexibility and choice. So, if you were doing hyper-converged through HPE and you were looking to do it on AMD platform, they can offer to you with VMware vSAN ReadyNodes. They can offer it to you with SimpliVity. They can offer it to you with Nutanix. They can offer it to you with Microsoft, all on an AMD stack. And if you want to bring your own VMware and go bare metal, HP will just give you the notes. If you want to go factory integrated or if you want to purchase it via OEM through HP and have them support it, they just deliver it any way you want to get it. It's just a fantastic story. >> I'll just say, look, others could do that, but they don't want to, okay? That's the fact. Sometimes it happens, sometimes the channel cobbles it together in the field, but it's like they do it grinding their teeth so I mean, I think that is a differentiator of HPE. You're agnostic to that. In fact, by design. >> They can bring your own, you can bring your own software. I mean, it's like, you just bring your own. I mean, if you have it, why would we make a customer buy it again? And HP gives them that flexibility and if it's multiple hypervisors and it's brand agnostic, it's more about, let's deliver you the nodes, purpose-built, for the application that you're going to run in that workload and then HP goes ahead and does that across their portfolio on a custom to order. It's just beautiful for us to fit the need for the customer. >> Well, you're meeting customers where they are. >> Yes. >> Which in today's world is critical. There's no, really no other option for companies. Customers are demanding. Demands are not going to go. We're not going to see a decrease after the pandemic's over of demand, right? And the expectations on businesses. So meeting the customers where they are, giving them that choice, that flexibility is table stakes. >> How has those, you've mentioned supply chain constraints, it sounds like you guys are managing that pretty well. It's I think it's a lot of these hard to get supporting components, maybe not the most expensive component, but they just don't have it. So you can't ship the car or you can't ship the server, whatever it is, how is that affecting the channel? How are they dealing with that? Maybe you could give us an update. >> Oh, the channel's just, we love them, they're the front line, that's who the customers call in, who's been waiting to get their technology and we're wading through it, thank goodness that we have GreenLake because if you wanted to buy it traditionally, because HP is supplying supply-to-purchase through distribution in stock, but it's very limited. And then if you go customer order, that's where the long lead times come into place because it's not just the hard drives and memory and the traditional things that are constrained now. Now it's like the clips and the intangibles and things like that and when you get to that point, you got to just do the best you can and HP supply chain has just been fantastic, super informative, AMD, we're not the problem. 
We got HP, plenty of processors and plenty of accelerators and GPUs and we're standing with them because that back to the relationship, we're facing the customer with them and managing their expectations to the best we can and trying to give them options to keep their business floating. >> So is that going to be, is this a supply chain constraints could be an accelerant for GreenLake because that capacity is in place for you to service your customers with GreenLake presumably. You're planning for that. There's headroom there in terms of being able to deliver that. If you can't deliver GreenLake, all this promise. >> I would say I would be careful not to position GreenLake as an answer to supply chain challenges, right? I think there's a greater value proposition to a client, and keep in mind, you still have technology at the heart of it, right? And so, and to your question though about our partners, honestly in a lot of ways, it's heartbreaking given the challenges that they face, not just with HPE, but other vendors that they sell and support and without our partners and managing those, we'd be in a world of hurt, frankly and we're working on options. We work with our partners really closely. We work with AMD where we have constraints to move to other potential configurations. >> Does GreenLake make it harder or easier for you to forecast? Because on the one hand, it's as a service and on the other hand, I can dial it down as a customer or dial it up and spike it up if I need to. Do you have enough experience to know at this point, whether it's easier or harder to forecast? >> I think intuitively it's probably harder because you have that variable component that you can't forecast, right? It's with GreenLake, you have your baseline so you know what that baseline is going to be, the baseline commitment and you build in that variable component which is as a service, you pay for what you consume. So that variable component is the one thing that is we can estimate but we don't know exactly what the customer is going to use. >> When you do a GreenLake deal, how does it work? Let's say it's a two-year deal or a three-year deal, whatever and you negotiate a price with a customer for price per X. Do you know like what that contract value is going to be over the life or do you only know that that baseline and then everything else is upside for you and extra additional cost? So how does that work? >> It's a good question. So you know both, you know the baseline and you know what the variable capacity is, what the limits are. So at the beginning of the contract, that's what you know, whether or not a customer determines that they have to expand or do a change order to add another workload into the configuration is the one thing that we hope happens. You don't know. >> But you know with certainty that over the life of that contract, the amount of that contract that's booked, you're going to recognize at some point that. You just don't know when. >> Yes, and so that, and that's to your question, you know that element, the fluctuation in terms of usage is depending on what's happening in the world, right? The pandemic, as an example, with GreenLake customers, probably initially at the beginning of the pandemic, their usage went down for obvious reasons and then it fluctuates up. >> I think a lot of people don't understand that. That's an interesting nuance. Cool, thank you. 
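The baseline-plus-variable model described above can be pictured as a metered bill: the committed baseline is always charged, and metered usage above it is charged up to the installed capacity. The sketch below is a generic illustration with made-up units and rates; it is not HPE GreenLake's actual pricing logic.

```python
def monthly_charge(used_units, baseline_units, capacity_units, unit_rate):
    """Illustrative consumption bill: pay for the baseline commitment,
    plus any metered usage above it, capped at the installed capacity."""
    billable = max(baseline_units, min(used_units, capacity_units))
    return billable * unit_rate

# Hypothetical contract: 100 committed units, 150 installed, $50 per unit per month.
for used in (80, 100, 130, 160):
    print(f"{used} units used -> ${monthly_charge(used, 100, 150, 50.0):,.0f}")
```

The known quantity at signing is the baseline times the term; the variable portion above it is the part that fluctuates with actual consumption.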
>> Guys, thanks so much for joining us on the program, talking about the relationship that AMD and HPE have together, the benefits for customers on the outcomes that it's achieving. We appreciate your insights and your time. >> Thanks for having us, guys. >> Appreciate it. >> Our pleasure. >> Phil: Thank you. >> For our guests and Dave Vellante. I'm Lisa Martin live in Las Vegas at HPE Discover '22. Stick around. Our keynote analysis is up next. (soft upbeat music)

Published Date : Jun 29 2022


George Watkins, AMD | AWS re:Invent 2021


 

(upbeat music) Welcome back to theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, host of theCUBE. We have George Watkins, the product marketing manager, cloud gaming and visual cloud at AMD. George, thanks for coming on theCUBE. >> Thank you for having me. >> Love this segment, accelerating game development. AWS cloud, big topic on how the gaming developer environment's changing and how AMD is powering it. Let's get into it. So streaming remote, working remote, flexible collaboration, all powered by the G4ad virtual workstations, it's been a big part of success. Take us through what's going on there. >> Yeah, certainly. So obviously from a remote working perspective, there was a huge impact on collaboration and productivity for many industries out there. But, a collaborative environment like game design, it was even more so. First off, happy to have these big bulky workstation ship to local artists, so they can actually carry on working was a massive nightmare for IT management. Making sure that they have the right hardware, the right resources, the right applications and security. So it was a real mean task. And on top of that, working remotely also brings in other efficiencies when it comes to collaboration. So for example, working on a data sets, as I mentioned before, it's a huge team collaboration effort when it comes to game development, and using the same dataset happens very very often. So if you're actually working remotely and an artist, for example pulled a dataset, from a server, worked on it, then took it back up into the cloud. I'll tell you now, it takes some time to do. And at the same time you might have one or two other artists trying to use that data set. The problem or the big issue that comes here is version control. And essentially because these artists are using the older version, there's creating errors, and keeping that production timed longer. So it's very very inefficient. And then this is where the cloud really comes to end zone. First off the cloud, and then obviously in this case, the AWS cloud, with G4ad instances, really does bring the whole pipeline together. It brings the data sets and the virtual workstations, obviously, as I mentioned, G4ad, as well as all the applications into one place. It's all centralized. And from an IT perspective, this is fantastic. And actually sending out a workstation now is really really simple. It's log in details into an email to your new staff, and there's some really great benefits as well from a staff perspective. Not only are they not tethered to a local workstation, they have the flexibility of work where they need to, and also how they like to. But it's also really interesting about how they work on a day-to-day basis. So a good example of this is, if a artist is using or working on a very very heavy dataset and the configuration from their VM or virtual workstation, isn't up to snuff because of the such a large dataset, all they need to do is call up IT and say, I need more resource. And literally in a couple of minutes time, they can actually have that resource, again, improving that productivity, reducing that time. So it's really really important. And just a final note here as well, with having all that data and all that resource in the cloud, version control tools, really do help bring that efficiency as it's all built into the applications and that data sets really, really exciting staff and ultimately, bring in that productivity and reducing that time and errors down. 
>> I could see your point too because, when you don't bring it to the cloud, people are going to be bored, waiting for things to happen. And they say I want to take a shortcut. Shortcuts equal mistakes. So, I can see that the G4ad with focus for artists is cool because it's purpose-built for what you're talking about. So take me through how you see the improved efficiencies in the development pipeline with cloud computing around this area because, obviously it makes a lot of sense. Everything's in the cloud, you've got the instances there. Now what happens next? How does the coding all work? What's going on around the game development pipeline? >> 3D applications today, particularly at use in the game industry, I'll be honest, they are still based on legacy hardware. And what I mean by this is that the applications typically require higher CPU Hertz the typically single threaded, maybe some kind of multi threaded functionality there. But generally they are limited by what the traditional workstation has been. And obviously why not? They've been built over the last 10-15 years to access that type of data. Now that is great, but it's not accessing what could be, all the resources that are available in the cloud. And this is what's really really exciting in my part. So ultimately what we're saying is that is that you have this great virtual workstation experience. You have all your applications running on there, you can be efficient, but then there's these really specific and really interesting use cases that aren't accessing the cloud. And I've got a couple of examples, so first off there's a feature inside Unreal 4 engine, called Unreal Swarm. And this feature helps actually reduce the time it takes, in this case and to bake light maps into auto scale, to bake light maps into a game. And this is done by auto scaling, the compiling in AWS cloud. So for example, after making the amends to a light map, we're ready to essentially recompile, but instead of doing this on the local workstation, using the traditional CPU and memory resource, which you would expect to see in a workstation, and actually in this case, it takes around about 50 minutes to do. When you actually use Unreal Swarm, you can, the coordinator as part of this functionality, bursts the requirement or the actual compiling into the cloud. And actually in this case, it's using, like, 10 C5a instances. So these are all CPU high-performance computing instances. And because you have this ability to auto-scale, you actually essentially bring that time, that original 50 minutes, down to 4 minutes. And this type of kind of functionality or this type of task that you would typically see with a 3d artist or with a programmer, basically happens multiple times a day. So when you start factoring in a saving of 45 minutes multiple times a day, it starts really bringing down, the amount of time saved, and obviously the amount of cost saved as well for that artist's time. So it's really really exciting and, certainly something to talk about. >> That's totally cool. I got to ask you since you're here, because it brings up the question that pops into my head, which is okay. What's the state of the art development trends that you're seeing because, on the cloud side, on non gaming world, so shift left to security. You start to see more agile kind of methods around what used to be different modules, right? So you mentioned compiling, acceleration, what's going on in the actual workflows for the developers? 
What are some of the cool things that you could share that people might not know about that are important? >> Well, certainly it's really about finding those bursty, computationally expensive and time consuming processes, and actually moving them to the cloud. So really, from a compiling standpoint, they are usually CPU bound. Essentially the GPU does all the work when it comes to the viewport, all that high rendering frames per second, that's what it's really designed for. And it does a very good job with that. But the compiling aspect, the compute aspect, is all done on the CPU side. And the work that we've been doing with AWS and the game tech team is actually finding certain ways of helping to reduce the compiling time, because ultimately that is always restricted by the number of cores that you actually have on a local device. So again, another example is there's a company out there called Incredibuild, and they specialize in accelerating the development of that programming code. And obviously in this case, it's the game code. And if an artist initiated a clean source code build on Unreal Engine 4, it would take approximately around about 60 minutes to do on a local machine. However, using the Incredibuild solution to accelerate that type of workload, you can complete it in just 6 minutes. Because again, it auto scales out that compiling to, in this case, 16 C5a large instances, which essentially reduces all that time for the artist, freeing them up to do more stuff.
It can also bring VR and AR experiences wirelessly, and also it can access these new emerging technologies that are making higher fidelity gaming experiences, like hardware ray tracing. All this can be done with these new technologies. And it's incredibly, incredibly exciting. But more importantly, what's really great about this is from a game publisher perspective, because it's actually helping them simplify their business processes, particularly from a game development standpoint. And what I mean by this is, if we take a typical example of what a game developer has to do for a mobile game, there's certain considerations that they need to think about when it actually comes to developing and validating. First off, they'll have to understand what type of OS to account for, and what version of that OS to account for. What type of API they're going to be building on. And also, finally, what type of resources are actually on that endpoint device. So there's a lot of considerations here, and a lot of testing. So ultimately a lot of work to get that game out to those gamers, who might be on a couple of these different mobile platforms. However, when it comes to game streaming, it really does change all this, because ultimately what the game developer is actually doing is developing and validating on one source. And that is going to be the server that is essentially powering that game streaming service. Because how game streaming works is that we essentially transcode the actual game via H.264 to a software client on any endpoint device. So this could be those mobile devices I just mentioned. It can also be TVs, it could be consoles, it can even be low powered laptops. And what's very exciting is that, from an end user perspective, they're getting the ultimate in gaming experiences, and usually these types of solutions are traditionally subscription-based. So you're actually reducing the requirement for this kind of high-end, thousands of dollars gaming solution, or a high-end next gen console. All of this is actually being given to you and delivered as part of a game streaming service. So it's very, very exciting, and certainly we can see the adoption on both the game development side as well as the gamer's side. >> That's a great way to put an end to this awesome segment. I think that business model innovation around making it easier, and making a better development environment, that's just how it works. So that's good, check. But really the business model here, the gaming as a service, you're making it possible for the developer and the artist to see an outcome faster. That's the cloud way. >> Thank you. >> And they double down on success, and they can do that. So again, this is all new and exciting, and certainly the edge, and having data being processed at the edge as well. Again, all this is coming in to create more good choice. Thank you so much for coming on and sharing that insight with us from the AMD perspective. And again, more power, more speed, we always say, no one's going to complain when they get more compute. >> Absolutely, absolutely. >> Thanks for coming, I appreciate it. >> Thank you. >> theCUBE coverage here at AWS re:Invent 2021. I'm John Furrier, host of theCUBE. Thanks for watching. (upbeat music)
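The burst-to-cloud pattern described in this segment, pushing a bake or compile job out to a short-lived pool of compute-optimized instances instead of waiting on a single workstation, can be sketched at the infrastructure level with the EC2 API. The snippet below is only an illustration of that pattern; the AMI ID, instance count, and tags are placeholder assumptions, and this is not how Unreal Swarm or Incredibuild actually provision capacity.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a short-lived pool of AMD-based C5a instances for a build/bake burst.
# The ImageId is a placeholder; in practice it would be a pre-baked worker AMI.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",     # hypothetical build-worker AMI
    InstanceType="c5a.4xlarge",
    MinCount=10,
    MaxCount=10,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "lightmap-bake-burst"}],
    }],
)
worker_ids = [i["InstanceId"] for i in resp["Instances"]]

# ... dispatch the bake or compile job to the workers, then tear the pool down:
ec2.terminate_instances(InstanceIds=worker_ids)
```

The point of the pattern is that the pool only exists for the length of the job, so the cost is the burst itself rather than permanently provisioned workstations.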

Published Date : Nov 30 2021


Michael D'Aniello, VMware Carbon Black | AWS re:Invent 2021


 

(bright music) >> Welcome back to theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, your host of theCUBE. We have Michael D'Aniello, platform architect at VMware's Carbon Black. Michael, great to see you. We're here at re:Invent, virtual, hybrid, in person. Great to have you on theCUBE. Thanks for coming on. >> Yeah, thanks a lot. Glad to be here. >> So one of the big stories that we're tracking, obviously, is workloads. All cloud for all workloads. Obviously the data is a big part of things, but under the covers and optimizing cloud for the application developers, this modern application movement is more and more at the top of the stack. People just wanting to code. Infrastructure as code. You've seen DevSecOps is a big trend that's driving all new microservices, all new greatness for developers, but still, there's an optimization question. I want to get your thoughts on this, it's what you do. Take a minute to explain what your role is at Carbon Black around this cloud optimization. >> Yeah, absolutely. Yeah, so my name is Michael D'Aniello. I am a platform architect at VMware Carbon Black. I work across all the different engineering teams. And our main objective is to develop scalable platform tools, and that includes, yeah, cloud security, automation pieces, pipelines, cost optimization, like we'll be talking about today, developer enablement tooling and observability tooling. >> One of the big things about instances is that, you know, do I have enough instances? 'Cause honestly, the elastic cloud is amazing, all kinds of new resources there, but talk about the AMD portion of the instances. How do we identify these instances? How do developers understand what's in them, and what's the selection criteria? Take us through that whole process of the Amazon Web Services and the AMD instances. >> Yeah, sure. So essentially, we're leveraging a lot of our instances to run our EKS clusters, which is a managed service for us to run our Kubernetes clusters. And we identified that we could take a bunch of those instances and gain some cost optimization benefits by switching from Intel to AMD processors. And, you know, initially, we had measured it out to be roughly a 10% reduction in cost just for selecting that instance type. But yeah, we actually learned we gained quite a bit more, so. >> John: You know, developers are always like, I want more power, and this is what, you know, the whole idea of Cloud is. Cloud scale has been a big competitive advantage, but also the cost aspect of it. What's the balance between maximizing performance and cost optimization? Because now, you know, people don't want to, you know, they want more power. They also don't want to have a lot of extra spend. And this is kind of one of those things they talk about in Cloud where it's been so successful, cost is important. >> Yeah, yeah, for sure. And it's got to be easy, too, to get that cost optimization benefit. Otherwise, you're spending all your cycles and burning that money there in the human capital and the team and the engineering effort. So luckily, this change is a one-line change. We use Terraform for our automated provisioning layer, and we were able to make that one-line change, and then developers didn't have to make any application changes, which was great. So it was a no-brainer for us to pursue this. >> Talk about the EC2 instances that leverage AMD based processors for EKS, you mentioned that earlier, what is that all about? What's the benefits, what's in it for you guys? >> Yeah, for sure. 
So essentially, the workloads that are running on these instance types are actual Carbon Black Cloud application. So, all the backend systems that support our customers. And so in that use case, we're, you know, we're spinning up all of our containers that are running our applications and essentially, that's our use case for those instance types. >> How did you come to use the AWS EC2 instances on the AMD? Did you have an evaluation process? Did you just go select it? I mean, take us through that migration aspect of it. >> Yeah, sure, yeah. So originally, we're looking across the board. How can we do better cost optimization, right? And that goes across every different AWS resource, but we targeted this one specifically. We worked alongside with their AWS TAMs and representatives to basically find out, "Hey, is this financially worth the effort?" And we did reach that conclusion with some analysis, basically targeting these instance types and doing some analysis on that cost optimization specifically. And it ended up, you know, being the right thing to target. >> What was the ease of use of the switch? Take us through that. Was it a heavy lift? Was it seamless? Take us through the impact, there, on the move over and what were the results of that? >> Yeah, so I mean, that's the greatest thing. Like I said before, I mean, we had to make just a single line change just to change that instance type in our config and then roll that out across our regions. We did slow roll that in order to make sure that those changes in our development environments didn't make any, you know, performance hits or we didn't run into any snags with the applications themselves. But yeah, I mean, that's the greatest part about the story from my perspective is the ease to migrate over and to switch to these instance types, and then you just immediately gain that cost optimization benefit. >> You know what I love about what your job is, platform architect, that word kind of had a lot of meaning even 10, 15 years ago, but now with the Cloud, it's almost like you're always finagling and managing and massaging and nurturing the infrastructure to enable it. More new things are coming online as well, more high level services. So you've got a fun job and it's always evolving. How do you stay on top of it? What's the impact been for your customers, too, as you start deploying some of these new instance capabilities? Take us through kind of a day in the life of what you do and then what's the impact of customers? >> Yeah, sure. So, you know, like you said, there's quite a bit now to look at. You know, you got to stay on top of different blogs and keep connected with your network to see what your other colleagues are doing across different companies. You know, you can go into conferences like AWS re:Invent, right, to keep on the cutting edge here. But yeah, that's essentially, you know, one of the key aspects is just trying to look at all the different aspects, all the new technologies that are coming out, making sure you're making the right choices there and trying to get the most bang for your buck while you're at it. >> What are some of the big factors that you see in cloud native as you start to look at what customers are doing? Obviously with Kubernetes, your starting to see that platform develop inside the industry as well as defacto, kind of orchestration layer. But now as customers start to look at it, they want to have more ease of use there, too. At the same time, they don't want to have to do a lot of front end work. 
They want to get instant benefits in the Cloud, obviously, whether it's from a security standpoint or just rolling out a modern application. Okay, so as having all this infrastructure under the covers, how do you look at that problem and how do you capture that opportunity? >> Yeah, and I think that's why we're seeing a movement here on platform teams. It's kind of a newer terminology, usually a band of developers and SREs come together and say, "Well, we've got a lot of different things to look at. We're onboarding applications to Kubernetes, and we need to make tools so that developers don't have to think much about the transition and the underlying platform." And so that's one of our success metrics on the platform engineering team is just to almost, you know, be non-existent, right? To just have everything flow through our systems and then have just a high ease of use to onboard the applications to the new platform. >> You know, it looks like you have some great success with the AMD based instances. Can I ask you a question? 'Cause I wanted figure this out. How do you identify an AMD based instance when you're making the selections? >> Yeah, sure. It's as easy as just the A after the name. So for us, it was the C5.4XL. And if you want the AMD one, it's just the C5A.4XL. So I guess technically, instead of a one line change, it's actually a one letter change. So, quite easy there. >> Yeah, it's almost like back in the old glory days of command line, one quick update. The customer aspect of this is also important, too. If you don't mind, while I got you here, what are some of the things that you're hearing from your customers, from a performance standpoint, that they're looking for? Obviously, the cost optimization is key, but as they look to deploy more power and more performance, what are some of the things that your customers are looking for from Carbon Black? >> Yeah, so I mean, we are a security company, but we're really a data company because we have, you know, 8,000 customers, we processed over a trillion events per day, we ingress over a hundred terabytes of data per day. And so, our customers need high level performance. And if we can't provide that with low latency, we're not successful. So that's why, you know, performance on the underlying systems that are running our applications is super critical. >> Yeah, you're looking at trailblazer over there. I mean, the work that you guys are doing with the data is amazing. And that's a big theme at re:Invent this year is that data is a huge part. We look at the success of the cloud growth on this, I call gen-two cloud, happening. This whole modern movement is all about how people handle the data at scale, 'cause cloud scales here and now you've got processing all that data, The trailblazing that's going on, there's like this new wave of, I almost called it first-generation trailblazers, but you guys are doing that. What advice would you have for other architects out there and kind of the mainstream enterprises who are like, "Hey, I want to take advantage of the path that you guys have plowed through." What's your advice? >> Yeah, I think one of the key things in a place where we've had a lot of success is creating standards, making sure that we're choosing technology wisely, and making sure that your company isn't building the same solution in silos. And you know, that's a huge pattern that I've seen in my career. And if you can negate that, you're going to be in a great place. 
So, you know, choose the right technology, container first, cloud native first, push forward, and then make sure that everybody's kind of on that same ship running in the same direction. >> Well, great case study on this AMD based instance migration. Was there any uplift in experience that you've seen on the switch and the performance? Can you just talk about that? What does it mean to upgrade? What benefits are you seeing on the performance side? >> Yeah, so I didn't hit on this yet and I really wanted to. Yeah, so upfront, the instance itself is 10% cheaper. However, we found out that we had to run far fewer instances because of that performance increase. So we ended up saving roughly 30% and we've continued to scale out. So at first, it was a couple of hundred instances. Now we're in the thousands and we're going to keep ramping up to over 10 thousand, tens of thousands. >> John: Let me get this right. So single line change, letter change, instance change. So you get not as many instances, and you save money, so you get cost optimization and higher performance. >> Yep. They say, if it's too good to be true, it's not. But in this case, it actually is. >> So why is it so good in your opinion? What did you discover? What was the big revelation that went down this path? Because that's a good value proposition. >> Yeah, for sure. I mean, so initially, we were just chasing that initial 10%, and then as we kind of pushed it forward, we're looking at the metrics, month to month costs, and we're actually seeing, well, as we kind of swap over from one instance type to another, we're actually paying less. And then once we fully swapped over, it took five or six months to get to the same amount of costs as we continued to scale upward. So it's been a great story. >> It is a great story. It's super nuanced, but it's super important to know these platform benefits. I got to ask you a personal question, if you don't mind. We love covering Cloud. We've been covering Amazon, it's our ninth year at re:Invent. Just love covering all the action and tech as this just total awesomeness environment. Cloud scale, innovation, capabilities, it's like surfing a big wave. But there's a bigger wave coming and we're seeing it now. I want to get your thoughts on this. As you look to the next big wave, beyond Cloud now, Cloud scale, data, new architectures rolling out with Edge, basically distributed computing at large scale, and tons of security challenges, right? How do you look at this next big wave coming? Are you staring at it saying, wow, this is going to be huge? And how do you ride that wave? What's your mindset and how do you look at that? >> Well, first of all, I'm extremely excited about it. Just the further this thing grows out, there's definitely more complexity, but just a whole slew of fun problems to solve. But when we look at these different problems and solving them at scale across multiple regions, it gets pretty exciting, right? So I can say one example of this is the security of our Cloud, not the security product, and we've developed automation for prevention and auto-remediation in our pipelines. It's been such a success story. And these types of technologies did not exist even a couple of years ago, and we've been able to take advantage of them. So, there's going to be a lot more of that where that came from. So, yeah. >> Michael, great work. And again, you're truly a trailblazer, and this is, again, you got to do it. 
You got to secure your own cloud and stay on the cutting edge and ride that wave. Congratulations on the cost optimization and the success with AMD based instances. Congratulations. Thanks. >> Thanks. >> Okay, this is theCUBE's coverage of AWS re:Invent 2021. I'm John Furrier, your host of theCUBE. Thanks for watching. (inspirational music)
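As a rough illustration of the economics Michael describes (not code from Carbon Black or AWS), the sketch below combines the two effects he calls out: a roughly 10% lower price for the AMD variant of the same instance size, and a smaller node count because each node handles more load. The hourly prices and node counts are placeholder assumptions; real figures would come from the AWS pricing pages and the cluster's own metrics, and the swap itself is the one-line instance-type change (C5.4XL to C5A.4XL, as he puts it) in the Terraform node-group definition.

```python
# Illustrative cost comparison for switching an EKS node group from an Intel
# instance type (e.g. c5.4xlarge) to its AMD counterpart (c5a.4xlarge).
# Hourly prices and node counts below are made-up placeholders, not published
# AWS rates; substitute real values before drawing any conclusions.

def monthly_cost(nodes: int, hourly_price: float,
                 hours_per_month: float = 730.0) -> float:
    """Steady-state monthly cost of a fixed-size node group."""
    return nodes * hourly_price * hours_per_month


if __name__ == "__main__":
    baseline = monthly_cost(nodes=300, hourly_price=0.68)   # hypothetical Intel fleet
    amd_fleet = monthly_cost(nodes=230, hourly_price=0.61)  # ~10% cheaper per node,
                                                            # fewer nodes needed
    savings = 1.0 - amd_fleet / baseline
    print(f"baseline ${baseline:,.0f}/mo, AMD ${amd_fleet:,.0f}/mo, "
          f"savings ~{savings:.0%}")
```

With these placeholder numbers the combined effect works out to roughly 30%, the same ballpark as the savings described in the interview; the per-instance discount alone would account for only about a third of that.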

Published Date : Nov 16 2021


George Hope, HPE, Terry Richardson and Peter Chan, AMD | HPE Discover 2021


 

>> From theCUBE studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE conversation. >> Welcome to theCUBE's coverage of HPE Discover 2021, I'm Lisa Martin. I've got three guests with me here. They're going to be talking about the partnership between HPE and AMD. Please welcome George Hope, worldwide head of partner sales at HPE, Terry Richardson, North American channel chief for AMD, and Peter Chan, the director of channel sales at AMD. Gentlemen, it's great to have you on theCUBE. >> Well, thanks for having us, Lisa. >> All right, >> we're excited to talk to you. We want to start by talking about this partnership. Terry, let's go ahead and start with you. HPE and AMD have been partners for a very long time, a very long history of collaboration. Talk to us about the partnership. >> HPE and AMD do have a rich history of collaboration, spanning back to the days of Opteron, and then when AMD brought the first generation AMD EPYC processor to market back in 2017, HPE was a foundational partner, providing valuable engineering and customer insights from day one. AMD has a long history of innovation that created a high performance CPU roadmap for valued partners like HPE to leverage in their workload optimized product portfolios, maximizing the synergies between the two companies. We've kicked off initiatives to grow the channel business together with workload focused solutions, and together we define the future. >> Thanks, Terry. George, let's get your perspective as worldwide head of partner sales at HPE. Talk to me about HPE's perspective of that AMD partnership. >> Yeah, with the introduction of the third generation AMD EPYC processors, we've doubled our AMD based ProLiant portfolio. We've even extended it to our Apollo systems. And with this we have achieved a number of world records across a variety of workloads and are seeing real world results. The third generation AMD EPYC processor delivers the strong performance, expandability and security our customers need as they continue their digital transformation. We can deliver better outcomes and lay a strong foundation for profitable partner growth. And we're incorporating unmatched workload optimization and intelligent automation with 360-degree security, and of course, with that as-a-service experience. >> The as-a-service experience is becoming even more critical, as is the security, as we've seen some of the groundbreaking numbers on data breaches in 2020 alone. Peter, I want to jump over to you now. One of the things that we see HPE and AMD talking about are solutions and workloads that are key areas of focus for both companies. Can you explain some of those key solutions and the value that they deliver for your customers? >> Absolutely. From computing to HPC to the cloud and everything in between, AMD and HPE have been focused on delivering not just servers but meaningful solutions that can solve customer challenges. For example, we've seen here in India the DL325 has been really powerful for customers that want to deploy VDI. HPE and AMD have worked together with ISV partners in the industry to tune the performance and ensure that the user experience is exceptional. This is just one example of many, of course; for instance, the DL345 for database, the DL365 for dense deployments, and the DL385 that has led the way in big data analytics.
The Apollo 6500 is breaking new paths in terms of AI and machine learning, quite a trending topic, and AMD and HPE are always in the news when it comes to groundbreaking HPC solutions. And, oh by the way, we're able to do this due to an unyielding commitment to the data center and long term, laser focused execution on the AMD roadmap. >> Excellent. Thanks, Peter. Let's talk about the channel expansion a little bit more, Terry, with you. You know, you and the team here, channel chief focused on the channel. What is AMD doing specifically to expand your channel capabilities and support all of the channel partners that work with AMD? >> Great question, Lisa. AMD is investing in so many areas around the channel. Let's start with digital transformation. Our channel partners consistently provided feedback that customers need to do more with less. Between AMD and HPE, we have solutions that increase capabilities and deliver faster time to value for the customer looking to do more with less. We have a tool on our website called the AMD EPYC server virtualization TCO estimation tool, so customers can visually see the savings. We also have lots of other resources, such as technical documentation and AMD Arena for training, that partners can take advantage of, aside from solution examples. AMD is investing in headcount internally and at our channel partners. I'm actually an example of the investment AMD is making to build out the channel. One more thing that I'll mention is the investment that, you know, Lisa Su and AMD are making to build out the ecosystem, from headcount to code development, and AMD is investing to have a more powerful user experience with our software partners in the ecosystem. From my discussions with our channel partners, they're glad to see AMD expanding our channel through the many initiatives and really bringing that ecosystem along. >> Here's another question for you as channel chief. I'm just curious, in the last year, speaking of what you talked about with digital transformation, we've seen so much acceleration of the adoption of that since the last 15 months has presented such challenges. Talk to me a little bit about some of the feedback from your channel partners about what you, AMD and HPE, are doing together to help those customers needing to deliver that fast time to value. >> You know, so really it's all about close collaboration. We work very closely with our counterparts at HPE just to make sure we understand partner and customer requirements, and then we work to craft solutions together, from engaging technically to collaborating on, you know, when products will be shipped and delivered, and also just what are we doing to identify the next key workloads and projects that are going to be engaged in together? So it's really brought the companies, I think, even closer together. >> That's excellent. A COVID catalyst, as I say; there's a lot of silver linings that we've seen, and it sounds like the collaboration, Terry, that you mentioned has become even stronger. George, I want to go to you. HPE has been around for a long time. My first job in tech was Hewlett Packard, by the way, many years ago. I won't mention how long, but talk to me about the partnership with AMD from HPE's perspective. Is this part of HPE's DNA? >> Absolutely. Partnering is our DNA. We've had 80 years of collaboration with an ever expanding ecosystem of partners that all play a key role in our go-to-market strategy.
We actually design and test our strategic initiatives in close collaboration with our partners so that we can meet their most pressing needs. We do that through things like partner advisory boards and things of that nature. But we have one of the most profitable partner programs in the industry, two to three times higher rebates than most of our competitors. And we continue to invest in the partner experience, in creating that expertise, so partners can stand out in a highly competitive market. And AMD is in direct alignment with that strategy. We have strong synergies and a common focus between the two companies. >> And I also imagine, George, one question in addition to that: there's tremendous value in it for your end user customers, especially those that have had to pivot so many times in the last year. Talk to me a little bit, George, about what you're seeing from the customer's perspective. >> Well, as Antonio Neri said a couple of years back, the world is going to be hybrid, and he was right. We continue to see that evolution, and we continue to deliver solutions around a hybrid digital world with GreenLake and the new wave of digital transformation that we refer to now as the age of insight. Customers want a cloud experience everywhere, and 70% of today's workloads can't easily be refactored for the public cloud, or they need to stay physically close to the data and other apps, at the emerging edge, or in colos, or in the data centers. So as a result, most organizations are forced to deal with the complexity of having two divergent operating models, and they're paying higher costs to maintain them both. With GreenLake, we provide one consistent operating model with visibility and control across public clouds and on-prem environments. And that applies to all workloads, you know, whether it's cloud native or non cloud native applications. We also have other benefits, like no cloud lock-in and no data egress charges, so you don't have to pay a steep price just to move workloads out of the public cloud. And then we're expanding collaboration opportunities within our partner ecosystem so that we can bring that cloud experience to a faster growing number of customers worldwide. So we've launched new initiatives in support of the core strategy as we accelerate our as-a-service vision and then work with partners to unlock better customer outcomes with GreenLake. And of course, HPE Compute, of which AMD is part, is the underlying value-added technology. >> Can you expand on some of those customer outcomes? As we look at, as I mentioned before, this very dynamic market in which we live, it's all about customer outcomes. What are some of those that, from a hybrid cloud environment perspective with GreenLake, you're helping customers achieve? >> Well, Lisa, GreenLake has come out with about 30 different offerings that package up some solutions. So you're not just buying infrastructure as a service. We have offerings like HPC as a service. We have offerings like VDI as a service, MLOps as a service. So we're packaging in technology, some ours and some not ours, into complete solutions. So that creates the outcome that the customers are looking for. >> Excellent. Thanks, George. And Peter, last question to you. Again, with the hybrid cloud environment being something that we're seeing more and more of, and the benefits that GreenLake is delivering through the channel, what's your perspective from the AMD side?
>> Absolutely, Lisa. So I mean, I think it's clear that with AMD based systems, customers get the benefit of performance, security and fast time to value, whether deployed on prem, in cloud or in a hybrid model. So please come try out our HPE systems based on AMD EPYC processors and see how we can accelerate and protect your applications. Thank you, Lisa. >> Excellent. Peter, George, Terry, thank you for joining me today. I'm sure there's a lot more that folks are going to be able to learn about what AMD and HPE are doing together on the virtual show floor. We appreciate your time. Thank you. >> For my guests, I'm Lisa Martin. You're watching theCUBE's coverage of HPE Discover 2021.

Published Date : Jun 16 2021


Kumaran Siva, AMD | IBM Think 2021


 

>> From around the globe, it's theCUBE, with digital coverage of IBM Think 2021, brought to you by IBM. >> Welcome back to theCUBE's coverage of IBM Think 2021. I'm John Furrier, the host of theCUBE, here for the virtual event with Kumaran Siva, who's here, corporate vice president with AMD, CVP of business development. Great to see you. Thanks for coming on theCUBE. >> Nice to be here. It's an honor to be here. >> You know, love AMD. Love the growth, love the processors. The EPYC 7003 series was just launched. It's out in the field. Give us a quick overview of the processor, how it's doing and how it's going to help us in the data center and the edge. >> For sure. No, this is an exciting time for AMD. This is probably one of the most exciting times, to be honest, and in my 20-plus years of working in this industry, I think I've never been this excited about a new product as I am about the third generation EPYC processor that we just announced. So the EPYC 7003, what we're calling the 7003 series processor, it's just a fantastic product. We not only have the fastest server processor in the world with the AMD EPYC 7763, but we also have the fastest CPU core, so the processor is the complete package, the complete socket, and then we also have the fastest core in the world with the EPYC 72F3 for frequency. So that one runs super fast on each core. And then we also have 64 cores in the CPU. So it's addressing both kind of what we call scale up and scale out. So it's overall just an enormous, enormous product line that I think, you know, will be amazing within IBM Cloud. The processor itself includes 256 megabytes of L3 cache, and you know, cache is super important for a variety of workloads, and with the large cache size we've seen scale in particular cloud applications, but across the board, you know, database, Java, all sorts of things. This processor is also based on the Zen 3 core, which is basically 19% more instructions per cycle relative to our Zen 2. That was the prior generation, the second generation EPYC part, which is called Rome. So this new CPU is actually quite a bit more capable. It also runs at a higher frequency with both the 64-core and the frequency optimized devices. And finally, we have what we call all-in features. So rather than kind of segment our product line and charge you for every little, you know, little thing you turn on or off, we actually have all-in features. That includes, really importantly, security, which is becoming a big, big theme and something that we're partnering with IBM very closely on, and then also things like 128 lanes of PCIe Gen 4 and memory interfaces that go up to four terabytes, so you can do these big, large in-memory databases. The PCIe interfaces give you lots and lots of storage capability, so all in all, super products, and we're super excited to be working with IBM on this. >> Well, let's get into some of the details on this impact, because obviously it's not just one place where these processors are going to live. You're seeing a distributed surface area, core to edge. Cloud and hybrid is now in play, it's pretty much standard now, multicloud on the horizon. Companies are going to start realizing, okay, I've got to put this to work and I want to get more insights out of the data and the applications that are evolving on this.
But you guys have seen some growth in the cloud with the EPYC processors. What can customers expect, and why are cloud providers choosing EPYC processors? >> You know, a big part of this is actually the fact that AMD delivers upon our roadmap. So we kind of do what we say and say what we do, and we delivered on time. So we actually announced, I think it was back in August of 2019, the second generation EPYC part, and then now in March, we are now in the third generation. Very much on schedule, very much in turn meeting expectations and the performance that we had told the industry and told our customers that we were going to meet back then. So it's a really super important piece, that our customers are now learning to expect performance, gen on gen, and on time from AMD, which is, I think, really a big part of our success. The second thing is, I think, you know, we are a leader in terms of the core density that we provide, and cloud in particular really values high density. So the 64 cores is absolutely unique today in the industry, and it has the ability to be offered both in bare metal, as we have been deployed in IBM Cloud, and also in virtualized type environments. So it has that ability to span a lot of different use cases. And you can, you know, run each core really fast, but then also have the scale out and then be able to take advantage of all 64 cores. Each core has two threads, up to 128 threads per socket. It's a super powerful CPU and it has a lot of value for the cloud provider. There are actually over 400 total instances, by the way, of AMD processors out there. And that's all the flavors, of course, not just the third generation, but still, it's starting to really proliferate. We're starting to see AMD, I think, all across the cloud. >> More cores, more threads, all goodness. I gotta ask you, you know, I interviewed Arvind, the CEO of IBM, before he was CEO at a conference, and you know, he's always been, I know him, he's always loved cloud, right? So, but he sees it a little bit differently than just copying the clouds. He sees it as we see it unfolding here, I think: hybrid. And so I can almost see the playbook evolving. You know, Red Hat has an operating system, cloud and edge is a distributed system, it's got that vibe of a system architecture, almost got processors everywhere. Could you give us a sense of an overview of the work you're doing with IBM Cloud and what AMD's role is there? And I'm curious, could you share for the folks watching too?
>> For sure. For sure. By the way, IBM Cloud is a fantastic partner to work with. So, first off, you talked about the hybrid. Hybrid cloud is a really important thing for us and that's an area that we are definitely focused in on. But in terms of our specific joint partnerships, we do have an announcement from last year. So it's somewhat public, but we are working together on AI, where IBM is an undisputed leader with Watson and some of the technologies that you guys bring there. So we're bringing together, you know, kind of this real hardware goodness with IBM's prowess and know-how on the AI side. In addition, IBM is also known for, you know, really enterprise grade security, and working with some of the key sectors that need and value reliability, security, availability in those areas. And so in that partnership, we have quite a strong relationship around working together on security and doing confidential compute. >> Tell us more about the confidential computing. This is a joint development agreement, or is it a joint venture, a joint development agreement? Give us more detail on this. Tell us more about this announcement with IBM Cloud and AMD on confidential computing. >> So that's right. So, you know, there are some key pillars to this. One of them is being able to work together to define open standards, open architecture, jointly with IBM, and also pulling in some assets in terms of Red Hat, to be able to work together and pull together confidential compute. Some key ideas here: we can work within a hybrid cloud, we can work within the IBM Cloud, and be able to provide our joint customers and end customers with unprecedented security and reliability in the cloud. >> What's the future of processors? I mean, what should people think when they expect to see innovation? Certainly data centers are evolving with core features to work with a hybrid operating model in the cloud. People are getting that edge relationship, basically the data center is a large edge, but now you've got the other edges, we've got industrial edges, you've got consumers, people, wearables. You're going to have more and more devices, big and small. What does the roadmap look like? How do you describe the future of AMD in the IBM world? >> I think our IBM and AMD partnership is bright, the future is bright for sure, and I think there's a lot of key pieces there. You know, I think IBM brings a lot of value in terms of being able to take on those upper layers of software and the full stack, so IBM's strength has really been, you know, as a systems company and as a software company, right? So combining that with the AMD silicon and CPU devices really is a great combination. I see, you know, growth in, obviously, deploying kind of this scale out model where we have these very large core count CPUs, and I see that trend continuing for sure. You know, I think that is sort of the way of the future, that you want cloud-native applications that can scale across multiple cores within the socket and then across clusters of CPUs within the data center, and IBM is in a really good position to take advantage of that, to drive that within the cloud. That, in combination with IBM's presence on prem, and so that's where the hybrid cloud value proposition comes in. And so we actually see ourselves, you know, playing on both sides, so we do have a very strong presence now, and increasingly so on premises as well. And we partner, we're very interested in working with IBM on premises with some of the key customers, and then offering that hybrid connectivity onto the IBM Cloud as well. >> IBM and AMD, great partnership. Great for clarifying and sharing that insight, Kumaran, I appreciate it. Thanks for coming on theCUBE. I do want to ask you while I got you here, kind of a curveball question if you don't mind. As you see hybrid cloud developing, one of the big trends is this ecosystem play, right?
So you're seeing connections between IBM and their partners being much more integrated. So cloud has been a big API kind of model, you connect people through APIs. There's a big trend that we're seeing, and we're seeing this really in our reporting on SiliconANGLE, the rise of a cloud service provider within these ecosystems, where, hey, I could build on top of IBM Cloud and build a great business. And as I do that, I might want to look at an architecture like AMD. How does that fit into your view, as someone doing business development over at AMD? I mean, because people are building on top of these ecosystems, they're building their own clouds on top of cloud. You're seeing data clouds, you're seeing these kinds of clouds, specialty clouds. So I mean, we could have a CUBE cloud on top of IBM maybe someday. So I might want to build out a whole, I might be a cloud. So that's more processors needed for you. So how do you see this enablement? Because IBM is going to want to do that. It's kind of like, I'm kind of connecting the dots here in real time, but what's your take on that? What's your reaction? >> I think that's right, and I think AMD is in a pretty good position with IBM to be able to enable that. We do have some very significant OSV partnerships, a lot of which are leveraged into IBM, such as Red Hat of course, but also like VMware and Nutanix. These OSV partners provide kind of the base level infrastructure that we can then build upon and then have that API, and be able to build the multicloud environments that you're talking about. And I think that's right. I think that is one of the, you know, kind of future trends that we will see, you know, services that are offered on top of IBM Cloud that take advantage of the capabilities of the platform that come with it. And you know, the bare metal offerings that IBM offers on their cloud are also quite unique and very high performance. And so this actually gives, I think, those kinds of clouds the unique ability to go in and take advantage of the AMD hardware at a performance level, and to take advantage of that infrastructure better than they could in other cloud environments. I think that's actually very key, and one of the features of the IBM platform that differentiates it. >> So much headroom there, Kumaran, really appreciate you sharing that. I think it's a great opportunity. As I say, if you want to build and compete, finally, there's the white space with no competition, or be better than the competition. So as they say in business, thank you for coming on and sharing. Great future ahead for all builders out there. Thanks for coming on theCUBE. >> Thanks, thank you very much. >> Okay, IBM Think CUBE coverage here. I'm John Furrier, your host. Thanks for watching.
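The confidential computing discussion above rests on AMD's memory-encryption features (SME/SEV) being present and enabled on the host. As a minimal, Linux-only sketch (not part of the IBM/AMD announcement), the probe below simply reads /proc/cpuinfo to see whether the kernel reports an EPYC part and those feature flags; flag availability varies with CPU generation, BIOS settings and kernel version, so treat it as a starting point rather than a definitive capability check.

```python
# Minimal Linux-only sketch: does /proc/cpuinfo report an AMD EPYC part and
# the SME/SEV memory-encryption flags discussed above? A missing flag here
# does not prove the feature is absent; BIOS and kernel settings matter.

from pathlib import Path

def main() -> None:
    info = Path("/proc/cpuinfo").read_text()

    models = {line.split(":", 1)[1].strip()
              for line in info.splitlines()
              if line.startswith("model name")}
    flags = set()
    for line in info.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

    print("CPU model(s):", ", ".join(sorted(models)) or "unknown")
    print("Reports EPYC:", any("EPYC" in m for m in models))
    for feature in ("sme", "sev", "sev_es"):
        print(f"  {feature}: {'reported' if feature in flags else 'not reported'}")

if __name__ == "__main__":
    main()
```

On a host with SEV enabled these flags would generally show up as reported; on other hardware the script simply notes their absence.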

Published Date : May 12 2021


Deania Davidson, Dell Technologies & Dave Lincoln, Dell Technologies | MWC Barcelona 2023


 

>> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Hey everyone and welcome back to Barcelona, Spain, it's theCUBE. We are live at MWC 23. This is day two of our coverage, we're giving you four days of coverage, but you already know that because you were here yesterday. Lisa Martin with Dave Nicholson. Dave this show is massive. I was walking in this morning and almost getting claustrophobic with the 80,000 people that are joining us. There is, seems to be at MWC 23 more interest in enterprise-class technology than we've ever seen before. What are some of the things that you've observed with that regard? >> Well I've observed a lot of people racing to the highest level messaging about how wonderful it is to have the kiss of a breeze on your cheek, and to feel the flowing wheat. (laughing) I want to hear about the actual things that make this stuff possible. >> Right. >> So I think we have a couple of guests here who can help us start to go down that path of actually understanding the real cool stuff that's behind the scenes. >> And absolutely we got some cool stuff. We've got two guests from Dell. Dave Lincoln is here, the VP of Networking and Emerging the Server Solutions, and Deania Davidson, Director Edge Server Product Planning and Management at Dell. So great to have you. >> Thank you. >> Two Daves, and a Davidson. >> (indistinct) >> Just me who stands alone here. (laughing) So guys talk about, Dave, we'll start with you the newest generation of PowerEdge servers. What's new? Why is it so exciting? What challenges for telecom operators is it solving? >> Yeah, well so this is actually Dell's largest server launch ever. It's the most expansive, which is notable because of, we have a pretty significant portfolio. We're very proud of our core mainstream portfolio. But really since the Supercompute in Dallas in November, that we started a rolling thunder of launches. MWC being part of that leading up to DTW here in May, where we're actually going to be announcing big investments in those parts of the market that are the growth segments of server. Specifically AIML, where we in, to address that. We're investing heavy in our XE series which we, as I said, we announced at Supercompute in November. And then we have to address the CSP segment, a big investment around the HS series which we just announced, and then lastly, the edge telecom segment which we're, we had the biggest investment, biggest announce in portfolio launch with XR series. >> Deania, lets dig into that. >> Yeah. >> Where we see the growth coming from you mentioned telecom CSPs with the edge. What are some of the growth opportunities there that organizations need Dell's help with to manage, so that they can deliver what they're demanding and user is wanting? >> The biggest areas being obviously, in addition the telecom has been the biggest one, but the other areas too we're seeing is in retail and manufacturing as well. And, so internally, I mean we're going to be focused on hardware, but we also have a solutions team who are working with us to build the solutions focused on retail, and edge and telecom as well on top of the servers that we'll talk about shortly. >> What are some of the biggest challenges that retailers and manufacturers are facing? And during the pandemic retailers, those that were successful pivoted very quickly to curbside delivery. >> Deania: Yeah. 
>> Those that didn't survive weren't able to do that digitally. >> Deania: Yeah. >> But we're seeing such demand. >> Yeah. >> At the retail edge. On the consumer side we want to get whatever we want right now. >> Yes. >> It has to be delivered, it has to be personalized. Talk a little bit more about some of the challenges there, within those two verticals and how Dell is helping to address those with the new server technologies. >> For retail, I think there's couple of things, the one is like in the fast food area. So obviously through COVID a lot of people got familiar and comfortable with driving through. >> Lisa: Yeah. >> And so there's probably a certain fast food restaurant everyone's pretty familiar with, they're pretty efficient in that, and so there are other customers who are trying to replicate that, and so how do we help them do that all, from a technology perspective. From a retail, it's one of the pickup and the online experience, but when you go into a store, I don't know about you but I go to Target, and I'm looking for something and I have kids who are kind of distracting you. Its like where is this one thing, and so I pull up the Target App for example, and it tells me where its at, right. And then obviously, stores want to make more money, so like hey, since you picked this thing, there are these things around you. So things like that is what we're having conversations with customers about. >> It's so interesting because the demand is there. >> Yeah, it is. >> And its not going to go anywhere. >> No. >> And it's certainly not going to be dialed down. We're not going to want less stuff, less often. >> Yeah (giggles) >> And as typical consumers, we don't necessarily make the association between what we're seeing in the palm of our hand on a mobile device. >> Deania: Right. >> And the infrastructure that's actually supporting all of it. >> Deania: Right. >> People hear the term Cloud and they think cloud-phone mystery. >> Yeah, magic just happens. >> Yeah. >> Yeah. >> But in fact, in order to support the things that we want to be able to do. >> Yeah. >> On the move, you have to optimize the server hardware. >> Deania: Yes. >> In certain ways. What does that mean exactly? When you say that its optimized, what are the sorts of decisions that you make when you're building? I think of this in the terms of Lego bricks. >> Yes, yeah >> Put together. What are some of the decisions that you make? >> So there were few key things that we really had to think about in terms of what was different from the Data center, which obviously supports the cloud environment, but it was all about how do we get closer to the customer right? How do we get things really fast and how do we compute that information really quickly. So for us, it's things like size. All right, so our server is going to weigh one of them is the size of a shoe box and (giggles), we have a picture with Dave. >> Dave: It's true. >> Took off his shoe. >> Its actually, its actually as big as a shoe. (crowd chuckles) >> It is. >> It is. >> To be fair, its a pretty big shoe. >> True, true. >> It is, but its small in relative to the old big servers that you see. >> I see what you're doing, you find a guy with a size 12, (crowd giggles) >> Yeah. >> Its the size of your shoe. >> Yeah. >> Okay. 
>> Its literally the size of a shoe, and that's our smallest server and its the smallest one in the portfolio, its the XR 4000, and so we've actually crammed a lot of technology in there going with the Intel ZRT processors for example to get into that compute power. The XR 8000 which you'll be hearing a lot more about shortly with our next guest is one I think from a telco perspective is our flagship product, and its size was a big thing there too. Ruggedization so its like (indistinct) certification, so it can actually operate continuously in negative 5 to 55 C, which for customers, or they need that range of temperature operation, flexibility was a big thing too. In meaning that, there are some customers who wanted to have one system in different areas of deployment. So can I take this one system and configure it one way, take that same system, configure another way and have it here. So flexibility was really key for us as well, and so we'll actually be seeing that in the next segment coming. >> I think one of, some of the common things you're hearing from this is our focus on innovation, purpose build servers, so yes our times, you know economic situation like in itself is tough yeah. But far from receding we've doubled down on investment and you've seen that with the products that we are launching here, and we will be launching in the years to come. >> I imagine there's a pretty sizeable day impact to the total adjustable market for PowerEdge based on the launch what you're doing, its going to be a tam, a good size tam expansion. >> Yeah, absolutely. Depending on how you look at it, its roughly we add about $30 Billion of adjustable tam between the three purposeful series that we've launched, XE, HS and XR. >> Can you comment on, I know Dell and customers are like this. Talk about, I'd love to get both of your perspective, I'm sure you have a favorite customer stories. But talk about the involvement of the customer in the generation, and the evolution of PowerEdge. Where are they in that process? What kind of feedback do they deliver? >> Well, I mean, just to start, one thing that is essential Cortana of Dell period, is it all is about the customer. All of it, everything that we do is about the customer, and so there is a big focus at our level, from on high to get out there and talk with customers, and actually we have a pretty good story around XR8000 which is call it our flagship of the XR line that we've just announced, and because of this deep customer intimacy, there was a last minute kind of architectural design change. >> Hm-mm. >> Which actually would have been, come to find out it would have been sort of a fatal flaw for deployment. So we corrected that because of this tight intimacy with our customers. This was in two Thanksgiving ago about and, so anyways it's super cool and the fact that we were able to make a change so late in development cycle, that's a testament to a lot of the speed and, speed of innovation that we're driving, so anyway that was that's one, just case of one example. >> Hm-mm. >> Let talk about AI, we can't go to any trade show without talking about AI, the big thing right now is ChatGPT. >> Yeah. >> I was using it the other day, it's so interesting. But, the growing demand for AI, talk about how its driving the evolution of the server so that more AI use cases can become more (indistinct). 
>> In the edge space primarily, we actually have another product, so I guess what you'll notice in the XR line itself, because there are so many different use cases and technologies that support the different use cases, we actually have a range of form factors. So we have a really small one, I guess I would say 350 mm, the size of a shoe box, you know, Dave's shoe box. (crowd chuckles) And then we also have, at the other end, a 472, so still small, but a little bit bigger. But we did recognize obviously AI was coming up, and so that is our XR7620 platform and that does support two GPUs, right, so, like for edge inferencing, making sure that we have the capability to support customers in that too. But also in the small one, we do also have a GPU capability there, that also helps in those other use cases as well. So we've built the platforms, even though they're small, to be able to handle the GPU power for customers. >> So nice tight package, a lot of power there. >> Yes. >> Beside as we've all clearly demonstrated the size of Dave's shoe. (crowd chuckles) Dave, talk about Dell's long standing commitment to really helping to rapidly evolve the server market. >> Dave: Yeah. >> It's a pivotal player there. >> Well, like I was saying, we see innovation, I mean, this is, to us it's a race to the top. You talked about racing and messaging that sort of thing, when you opened up the show here, but we see this as a race to the top, having worked at other server companies where maybe it's a little bit different, maybe more of a race to the bottom sort of approach. That's what I love about being at Dell. This is very much, we understand that innovation is what's going to deliver the most value for our customers. So whether it's some of the first to market, first of its kind sort of innovation that you find in the XR4000, or XR8000, or any of our XE line, we know that at the end of the day, that is what is going to propel Dell, do the best for our customers and thereby do the best for us. To be honest, it's a little bit surprising walking by some of our competitors' booths, there's been like a dearth of zero, like no, like it's almost like you wouldn't even know that there was a big launch here, right? >> Yeah. >> Or is it just me? >> No. >> It was a while, we've been walking around and yet we've had, and it's sort of, maybe I should take this as flattery, but a lot of our competitors have been coming by our booth every day actually. >> Deania: Yeah, everyday. >> They came by multiple times yesterday, they came by multiple times today, they're taking pictures of our stuff, I kind of want to just send 'em a sample. >> Lisa: Or your shoe. >> Right? Or just maybe my shoe, right? But anyway, so I suppose I should take it as an honor. >> Deania: Yeah. >> And conversely when we've walked over there we actually get in back (indistinct), maybe I need a high Dell (indistinct). (crowd chuckles) >> We just had that experience, yeah. >> It's kind of funny but. >> It's a good position to be in. >> Yeah. >> Yes. >> You talked about the involvement of the customers, talk a bit more about Dell's ecosystem is also massive, it's part of what makes Dell, Dell. >> Wait did you say ego-system? (laughing) After David just. >> You caught that? Darn it! Talk about the influence or the part of the ecosystem and also some of the feedback from the partners as you've been rapidly evolving the server market and clearly your competitors are taking notice. >> Yeah, sorry. >> Deania: That's okay. >> Dave: you want to take that? 
>> I mean I would say generally, one of the things that Dell prides itself on is being able to deliver the world's best innovation into the hands of our customers, faster and better than any other, the optimal solution. So whether it's, you know, working with our great partners like Intel, AMD, Broadcom, these sorts of folks. That is, at the end of the day, that is our core mantra, again it's (indistinct), doing the best, you know, what's best for the customers. And we want to bring the world's best innovation from our technology partners, get it into the hands of our partners, you know, faster and better than any other option out there. >> It's a satisfying business for all of us to be in, because to your point, I made a joke about the high-level messaging. But really, that's what it comes down to. >> Lisa: Yeah. >> We do these things, we feel like sometimes we're toiling in obscurity, working with the hardware. But what it delivers. >> Deania: Hm-mm. >> The experiences. >> Dave: Absolutely. >> Deania: Yes. >> Are truly meaningful. So it's a fun. >> Absolutely. >> It's a really fun thing to be a part of. >> It is. >> Absolutely. >> Yeah. Is there a favorite customer story that you have that really articulates the value of what Dell is doing, with PowerEdge, at the edge? >> It's probably one I can't particularly name obviously, but, it was, they have different environments, so, in one case there's like on flights or on sea vessels, and just being able to use the same box in those different environments is really cool. And they really appreciate having the small compact, where they can just take the server with them and go somewhere. That was really cool to me in terms of how they were using the products that we built for them. >> I have one that's kind of funny. It's around the XR8000. Again a customer I won't name, but they're so proud of it, they almost kind of feel like they co-defined it with us, they want to be on the patent with us so, anyways, that's. >> Deania: (indistinct). >> That's what they went in for, yeah. >> So it shows the strength of the partnership there. >> Yeah, exactly. >> Of course, the ecosystem of partners, customers, CSVs, telecom Edge. Guys, thank you so much for joining us today. >> Thank you. >> Thank you. >> Sharing what's new with the PowerEdge. We can't wait to, we're just, we're cracking open the box, we saw the shoe. (laughing) And we're going to be dealing a little bit more later. So thank you. >> We're going to be able to touch something soon? >> Yes, yes. >> Yeah. >> In a couple of minutes? >> Next segment I think. >> All right! >> Thanks for setting the table for that, guys. We really appreciate your time. >> Thank you for having us. >> Thank you. >> Alright, our pleasure. >> For our guests and for Dave Nicholson, I'm Lisa Martin. You're watching theCUBE, the leader in live tech coverage, live in Barcelona, Spain, at MWC 23. Don't go anywhere, we will be right back with our next guests. (gentle music)

Published Date : Feb 28 2023


Breaking Analysis: MWC 2023 goes beyond consumer & deep into enterprise tech


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> While never really meant to be a consumer tech event, the rapid ascendancy of smartphones sucked much of the air out of Mobile World Congress over the years, now MWC. And while the device manufacturers continue to have a major presence at the show, the maturity of intelligent devices, longer life cycles, and the disaggregation of the network stack, have put enterprise technologies front and center in the telco business. Semiconductor manufacturers, network equipment players, infrastructure companies, cloud vendors, software providers, and a spate of startups are eyeing the trillion dollar plus communications industry as one of the next big things to watch this decade. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we bring you part two of our ongoing coverage of MWC '23, with some new data on enterprise players specifically in large telco environments, a brief glimpse at some of the pre-announcement news and corresponding themes ahead of MWC, and some of the key announcement areas we'll be watching at the show on theCUBE. Now, last week we shared some ETR data that showed how traditional enterprise tech players were performing, specifically within the telecoms vertical. Here's a new look at that data from ETR, which isolates the same companies, but cuts the data for what ETR calls large telco. The N in this cut is 196, down from 288 last week when we included all company sizes in the dataset. Now remember the two dimensions here, on the y-axis is net score, or spending momentum, and on the x-axis is pervasiveness in the data set. The table insert in the upper left informs how the dots and companies are plotted, and that red dotted line, the horizontal line at 40%, that indicates a highly elevated net score. Now while the data are not dramatically different in terms of relative positioning, there are a couple of changes at the margin. So just going down the list and focusing on net score. Azure is comparable, but slightly lower in this sector in the large telco than it was overall. Google Cloud comes in at number two, and basically swapped places with AWS, which drops slightly in the large telco relative to overall telco. Snowflake is also slightly down by one percentage point, but maintains its position. Remember Snowflake, overall, its net score is much, much higher when measuring across all verticals. Snowflake comes down in telco, and relative to overall, a little bit down in large telco, but it's making some moves to attack this market that we'll talk about in a moment. Next are Red Hat OpenStack and Databricks. About the same in large tech telco as they were an overall telco. Then there's Dell next that has a big presence at MWC and is getting serious about driving 16G adoption, and new servers, and edge servers, and other partnerships. Cisco and Red Hat OpenShift basically swapped spots when moving from all telco to large telco, as Cisco drops and Red Hat bumps up a bit. And VMware dropped about four percentage points in large telco. Accenture moved up dramatically, about nine percentage points in big telco, large telco relative to all telco. HPE dropped a couple of percentage points. Oracle stayed about the same. And IBM surprisingly dropped by about five points. 
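For readers who want to visualize the kind of chart being described, here is a minimal sketch of how a net score versus pervasiveness plot with a 40% threshold line could be drawn with matplotlib. The vendor labels and coordinates below are placeholders for illustration only, not ETR's actual survey figures.

```python
# Illustrative only: plot "net score" (spending momentum) against
# "pervasiveness in the data set", with the dotted line at 40% that marks a
# highly elevated net score. Replace the placeholder values with real ETR
# survey data; these numbers are NOT actual results.
import matplotlib.pyplot as plt

vendors = {
    "Vendor A": (0.30, 0.55),   # (pervasiveness, net score) -- placeholders
    "Vendor B": (0.22, 0.46),
    "Vendor C": (0.18, 0.33),
    "Vendor D": (0.12, 0.21),
}

fig, ax = plt.subplots()
for name, (pervasiveness, net_score) in vendors.items():
    ax.scatter(pervasiveness, net_score)
    ax.annotate(name, (pervasiveness, net_score),
                textcoords="offset points", xytext=(5, 5))

ax.axhline(0.40, color="red", linestyle=":")   # "highly elevated" threshold
ax.set_xlabel("Pervasiveness in the data set (overlap)")
ax.set_ylabel("Net score (spending momentum)")
ax.set_title("Large telco cut -- placeholder data")
plt.show()
```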
So look, I understand not a ton of change in terms of spending momentum in the large sector versus telco overall, but some deltas. The bottom line for enterprise players is one, they're just getting started in this new disruption journey that they're on as the stack disaggregates. Two, all these players have experience in delivering horizontal solutions, but now working with partners and identifying big problems to be solved, and three, many of these companies are generally not the fastest moving firms relative to smaller disruptive disruptors. Now, cloud has been an exception in fairness. But the good news for the legacy infrastructure and IT companies is that the telco transformation and the 5G buildout is going to take years. So it's moving at a pace that is very favorable to many of these companies. Okay, so looking at just some of the pre-announcement highlights that have hit the wire this week, I want to give you a glimpse of the diversity of innovation that is occurring in the telecommunication space. You got semiconductor manufacturers, device makers, network equipment players, carriers, cloud vendors, enterprise tech companies, software companies, startups. Now we've included, you'll see in this list, we've included OpeRAN, that logo, because there's so much buzz around the topic and we're going to come back to that. But suffice it to say, there's no way we can cover all the announcements from the 2000 plus exhibitors at the show. So we're going to cherry pick here and make a few call outs. Hewlett Packard Enterprise announced an acquisition of an Italian private cellular network company called AthoNet. Zeus Kerravala wrote about it on SiliconANGLE if you want more details. Now interestingly, HPE has a partnership with Solana, which also does private 5G. But according to Zeus, Solona is more of an out-of-the-box solution, whereas AthoNet is designed for the core and requires more integration. And as you'll see in a moment, there's going to be a lot of talk at the show about private network. There's going to be a lot of news there from other competitors, and we're going to be watching that closely. And while many are concerned about the P5G, private 5G, encroaching on wifi, Kerravala doesn't see it that way. Rather, he feels that these private networks are really designed for more industrial, and you know mission critical environments, like factories, and warehouses that are run by robots, et cetera. 'Cause these can justify the increased expense of private networks. Whereas wifi remains a very low cost and flexible option for, you know, whatever offices and homes. Now, over to Dell. Dell announced its intent to go hard after opening up the telco network with the announcement that in the second half of this year it's going to begin shipping its infrastructure blocks for Red Hat. Remember it's like kind of the converged infrastructure for telco with a more open ecosystem and sort of more flexible, you know, more mature engineered system. Dell has also announced a range of PowerEdge servers for a variety of use cases. A big wide line bringing forth its 16G portfolio and aiming squarely at the telco space. Dell also announced, here we go, a private wireless offering with airspan, and Expedo, and a solution with AthoNet, the company HPE announced it was purchasing. So I guess Dell and HPE are now partnering up in the private wireless space, and yes, hell is freezing over folks. We'll see where that relationship goes in the mid- to long-term. 
Dell also announced new lab and certification capabilities, which we said last week was going to be critical for the further adoption of open ecosystem technology. So props to Dell for, you know, putting real emphasis and investment in that. AWS also made a number of announcements in this space including private wireless solutions and associated managed services. AWS named Deutsche Telekom, Orange, T-Mobile, Telefonica, and some others as partners. And AWS announced the stepped up partnership, specifically with T-Mobile, to bring AWS services to T-Mobile's network portfolio. Snowflake, back to Snowflake, announced its telecom data cloud. Remember we showed the data earlier, it's Snowflake not as strong in the telco sector, but they're continuing to move toward this go-to market alignment within key industries, realigning their go-to market by vertical. It also announced that AT&T, and a number of other partners, are collaborating to break down data silos specifically in telco. Look, essentially, this is Snowflake taking its core value prop to the telco vertical and forming key partnerships that resonate in the space. So think simplification, breaking down silos, data sharing, eventually data monetization. Samsung previewed its future capability to allow smartphones to access satellite services, something Apple has previously done. AMD, Intel, Marvell, Qualcomm, are all in the act, all the semiconductor players. Qualcomm for example, announced along with Telefonica, and Erickson, a 5G millimeter network that will be showcased in Spain at the event this coming week using Qualcomm Snapdragon chipset platform, based on none other than Arm technology. Of course, Arm we said is going to dominate the edge, and is is clearly doing so. It's got the volume advantage over, you know, traditional Intel, you know, X86 architectures. And it's no surprise that Microsoft is touting its open AI relationship. You're going to hear a lot of AI talk at this conference as is AI is now, you know, is the now topic. All right, we could go on and on and on. There's just so much going on at Mobile World Congress or MWC, that we just wanted to give you a glimpse of some of the highlights that we've been watching. Which brings us to the key topics and issues that we'll be exploring at MWC next week. We touched on some of this last week. A big topic of conversation will of course be, you know, 5G. Is it ever going to become real? Is it, is anybody ever going to make money at 5G? There's so much excitement around and anticipation around 5G. It has not lived up to the hype, but that's because the rollout, as we've previous reported, is going to take years. And part of that rollout is going to rely on the disaggregation of the hardened telco stack, as we reported last week and in previous Breaking Analysis episodes. OpenRAN is a big component of that evolution. You know, as our RAN intelligent controllers, RICs, which essentially the brain of OpenRAN, if you will. Now as we build out 5G networks at massive scale and accommodate unprecedented volumes of data and apply compute-hungry AI to all this data, the issue of energy efficiency is going to be front and center. It has to be. Not only is it a, you know, hot political issue, the reality is that improving power efficiency is compulsory or the whole vision of telco's future is going to come crashing down. So chip manufacturers, equipment makers, cloud providers, everybody is going to be doubling down and clicking on this topic. Let's talk about AI. 
AI as we said, it is the hot topic right now, but it is happening not only in consumer, with things like ChatGPT. And think about the theme of this Breaking Analysis in the enterprise, AI in the enterprise cannot be ChatGPT. It cannot be error prone the way ChatGPT is. It has to be clean, reliable, governed, accurate. It's got to be ethical. It's got to be trusted. Okay, we're going to have Zeus Kerravala on the show next week and definitely want to get his take on private networks and how they're going to impact wifi. You know, will private networks cannibalize wifi? If not, why not? He wrote about this again on SiliconANGLE if you want more details, and we're going to unpack that on theCUBE this week. And finally, as always we'll be following the data flows to understand where and how telcos, cloud players, startups, software companies, disruptors, legacy companies, end customers, how are they going to make money from new data opportunities? 'Cause we often say in theCUBE, don't ever bet against data. All right, that's a wrap for today. Remember theCUBE is going to be on location at MWC 2023 next week. We got a great set. We're in the walkway in between halls four and five, right in Congress Square, stand CS-60. Look for us, we got a full schedule. If you got a great story or you have news, stop by. We're going to try to get you on the program. I'll be there with Lisa Martin, co-hosting, David Nicholson as well, and the entire CUBE crew, so don't forget to come by and see us. I want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman, as well, in our Boston studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE.com. He does some great editing. Thank you. All right, remember all these episodes they are available as podcasts wherever you listen. All you got to do is search Breaking Analysis podcasts. I publish each week on Wikibon.com and SiliconANGLE.com. All the video content is available on demand at theCUBE.net, or you can email me directly if you want to get in touch David.Vellante@SiliconANGLE.com or DM me @DVellante, or comment on our LinkedIn posts. And please do check out ETR.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Mobile World Congress '23, MWC '23, or next time on Breaking Analysis. (bright music)

Published Date : Feb 25 2023


Breaking Analysis: Google's Point of View on Confidential Computing


 

>> From theCUBE studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data and isolating data from apps in a fenced off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology in a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show, but before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing. I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year as shown here. And this data is pretty much across the board by industry, by region, by size of company. I mean we dug into it and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data and transit have long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. Arm, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images updates different services and the entire code flow aren't directly addressed by memory encryption, rather to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign for memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. There has been a lack of standardization and interoperability between different confidential computing approaches. But the confidential computing consortium was established in 2019 ostensibly to accelerate the market and influence standards. 
Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS because the base technology defined by the the consortium is seen as limiting by AWS. This is my guess, not AWS's words, and but I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with this Annapurna acquisition. This was way ahead with Arm integration and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the confidential computing consortium is Google, along with many high profile names including Arm, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic, Nelly Porter is head of product for GCP confidential computing and encryption, and Dr. Patricia Florissi is the technical director for the office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm owning a lot of interesting activities in Google and again security or infrastructure securities that I usually own. And we are talking about encryption and when encryption and confidential computing is a part of portfolio in additional areas that I contribute together with my team to Google and our customers is secure software supply chain. Because you need to trust your software. Is it operate in your confidential environment to have end-to-end story about if you believe that your software and your environment doing what you expect, it's my role. >> Got it. Okay. Patricia? >> Well, I am a technical director in the office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologists from large corporations, institutions and a lot of success, we're startups as well. And we have two main goals. First, we walk side by side with some of our largest, more strategic or most strategical customers and we help them solve complex engineering technical problems. And second, we are devise Google and Google Cloud engineering and product management and tech on there, on emerging trends and technologies to guide the trajectory of our business. We are unique group, I think, because we have created this collaborative culture with our customers. And within OCTO, I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that both of you. Let's get into it. So Nelly, what is confidential computing? From Google's perspective, how do you define it? >> Confidential computing is a tool and it's still one of the tools in our toolbox. And confidential computing is a way how we would help our customers to complete this very interesting end-to-end lifecycle of the data. And when customers bring in the data to cloud and want to protect it as they ingest it to the cloud, they protect it at rest when they store data in the cloud. But what was missing for many, many years is ability for us to continue protecting data and workloads of our customers when they running them. 
And again, because data is not brought to cloud to have huge graveyard, we need to ensure that this data is actually indexed. Again, there is some insights driven and drawn from this data. You have to process this data and confidential computing here to help. Now we have end to end protection of our customer's data when they bring the workloads and data to cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain, do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential commuting matters, because at the end of the day, it reduces more and more the customer's thresh boundaries and the attack surface. That's about reducing that periphery, the boundary in which the customer needs to mind about trust and safety. And in a way, is a natural progression that you're using encryption to secure and protect the data. In the same way that we are encrypting data in transit and at rest, now we are also encrypting data while in use. And among other beneficials, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry, even though it's highly focused on, I wouldn't say highly focused, but very beneficial for highly regulated industries. It applies to all of industries. And if you look at financing for example, where bankers are trying to detect fraud, and specifically double finance where you are, a customer is actually trying to get a finance on an asset, let's say a boat or a house, and then it goes to another bank and gets another finance on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting. And I want to understand that a little bit more but I'm going to push you a little bit on this, Nelly, if I can because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this upfront, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption and it doesn't address many other problems. It is over hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine, with this statement, but the most importantly is we mixing multiple concepts, I guess. And exactly as Patricia said, we need to look at the end-to-end story, not again the mechanism how confidential computing trying to again, execute and protect a customer's data and why it's so critically important because what confidential computing was able to do, it's in addition to isolate our tenants in multi-tenant environments the cloud covering to offer additional stronger isolation. They called it cryptographic isolation. It's why customers will have more trust to customers and to other customers, the tenant that's running on the same host but also us because they don't need to worry about against threats and more malicious attempts to penetrate the environment. 
So what confidential computing is helping us to offer our customers, stronger isolation between tenants in this multi-tenant environment, but also incredibly important, stronger isolation of our customers, so tenants from us. We also writing code, we also software providers will also make mistakes or have some zero days. Sometimes again us introduced, sometimes introduced by our adversaries. But what I'm trying to say by creating this cryptographic layer of isolation between us and our tenants and amongst those tenants, we're really providing meaningful security to our customers and eliminate some of the worries that they have running on multi-tenant spaces or even collaborating to gather this very sensitive data knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. Operator access, yeah, maybe I trust my clouds provider, but if I can fence off your access even better, I'll sleep better at night. Separating a code from the data, everybody's, Arm, Intel, AMD, Nvidia, others, they're all doing it. I wonder if, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally. We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google and now industry way of dealing with confidential computing is to ensure that three main property is actually preserved. Customers don't need to change the code. They can operate on those VMs exactly as they would with normal non-confidential VMs, but to give them this opportunity of lift and shift or no changing their apps and performing and having very, very, very low latency and scale as any cloud can, something that Google actually pioneer in confidential computing. I think we need to open and explain how this magic was actually done. And as I said, it's again the whole entire system have to change to be able to provide this magic. And I would start with we have this concept of root of trust and root of trust where we will ensure that this machine, when the whole entire post has integrity guarantee, means nobody changing my code on the most low level of system. And we introduce this in 2017 called Titan. It was our specific ASIC, specific, again, inch by inch system on every single motherboard that we have that ensures that your low level former, your actually system code, your kernel, the most powerful system is actually proper configured and not changed, not tampered. We do it for everybody, confidential computing included. But for confidential computing, what we have to change, we bring in AMD, or again, future silicon vendors and we have to trust their former, their way to deal with our confidential environments. And that's why we have obligation to validate integrity, not only our software and our former but also former and software of our vendors, silicon vendors. So we actually, when we booting this machine, as you can see, we validate that integrity of all of the system is in place. It means nobody touching, nobody changing, nobody modifying it. But then we have this concept of AMD secure processor, it's special ASICs, best specific things that generate a key for every single VM that our customers will run or every single node in Kubernetes or every single worker thread in our Hadoop or Spark capability. We offer all of that. 
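To make the "no code changes" point concrete, below is a minimal sketch of what enabling a Confidential VM can look like with the google-cloud-compute Python client: the workload itself is untouched, only the instance request changes. The machine type, image family, and network here are illustrative assumptions rather than a verified Google recipe, so treat it as an outline.

```python
# A minimal sketch of requesting a Confidential VM on Google Cloud.
# Assumes an AMD SEV-capable machine family (n2d) and an existing
# project, zone, and default network; image family is an assumption.
from google.cloud import compute_v1

def create_confidential_vm(project: str, zone: str, name: str) -> None:
    instance = compute_v1.Instance()
    instance.name = name
    instance.machine_type = f"zones/{zone}/machineTypes/n2d-standard-2"

    # The confidential-computing-specific setting: guest memory is encrypted
    # with a per-VM, hardware-held key that the cloud provider cannot read.
    instance.confidential_instance_config = compute_v1.ConfidentialInstanceConfig(
        enable_confidential_compute=True
    )
    # Confidential VMs have traditionally required terminate-on-maintenance.
    instance.scheduling = compute_v1.Scheduling(on_host_maintenance="TERMINATE")

    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            # Any SEV-compatible image; this family name is an assumption.
            source_image="projects/ubuntu-os-cloud/global/images/family/ubuntu-2204-lts"
        ),
    )
    instance.disks = [boot_disk]
    instance.network_interfaces = [
        compute_v1.NetworkInterface(network="global/networks/default")
    ]

    client = compute_v1.InstancesClient()
    operation = client.insert(project=project, zone=zone, instance_resource=instance)
    operation.result()  # block until the create operation completes
    print(f"Confidential VM {name} created in {zone}")
```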
And those keys are not available to us. It's the best keys ever in encryption space because when we are talking about encryption, the first question that I'm receiving all the time, where's the key, who will have access to the key? Because if you have access to the key then it doesn't matter if you encrypted or not. So, but the case in confidential computing provides so revolutionary technology, us cloud providers, who don't have access to the keys. They sitting in the hardware and they head to memory controller. And it means when hypervisors that also know about these wonderful things saying I need to get access to the memories that this particular VM trying to get access to, they do not decrypt the data, they don't have access to the key because those keys are random, ephemeral and per VM, but the most importantly, in hardware not exportable. And it means now you would be able to have this very interesting role that customers or cloud providers will not be able to get access to your memory. And what we do, again, as you can see our customers don't need to change their applications, their VMs are running exactly as it should run and what you're running in VM, you actually see your memory in clear, it's not encrypted, but God forbid is trying somebody to do it outside of my confidential box. No, no, no, no, no, they would not be able to do it. Now you'll see cyber and it's exactly what combination of these multiple hardware pieces and software pieces have to do. So OS is also modified. And OS is modified such way to provide integrity. It means even OS that you're running in your VM box is not modifiable and you, as customer, can verify. But the most interesting thing, I guess, how to ensure the super performance of this environment because you can imagine, Dave, that encrypting and it's additional performance, additional time, additional latency. So we were able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance and scales as they would expect from cloud providers like Google. >> Okay, thank you. Excellent. Appreciate that explanation. So, again, the narrative on this as well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance, key management as they say is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, in addition to, let's go pre confidential computing days, what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares and they want to know whether their systems are protected from outside or unauthorized access, and that recovered with Nelly, that it is. Confidential computing actually ensures that the applications and data internals remain secret, right? The code is actually looking at the data, the only the memory is decrypting the data with a key that is ephemeral and per VM and generated on demand. Then you have the second point where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with. 
So the application, the workload as we call it, that is processing the data, it's also, it has not been tampered and preserves integrity. I would also say that this is all verifiable. So you have attestation and these attestation actually generates a log trail and the log trail guarantees that, provides a proof that it was preserved. And I think that the offer's also a guarantee of what we call ceiling, this idea that the secrets have been preserved and not tampered with, confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say that the applications, it's transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this, the ecosystem, or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> And a fantastic question by the way. And it's very difficult and definitely complicated world because to be able to provide these guarantees, actually a lot of work was done by community. Google is very much operate in open, so again, our operating system, we working with operating system repository OSs, OS vendors to ensure that all capabilities that we need is part of the kernels, are part of the releases and it's available for customers to understand and even explore if they have fun to explore a lot of code. We have also modified together with our silicon vendors a kernel, host kernel to support this capability and it means working this community to ensure that all of those patches are there. We also worked with every single silicon vendor as you've seen, and that's what I probably feel that Google contributed quite a bit in this whole, we moved our industry, our community, our vendors to understand the value of easy to use confidential computing or removing barriers. And now I don't know if you noticed, Intel is pulling the lead and also announcing their trusted domain extension, very similar architecture. And no surprise, it's, again, a lot of work done with our partners to, again, convince, work with them and make this capability available. The same with Arm this year, actually last year, Arm announced their future design for confidential computing. It's called Confidential Computing Architecture. And it's also influenced very heavily with similar ideas by Google and industry overall. So it's a lot of work in confidential computing consortiums that we are doing, for example, simply to mention, to ensure interop, as you mentioned, between different confidential environments of cloud providers. They want to ensure that they can attest to each other because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data workloads or secret with them. So we coming as a community and we have this attestation sig, the, again, the community based systems that we want to build and influence and work with Arm and every other cloud providers to ensure that we can interrupt and it means it doesn't matter where confidential workloads will be hosted, but they can exchange the data in secure, verifiable and controlled by customers way. 
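The attestation idea described here can be sketched as a relying party verifying a signed token before releasing anything sensitive to a workload. The sketch below assumes the attestation arrives as a JWT; the issuer URL and claim names ("hwmodel", "image_digest") are illustrative assumptions, not the exact schema of Google's or any other provider's tokens.

```python
# Conceptual sketch: validate a signed attestation token, then check that it
# reports a confidential platform and the exact workload image we approved.
import jwt  # PyJWT
from jwt import PyJWKClient

ISSUER = "https://attestation.example.com"        # assumption, not a real endpoint
JWKS_URL = f"{ISSUER}/.well-known/jwks.json"      # assumption
EXPECTED_IMAGE = "sha256:replace-with-approved-workload-digest"

def verify_attestation(token: str, audience: str) -> dict:
    # Fetch the verifier's public signing keys and validate signature,
    # expiry, audience and issuer.
    signing_key = PyJWKClient(JWKS_URL).get_signing_key_from_jwt(token)
    claims = jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=audience,
        issuer=ISSUER,
    )
    # Release secrets only if the token attests to the expected hardware
    # protections and the agreed workload image (claim names are assumptions).
    if claims.get("hwmodel") not in ("AMD_SEV", "INTEL_TDX"):
        raise PermissionError("attestation does not report a confidential platform")
    if claims.get("image_digest") != EXPECTED_IMAGE:
        raise PermissionError("unexpected workload image")
    return claims
```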
And to do it, we need to continue what we are doing, working open, again, and contribute with our ideas and ideas of our partners to this role to become what we see confidential computing has to become, it has to become utility. It doesn't need to be so special, but it's what we want it to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty because when you think about data sharing, you think about data sharing across the ecosystem and different regions and then of course data sovereignty comes up. Typically public policy lags, the technology industry and sometimes is problematic. I know there's a lot of discussions about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment maybe with the pace of technology. One of the frequent examples is when you delete data, can you actually prove that data is deleted with a hundred percent certainty? You got to prove that and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty. And I don't want to give the impression that confidential computing addresses it all. That's why we want to step back and say, hey, digital sovereignty includes data sovereignty where we are giving you full control and ownership of the location, encryption and access to your data. Operational sovereignty where the goal is to give our Google Cloud customers full visibility and control over the provider operations, right? So if there are any updates on hardware, software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty where the customer wants to ensure that they can run their workloads without dependency on the provider's software. So they have sometimes is often referred as survivability, that you can actually survive if you are untethered to the cloud and that you can use open source. Now let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. And we typically focus on saying, hey, we need to care about data residency. We care where the data resides because where the data is at rest or in processing, it typically abides to the jurisdiction, the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection. We want to ensure the confidentiality and integrity and availability of the data, which confidential computing is at the heart of that data protection. But it is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, is about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting firewall protections and login accesses. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data and the code. And that's similar because with data sovereignty we care about whether it resides, where, who is operating on the data. 
But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place of how my data is going to be used. And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia-X, there is a movement of saying the two parties, the provider of the data and the receiver of the data are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, that the data will be used for the purposes that it was intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified that there is the workload that was meant to process the data and that the data will be only used when abiding to the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed. So I appreciate that, especially the verification of the enforcement. Last question, I met you two because as part of my year end prediction post, you guys sent in some predictions and I wasn't able to get to them in the predictions post. So I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in 23 and what's the maturity curve look like, this decade in your opinion? Maybe each of you could give us a brief answer. >> So my prediction in five, seven years, as I started, it'll become utility. It'll become TLS as of, again, 10 years ago we couldn't believe that websites will have certificates and we will support encrypted traffic. Now we do and it's become ubiquity. It's exactly where confidential computing is getting and heading, I don't know we deserve yet. It'll take a few years of maturity for us, but we will be there. >> Thank you. And Patricia, what's your prediction? >> I will double that and say, hey, in the future, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes evermore top of mind with sovereign states and also for multi national organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It'll become the default, if I say, mode of operation. I like to compare that today is inconceivable. If we talk to the young technologists, it's inconceivable to think that at some point in history, and I happen to be alive that we had data at rest that was not encrypted, data in transit that was not encrypted, and I think that will be inconceivable at some point in the near future that to have unencrypted data while in use. >> And plus I think the beauty of the this industry is because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover. 
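A rough way to picture the combination of contracts and attestation that Patricia describes is a key-release gate: the data owner's decryption key is handed over only when the declared purpose matches the agreed contract and the environment's attestation checks out. Everything in this sketch, including the class and claim names, is a conceptual illustration rather than a real Google, IDSA, or Gaia-X API.

```python
# Conceptual sketch of policy enforcement plus attestation as a gate on data use.
from dataclasses import dataclass

@dataclass
class DataContract:
    dataset_id: str
    allowed_purposes: set[str]       # e.g. {"fraud-detection"}
    required_platforms: set[str]     # e.g. {"AMD_SEV", "INTEL_TDX"}

def release_key(contract: DataContract, attestation_claims: dict, purpose: str) -> bytes:
    """Hand over the decryption key only if contract policy and attestation agree."""
    if purpose not in contract.allowed_purposes:
        raise PermissionError(f"purpose '{purpose}' not allowed by contract")
    if attestation_claims.get("hwmodel") not in contract.required_platforms:
        raise PermissionError("workload is not in an approved confidential environment")
    # In a real system this would come from a key-management service and the
    # key would be wrapped for the attested enclave; a stub keeps the sketch
    # self-contained.
    return fetch_wrapped_key(contract.dataset_id)

def fetch_wrapped_key(dataset_id: str) -> bytes:
    # Placeholder for a KMS lookup; returns a dummy key for illustration only.
    return b"\x00" * 32
```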
I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look, as with any new tech, it's important to carefully evaluate the potential benefits, the drawbacks, and make informed decisions based on the specific requirements in the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing into their architectures. Competition, in our view, will moderate price hikes. And at the end of the day, this is under the covers technology that essentially will come for free. So we'll take it. I want to thank our guests today, Nelly and Patricia from Google, and thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio, Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at siliconangle.com. Does some great editing for us, thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or dm me @DVellante. And you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (upbeat music)

Published Date : Feb 11 2023


Breaking Analysis: Google's PoV on Confidential Computing


 

>> From theCUBE Studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security, by providing encrypted computation on sensitive data and isolating data, and apps that are fenced off enclave during processing. The concept of, I got to start over. I fucked that up, I'm sorry. That's not right, what I said was not right. On Dave in five, four, three. Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data, isolating data from apps and a fenced off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space, where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology in a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show. But before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing, I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year as shown here. And this data is pretty much across the board by industry, by region, by size of company. I mean we dug into it and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit have long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system, ARM, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now, the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images, updates, different services and the entire code flow aren't directly addressed by memory encryption. Rather to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Bronco, sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign from memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. 
There has been a lack of standardization and interoperability between different confidential computing approaches. But the Confidential Computing Consortium was established in 2019 ostensibly to accelerate the market and influence standards. Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS, because the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS' words. But one, I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with its Annapurna acquisition. It was way ahead with ARM integration, and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the Confidential Computing Consortium is Google, along with many high profile names, including AMD, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic. Nelly Porter is Head of Product for GCP Confidential Computing and Encryption, and Dr. Patricia Florissi is the Technical Director for the Office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start. I own a lot of interesting activities in Google, and again, security, or infrastructure security, is what I usually own. We are talking about encryption, end-to-end encryption, and confidential computing is a part of that portfolio. An additional area that I contribute, together with my team, to Google and our customers is secure software supply chain, because you need to trust the software that operates in your confidential environment to have end-to-end security, to believe that your software and your environment are doing what you expect. That's my role. >> Got it. Okay, Patricia? >> Well, I am a Technical Director in the Office of the CTO, OCTO for short, in Google Cloud. And we are a global team; we include former CTOs like myself and senior technologists from large corporations, institutions and a lot of successful startups as well. And we have two main goals. First, we walk side by side with some of our largest, most strategic customers and we help them solve complex engineering and technical problems. And second, we advise Google and Google Cloud Engineering and product management on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that, both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool, one of the tools in our toolbox. And confidential computing is a way we help our customers complete this very interesting end-to-end lifecycle of their data. When customers bring data to the cloud and want to protect it, they protect it as they ingest it to the cloud, and they protect it at rest when they store the data in the cloud. 
But what was missing for many, many years was the ability for us to continue protecting our customers' data and workloads when they run them. And again, because data is not brought to the cloud to sit in a huge graveyard, we need to ensure that this data is actually indexed, that there are insights drawn from this data. You have to process this data, and confidential computing is here to help. Now we have end-to-end protection of our customers' data when they bring their workloads and data to the cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain? Do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters, because at the end of the day, it reduces more and more the customer's trust boundaries and the attack surface. It's about reducing that periphery, the boundary in which the customer needs to mind about trust and safety. And in a way it's a natural progression that you're using encryption to secure and protect data in the same way that we are encrypting data in transit and at rest. Now, we are also encrypting data while in use. And among other benefits, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industries; even though it's very beneficial for highly regulated industries, it applies to all industries. If you look at financing, for example, where bankers are trying to detect fraud, and specifically double financing, where a customer is actually trying to get financing on an asset, let's say a boat or a house, and then goes to another bank and gets another loan on that same asset. Now bankers would be able to collaborate and detect fraud while preserving the confidentiality and privacy of the data. >> Interesting, and I want to understand that a little bit more, but I got to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this up front, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption, it doesn't address many other problems. It is over hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine Dave, with this statement. But most importantly, we're mixing multiple concepts, I guess, and exactly as Patricia said, we need to look at the end-to-end story, not just at the mechanism of how confidential computing tries to execute and protect customers' data, and why it's so critically important. Because what confidential computing was able to do, in addition to isolating our tenants in multi-tenant environments, is to offer additional, stronger isolation; they call it cryptographic isolation. It's why customers will have more trust toward other customers, the tenants running on the same host, but also toward us, because they don't need to worry about RATs and other malicious attempts to penetrate the environment. 
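To make the cross-bank collaboration Patricia describes concrete, here is a minimal Python sketch of the idea. It is illustrative only: it assumes the two banks have already agreed on a shared secret (in practice that exchange, and the matching itself, would happen inside a confidential computing environment), and the bank names and asset identifiers are invented, not any real data model.

# A minimal sketch of detecting possible double financing without either bank
# revealing its raw customer or asset data.
import hmac
import hashlib

SHARED_KEY = b"demo-key-agreed-inside-the-enclave"  # assumption: pre-agreed secret

def blind(asset_id: str) -> str:
    """Return a keyed hash of an asset ID so the raw ID is never exchanged."""
    return hmac.new(SHARED_KEY, asset_id.encode(), hashlib.sha256).hexdigest()

# Each bank blinds its own financed-asset IDs locally.
bank_a_assets = {blind(a) for a in ["boat-123", "house-456"]}
bank_b_assets = {blind(a) for a in ["house-456", "car-789"]}

# Only the blinded values are compared; the intersection flags possible
# double financing while preserving the confidentiality of each bank's book.
double_financed = bank_a_assets & bank_b_assets
print(f"possible double-financed assets: {len(double_financed)}")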
So what confidential computing is helping us offer our customers is stronger isolation between tenants in this multi-tenant environment, but also, incredibly important, stronger isolation of our customers as tenants from us. We also write code, we are also software providers, we also make mistakes or have some zero days, sometimes introduced by us, sometimes introduced by our adversaries. But what I'm trying to say is, by creating this cryptographic layer of isolation between us and our tenants, and among those tenants, we're really providing meaningful security to our customers and eliminating some of the worries that they have running in multi-tenant spaces, or even collaborating together with very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. You know, operator access. Yeah, maybe I trust my cloud provider, but if I can fence off your access even better, I'll sleep better at night separating the code from the data. Everybody, ARM, Intel, AMD, Nvidia and others, they're all doing it. I wonder, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely, and Dave, the whole idea for Google, and now the industry's way of dealing with confidential computing, is to ensure that three main properties are actually preserved. Customers don't need to change their code. They can operate in those VMs exactly as they would with normal, non-confidential VMs. But to give them this opportunity of lift and shift, with no changing of the apps, and performing with very, very, very low latency and scaling as any cloud can, there are some things that Google actually pioneered in confidential computing. I think we need to open up and explain how this magic was actually done, and as I said, the whole entire system had to change to be able to provide this magic. And I would start with this concept of root of trust, where we will ensure that this machine, the whole entire host, has an integrity guarantee, meaning nobody changed my code at the lowest level of the system. We introduced this in 2017; it's called Titan. It's our specific ASIC, a specific chip on every single motherboard that we have, that ensures that your low-level firmware, your actual system code, your kernel, the most powerful system, is properly configured and not changed, not tampered with. We do it for everybody, confidential computing included, but for confidential computing there is something we had to change: we bring in AMD, or future silicon vendors, and we have to trust their firmware, their way of dealing with our confidential environments. And that's why we have an obligation to validate the integrity not only of our software and our firmware but also the firmware and software of our vendors, the silicon vendors. So when we boot this machine, as you can see, we validate that the integrity of all of this system is in place. It means nobody touched it, nobody changed it, nobody modified it. But then we have this concept of the AMD Secure Processor, a special ASIC that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker thread in our Hadoop or Spark capability. 
We offer all of that, and those keys are not available to us. It's the best case ever in the encryption space, because when we are talking about encryption, the first question that I receive all the time is, "Where's the key? Who will have access to the key?" Because if you have access to the key, then it doesn't matter whether the data is encrypted or not. But that's the case in confidential computing, and why it's such revolutionary technology: we cloud providers don't have access to the keys. They're sitting in the hardware and they're fed to the memory controller. And it means that when hypervisors, which also know about these wonderful things, say, "I need to get access to the memory of this particular VM," they do not decrypt the data. They don't have access to the key, because those keys are random, ephemeral and per VM, but most importantly held in hardware and not exportable. And it means you now have this very interesting world in which customers, or cloud providers, will not be able to get access to your memory. And again, as you can see, our customers don't need to change their applications. Their VMs run exactly as they should run, and for what you're running in the VM, you actually see your memory in the clear, it's not encrypted. But God forbid somebody tries to do that from outside of my confidential box: no, no, no, no, no, you will not be able to do it. You'll see ciphertext, and that's exactly what the combination of these multiple hardware pieces and software pieces has to do. So the OS is also modified, and it's modified in such a way as to provide integrity. It means even the OS that you're running in your VM box is not modifiable, and you as the customer can verify that. But the most interesting thing, I guess, is how to ensure the super performance of this environment, because you can imagine, Dave, that this adds additional processing, additional time, additional latency. So we're able to mitigate all of that by providing an incredibly interesting capability in the OS itself. So our customers get no changes needed, fantastic performance, and scale as they would expect from cloud providers like Google. >> Okay, thank you. Excellent, appreciate that explanation. So you know again, the narrative on this is, well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance; key management, as they say, is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, in addition to, let's go pre-confidential computing days, what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares and they want to know whether their systems are protected from outside or unauthorized access, and that we covered with Nelly, that it is. Confidential computing actually ensures that the application and data internals remain secret. The code is actually looking at the data; only in memory is the data decrypted, with a key that is ephemeral, and per VM, and generated on demand. Then you have the second point, where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with. 
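As a rough mental model of the per-VM, non-exportable keys Nelly describes, here is a toy Python sketch. It is purely conceptual, not how SEV hardware or Google's stack is actually programmed; the class and names are invented for illustration, and it assumes the third-party cryptography package is installed (pip install cryptography). The point is the asymmetry: code that holds the per-VM key sees plaintext, while anything that only holds the raw memory blob, the stand-in for a hypervisor or operator, sees ciphertext.

# Toy model: a "secure processor" holds one ephemeral key per VM and never
# exports it, so anything outside the VM only ever sees encrypted memory.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToySecureProcessor:
    """Holds per-VM keys; nothing outside this class can read them."""
    def __init__(self):
        self._keys = {}  # vm_id -> key, "not exportable" by convention here

    def create_vm_key(self, vm_id: str) -> None:
        self._keys[vm_id] = AESGCM.generate_key(bit_length=256)

    def write_memory(self, vm_id: str, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)
        return nonce + AESGCM(self._keys[vm_id]).encrypt(nonce, plaintext, None)

    def read_memory(self, vm_id: str, blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(self._keys[vm_id]).decrypt(nonce, ciphertext, None)

sp = ToySecureProcessor()
sp.create_vm_key("vm-1")
page = sp.write_memory("vm-1", b"customer secrets")

# Inside the VM the data decrypts transparently...
assert sp.read_memory("vm-1", page) == b"customer secrets"
# ...but a "hypervisor" that only holds the raw memory blob sees ciphertext.
print("hypervisor view:", page.hex()[:32], "...")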
So the application, the workload as we call it, that is processing the data also has not been tampered with and preserves integrity. I would also say that this is all verifiable, so you have attestation, and this attestation actually generates a log trail, and the log trail provides proof that it was preserved. And I think it also offers a guarantee of what we call sealing, this idea that the secrets have been preserved and not tampered with: confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say, that the application is transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before, I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this, the ecosystem, or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> A fantastic question by the way, and it's a very difficult and definitely complicated world, because to be able to provide these guarantees, a lot of work was actually done by the community. Google very much operates in the open. So again, for our operating system, we're working in the operating system repositories with OS vendors to ensure that all the capabilities that we need are part of the kernels, are part of the releases, and are available for customers to understand and even explore, if they find it fun to explore a lot of code. We have also modified, together with our silicon vendors, the kernel, the host kernel, to support this capability, and it means working with this community to ensure that all of those pages are there. We also worked with every single silicon vendor as you've seen, and it's where I feel that Google probably contributed quite a bit in this world. We moved our industry, our community, our vendors to understand the value of easy-to-use confidential computing, of removing barriers. And now, I don't know if you noticed, Intel is following the lead and also announcing Trust Domain Extensions, a very similar architecture, and no surprise, it's a lot of work done with our partners to convince them, work with them and make this capability available. The same with ARM this year, actually last year: ARM announced its future design for confidential computing, it's called the Confidential Compute Architecture. And it's also influenced very heavily by similar ideas from Google and the industry overall. So there's a lot of work in the Confidential Computing Consortium that we are doing, for example, simply to mention, to ensure interop, as you mentioned, between the different confidential environments of cloud providers. They want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you're sharing your sensitive data, workloads or secrets with them. So we're coming together as a community, and we have this Attestation SIG, the community-based effort that we want to build, and influence, and work with ARM and every other cloud provider to ensure that they can interop. And it means it doesn't matter where confidential workloads are hosted, they can exchange data in a way that's secure, verifiable and controlled by customers, really. 
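Since attestation comes up repeatedly here, a hedged sketch of the basic flow may help: something the verifier trusts signs a measurement of the workload, and the relying party checks both the signature and the expected measurement before trusting the environment. Real SEV-SNP, TDX or Arm CCA attestation involves hardware-rooted certificate chains and richer claims; the Ed25519 key pair below simply stands in for that root of trust, and the sketch requires the cryptography package.

# Minimal stand-in for remote attestation: sign a measurement, then verify
# both the signature and the measurement on the relying-party side.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()       # stand-in for the hardware root of trust
verifier_trusts = device_key.public_key()       # what the relying party already trusts

def issue_attestation(workload_image: bytes):
    """'Hardware' measures the workload and signs the measurement."""
    measurement = hashlib.sha384(workload_image).digest()
    return measurement, device_key.sign(measurement)

def verify_attestation(measurement: bytes, signature: bytes,
                       expected_measurement: bytes) -> bool:
    """Relying party: check the signature first, then the expected measurement."""
    try:
        verifier_trusts.verify(signature, measurement)  # raises if invalid
    except Exception:
        return False
    return measurement == expected_measurement

image = b"container image bytes of the approved workload"
expected = hashlib.sha384(image).digest()
m, sig = issue_attestation(image)
print("trusted:", verify_attestation(m, sig, expected))                                # True
print("trusted:", verify_attestation(m, sig, hashlib.sha384(b"tampered").digest()))    # False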
And to do that, we need to continue what we are doing: working in the open and contributing our ideas and the ideas of our partners toward this goal, to become what we see confidential computing has to become. It has to become a utility. It doesn't need to be so special, but that's what we want it to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing, you think about data sharing across the ecosystem in different regions, and then of course data sovereignty comes up. Typically public policy lags the technology industry and sometimes it's problematic. I know there's a lot of discussion about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment maybe with the pace of technology. One of the frequent examples is when you delete data, can you actually prove the data is deleted with a hundred percent certainty? You've got to prove that, and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty. And I don't want to give the impression that confidential computing addresses it all, that's why we want to step back and say, hey, digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption and access to your data. Operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations, right? So if there are any updates on hardware, the software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is sometimes referred to as survivability: that you can actually survive if you are untethered from the cloud and that you can use open source. Now, let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. We typically focus on saying, hey, we need to care about data residency. We care where the data resides, because where the data is at rest or in processing typically needs to abide by the jurisdiction, the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection, we want to ensure the confidentiality, and integrity, and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, it's about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting up firewall protections and logging accesses. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data, and the code. 
And that's similar here, because with data sovereignty we care about where the data resides and who is operating on it, but the moment that the data is being processed, I need to trust that the processing of the data will abide by the user's control, by the policies that I put in place for how my data is going to be used. And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Spaces Association, IDSA, and Gaia-X, there is a movement toward saying the two parties, the provider of the data and the receiver of the data, are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, the data will be used for the purposes that were intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed. So I appreciate that, especially the verification of the enforcement. Last question. I met you two because as part of my year-end prediction post, you guys sent in some predictions and I wasn't able to get to them in the predictions post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23 and what's the maturity curve look like this decade in your opinion? Maybe each of you could give us a brief answer. >> So my prediction is that in five, seven years, as I said when I started, it will become a utility, it will become like TLS. Freakin' 10 years ago, we couldn't believe that websites would have certificates and we would support encrypted traffic. Now we do, and it's become ubiquitous. It's exactly where confidential computing is heading. I don't know if we're there yet; it'll take a few years of maturity for us, but we'll do that. >> Thank you. And Patricia, what's your prediction? >> I would double that and say, hey, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes ever more top of mind with sovereign states and also for multinational organizations, and for organizations that want to collaborate with each other, confidential computing will become the norm, it will become the default, if I may say, mode of operation. I like to compare it to today: it is inconceivable, if we talk to the young technologists, it's inconceivable to think that at some point in history, and I happen to have been alive then, we had data at rest that was non-encrypted and data in transit that was not encrypted. And I think it will be inconceivable at some point in the near future to have data unencrypted while in use. >> You know, and plus I think the beauty of this industry is because there's so much competition, this essentially comes for free. 
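One way to picture the combination of confidential computing and policy enforcement Patricia described is a key broker that only releases a data key when verified attestation claims satisfy the data owner's policy. The sketch below is illustrative only: claim fields like "measurement" and "region" and the purpose strings are made up for the example, and a real deployment would unwrap keys through a KMS and audit-log every decision.

# Release the data key only if the verified attestation claims satisfy the
# owner's policy: approved workload, approved jurisdiction, agreed purpose.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DataPolicy:
    allowed_measurements: frozenset  # workloads the data owner contracted with
    allowed_regions: frozenset       # jurisdictions the data may be processed in
    allowed_purposes: frozenset      # uses agreed in the data-sharing contract

def release_key(claims: dict, requested_purpose: str, policy: DataPolicy,
                wrapped_key: bytes) -> Optional[bytes]:
    if claims.get("measurement") not in policy.allowed_measurements:
        return None
    if claims.get("region") not in policy.allowed_regions:
        return None
    if requested_purpose not in policy.allowed_purposes:
        return None
    return wrapped_key  # in practice: unwrap via a KMS and log the release

policy = DataPolicy(
    allowed_measurements=frozenset({"sha384:abc123"}),
    allowed_regions=frozenset({"eu-west"}),
    allowed_purposes=frozenset({"fraud-detection"}),
)
claims = {"measurement": "sha384:abc123", "region": "eu-west"}
print(release_key(claims, "fraud-detection", policy, b"k") is not None)  # True
print(release_key(claims, "ad-targeting", policy, b"k") is not None)     # False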
I want to thank you both for spending some time on Breaking Analysis, there's so much more we could cover. I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much, yeah. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between, and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look, as with any new tech, it's important to carefully evaluate the potential benefits and the drawbacks, and make informed decisions based on the specific requirements, the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to build confidential computing into their architectures. Competition in our view will moderate price hikes, and at the end of the day, this is under-the-covers technology that essentially will come for free, so we'll take it. I want to thank our guests today, Nelly and Patricia from Google. And thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com, does some great editing for us. Thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or DM me at D Vellante, and you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (subtle music)

Published Date : Feb 10 2023


theCUBE's New Analyst Talks Cloud & DevOps


 

(light music) >> Hi everybody. Welcome to this Cube Conversation. I'm really pleased to announce a collaboration with Rob Strechay. He's a guest cube analyst, and we'll be working together to extract the signal from the noise. Rob is a long-time product pro, working at a number of firms including AWS, HP, HPE, NetApp and Snowplow. He did a stint as an analyst at Enterprise Strategy Group. Rob, good to see you. Thanks for coming into our Marlborough Studios. >> Well, thank you for having me. It's always great to be here. >> I'm really excited about working with you. We've known each other for a long time. You've been in the Cube a bunch. You know, you're in between gigs, and I think we can have a lot of fun together. Covering events, covering trends. So, let's get into it. What's happening out there? We've sort of exited the isolation economy. Things were booming. Now, everybody's tapping the brakes. From your standpoint, what are you seeing out there? >> Yeah. I'm seeing that people are really looking at how to get more out of their data. How they're bringing things together, how they're looking at the costs of Cloud, and understanding how they're building out their SaaS applications. And understanding that when they go in and actually start to use Cloud, it's not only just using the base services anymore. They're looking at, how do I use these platforms as a service? Some are easier than others, and they're trying to understand, how do I get more value out of that relationship with the Cloud? They're also consolidating the number of Clouds that they have, I would say to try to better optimize their spend, and get better pricing for that matter. >> Are you seeing people unhook Clouds, or just reduce maybe certain Cloud activities and maybe instead of 60/40 going 90/10? >> Correct. It's more like the 90/10 type of rule where they're starting to say, Hey I'm not going to get rid of Azure or AWS or Google. I'm going to move a portion of this over that I was using on this one service. Maybe I got a great two-year contract to start with on this platform as a service or a database as a service. I'm going to unhook from that and maybe go with an independent. Maybe with something like a Snowflake or a Databricks on top of another Cloud, so that I can consolidate down. But it also gives them more flexibility as well. >> In our last breaking analysis, Rob, we identified six factors that were reducing Cloud consumption. There were factors and customer tactics. And I want to get your take on this. So, some of the factors really, you got fewer mortgage originations. FinTech, obviously a big Cloud user. Crypto, not as much activity there. Lower ad spending means less Cloud. And then one of 'em, which you kind of disagreed with, was less analytics, you know, fewer... Less frequency of calculations. I'll come back to that. But then optimizing compute using Graviton or AMD instances, moving to cheaper storage tiers. That of course makes sense. And then optimized pricing plans. Maybe going from On Demand, you know, to, you know, instead of pay by the drink, buy in volume. Okay. So, first of all, do those make sense to you, with the exception? We'll come back and talk about the analytics piece. Is that what you're seeing from customers? >> Yeah, I think so. I think that was pretty much dead on with what I'm seeing from customers and the ones that I go out and talk to. A lot of times they're trying to really monetize their, you know, understand how their business utilizes these Clouds. 
And, where their spend is going in those Clouds. Can they use, you know, lower tiers of storage? Do they really need the best processors? Do they need to be using Intel or can they get away with AMD or Graviton 2 or 3? Or do they need to move in? And, I think when you look at all of these Clouds, they always have pricing curves that are arcs from the newest to the oldest stuff. And you can play games with that, understanding how you can actually lower your costs by looking at maybe some of the older generation. Maybe your application was written 10 years ago. You don't necessarily have to be on the best, newest processor for that application per se. >> So last, I want to come back to this whole analytics piece. Last June, I think it was June, Dev Ittycheria, who's the-- I call him Dev. Spelled Dev, pronounced Dave. (chuckles softly) Same pronunciation, different spelling. Dev Ittycheria, CEO of Mongo, on the earnings call. He was getting, you know, hit. Things were starting to get a little less visible in terms of, you know, the outlook. And people were pushing him like... Because you're in the Cloud, is it easier to dial down? And he said, because we're the document database, we support transaction applications. We're less discretionary than say, analytics. Well, on the Snowflake earnings call, that same month or the month after, they were all over Slootman and Scarpelli. Oh, the Mongo CEO said that they're less discretionary than analytics. And Snowflake had an interesting comment. They basically said, look, we're the Cloud. You can dial it up, you can dial it down, but the area under the curve over a period of time is going to be the same, because they get their customers to commit. What do you say? You disagreed with the notion that people are running their calculations less frequently. Is that because they're trying to do a better job of targeting customers in near real time? What are you seeing out there? >> Yeah, I think they're moving away from using people and more expensive marketing. Or, they're trying to figure out what's my Google ad spend, what's my Meta ad spend? And what they're trying to do is optimize that spend. So, what is the return on advertising, or the ROAS as they would say. And what they're looking to do is understand, okay, I have to collect these analytics that better understand where are these people coming from? How do they get to my site, to my store, to my whatever? And when they're using it, how do they better move through that? What you're also seeing is that analytics is not only just for kind of the retail or financial services or things like that, but then they're also, you know, using that to make offers in those categories. When you move over to, you know, other companies that are building products and SaaS-delivered products, they may actually go and use these analytics to make the product better. And one of the big reasons for that is maybe they're dialing back how many product managers they have. And they're looking to be more data driven about how they actually go and build the product out or enhance the product. So maybe they're, you know, an online video service and they want to understand why people are either using or not using the whiteboard inside the product. And they're collecting a lot of that product analytics in a big way so that they can go through that. And they're doing it in a constant manner. This first-party type of tracking within applications is growing rapidly among customers. 
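To put rough numbers on the instance-generation point Rob makes above, here is a back-of-the-envelope Python sketch. The hourly prices are hypothetical placeholders, not real AWS, Azure or Google list prices; the idea is simply that sliding down the pricing curve, older generations or AMD and Graviton-class silicon, compounds quickly at fleet scale.

# Hypothetical $/hour for a comparable 4 vCPU class; swap in real rates from
# your provider's pricing page before drawing any conclusions.
HOURLY_PRICE = {
    "intel-current-gen": 0.20,
    "amd-current-gen":   0.18,
    "graviton-arm":      0.16,
    "intel-prev-gen":    0.15,
}
HOURS_PER_MONTH = 730

def monthly_cost(instance_family: str, count: int) -> float:
    return HOURLY_PRICE[instance_family] * HOURS_PER_MONTH * count

baseline = monthly_cost("intel-current-gen", 20)   # a 20-instance fleet
for family in HOURLY_PRICE:
    cost = monthly_cost(family, 20)
    print(f"{family:18s} ${cost:8,.0f}/mo  ({(1 - cost / baseline):+.0%} savings vs baseline)")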
>> So, let's talk about who wins in that. So, obviously the Cloud guys, AWS, Google and Azure. I want to come back and unpack that a little bit. Databricks and Snowflake, we reported on our last breaking analysis, they're kind of on a collision course. You know, a couple years ago we were thinking, okay, AWS, Snowflake and Databricks, like a perfect sandwich. And then of course they started to become more competitive. My sense is they still, you know, compliment each other in the field, right? But, you know, publicly, they've got bigger aspirations, they've got big TAMs that they're going after. But it's interesting, the data shows that-- So, Snowflake was off the charts in terms of spending momentum in our ETR surveys. Our partner down in New York, they kind of came into line. They're both growing in terms of market presence. Databricks couldn't get to IPO. So, we don't have as much, you know, visibility on their financials. You know, Snowflake obviously highly transparent cause they're a public company. And then you got AWS, Google and Azure. And it seems like AWS appears to be more partner friendly. Microsoft, you know, depends on what market you're in. And Google wants to sell BigQuery. >> Yeah. >> So, what are you seeing in the public Cloud from a data platform perspective? >> Yeah. I think that was pretty astute in what you were talking about there, because I think of the three, Google is definitely, I think, a little bit behind in how they go to market with their partners. Azure's done a fantastic job of partnering with these companies, even though they may have Synapse as their go-to, where they want people to go to do AI and ML. What they're looking at is, Hey, we're going to also be friendly with Snowflake. We're also going to be friendly with a Databricks. And I think that Amazon has always been there because that's where the market has been for these developers. So many, like the Databricks and the Snowflakes, have gone there first because, you know, in Databricks' case, they built out on top of S3 first. And going and using somebody's object layer other than AWS's was not as simple as you would think it would be, moving between those. >> So, at one of the financial meetups, I said meetup, but really one of the financial conferences, it was either the CEO or the CFO, either Slootman or Scarpelli, talking at, I don't know, Merrill Lynch or one of the other financial conferences, I think it was probably around their Q3 call. Snowflake said 80% of our business goes through Amazon. And he said to this audience, the next day we got a call from Microsoft. Hey, we got to do more. And, we know just from reading the financial statements that Snowflake is getting concessions from Amazon, they're buying in volume, they're renegotiating their contracts. Amazon gets it. You know, lower the price, people buy more. Long term, we're all going to make more money. Microsoft obviously wants to get into that game with Snowflake. They understand the momentum. They said Google, not so much. And I've had customers tell me that they wanted to use Google's AI with Snowflake, but they can't, they got to go to BigQuery. So, honestly, I haven't like vetted that so. But, I think it's true. But nonetheless, it seems like Google's a little less friendly with the data platform providers. What do you think? >> Yeah, I would say so. I think this is a place that Google looks at and wants to own. Now, are they doing the right things long term? 
I mean again, you know, you look at Google Analytics being, you know, basically outlawed in five countries in the EU because of GDPR concerns, and compliance and governance of data. And I think people are looking at Google and BigQuery in general and saying, is it the best place for me to go? Is it going to be in the right places where I need it? Still, it's still one of the largest used databases out there, just because it underpins a number of the Google services. So you almost get, like you were saying, forced into BigQuery sometimes, if you want to use the tech on top. >> You do strategy. >> Yeah. >> Right? You do strategy, you do messaging. Is it the right call by Google? I mean, it's not a-- I criticize Google sometimes. But, I'm not sure it's the wrong call to say, Hey, this is our ace in the hole. >> Yeah. >> We got to get people into BigQuery. Cause, first of all, BigQuery is a solid product. I mean it's Cloud native and, you know, by all accounts, it gets high marks. So, why give the competition an advantage? Let's try to force people essentially into what we think is a great product, and it is a great product. The flip side of that is, they're giving up some potential partner TAM and not treating the ecosystem as well as one of their major competitors. What do you do if you're in that position? >> Yeah, I think that that's a fantastic question. And the question I pose back to the companies I've worked with and worked for is, are you really looking to have vendor lock-in as your key differentiator to your service? And I think when you start to look at these companies that are moving away from BigQuery, moving to, say, Databricks on top of GCS in Google, they're looking to say, okay, I can go there if I have to evacuate from GCP and go to another Cloud, I can stay on Databricks as a platform, for instance. So I think people are looking at what platform as a service, database as a service they go and use. Because from a strategic perspective, they don't want that vendor lock-in. >> That's where Supercloud becomes interesting, right? Because, if I can run on Snowflake or Databricks, you know, across Clouds. Even Oracle, you know, they're getting into business with Microsoft. Let's talk about some of the Cloud players. So, the big three have reported. >> Right. >> We saw AWS's Cloud growth decelerate down to 20%, which is I think the lowest growth rate since they started to disclose public numbers. And they said they exited, sorry, they said in January they grew at 15%. >> Yeah. >> Year on year. Now, they had some pretty tough compares. But nonetheless, 15%, wow. Azure, kind of mid thirties, and then Google, we had kind of low thirties. But, well behind in terms of size. And Google's losing probably almost $3 billion annually. But, that's not necessarily a bad thing; they're advocating and investing. What's happening with the Cloud? Is AWS just running into the law of large numbers? Do you think we can actually see a re-acceleration like we have in the past with AWS Cloud? Azure, we predicted, is going to be 75% of AWS IaaS revenues. You know, we try to estimate IaaS. >> Yeah. >> Even though they don't share that with us. That's a huge milestone. You'd think-- There's some people who have, I think, Bob Evans predicted a while ago that Microsoft would surpass AWS in terms of size. You know, what do you think? >> Yeah, I think that Azure's going to keep to-- Keep growing at a pretty good clip. 
I think that for Azure, they still have really great account control, even though people like to hate Microsoft. The Microsoft sellers that are out there making those companies successful day after day have really done a good job of being in those accounts and helping people. I was recently over in the UK. And the UK market between AWS and Azure is pretty amazing, how much Azure there is. And it's growing within Europe in general. In the States, it's, you know, I think it's growing well. I think it's still growing, probably not as fast as it is outside the U.S. But, you go down to someplace like Australia, it's also Azure. You hear about Azure all the time. >> Why? Is that just because of Microsoft's software estate? It's just so convenient. >> I think it has to do with, you know, and you can go with the reasoning that they don't break out, you know, Office 365 and all of that in their numbers, they're in all of these accounts because the Office suite is so pervasive in there. So, they always have reasons to go back in and, oh by the way, you're on these old SQL licenses. Let us move you up here and we'll be able to-- We'll support you on the old version, you know, with security and all of these things. And be able to move you forward. So, they have a lot of, I guess you could say, levers to stay in those accounts and be interesting. At least as part of the Cloud estate. I think Amazon, you know, is hitting, you know, the law of large numbers. But I think that they're also going through, and I think this was seen in the layoffs that they were making, that they're looking to understand and have profitability in more of those services that they have. You know, over 350-odd services that they have. And you know, as somebody who went there and helped to start yet another new one while I was there, and finally, it went to beta back in September, you start to look at the fact that with that number of services, people, their own sellers, don't even know all of their services. It's impossible to comprehend and sell that many things. So, I think what they're going through is really looking to rationalize a lot of what they're doing from a services perspective going forward. They're looking to focus on more profitable services and bringing those in. Because right now it's built like a layer cake where you have, you know, S3, EBS and EC2 on the bottom of the layer cake. And then maybe you're using IAM, the authorization and authentication, in there, and you have all these different services. And then they have EMR on top. And so, EMR has to pay for that entire layer cake just to go and compete against somebody like Mongo or something like that. So, you start to unwind the costs of that. Whereas Azure went and built basically ground-up services for the most part. And Google kind of falls somewhere in between in how they build theirs-- They're a sort of layer cake type effect, but not as many layers, I guess you could say. >> I feel like, you know, Amazon's trying to be a platform for the ecosystem. Yes, they have their own products and they're going to sell. And that's going to drive their profitability cause they don't have to split the pie. But, they're taking a piece of-- They're spinning the meter, as Zeus Kerravala likes to say, every time Snowflake or Databricks or Mongo or Atlas is, you know, running on their system. They take a piece of the action. Now, Microsoft does that as well. 
But, you look at Microsoft and security, head-to-head competitors, for example, with a CrowdStrike or an Okta in identity. Whereas, it seems like at least for now, AWS is a more friendly place for the ecosystem. At the same time, you do a lot of business in Microsoft. >> Yeah. And I think that a lot of companies have always feared that Amazon would just throw, you know, bodies at it. And I think that people have come to the realization that a two-pizza team, as Amazon would call it, is eight people. I think that's, you know, two slices per person. I'm a little bit fat, so I don't know if that's enough. But, you start to look at it and go, okay, if they're going to start out with eight engineers, if I'm a startup and they're part of my ecosystem, do I really fear them or should I really embrace them and try to partner closer with them? And I think the smart people and the smart companies are partnering with them, because they're realizing Amazon, unless they can see a, you know, hundred million to $500 million market, they're not going to throw eight to 16 people at a problem. I think, you know, you could look at Elastic with OpenSearch and what they did there, and the licensing terms and the battle they went through. But they knew that Elastic had a huge market. Also, you had a number of ecosystem companies building on top of now OpenSearch, that are now domains on top of Amazon as well. So, I think Amazon's being pretty strategic in how they're doing it. I think some of the-- It'll be interesting. I think this year is a payout year for the cuts that they're making to some of the services internally to kind of, you know, figure out how do we take the fat off some of those services. You know, you look at Alexa. I don't know how much revenue Alexa really generates for them. But it's a means to an end for a number of different other services and partners. >> What do you make of this ChatGPT? I mean, Microsoft obviously is playing that card. You want, you want ChatGPT in the Cloud, come to Azure. Seems like AWS has to respond. And we know Google is, you know, sharpening its knives to come up with its response. >> Yeah, I mean Google just went and talked about Bard for the first time this week and they're in private preview, or I guess they call it beta, but right at the moment it's to select, select AI users, which I have no idea what that means. But that's a very interesting way that they're marketing it out there. But, I think that Amazon will have to respond. I think they'll be more measured than say, what Google's doing with Bard and just throwing it out there to, hey, we're going into beta now. I think they'll look at it and see where do we go and how do we actually integrate this in? Because they do have a lot of components of AI and ML underneath the hood that other services use. And I think that, you know, they've learned from that. And I think that they've already done a good job, especially for media and entertainment, when you start to look at some of the ways that they use it for helping do graphics and helping to do drones. I think part of their buy of iRobot was the fact that iRobot was a big user of RoboMaker, which is using different models to train those robots to go around objects and things like that, so. >> Quick touch on Kubernetes, the whole DevOps world we just covered. The CNCF, the Cloud Native Computing Foundation, had its security conference up in Seattle last week. 
First time they spun that out, kind of like re:Inforce, you know, how AWS spins out re:Inforce from re:Invent. Amsterdam's coming up soon, KubeCon. What should we expect? What's hot in Kube-land? >> Yeah, I think, you know, with Kube, you're going to be looking at how OpenShift keeps growing, and I think in that respect you get to see the momentum with people like Red Hat. You see others coming up and realizing how OpenShift has gone to market as being, like you were saying, partnering with those Clouds and really making it simple. I think the simplicity and the manageability of Kubernetes is going to be at the forefront. I think a lot of the investment is still going into, how do I bring observability and DevOps and AIOps and MLOps all together. And I think that's going to be a big place where people are going to be looking to see what comes out of KubeCon in Amsterdam. I think it's that manageability, ease of use. >> Well Rob, I look forward to working with you on behalf of the whole Cube team. We're going to do more of these and go out to some shows, extract the signal from the noise. Really appreciate you coming into our studio. >> Well, thank you for having me on. Really appreciate it. >> You're really welcome. All right, keep it right there, and thanks for watching. This is Dave Vellante for the Cube. And we'll see you next time. (light music)

Published Date : Feb 7 2023


Breaking Analysis: Cloud players sound a cautious tone for 2023


 

>> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR. This is Breaking Analysis with Dave Vellante. >> The unraveling of market enthusiasm continued in Q4 of 2022 with the earnings reports from the US hyperscalers, the big three now all in. As we said earlier this year, even the cloud isn't immune from the macro headwinds, and the cracks in the armor that we saw in the data that we shared last summer are playing out into 2023. For the most part, actuals are disappointing, coming in below expectations, including our own. It turns out that our estimates for the big three hyperscalers' revenue missed by 1.2 billion, or 2.7%, lower than we had forecast even in our most recent November estimates. And we expect continued decelerating growth rates for the hyperscalers through the summer of 2023, and we don't think that's going to abate until comparisons get easier. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we share our view of what's happening in cloud markets, not just for the hyperscalers but for other firms that have hitched a ride on the cloud. And we'll share new ETR data that shows why these trends are playing out, the tactics that customers are employing to deal with their cost challenges, and how long the pain is likely to last. You know, riding the cloud wave, it's a two-edged sword. Let's look at the players that have gone all in on, or are exposed to, both the positive and negative trends of cloud. Look, the cloud has been a huge tailwind for so many companies, like Snowflake and Databricks, Workday, Salesforce, Mongo's move with Atlas, Red Hat's cloud strategy with OpenShift and so forth. And you know, the flip side is, because cloud is elastic, what comes up can also go down very easily. Here's an XY graphic from ETR that shows spending momentum, or net score, on the vertical axis and market presence in the dataset on the horizontal axis, what ETR calls pervasion or overlap. This is data from the January 2023 survey, and the red dotted lines show the positions of several companies that we've highlighted going back to January 2021. So let's unpack this for a bit, starting with the big three hyperscalers. The first point is AWS and Azure continue to solidify their moat relative to Google Cloud Platform. And we're going to get into this in a moment, but Azure and AWS revenues are five to six times that of GCP for IaaS. And at those deltas, Google should be gaining ground much faster than the big two. The second point on Google is, notice the red line on GCP relative to its starting point. While it appears to be gaining ground on the horizontal axis, its net score is now below that of AWS and Azure in the survey. So despite its significantly smaller size, it's just not keeping pace with the leaders in terms of market momentum. Now looking at AWS and Microsoft, what we see is basically AWS holding serve. As we know, both Google and Microsoft benefit from including SaaS in their cloud numbers. So the fact that AWS hasn't seen a huge downward momentum relative to its January 2021 position is one positive in the data. And both companies are well above that magic 40% line on the Y-axis; anything above 40% we consider to be highly elevated. But the fact remains that they're down, as are most of the names on this chart. So let's take a closer look. I want to start with Snowflake and Databricks. Snowflake, as we reported several quarters back, came down to Earth; it was up in the 80% range on the Y-axis here. 
And it's still highly elevated in the 60% range and it continues to move to the right, which is positive, but as we'll address in a moment, its customers can dial down consumption just as in any cloud. Now, Databricks is really interesting. It's not a public company, it never made it to IPO during the sort of tech bubble. So we don't have the same level of transparency that we do with other companies that did make it through. But look at how much more prominent it is on the X-axis relative to January 2021. And its net score has basically held up over that period of time. So that's a real positive for Databricks. Next, look at Workday and Salesforce. They've held up relatively well, both inching to the right and generally holding their net scores. Same for Mongo, which is the brown dot above its name, near where it says Elastic; it gets a little crowded there, and Elastic is actually the blue dot above it. But generally, SaaS is harder to dial down, Workday, Salesforce, Oracle's SaaS and others. It's harder to dial down because commitments have been made in advance, they're kind of locked in. Now, one of the discussions from last summer was, is Mongo less discretionary than analytics, i.e. Snowflake? And it's an interesting debate, but maybe Snowflake customers, you know, they're also generally committed to a dollar amount. So over time the spending is going to be there. But in the short term, yeah, maybe Snowflake customers can dial down. Now that highlighted dotted red line, that bolded one, is Datadog, and you can see it's made major strides on the X-axis but its net score has decelerated quite dramatically. OpenShift's momentum in the survey has dropped, although IBM just announced that OpenShift has a billion dollar ARR, and I suspect what's happening there is IBM Consulting is bundling OpenShift into its modernization projects. It's got that sort of captive base, if you will. And as such it's probably not as top of mind to the respondents, but I'll bet you the developers are certainly aware of it. Now the other really notable callout here is Cloudflare. We've reported on them earlier. Cloudflare's net score has held up really well since January of 2021. It really hasn't seen the downdraft of some of these others, but it's making major, major moves to the right, gaining market presence. We really like how Cloudflare is performing. And the last comment is on Oracle, which as you can see, despite its much, much lower net score, continues to gain ground in the market and thrive from a profitability standpoint. But the data pretty clearly shows that there's a downdraft in the market. Okay, so what's happening here? Let's dig deeper into this data. Here's a graphic from the most recent ETR drill down asking customers that said they were going to cut spending what technique they're using to do so. Now, as we've previously reported, consolidating redundant vendors is by far the most cited approach, but there are two key points we want to make here. One is that reducing excess cloud resources, as you can see in the bars, is the second most cited technique, and it's up from the previous polling period. The second point we're not showing, you know, directly, but we've got some red callouts there. Reducing cloud costs jumps to 29% and 28% respectively in financial services and tech/telco. And there it's a much closer second, basically neck and neck with consolidating redundant vendors in those two industries. So they're being really aggressive about optimizing cloud costs. 
Okay, so as we said, cloud is great 'cause you can dial it up, but it's just as easy to dial down. We've identified six factors that customers tell us are affecting their cloud consumption, and there are probably more; if you've got more, we'd love to hear them, but these are the ones that are fairly prominent, that have hit our radar. First, rising mortgage rates mean banks are processing fewer loans, which means less cloud. Second, the crypto crash means less trading activity, and that means less cloud resources. Third, lower ad spend has led companies to reduce not only, you know, their ad buying but also the frequency of running their analytics and their calculations, and they're also often using less data, maybe compressing the timeframe of the corpus down to a shorter time period. Also very prominent, down at the bottom left, is using lower cost compute instances, for example Graviton from AWS or AMD chips, and tiering storage to cheaper S3 or deep archive tiers. And finally, optimizing based on better pricing plans. So customers, smaller companies in particular, are moving off on demand, and larger companies that were experimenting with on demand are moving to spot pricing or reserved instances or savings plans. That all lowers cost, and that means less cloud resource consumption and less cloud revenue.
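As a rough illustration of that pricing-plan lever, here is a small sketch comparing a steady workload under on-demand, committed, and spot-style pricing. The hourly rates and discounts are made-up placeholders, not any cloud provider's published prices; the point is only that moving committed or interruptible work off on-demand rates directly shrinks the bill, and therefore the provider's revenue.

```python
# Illustrative-only cost comparison across pricing plans for a steady workload.
HOURS_PER_MONTH = 730
INSTANCES = 20

# Placeholder hourly rates per instance (not real prices).
plans = {
    "on_demand":        1.00,
    "savings_plan_1yr": 0.72,   # assumes a one-year spend commitment
    "reserved_3yr":     0.55,   # assumes a three-year commitment
    "spot":             0.35,   # interruptible capacity
}

def monthly_cost(rate: float, instances: int, utilization: float = 1.0) -> float:
    """Cost for `instances` running `utilization` fraction of the month."""
    return rate * instances * HOURS_PER_MONTH * utilization

baseline = monthly_cost(plans["on_demand"], INSTANCES)
for name, rate in plans.items():
    cost = monthly_cost(rate, INSTANCES)
    savings = 100 * (1 - cost / baseline)
    print(f"{name:18s} ${cost:>9,.0f}/mo   {savings:4.0f}% savings vs on-demand")
```

Swapping in lower cost instance families (Graviton, AMD) or colder storage tiers works the same way in this model: a lower unit rate times the same usage.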
Now, in the days when everything was on-prem, what would CFOs do? They would freeze CapEx, and IT pros would have to try to do more with less, and often that meant a lot of manual tasks. With the cloud, it's much easier to move things around. It still takes some thinking and some effort, but it's dramatically simpler to do, so you can get those savings a lot faster. Now of course the other huge factor is you can cut or you can freeze, and this graphic shows data from a recent ETR survey with 159 respondents; you can see the meaningful uptick in hiring freezes, freezing new IT deployments, and layoffs. And as we've been reporting, this has been trending up since earlier last year. And note the callout: this is especially prominent in the retail sector; all three of these techniques jump up in retail, and that's a bit of a concern, because oftentimes consumer spending helps the economy make a softer landing out of a pullback. But this is a potential canary in the coal mine; if retail firms are pulling back, it's because consumers aren't spending as much. And so we're keeping a close eye on that. So let's boil this down to the market data and what this all means. In this graphic we show our estimates for Q4 IaaS revenues compared to the "actual" IaaS revenues. And we say "actual" in quotes because AWS is the only one that reports, you know, clean IaaS revenue; Azure and GCP don't report actuals. Why would they? Because it would make them look even, you know, smaller relative to AWS. Rather, they bury the figures in overall cloud, which includes their, you know, G Suite for Google and all the Microsoft SaaS. And then they give us little tidbits: in Microsoft's case they give Azure growth rates, and Google gives kind of relative growth of GCP. So we use survey data and, you know, other data to try to really pinpoint it, and we've been covering this for, I don't know, five or six years, ever since the cloud really became a thing. But looking at the data, we had AWS growing at 25% this quarter and it came in at 20%, so a significant decline relative to our expectations. AWS announced that it exited December, actually, sorry, its January data showed about a mid-teens, 15% growth rate, so that's, you know, something we're watching. Azure was two points off our forecast, coming in at 38% growth. It said it exited December in the 35% growth range, and it said that it's expecting five points of deceleration off of that, so think 30% for Azure. GCP came in three points off our expectation, coming in at 35%, and Alibaba has yet to report, but we've shaved a bit off that forecast based on some survey data, and you know what, maybe 9% is even still not enough. Now for the year, the big four hyperscalers generated almost 160 billion of revenue, but that was 7 billion lower than what we expected coming into 2022. For 2023, we're expecting 21% growth for a total of 193.3 billion. And while that's, you know, significantly lower than historical expectations, it's still four to five times the overall spending growth forecast of between 4 and 5% for the overall market that we just shared with you in our predictions post. We think AWS is going to come in at around 93 billion this year, with Azure closing in at over 71 billion. This is, again, we're talking IaaS here. Now, despite Amazon focusing investors on the fact that AWS's absolute dollar growth is still larger than its competitors', by our estimates Azure will come in at more than 75% of AWS's forecasted revenue. That's a significant milestone. AWS's operating margins, by the way, declined significantly this past quarter, dropping from 30% of revenue a year earlier to 24%. Now that's still extremely healthy, and we've seen wild fluctuations like this before, so I don't get too freaked out about that.
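Those headline numbers hang together arithmetically; here is a quick back-of-the-envelope check using the rounded figures cited in this episode, so the totals come out approximate rather than exact.

```python
# Quick arithmetic check of the figures discussed above (all in $ billions).
revenue_2022 = 160            # big-four IaaS for the year, approximate
growth_2023 = 0.21
forecast_2023 = revenue_2022 * (1 + growth_2023)
print(f"2023 forecast: ~{forecast_2023:.0f}B "
      f"(the episode's 193.3B uses a more precise 2022 base)")

aws_2023, azure_2023 = 93, 71
print(f"Azure as a share of AWS: {100 * azure_2023 / aws_2023:.0f}% "
      f"(crossing the 75% milestone)")

# AWS operating margin swing cited above.
print(f"Operating margin change: 30% -> 24% of revenue, a {30 - 24} point decline")
```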
But I'll say this: Microsoft has a marginal cost advantage relative to AWS because, one, it has a captive cloud on which to run its massive software estate, so it can just throw software at its own cloud; and two, software marginal economics. Despite AWS's awesomeness and high degrees of automation, software is just a better business. Now the upshot for AWS is the ecosystem. AWS is essentially, in our view, positioning very smartly as a platform for data partners like Snowflake and Databricks, security partners like CrowdStrike and Okta and Palo Alto Networks and many others, and SaaS companies. You know, Microsoft is more competitive with those players, even though AWS does have competitive products. Now of course Amazon's competitive with retail companies, so that's another factor, but generally speaking, for tech players Amazon has a really thriving ecosystem that is a secret weapon in our view. AWS is happy to spin the meter with its partners even though it sells competitive products, you know, more so in our view than other cloud players. Microsoft, of course, don't forget, is hyping OpenAI and ChatGPT now; we're hearing a lot about that. We reported last week in our predictions post how OpenAI has shot up in terms of market sentiment in ETR's emerging technology company surveys, and people are moving to Azure to get OpenAI and ChatGPT. That is an interesting lever. Amazon, in our view, has to have a response. They have lots of AI and they're going to have to make some moves there. Meanwhile, Google is emphasizing itself as an AI-first company. In fact, Google spent at least five minutes of continuous dialogue, nonstop, on its AI chops during its latest earnings call. So that's an area that we're watching very closely as the buzz around large language models continues. All right, let's wrap up with some assumptions for 2023. We think SaaS players are going to continue to be sticky. They're going to be somewhat insulated from all these downdrafts because they're so tied in; customers, you know, make the commitment up front, and you've got the lock-in. Now having said that, we do expect some backlash over time on the onerous and generally customer-unfriendly pricing models of most large SaaS companies, but that's going to play out over a longer period of time. Now for cloud generally, and the hyperscalers specifically, we do expect decelerating growth rates into Q3, but we expect the amplitude of the demand swings from this rubber band economy to continue to compress and become more predictable throughout the year. Estimates are coming down, and CEOs, we think, are going to be more cautious when the market snaps back, more cautious about hiring and spending, and as such, perhaps we'll see a more orderly return to growth, which we think will slightly accelerate in Q4 as comps get easier. Now of course the big risk to these scenarios is the economy, the Fed, consumer spending, inflation, supply chain, energy prices, wars, geopolitics, China relations, you know, all the usual stuff. But as always, with our partners at ETR and the Cube community, we're here for you. We have the data, and we'll be the first to report when we see a change at the margin. Okay, that's a wrap for today. I want to thank Alex Morrison, who's on production and manages the podcast, and Ken Schiffman as well, out of our Boston studio, getting this up on LinkedIn Live. Thank you for that. Kristen Martin and Cheryl Knight also help get the word out on social media and in our newsletters. And Rob Hof is our Editor-in-Chief over at siliconangle.com; he does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts; wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com, where you can see all the data. And if you want to get in touch, just email me at david.vellante@siliconangle.com or DM me @dvellante. If you've got something interesting, I'll respond; if you don't, it's either 'cause I'm swamped or it's just not tickling me. You can comment on our LinkedIn posts as well. And please check out ETR.ai for the best survey data in the enterprise tech business. This is Dave Vellante for the Cube Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (gentle upbeat music)

Published Date : Feb 4 2023



Seamus Jones & Milind Damle


 

>>Welcome to the Cube's continuing coverage of AMD's fourth generation EPYC launch. I'm Dave Nicholson, and I'm joining you here in our Palo Alto studios. We have two very interesting guests to dive into some of the announcements that have been made, and maybe take a look at this from an AI and ML perspective. Our first guest is Milind Damle. He's a senior director for software and solutions at AMD, and we're also joined by Seamus Jones, who's a director of server engineering at Dell Technologies. Welcome, gentlemen. How are you? >>Very good, thank you. >>Welcome to the Cube. So let's start out really quickly, Seamus: give us a thumbnail sketch of what you do at Dell. >>Yeah, so I'm the director of technical marketing engineering here at Dell, and our team really takes a look at the technical server portfolio and solutions and ensures that we can look at, you know, the performance metrics, benchmarks, and performance characteristics, so that way we can give customers a good idea of what they can expect from the server portfolio when they're looking to buy PowerEdge from Dell. >>Milind, how about you? What's new at AMD? What do you do there? >>Great to be here. Thank you for having me. At AMD, I'm the senior director of performance engineering and ISV ecosystem enablement, which is a long-winded way of saying we do a lot of benchmarks, improve performance, and demonstrate, with wonderful partners such as Seamus and Dell, the combined leverage that AMD fourth generation processors and Dell systems can bring to bear on a multitude of applications across the industry spectrum. >>Seamus, talk about that relationship a little bit more, the relationship between AMD and Dell. How far back does it go? What does it look like in practical terms? >>Absolutely. So, you know, ever since AMD reentered the server space, we've had a very close relationship. It's one of those things where we are offering solutions to our customers no matter what generation of portfolio they're demanding, whether from their competitor or AMD; we offer a portfolio of solutions that are out there. What we're finding is that with their generational improvements, they're just getting better and better and better. Really exciting things happening from AMD at the moment, and we're seeing that as we engineer those CPU stacks into our server portfolio, you know, we're really seeing unprecedented performance across the board. So excited about the history. You know, my team and Milind's team work very closely together, so much so that we're communicating almost on a daily basis around portfolio platforms and updates around the benchmark testing and validation efforts. >>So Milind, are you happy with these PowerEdge boxes that Seamus is building to house your baby? >>We are delighted. You know, it's hard to find stronger partners than Seamus and Dell. With AMD's second generation EPYC server CPUs, we already had indisputable industry performance leadership, and then with the third and now the fourth generation CPUs, we've just increased our lead over the competition. We've got so many outstanding features at the platform and at the CPU level; everybody focuses on the high core counts, but there's also DDR5, the memory, the I/O, and the storage subsystem.
So we believe we have a fantastic performance, performance per dollar, and performance per watt edge over the competition, and we look to partners such as Dell to help us showcase that leadership. >>Well, so Seamus... >>Yeah... >>Go ahead. >>What I'd add, Dave, is that through the partnership that we've had, you know, we've been able to develop subsystems and platform features that historically we couldn't have, really things around thermals, power efficiency, and efficiency within the platform. That means that customers can get the most out of their compute infrastructure. >>So this is gonna be a big question moving forward: as next generation platforms are rolled out, there's the potential for people to have sticker shock. You talk about something that has eight or 12 cores in a physical enclosure versus 96 cores, and I guess the question is, do the ROI and TCO numbers look good for someone to make that upgrade? Seamus, you want to hit that first, or are you guys integrated? >>Absolutely, yeah. So I'll tell you what, at the moment customers really can't afford not to upgrade, right? We've taken a look at the cost basis of keeping older infrastructure in place, let's say five or seven year old servers that are drawing more power, maybe are poorly utilized within the infrastructure, and take more and more effort and time to manage, maintain, and really keep in production. So as customers look to upgrade or refresh their platforms, what we're finding is that they can do a dynamic consolidation, sometimes 5, 7, 8 to one, depending on which platform they have historically and which one they're looking to upgrade to. Within AI specifically and machine learning frameworks, we're seeing really unprecedented performance. Milind's team partnered with us to deliver multiple benchmarks for the launch, some of which we're still continuing to see the goodness from, things like TPCx-AI as a framework, and I'm talking here specifically about the CPU-based performance, even though in a lot of those AI frameworks you would also expect to have GPUs, and all of the four platforms that we're offering in the AMD portfolio today offer multiple GPU options. So we're seeing a balance between a huge amount of CPU gain and performance, as well as more and more GPU offerings within the platform. That was a real challenge for us because of the thermals. I mean, you think GPUs are going up to 300, 400 watts, and these CPUs at 96 cores are quite demanding thermally, but what we're able to do, through some unique smart cooling engineering within the PowerEdge portfolio, is take a look at those platforms and make the most efficient use case by having things like telemetry within the platform, so that way we can dynamically change fan speeds to get customers the best performance without throttling, based on their need.
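To make the consolidation math concrete, here is a minimal back-of-the-envelope sketch of the kind of model behind those 5:1 to 8:1 refresh claims. The server counts and wattages are invented for illustration; they are not Dell or AMD figures, and a real TCO exercise would also fold in licensing, rack space, and admin time.

```python
import math

def consolidation_summary(old_servers: int, ratio: float,
                          old_watts: float, new_watts: float) -> dict:
    """Estimate footprint and power after an N:1 server consolidation."""
    new_servers = math.ceil(old_servers / ratio)
    old_kw = old_servers * old_watts / 1000
    new_kw = new_servers * new_watts / 1000
    return {
        "new_servers": new_servers,
        "old_kw": round(old_kw, 1),
        "new_kw": round(new_kw, 1),
        "power_reduction_pct": round(100 * (1 - new_kw / old_kw), 1),
    }

# Example: forty 5-7 year old servers drawing ~450W each, consolidated 7:1
# onto newer two-socket boxes budgeted at ~800W each (placeholder numbers).
print(consolidation_summary(old_servers=40, ratio=7, old_watts=450, new_watts=800))
```

Even with the newer boxes drawing more power per unit, the much smaller unit count is what drives the footprint and energy savings in a model like this.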
>>Milind, the Cube was at the Supercomputing conference in Dallas this year, Supercomputing 2022, and a lot of the discussion was around not only advances in microprocessor technology but also advances in interconnect technology. How do you manage that sort of research partnership with Dell when you aren't strictly just focusing on the piece that you are bringing to the party? It's kind of a potluck, you know; we mentioned PCIe Gen 5, or 5.0, whatever you want to call it, new DDR, storage cards, NICs, accelerators, all of those things. How do you keep that straight when those aren't things that you actually build? >>Well, excellent question, Dave. And you know, as we are developing the next platform, obviously the ongoing relationship is there with Dell, but we start way before launch, right? Sometimes it's multiple years before launch. So we are not just focusing on the super high core counts at the CPU level and the platform configurations, whether it's single socket or dual socket; we are looking at it from the memory subsystem, from the I/O subsystem. PCIe lanes for storage are a big deal, for example, in this generation. So it's really a holistic approach. And look, core counts are, you know, more important at the higher end for some customers in the HPC space and some of the AI applications, but on the lower end you have database applications or some other ISV applications that care a lot about those. So it's, I guess, different things matter to different folks across verticals. So we partnered with Dell very early in the cycle, and it's really a joint co-engineering effort. Seamus talked about the focus on AI with TPCx-AI; we set five world records in that space just on that one benchmark with AMD and Dell, so a fantastic kickoff to that across a multitude of scale factors. But TPCx-AI is not the only thing we are focusing on. We are also collaborating with Dell and des e i on some of the transformer-based natural language processing models that we worked on, for example. So it's not just a CPU story; it's CPU, platform, subsystem, software, the whole thing delivering goodness across the board to solve end user problems in AI and other verticals.

>>Yeah, the two of you are at the tip of the spear from a performance perspective, so I know it's easy to get excited about world records, and they're fantastic. I know, Seamus, that end user customers might immediately have the reaction, well, I don't need a Ferrari in my data center; what I need is to be able to do more with less. Well, aren't we delivering that also? And you know, Milind, you mentioned natural language processing. Seamus, are you thinking in 2023 that a lot more enterprises are gonna be able to afford to do things like that? I mean, what are you hearing from customers on this front? >>I mean, while the adoption of the top bin CPU stack is definitely the exception, not the rule, today we are seeing marked performance even when we look at the mid bin CPU offerings from AMD; those are, you know, the most commonly sold SKUs. And when we look at customers' implementations, really what we're seeing is that they're trying to make the most not just of dollar spend but also of the whole subsystem that Milind was talking about. You know, the fact that balanced memory configs can give you marked performance improvements, not just at the CPU level but actually all the way through to the application performance. So it's trying to find the correct balance between the application needs, your budget, power draw, and infrastructure within the data center, right? Because not only could you be purchasing and looking to deploy the most powerful systems, but if you don't have an infrastructure that's got the right power (that's a large challenge that's happening right now) and the right cooling to deal with the thermal differences of the systems, you want to ensure that you can accommodate those not just for today but in the future, right? So it's planning that balance. >>If I may just add onto that: when we launched, not just the fourth generation but any generation in the past, there's a natural tendency to zero in on the top bin and say, wow, we've got so many cores. But as Seamus correctly said, it's not just that one high core count OPN, it's the whole stack. And we believe with our fourth gen CPU stack we've simplified things so much. We don't have dozens and dozens of offerings; we have a fairly simple SKU stack, but we also have a very efficient SKU stack. So even though at the top end we've got 96 cores, the thermal budget that we require is fairly reasonable. And look, with all the energy crisis going around, especially in Europe, this is a big deal. Not only do customers want performance, but they're also super focused on performance per watt. And so we believe with this generation we really delivered not just on raw performance, but also on performance per dollar and performance per watt.

>>Yeah. And it's not just Europe. We're here in Palo Alto right now, which is in California, where we all know the cost of an individual kilowatt hour of electricity because it's quite high. So thermals, power, cooling, all of that goes together, and that drives cost. So it's a question of how much you can get done per dollar. Seamus, you made the point that you don't just have a one size fits all solution, that it's fit for function. I'm curious to hear from the two of you what your thoughts are from a general AI and ML perspective. We're starting to see right now, if you hang out on any kind of social media, the rise of these experimental AI programs that are being presented to the public. Some will write stories for you based on a prompt, some will create images for you. One of the more popular ones will create sort of a superhero alter ego for you; I can't wait to do it, I just got the app on my phone. So those are all fun and they're trivial, but they sort of get us used to this idea that, wow, these systems can do things, they can think on their own in a certain way. What do you see the future of that looking like over the next year in terms of enterprises, what they're going to do with it? Milind? >>Yeah, I can go first. >>Sure. Yeah. Good. >>So the couple of examples, Dave, that you mentioned are, I guess, a blend of novelty and curiosity. You know, people using AI to write stories or poems or, you know, even carve out little jokes, check grammar and spelling: very useful, but still, you know, kind of in the realm of novelty in the mainstream. In the enterprise, look, in my opinion, AI is not just gonna be a vertical, it's gonna be a horizontal capability. We are seeing AI deployed across the board, once the models have been suitably trained, for disparate functions ranging from fraud detection or anomaly detection, both in the financial markets and in manufacturing, to things like image classification or object detection that you talked about, in the sort of core AI space itself, right? So we don't think of AI necessarily as a vertical, although we are showcasing it with a specific benchmark for launch; we really look at AI emerging as a horizontal capability, and frankly, companies that don't adopt AI on a massive scale run the risk of being left behind. >>Yeah, absolutely. AI as an outcome is really something that companies are adopting, and the frameworks that you're now seeing as the novelty pieces that Milind was talking about are really indicative of the under-the-covers activity that's been happening within infrastructures and within enterprises for the past, let's say, 5, 6, 7 years, right? The fact that you have object detection within manufacturing to be able to do defect detection on manufacturing lines, and now that can be done on edge platforms, all the way out at the device. So you no longer only have to do things, you know, in the data center; you can bring it right out to the edge and have that high performance, you know, inferencing and training of models. Now, not necessarily training at the edge, but the inferencing especially, so that way you can, you know, have more and better use cases for some of these instances, things like, you know, smart cities with video detection. Especially during COVID, we saw a lot of hospitals and a lot of customers that were using image and spatial detection within their video feeds to be able to determine which employees were at risk. So there are a lot of different use cases that have been coming around. I think the novelty aspect of it is really interesting, and I know my kids, my daughters, love that portion of it, but really what's been happening has been exciting for quite a period of time in the enterprise space. We're just now starting to actually see those come to light in more of a consumer relevant kind of use case. So the technology that's been developed in the data center around all of these different use cases is now starting to feed in, because we do have more powerful compute at our fingertips and we do have the ability to talk more about the framework and infrastructure that's right out at the edge. You know, I know, Dave, in the past you've said things like the data center of, you know, 20 years ago is now in my hand as my cell phone. That's right, and that's a fact, and it's exciting to think where it's gonna be in the next 10 or 20 years. >>One terabyte, baby. >>Yeah, one terabyte. >>It's mind boggling. And it makes me feel old. >>Yeah, me too. And Seamus, that all sounded great. All I want is a picture of me as a superhero though, so you guys are already way ahead of the curve with that. On that note, Seamus, wrap us up with kind of a summary of the highlights of what we just went through in terms of the performance you're seeing out of this latest gen architecture from AMD.

>>Absolutely. So within the TPCx-AI framework that Milind's team and my team have worked on together, you know, we're seeing unprecedented price performance. The fact that you can get a 220% uplift gen on gen for some of these benchmarks, and, you know, you can have a five to one consolidation, means that if you're looking to refresh platforms that are historically legacy, you can get a huge amount of benefit, both in a reduction in the number of units that you need to deploy and in the amount of performance that you can get per unit. You know, Milind mentioned earlier CPU performance and performance per watt: specifically on the two socket, 2U platform using the fourth generation AMD EPYC, we're seeing 55% higher CPU performance per watt, and for people who aren't necessarily looking at these statistics every generation of servers, that is a huge leap forward. That, combined with 121% higher SPEC scores, you know, as a benchmark, those are huge. Normally we see, let's say, a 40 to 60% performance improvement on the SPEC benchmarks; we're seeing 121%. So while that's really impressive at the top bin, we're actually seeing, you know, large percentage improvements across the mid bins as well, things in the range of 70 to 90% performance improvements in those standard bins. So it's a huge performance improvement and power efficiency gain, which means customers are able to save energy, space, and time based on their deployment size. >>Thanks for that, Seamus. Sadly, gentlemen, our time has expired. With that, I want to thank both of you. It was a very interesting conversation; thanks for being with us, both of you. Thanks for joining us here on the Cube for our coverage of AMD's fourth generation EPYC launch. Additional information, including white papers and benchmarks plus editorial coverage, can be found on doeshardwarematter.com.
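To see how a raw score uplift and a performance-per-watt uplift can differ, here is a tiny sketch using the percentages quoted above. The absolute scores and socket power numbers are placeholders chosen only so the ratios land near the quoted 121% and 55%; they are not measured SPEC results, TDPs, or vendor data.

```python
# Gen-on-gen comparison helper; inputs are placeholders, only the ratios matter.
def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

prev_score, prev_watts = 100.0, 280.0   # hypothetical previous-gen result
new_score, new_watts = 221.0, 400.0     # +121% score at a higher power budget

ppw_prev = perf_per_watt(prev_score, prev_watts)
ppw_new = perf_per_watt(new_score, new_watts)

print(f"Score uplift:     {100 * (new_score / prev_score - 1):.0f}%")
print(f"Perf/watt uplift: {100 * (ppw_new / ppw_prev - 1):.0f}%")
```

The point of separating the two metrics is that a new generation can draw more power per socket and still come out well ahead once the work done per watt is measured.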

Published Date : Dec 9 2022

