Bhavesh Patel, Dell Technologies & Shreya Shah, Dell Technologies | SuperComputing 22


 

(upbeat jingle)

>> Cameraman: Just look, Mike.

>> Good afternoon everyone, and welcome back to Supercomputing. We're live here with theCUBE in Dallas. I'm joined by my cohost, David. Wonderful to be sharing the afternoon with you. And we are going to be kicking things off with a very thrilling discussion with two important thought leaders at Dell. Bhavesh and Shreya, thank you so much for being on the show. Welcome. How are you doing? How does it feel to be at Supercomputing?

>> Pretty good. We're really enjoying the show, and we have a lot of customer conversations ongoing.

>> Yeah. Are most of your customers here?

>> Yes. Most of the customers are mostly over in the Hyatt, and a lot of discussions are ongoing.

>> Yeah. Must be nice to see everybody show off. Are you enjoying the show so far, Shreya?

>> Yeah, I missed this for two years, so it's nice to be back and meeting people in person.

>> Yeah, definitely. We all missed it. So, it's been a very exciting week for Dell. Do you want to talk about what you're most excited about in the announcement portfolio that we saw yesterday?

>> Absolutely.

>> Go for it, Shreya.

>> Yeah, so before we get into the portfolio side of the house, we really wanted to share our thoughts on what it is that's moving HPC and supercomputing. For a long time-

>> Stock trends?

>> For a long time, HPC and supercomputing have been driven by packing the racks, maximizing the performance. And in the work that Bhavesh and I have been doing over the last couple of generations, we're seeing an emerging trend, and that is that the thermally dissipated power is actually exploding. So the idea of packing the racks is now turning into: how do you maximize your performance while still being able to deliver the infrastructure within the limited kilowatts per rack that you have in your data center?

>> It's been interesting walking around the show seeing how many businesses associated with cooling-

>> Savannah: So many.

>> are here. And it's funny to see: they open up the cabinet, and it's almost 19th-century-looking technology. It's pipes and pumps and-

>> Savannah: And very industrial-like.

>> Yeah, very, very industrial-looking.

>> Yeah, and I think that's where the trends are: more in the power and cooling. That is what everybody is trying to solve from an industry perspective. And when we looked at our portfolio, at what we wanted to bring out in this timeframe targeting the HPC and AI space, there were a couple of vectors we had to look at. We had to look at cooling, we had to look at power, where the trends are happening. We had to look at what data center needs are showing up, be it in the cooler space, be it in the HPC space, be it in the large installs happening out there. So, looking at those trends and then factoring in how you build a node out, we said, okay, we need to diversify and build out an infrastructure. And that's what Shreya and I looked into: not only the silicon diversity showing up, but also, okay, there is this power, there is this cooling, there is silicon diversity; now, how do you start packing it up and bringing it to the marketplace? So those are some of the trends that we captured, and that's what you see on the exhibit floor today, even.

>> And Dell Technologies supports both liquid cooling and air cooling. Do you have a preference? Or is it more customer-based?
>> It is going to be, and Shreya can allude to it, more workload- and application-focused. That is what we want to be thinking about. And it's not going to be siloed into, okay, are we just going to target air cooling; we wanted to cover a breadth from air to liquid. And that's how we built our portfolio when we looked at our GPUs.

>> To add to that, if we look at our customer landscape, we see that there's a peak between 35 and 45 kilowatts per rack. We see another peak at 60, we see another peak at 80, and we've got select, very specialized customers above a hundred kilowatts per rack. And if we take that 35 to 45 kilowatts per rack, you can pack maybe three or four of these chassis, right? So, to what Bhavesh is saying, we're really trying to provide the flexibility for what our customers can deliver in their data centers. Whether it be at the 35 end, where air cooling may make complete sense, or above 45, where maybe that's the time to pivot to a liquid-cooled solution.

>> So, you said there are situations where you could have 90 kilowatts being consumed by a rack of equipment. I live in California, where we are very, very closely attuned to things like the price of a kilowatt-hour of electricity.

>> Seriously.

>> And I'm kind of an electric car nerd, so for the folks who really aren't as attuned: 90 kilowatts, that's over a hundred horsepower. So, think about a hundred horsepower worth of energy being used for compute in one of these racks. It's insane. So, a layperson can kind of imagine the variables that go into this equation of how we bring the power in and get the maximum bang per kilowatt-hour. But are there any interesting odd twists in your equations that you find when you're trying to figure that out?

>> Yeah. When we look at a lot of these trends, we think about them more from a power density perspective that we want to try to solve. We are mindful, from an energy perspective, of where energy prices are moving. So what we do is try to optimize right at the node level, in how we are going to do our liquid-cooled and air-cooled infrastructure. It's about how you keep a balance, and about not delivering or consuming power that is maybe not needed for that particular node itself. The other way we optimized when we built this infrastructure out is by thinking about how we are going to deliver it at the rack level, keeping in mind how the liquid-cooling plumbing will happen. Where is it coming into the data center? Is it coming in at the bottom of the floor? Are we going to do it on the left-hand side of your rack or the right-hand side? It's a big thing. Yeah, it may not seem to matter which side you put it on, but there is a piece of that going into our decision as to how we are going to build it, no doubt. So, there are multiple factors coming in. And besides the power and cooling, which we all touched upon, what Shreya and I also look at is where this whole GPU and accelerator space is moving. We're not just looking at the current set of GPUs and where they're moving from a power perspective. We are looking at this whole silicon diversity that is happening out there.
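To put rough numbers on the rack-budget arithmetic in the exchange above, here is a minimal Python sketch. The roughly 10 kW per chassis is an assumption inferred from "three or four" chassis fitting in a 35 to 45 kilowatt rack, not a Dell specification.

```python
# Back-of-the-envelope rack-budget math; all figures are illustrative.
KW_PER_HP = 0.7457  # one mechanical horsepower is about 745.7 watts


def chassis_per_rack(rack_budget_kw: float, chassis_kw: float) -> int:
    """How many chassis fit inside a rack's power budget."""
    return int(rack_budget_kw // chassis_kw)


def kw_to_horsepower(kw: float) -> float:
    """Convert kilowatts to mechanical horsepower."""
    return kw / KW_PER_HP


print(chassis_per_rack(35, 10))     # -> 3 chassis at the low end of the peak
print(chassis_per_rack(45, 10))     # -> 4 chassis at the high end
print(round(kw_to_horsepower(90)))  # -> 121, David's "over a hundred horsepower"
```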
>> So, we've been looking at multiple accelerators. There are multiple companies out there, and we can tell you there are over 30 to 50 silicon companies that we are actively engaged with and looking into. So, our decision in building this particular portfolio out was being mindful about what the maturity curve is, from a software point of view and from a hardware point of view, and what we can deliver that the customer really needs, yeah.

>> It's a balancing act, yeah.

>> Bhavesh: It is a balancing act.

>> Let's stay in that zone a little bit. What other trends? Shreya, let's go to you on this one. What other trends are you seeing in the acceleration landscape?

>> Yeah, I think, to your point, the balancing act is actually a very interesting paradigm. One of the things that Bhavesh and I constantly think about, and we call it the Goldilocks syndrome, is that at that 90 and a hundred, right? Density matters.

>> Savannah: A lot.

>> But what we've done is we have really figured out what that optimal point is, because we don't want to be the thinnest possible. You lose a lot of power redundancy, you lose a lot of I/O capability, you lose a lot of storage capability. And so, from our portfolio perspective, we've really tried to think about the Goldilocks syndrome and where that sweet spot is.

>> I love that. I love the thought of you all just standing around server racks, having a little bit of porridge and determining exactly the thickness that you want in terms of the density trade-off there. Yeah, I love that, though. It's very digestible. Are you seeing anything else?

>> No, I think Shreya pretty much summed it up: what we are thinking about, where the technology features are moving, and what we are thinking in terms of our portfolio, so, yeah.

>> So, just a lesson, you know, Shreya, a rudimentary lesson for us. You put power into a CPU or a GPU and you're getting something out, and a lot of what we get out is heat. Is there an objective measure of efficiency in these devices that we look at? Because you could think of a 100-watt incandescent light bulb: it gives out a certain amount of light and a certain amount of heat. A 100-watt-equivalent LED, in terms of the lumens it's putting out, gives a lot more light for the power going in, and a lot less heat. We have LED lights around us, thankfully, instead of incandescent lights.

>> Savannah: Otherwise we would be melting.

>> But when you put power into a CPU or a GPU, how do you measure that efficiency? Because it's sort of funny: it's not moving, so it's not like putting power into a vehicle and measuring forward motion and heat. You're measuring this sort of esoteric thing, this processing, that you can't see or touch. So how much do you get per watt of power? How do you measure it, I guess? Help us out with a base-up understanding, because most people have never been in a data center before. Maybe they've put their hand behind the fan in a personal computer, or they've had a laptop feel warm on their lap. But we're talking about massive amounts of heat being generated. Can you kind of explain the fundamentals of that?

>> So, the way we think about it is, there's a performance per dollar metric.
There's a performance per dollar per watt metric, and that's where the power comes in. But on the flip side, we have something called PUE, power usage effectiveness, from the data center aspect. And so we try to marry those concepts together and really try to find that sweet spot. [a worked example of these metrics follows the transcript]

>> Is there anything in the way of harvesting that heat to do other worthwhile work?

>> Yes.

>> You know, it's like, hey, everybody that works in the data center, you all have your own personal shower now, water heated.

>> Recirculating, too.

>> Courtesy of Intel and AMD.

>> Or a heated swimming pool.

>> Right, a heated swimming pool.

>> I like the pool.

>> So, that's the circulation, or recycling, of that thermal heat that you're talking about, absolutely. And we see that our customers in the Europe region are actually a lot more advanced in terms of taking that power and doing something valuable with it, right?

>> Cooking croissants and making lattes, probably, right?

>> (laughing) Or heating your home.

>> Makes me want to go on vacation: a pool, croissants.

>> That would be a good use. But it's more on the PUE aspect of it. It's more thinking about how we are more energy-efficient in our design. So we think about what's the best efficiency we can get, and what's the amount of heat capture we can get. Are we just wasting heat out there? That's always the goal when designing these platforms, and it's something we kept in mind with a lot of our power and cooling experts within Dell: how much can we capture? And if we are not capturing anything, what are we recirculating back in order to get much better efficiency at the rack level? And for the other equipment out there that is going to be purely air-cooled, what can we do about it?

>> Do you think both of these technologies are going to continue to work in tandem, air cooling and liquid cooling? We're not going to see-

>> Yeah, when we think about our portfolio and where we see the trends moving in the future, I think air cooling is definitely going to be there. There will be a huge amount of usage from customers looking into air cooling. Air cooling is not going to go away. Liquid cooling is definitely something that a lot of customers are looking into adopting. PUE becomes the bigger factor for it. How much heat can I capture with it? That's a bigger equation that is coming into the picture. And that's where we said, okay, we have a transition happening. And that's what you see in our portfolio now.

>> Yeah, Intel is... Intel, excuse me, Dell is agnostic when it comes to things like Intel, AMD, Broadcom, Nvidia. So, you can look at this landscape and, I think, make a fair judgment. When we talk about GPU versus CPU in terms of efficiency, do you see that as something that will live on into the future for some applications? Meaning, look, GPU is the answer. Or is it simply a question of leveraging what we think of as CPU cores differently? Is this going to ebb and flow back and forth? Shreya, are things going to change? Because right now, a lot of what's been announced recently in the high-performance computing area leverages GPUs. But we're right in the season of AMD and Intel coming out with next-gen processor architectures.

>> Savannah: Great point.

>> Shreya: Yeah.

>> Any thoughts?
>> Yeah, so what I'll tell you is that it is all application-dependent. If you rewind a couple of generations, you'll see that the GPU journey had just started, right? And so there is a minimum threshold ROI that customers have to realize in order to move their workloads from CPU-based to GPU-based. As the technology evolves and matures, you'll have more and more applications that fit within that bucket. Does that mean that everything will fit in that bucket? I don't believe so; the technology will continue to mature on the CPU side, but also on the GPU side. So it depends on where the customer is in their journey. It's the same for air versus liquid: liquid is not an if, it's a when. When the data center environment is ready to support it, and when you have the ROI that goes with it, that's when it makes sense to transition one way or the other.

>> That's awesome. All right, last question for you both, in a succinct phrase if possible; I won't count characters. What do you hope that we get to talk about next year when we have you back on theCUBE? Shreya, we'll start with you.

>> Ooh, that's a good one. I'm going to let Bhavesh go first.

>> Savannah: Go for it. (laughs) What do you think, Bhavesh?

>> Next year, because I'm in the CTI group, I think what you'll see is more talk about where cache coherency is moving. I'll just leave it at that, and we'll talk about it more.

>> Savannah: All right.

>> Dave: Tantalizing.

>> I was going to say, a little window in there, yeah. And to add to that, I'm excited to see what the future holds with CPUs, GPUs, smart NICs, and the integration of these technologies: where that is all headed, and how it ultimately helps our customers solve these really, really large and complex problems.

>> The problems our globe faces. Wow, well, it was absolutely fantastic to have you both on the show. Time just flew. David, wonderful questions, as always. Thank you all for tuning in to theCUBE, live from Dallas, where we are broadcasting all about supercomputing, high-performance computing, and everything that a hardware nerd like me loves. My name is Savannah Peterson. We'll see you again soon.

(upbeat jingle)
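As referenced above, here is a minimal Python sketch of the two efficiency metrics Shreya describes: performance per dollar per watt, and PUE. Dividing performance by cost times watts is one plausible reading of the metric, and every input value below is hypothetical, not measured or vendor data.

```python
# Illustrative efficiency metrics; all inputs are hypothetical examples.


def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power; 1.0 is the ideal."""
    return total_facility_kw / it_equipment_kw


def perf_per_dollar_per_watt(perf: float, cost_usd: float, watts: float) -> float:
    """Normalize a benchmark score by both acquisition cost and power draw."""
    return perf / (cost_usd * watts)


# A facility drawing 130 kW in total to run 100 kW of IT gear:
print(round(pue(130, 100), 2))  # -> 1.3

# Two hypothetical accelerators on the same benchmark:
cheap_cool = perf_per_dollar_per_watt(perf=1000, cost_usd=10_000, watts=400)
fast_hot = perf_per_dollar_per_watt(perf=1500, cost_usd=20_000, watts=700)
print(cheap_cool > fast_hot)  # -> True: raw speed alone doesn't win this metric
```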

Published Date: Nov 15, 2022
