Guido Appenzeller, Intel | HPE Discover 2021
(soft music)

>> Welcome back to HPE Discover 2021, the virtual version. My name is Dave Vellante, you're watching theCUBE, and we're here with Guido Appenzeller, who is the CTO of the Data Platforms Group at Intel. Guido, welcome to theCUBE, come on in.

>> Aww, thanks Dave, I appreciate it. It's great to be here today.

>> So I'm interested in your role at the company, let's talk about that. You're brand new, tell us a little bit about your background. What attracted you to Intel, and what's your role here?

>> Yeah, so I grew up with the startup ecosystem of Silicon Valley, I came here for my PhD and never left. Built software companies, worked at software companies, worked at VMware for a little bit. And I think my initial reaction when the Intel recruiter called me was like, hey, you've got the wrong phone number, I'm a software guy, that's probably not who you're looking for. But we had a good conversation, and I think at Intel there's a realization that you need to look at what Intel builds more as an overall system, from an overall systems perspective: the software stack and the hardware components are getting more and more intricately linked, and you need the software to basically bridge across the different hardware components that Intel is building. So I'm here now as the CTO for the Data Platforms Group, which builds the data center products here at Intel. And it's a really exciting job, and these are exciting times at Intel; with Pat we've got a fantastic CEO at the helm, I've worked with him before at VMware. So a lot of things to do, but I think a very exciting future.

>> Well, the data center is the wheelhouse of Intel. Of course your ascendancy was a function of the PCs and the great volume and how you changed that industry, but really data centers is where, I remember the days people said Intel will never be in the data center, it's just a toy, and of course you're the dominant player there now. So your initial focus here is really defining the vision, and I'd be interested in your thoughts on the future, what the data center looks like in the future, where you see Intel playing a role. What are you seeing as the big trends there? Pat Gelsinger talks about the waves, he says if you don't ride the waves you're going to end up being driftwood. So what are the waves you're driving? What's different about the data center of the future?

>> Yeah, that's right, you want to surf the waves, that's the way to do it. So look, I like to look at this sort of in terms of major macro trends, and I think the biggest thing that's happening in the market right now is the cloud revolution. And I think we're well halfway through, or something like that, in this transition from the classic client-server type model, where enterprises were running their own data centers, to more of a cloud model, where something is run by hyperscale operators or maybe run by an enterprise themselves, (indistinct), there's a variety of different models. But the provisioning models have changed; it's much more of a turnkey type service. And when we started out on this journey, we built data centers the same way that we built them before, although the way to deliver IT had really changed. It's moving to more of a service model, and we're really now starting to see the hardware diverge, the actual silicon that we need to build to address these use cases diverge.
And so I think one of the things that is probably most interesting for me is really to think through how does Intel, in the future, build silicon that's built for clouds: on-prem clouds, edge clouds, hyperscale clouds, basically built for these new use cases that have emerged.

>> So just kind of a quick aside, to me the definition of cloud is changing, it's evolving. It used to be this set of remote services in a hyperscale data center; now that experience is coming on-prem, it's connecting across clouds, it's moving out to the edge, it's supporting all kinds of different workloads. How do you see that sort of evolving cloud?

>> Yeah, I think the biggest difference to me is that a cloud starts with this idea that the infrastructure operator and the tenant are separate, and that actually has major architectural implications. As an analogy: if I build a single family home, where everything is owned by one party, I want to be able to walk from the kitchen to the living room pretty quickly, if that makes sense. In my house here it's actually an open kitchen, it's the same room, essentially. If you're building a hotel, where your primary goal is to have guests, you pick a completely different architecture. The kitchen for your restaurants, where the cooks are busy preparing the food, and the dining room, where the guests are sitting, are separate. The hotel staff has a dedicated place to work and the guests have dedicated places to mingle, but they don't overlap, typically. I think it's the same thing with architecture in the clouds. Initially the assumption was it's all one thing, and now suddenly we're starting to see a much cleaner separation of these different areas.

I think a second major influence is that the type of workloads we're seeing is just evolving incredibly quickly. 10 years ago things were mostly monolithic; today most new workloads are microservice based, and that has a huge impact on where CPU cycles are spent, where we need to put in accelerators, how we build silicon for that. To give you an idea, there's some really good research out of Google and Facebook where they ran the numbers. For example, if you just take a standard system and you run an application written in a microservice-based architecture, you can spend anywhere from, I want to say, 25% to in some cases over 80% of your CPU cycles just on overhead: marshaling and demarshaling the protocols, the encryption and decryption of the packets, and the service mesh that sits in between all of these things. That creates a huge amount of overhead. So if 80% goes into these overhead functions, really our focus needs to be on how do we enable that kind of infrastructure?
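(A rough, illustrative sketch of the overhead effect Guido describes, not the Google or Facebook studies he refers to. It times JSON marshaling and demarshaling of a small request against a deliberately trivial handler, so the exact ratio it prints is an artifact of these assumptions; the payload shape, the iteration count, and the 2x figure in the Amdahl's-law line are all arbitrary choices for illustration.)

```python
# Illustrative only: what fraction of a toy request path goes to marshaling
# (JSON encode/decode) versus "business logic", and what a 2x faster CPU buys
# end to end if only the logic speeds up (Amdahl's law).
import json
import time

payload = {
    "user_id": 42,
    "items": [{"sku": f"sku-{i}", "qty": i % 3 + 1} for i in range(20)],
    "trace": "0af7651916cd43dd8448eb211c80319c",
}

def handle(req):
    # Trivial stand-in for business logic: sum the quantities.
    return {"total_qty": sum(item["qty"] for item in req["items"])}

N = 50_000

t0 = time.perf_counter()
for _ in range(N):
    wire = json.dumps(payload)   # marshal
    req = json.loads(wire)       # demarshal
t_marshal = time.perf_counter() - t0

t0 = time.perf_counter()
for _ in range(N):
    handle(payload)              # logic only
t_logic = time.perf_counter() - t0

overhead = t_marshal / (t_marshal + t_logic)
# Amdahl's law: if only the logic gets 2x faster, the end-to-end gain stays
# small whenever the overhead fraction dominates.
gain = 1 / (overhead + (1 - overhead) / 2)
print(f"marshaling share of cycles: {overhead:.0%}")
print(f"end-to-end speedup from 2x faster logic: {gain:.2f}x")
```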
>> Yeah, so let's talk a little bit more about workloads if we can. On the overhead, there's also, as the data center becomes software defined, thanks to your good work at VMware, a lot of cores that are supporting that software-defined data center. And then-

>> That's exactly right, yeah.

>> And as well, you mentioned microservices, container-based applications, but as well AI is coming into play. And AI is kind of amorphous, but it's really data-oriented workloads versus kind of general-purpose ERP and finance and HCM. So those workloads are exploding, and then we can maybe talk about the edge. How are you seeing the workload mix shift, and how is Intel playing there?

>> I think the trend you're talking about is definitely right, we're getting more and more data centric; shifting the data around becomes a larger and larger part of the overall workload in the data center. And AI is getting a ton of attention. Look, if I talk to most operators, AI is still an emerging category. We're seeing, I'd say, five, maybe 10% of workloads being AI. It's growing, and they're very high value workloads and very challenging workloads, but it's still a smaller part of the overall mix. Now edge is two things, it's big and it's complicated, because the way I think about edge is that it's not just one homogeneous market, it's really a collection of separate sub-markets. It's very heterogeneous, it runs on a variety of different hardware. Edge can be everything from a little server that's fanless, strapped to a telephone pole with an antenna on top of it, to a microcell, or it can be something that's running inside a car; a modern car has a small little data center inside. It can be something that runs on an industrial factory floor, or with the network operators; there's a pretty broad range of verticals that all look slightly different in their requirements. And I think it's really interesting, it's one of those areas that really creates opportunities for vendors like HPE to really shine and address this heterogeneity with a broad range of solutions; very excited to work together with them in that space.

>> Yeah, so I'm glad you brought HPE into the discussion, 'cause we're here at HPE Discover, I want to connect that. But so when I think about HPE strategy, I see a couple of opportunities for them. Obviously Intel is going to play in every part of the edge, the data center, the near edge and the far edge, and I gauge HPE does as well with Aruba. Aruba is going to go to the far edge. It's not yet clear to me how far HPE's traditional server business goes, to the inside of automobiles, we'll see, but it certainly will be at, let's call it the near edge, as a consolidation point-

>> Yeah.

>> Et cetera. And look, the edge can be a race track, it could be a retail store, it could be defined in so many ways; where does it make sense to process the data? So my question is, what's the role of the data center in this world of edge? How do you see it?

>> Yeah, look, I think in a sense what the cloud revolution is doing is that it leads to a polarization of the classic data center into edge and cloud, if that makes sense. It's splitting; before, this was all mingled a little bit together. If my data center is in my basement anyway, what's the edge, what's the data center? It's the same thing. The moment I'm moving some workloads to the cloud, I don't even know where they're running anymore, and then there are other workloads that have to have a certain sense of locality, that I need to keep close. And there are some workloads you just can't move into the cloud. If I'm generating lots of video data that I have to process, it's financially completely unattractive to shift all of that to a central location; I want to do this locally. And will I ever connect my smoke detector with my sprinkler system via the cloud? No, I won't, because if things go bad, that may not work anymore. So I need something that does this locally. So I think there are many reasons why you want to keep something on premises.
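(A minimal sketch of the "keep the safety loop local" point, assuming hypothetical read_smoke_level, open_sprinkler, and post_telemetry stand-ins: the decision that matters is made locally, and the cloud is only a best-effort telemetry path that never sits in the critical path.)

```python
# Sketch only: the smoke -> sprinkler loop runs locally; a cloud outage never
# blocks it. All three helper functions are hypothetical stand-ins.
import time

SMOKE_THRESHOLD = 0.7  # assumed normalized threshold

def read_smoke_level() -> float:
    # Hypothetical local sensor read, returning a level in [0, 1].
    return 0.0

def open_sprinkler() -> None:
    # Hypothetical local actuator.
    pass

def post_telemetry(event: dict) -> None:
    # Hypothetical best-effort report to a cloud endpoint; may raise if the
    # network or the cloud service is down.
    pass

def control_loop() -> None:
    while True:
        level = read_smoke_level()
        if level > SMOKE_THRESHOLD:
            open_sprinkler()      # safety decision made locally, no network dependency
        try:
            post_telemetry({"smoke": level, "ts": time.time()})
        except Exception:
            pass                  # losing the cloud must not affect the sprinkler
        time.sleep(1.0)
```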
And I think it's a growing market, it's very exciting. We're doing some very good stuff with friends like HPE: they have the ProLiant DL110 Gen10 Plus server with our latest 3rd Generation Xeons on it for Open RAN, which is the radio access network in the telco space, and the HPE Edgeline servers, also with 3rd Generation Xeons. There are some really nice products there that I think can really help address these edge use cases for enterprises, carriers, and a number of different organizations.

>> Can you explain? You mentioned Open RAN, vRAN. Should we essentially think of that as kind of the software-defined telco?

>> Yeah, exactly, it's software-defined cellular. I actually learned a lot about that over the recent months. When I was taking these classes at Stanford, these things were still done in analog, meaning the radio signal would be processed in an analog way and digested. Today, typically, the radio signal is immediately digitized and all the processing of the radio signal happens digitally, and it happens on servers, some of them HPE servers. It's a really interesting use case where we're basically now able to do something in a much, much more efficient way by moving it to a digital, more modern platform. And it turns out you can actually virtualize these servers and run a number of different cells inside the same server. It's really complicated, because you have to have fantastic real-time guarantees and a very sophisticated software stack, but it's a really fascinating use case.

>> A lot of times we have these debates, and it's maybe somewhat academic, but I'd love to get your thoughts on it. The debate is about how much data that is processed and inferred at the edge is actually going to come back to the cloud. One side says most of the data is going to stay at the edge, a lot of it's not even going to be persisted, and that's sort of the negative for the data center. But then the counter to that is there's going to be so much data that even a small percentage of all the data we're going to create means so much more data back in the cloud, back in the data center. What's your take on that?

>> Look, I think there are different applications that are easier to do in certain places. Going to a large cloud has a couple of advantages: you have a very complete software ecosystem around you, lots of different services. And if you need very specialized hardware, if I want to run a big learning task where I need 1000 machines, and this runs for a couple of days and then I don't need to do that for another month or two, for that it's really great; there's on-demand infrastructure, having all this capability up there. At the same time, it costs money to send the data up there, and if I just look at the hardware cost, it's much, much cheaper to build it myself, in my own data center or at the edge. So I think we'll see customers picking and choosing what they want to do where, and there's a role for both, absolutely. And so I think there are certain categories. At the end of the day, why do I absolutely need to have something at the edge? There are a couple of, I think, good use cases. Let me actually rephrase a little bit: I think it's three primary reasons. One is simply bandwidth, where I'm saying, my video data, like I have 100 4K video cameras with 60-frames-per-second feeds, there's no way I'm going to move that into the cloud, it's just cost prohibitive-

>> Right.
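(A rough back-of-envelope on that bandwidth claim. The 25 Mbit/s per-camera figure is an assumed bitrate for compressed 4K60 video, not a number from the interview; raw, uncompressed feeds would be orders of magnitude larger.)

```python
# Back-of-envelope: what 100 x 4K60 camera feeds add up to, under an assumed
# compressed bitrate of 25 Mbit/s per camera.
CAMERAS = 100
MBIT_PER_CAMERA = 25  # assumed compressed 4K60 bitrate

total_mbit_s = CAMERAS * MBIT_PER_CAMERA
total_gbit_s = total_mbit_s / 1000
tb_per_day = total_mbit_s / 8 / 1e6 * 86_400  # Mbit/s -> MB/s -> TB/day (decimal units)

print(f"aggregate stream: {total_gbit_s:.1f} Gbit/s sustained")
print(f"data volume: ~{tb_per_day:.0f} TB/day, ~{tb_per_day * 30:.0f} TB/month")
```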
>> I have a hard time even getting a line that allows me to do this. Then there might be latency: if I want to reliably react in a very short period of time, I can't do that in the cloud, I need to do this locally with me. I can't even do this in my data center; this has to be very closely coupled. And then there's this idea of fate sharing: I want to make sure that if things go wrong, the system is still intact. Anything that's sort of an emergency backup, an emergency-type procedure, if things go wrong, I can't rely on there being a good internet connection, I need to handle things locally; that's the smoke detector and the sprinkler system. And so for all of these, there are good reasons why we need to move things close to the edge. So I think there'll be a creative tension between the two, but both are huge markets, and I think there are great opportunities for HPE ahead to work on all these use cases.

>> Yeah, for sure, top brand in that compute business. So before we wrap up today, thinking about your role, part of your role is trend spotter. You're kind of driving innovation, right, surfing the waves as you said, skating to the puck, all the-

>> I've got my perfect crystal ball right here, yeah.

>> Yeah, all the cliches. (Dave chuckles) Puts a little pressure on you. But so, what are some of the things that you're overseeing, that you're looking towards in terms of innovation projects, particularly obviously in the data center space? What's really exciting you?

>> Look, there are a lot of them, and pretty much all the interesting ideas I get from talking to customers. You talk to the sophisticated customers, you try to understand the problems that they're trying to solve and can't solve right now, and that gives you ideas. So just to pick a couple: one area I'm probably thinking about a lot is how we can build, in a sense, better accelerators for the infrastructure functions. So no matter if I run an edge cloud or a big public cloud, I want to find ways to reduce the amount of CPU cycles I spend on microservice marshaling and demarshaling, the service mesh, storage acceleration, things like that. Clearly, if this is a large chunk of the overall cycle budget, we need to find ways to shrink that, to make this more efficient. So this basic infrastructure function acceleration probably sounds as unsexy as any topic could sound, but I think this is actually a really, really interesting area and one of the big levers we have right now in the data center.

>> Yeah, I would agree, Guido. I think that's actually really exciting, because you can pick up a lot of the wasted cycles now, and that drops right to the bottom line. But please-

>> Yeah, exactly. And it's kind of funny, we're still measuring so much with SPEC rates for CPU performance; it's like, well, we may actually be measuring the wrong thing. If 80% of the cycles of my app are spent in overhead, then the speed of the CPU doesn't matter as much, it's the other functions that (indistinct).

>> Right.

>> So that's one. The second big one is memory, which is becoming a bigger and bigger issue, and it's memory cost, because memory prices used to sort of decline at the same rate that our core counts and clock speeds increased, and that's no longer the case. We've run into some scaling limits, some physical scaling limits, where memory prices are becoming stagnant. And this has become a major pain point for everybody who's building servers.
So I think we need to find ways to leverage memory more efficiently, to share memory more efficiently. We have some really cool ideas in that space that we're working on.

>> Well, yeah. And sorry to interrupt, but Pat hinted at that in your big announcement. He talked about system on package, I think is the term he used, to talk about what I call disaggregated memory and better sharing of that memory resource. And that seems to be a clear benefit of value creation for the industry.

>> Exactly. If, for our customers, this becomes a larger part of the overall costs, we want to help them address that issue. And the third one is, we're seeing more and more data center operators that are effectively power limited. So we need to reduce the overall power of systems, or maybe to some degree just figure out better ways of cooling these systems. But I think there's a lot of innovation that can be done there, both to make these data centers more economical and also to make them a little more green. Today, data centers have gotten big enough that if you look at the total amount of energy that we're spending in this world as mankind, a chunk of that is going just to data centers. And so if we're spending energy at that scale, I think we have to start thinking about how we can build data centers that are more energy efficient, that do the same thing with less energy in the future.

>> Well, thank you for laying those out. You guys have been long-term partners with HP and now, of course, HPE. I'm sure Gelsinger is really happy to have you on board, Guido, I would be, and thanks so much for coming to theCUBE.

>> It's great to be here, and great to be at the HP show.

>> And thanks for being with us for HPE Discover 2021, the virtual version. You're watching theCUBE, the leader in digital tech coverage. We'll be right back.

(soft music)
Guido Appenzeller | HPE Discover 2021
(soft music) >> Welcome back to HPE Discover 2021, the virtual version, my name is Dave Vellante and you're watching theCUBE and we're here with Guido Appenzeller, who is the CTO of the Data Platforms Group at Intel. Guido, welcome to theCUBE, come on in. >> Aww, thanks Dave, I appreciate it. It's great to be here today. >> So I'm interested in your role at the company, let's talk about that, you're brand new, tell us a little bit about your background. What attracted you to Intel and what's your role here? >> Yeah, so I'm, I grew up with the startup ecosystem of Silicon Valley, I came from my PhD and never left. And, built software companies, worked at software companies worked at VMware for a little bit. And I think my initial reaction when the Intel recruiter called me, was like, Hey you got the wrong phone number, I'm a software guy, that's probably not who you're looking for. And, but we had a good conversation but I think at Intel, there's a realization that you need to look at what Intel builds more as this overall system from an overall systems perspective. That the software stack and then the hardware components are all getting more and more intricately linked and, you need the software to basically bridge across the different hardware components that Intel is building. So again, I was the CTO for the Data Platforms Group, so that builds the data center products here at Intel. And it's a really exciting job. And these are exciting times at Intel, with Pat, I've got a fantastic CEO at the helm. I've worked with him before at VMware. So a lot of things to do but I think a very exciting future. >> Well, I mean the, the data centers the wheelhouse of Intel, of course your ascendancy was a function of the PCs and the great volume and how you change that industry but really data centers is where, I remember the days people said, Intel will never be at the data center, it's just the toy. And of course, you're dominant player there now. So your initial focus here is really defining the vision and I'd be interested in your thoughts on the future what the data center looks like in the future where you see Intel playing a role, what are you seeing as the big trends there? Pat Gelsinger talks about the waves, he says, if you don't ride the waves you're going to end up being driftwood. So what are the waves you're driving? What's different about the data center of the future? >> Yeah, that's right. You want to surf the waves, that's the way to do it. So look, I like to look at this and sort of in terms of major macro trends, And I think that the biggest thing that's happening in the market right now is the cloud revolution. And I think we're well halfway through or something like that. And this transition from the classic, client server type model, that way with enterprises running all data centers to more of a cloud model where something is run by hyperscale operators or maybe run by an enterprise themselves of (indistinct) there's a variety of different models. but the provisioning models have changed. It's much more of a turnkey type service. And when we started out on this journey I think the, we built data centers the same way that we built them before. Although, the way to deliver IT have really changed, it's going through more of a service model and we really know starting to see the hardware diverge, the actual silicon that we need to build and how to address these use cases, diverge. 
And so I think one of the things that is probably most interesting for me is really to think through, how does Intel in the future build silicon that's built for clouds, like on-prem clouds, edge clouds, hyperscale clouds, but basically built for these new use cases that have emerged. >> So just a quick, kind of a quick aside, to me the definition of cloud is changing, it's evolving and it used to be this set of remote services in a hyperscale data center, it's now that experience is coming on-prem it's connecting across clouds, it's moving out to the edge it's supporting, all kinds of different workloads. How do you see that sort of evolving cloud? >> Yeah, I think, there's the biggest difference to me is that sort of a cloud starts with this idea that the infrastructure operator and the tenant are separate. And that is actually has major architectural implications, it just, this is a perfect analogy, but if I build a single family home, where everything is owned by one party, I want to be able to walk from the kitchen to the living room pretty quickly, if that makes sense. So, in my house here is actually the open kitchen, it's the same room, essentially. If you're building a hotel where your primary goal is to have guests, you pick a completely different architecture. The kitchen from your restaurants where the cooks are busy preparing the food and the dining room, where the guests are sitting, they are separate. The hotel staff has a dedicated place to work and the guests have a dedicated places to mingle but they don't overlap, typically. I think it's the same thing with architecture in the clouds. So, initially the assumption was it's all one thing and now suddenly we're starting to see like a much cleaner separation of these different areas. I think a second major influence is that the type of workloads we're seeing it's just evolving incredibly quickly, 10 years ago, things were mostly monolithic, today most new workloads are microservice based, and that has a huge impact in where CPU cycles are spent, where we need to put an accelerators, how we build silicon for that to give you an idea, there's some really good research out of Google and Facebook where they run numbers. And for example, if you just take a standard system and you run a microservice based an application but in the microservice-based architecture you can spend anywhere from I want to say 25 in some cases, over 80% of your CPU cycles just on overhead, and just on, marshaling demarshaling the protocols and the encryption and decryption of the packets and your service mesh that sits in between all of these things, that created a huge amount of overhead. So for us might have 80% go into these overhead functions really all focus on this needs to be on how do we enable that kind of infrastructure? >> Yeah, so let's talk a little bit more about workloads if we can, the overhead there's also sort of, as the software as the data center becomes software defined thanks to your good work at VMware, it is a lot of cores that are supporting that software-defined data center. And then- >> It's at VMware, yeah. >> And as well, you mentioned microservices container-based applications, but as well, AI is coming into play. And what is, AI is just kind of amorphous but it's really data-oriented workloads versus kind of general purpose ERP and finance and HCM. So those workloads are exploding, and then we can maybe talk about the edge. How are you seeing the workload mix shift and how is Intel playing there? 
>> I think the trends you're talking about is definitely right, and we're getting more and more data centric, shifting the data around becomes a larger and larger part of the overall workload in the data center. And AI is getting a ton of attention. Look if I talk to the most operators AI is still an emerging category. We're seeing, I'd say five, maybe 10% percent of workloads being AI is growing, they're very high value workloads. So (indistinct) any workloads, but it's still a smaller part of the overall mix. Now edge is big and edge is two things, it's big and it's complicated because of the way I think about edge is it's not just one homogeneous market, it's really a collection of separate sub markets It's, very heterogeneous, it runs on a variety of different hardware. Edge can be everything from a little server, that's (indistinct), it's strapped to a phone, a telephone pole with an antenna on top of it, to (indistinct) microcell, or it can be something that's running inside a car, modern cars has a small little data center inside. It can be something that runs on an industrial factory floor, the network operators, there's pretty broad range of verticals that all looks slightly different in their requirements. And, it's, I think it's really interesting, it's one of those areas that really creates opportunities for vendors like HPE, to really shine and address this heterogeneity with a broad range of solutions, very excited to work together with them in that space. >> Yeah, so I'm glad you brought HPE into the discussion, 'cause we're here at HPE Discover, I want to connect that. But so when I think about HPE strategy, I see a couple of opportunities for them. Obviously Intel is going to play in every part of the edge, the data center, the near edge and the far edge, and I gage HPE does as well with Aruba. Aruba is going to go to the far edge. I'm not sure at this point, anyway it's not yet clear to me how far, HPE's traditional server business goes to the, inside of automobiles, we'll see, but it certainly will be at the, let's call it the near edge as a consolidation point- >> Yeah. >> Et cetera and look the edge can be a race track, it could be a retail store, it could be defined in so many ways. Where does it make sense to process the data? But, so my question is what's the role of the data center in this world of edge? How do you see it? >> Yeah, look, I think in a sense what the cloud revolution is doing is that it's showing us, it leads to polarization of a classic data into edge and cloud, if that makes sense, it's splitting, before this was all mingled a little bit together, if my data centers my basement anyways, what's the edge, what's data center? It's the same thing. The moment I'm moving some workloads to the clouds I don't even know where they're running anymore then some other workloads that have to have a certain sense of locality, I need to keep closely. And there are some workloads you just can't move into the cloud. There's, if I'm generating lots of all the video data that I have to process, it's financially a completely unattractive to shift all of that, to a central location, I want to do this locally. And will I ever connect my smoke detector with my sprinkler system be at the cloud? No I won't (Guido chuckles) this stuff, if things go bad, that may not work anymore. So I need something that's that does this locally. So I think there's many reasons, why you want to keep something on premises. 
And I think it's a growing market, it's very exciting. We're doing some very good stuff with friends like HPE: they have the ProLiant DL110 Gen10 Plus server with our latest 3rd Generation Xeons on it for Open RAN, which is the radio access network in the telco space, and the HPE Edgeline servers, also with 3rd Generation Xeons. There are some really nice products there that I think can really help enterprises, carriers and a number of different organizations address these edge use cases. >> Can you explain, you mentioned Open RAN, vRAN, should we essentially think of that as kind of the software-defined telco? >> Yeah, exactly. It's software-defined cellular. I actually learned a lot about that over the recent months. When I was taking these classes at Stanford, these things were still done in analog, meaning the radio signal would be processed in an analog way before being digitized; today the radio signal is typically digitized immediately and all the processing of the radio signal happens digitally. And it happens on servers, some of them HPE servers. It's a really interesting use case where we're basically now able to do something in a much, much more efficient way by moving it to a digital, more modern platform. And it turns out you can actually virtualize these servers and run a number of different cells inside the same server. It's really complicated, because you have to have fantastic real-time guarantees with a sophisticated software stack, but it's a really fascinating use case. >> A lot of times we have these debates, and it's maybe somewhat academic, but I'd love to get your thoughts on it. The debate is about how much of the data that is processed and inferred at the edge is actually going to come back to the cloud. One side says most of the data is going to stay at the edge, a lot of it's not even going to be persisted, so that's sort of the negative for the data center. The counter is that there's going to be so much data that even a small percentage of everything we create still means so much more data back in the cloud, back in the data center. What's your take on that? >> Look, I think there are different applications that are easier to do in certain places. Going to a large cloud has a couple of advantages. You have a very complete software ecosystem around you, lots of different services. And if you need very specialized hardware, say I want to run a bigger learning task where I need a thousand machines, and this runs for a couple of days, and then I don't need to do that for another month or two, for that the cloud is really great. There's on-demand infrastructure, having all this capability up there. At the same time, it costs money to send the data up there, and if I just look at the hardware cost, it's much, much cheaper to build it myself, in my own data center or at the edge. So I think we'll see customers picking and choosing what they want to do where, and there's a role for both, absolutely. At the end of the day, why do I absolutely need to have something at the edge? There's a couple of, I think, good use cases. Let me actually rephrase a little bit: I think there are three primary reasons. One is simply bandwidth, where I'm saying, my video data, like I have a hundred 4K video cameras with 60-frames-per-second feeds, there's no way I'm going to move that into the cloud. It's just cost prohibitive- >> Right. >> I have a hard time even getting (indistinct).
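(A back-of-the-envelope sketch of that bandwidth point, in Python; the per-camera bitrate and the transfer price are illustrative assumptions, not figures from the conversation.)

```python
# Back-of-the-envelope arithmetic for the "hundred 4K cameras" example.
# The per-camera bitrate and the transfer price are assumptions, not quoted figures.
CAMERAS = 100
MBPS_PER_CAMERA = 40          # assumed: compressed 4K @ 60 fps stream
USD_PER_GB_TRANSFER = 0.05    # assumed: cost to move one gigabyte to a central cloud

aggregate_gbps = CAMERAS * MBPS_PER_CAMERA / 1000
gb_per_day = CAMERAS * MBPS_PER_CAMERA / 8 * 86_400 / 1000   # megabits/s -> GB per day
usd_per_month = gb_per_day * 30 * USD_PER_GB_TRANSFER

print(f"aggregate stream:    {aggregate_gbps:.1f} Gbps, around the clock")
print(f"data volume:         {gb_per_day:,.0f} GB per day")
print(f"rough transfer cost: ${usd_per_month:,.0f} per month")
```

Even with heavy compression, a hundred always-on 4K/60 feeds add up to a multi-gigabit stream and tens of terabytes a day, which is why processing the video locally and shipping only the results tends to win.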
Then there might be latency: if I want to reliably react in a very short period of time, I can't do that in the cloud, I need to do this locally, close to me. I can't even do this in my data center; this has to be very closely coupled. And then there's this idea of fate sharing. If I want to make sure that the system is still intact when things go wrong, anything that's an emergency kind of backup, an emergency-type procedure, I can't rely on a big, good internet connection, I need to handle things locally. That's the smoke detector and the sprinkler system. So for all of these there are good reasons why we need to move things close to the edge. I think there'll be a creative tension between the two, but both are huge markets, and I think there are great opportunities ahead for HPE to work on all these use cases. >> Yeah, for sure, the top brand in that compute business. So before we wrap up today, thinking about your role, part of your role is a trend spotter. You're kind of driving innovation, right, surfing the waves as you said, skating to the puck, all the- >> I've got my perfect crystal ball right here, yeah. >> Yeah, all the cliches. (Dave chuckles) Puts a little pressure on you. So what are some of the things that you're overseeing, that you're looking towards in terms of innovation projects, particularly obviously in the data center space? What's really exciting you? >> Look, there's a lot of them, and pretty much all the interesting ideas I get from talking to customers. You talk to the sophisticated customers, you try to understand the problems they're trying to solve and can't solve right now, and that gives you ideas. Just to pick a couple: one area I'm probably thinking about a lot is how we can build, in a sense, better accelerators for the infrastructure functions. So no matter if I run an edge cloud or I run a big public cloud, I want to find ways to reduce the amount of CPU cycles I spend on microservice marshaling and demarshaling, the service mesh, storage acceleration, things like that. Clearly, if this is a large chunk of the overall cycle budget, we need to find ways to shrink it, to make this more efficient. So this basic infrastructure function acceleration sounds probably as unsexy as any topic could sound, but I think it's actually a really, really interesting area and one of the big levers we have right now in the data center. >> Yeah, I would agree Guido, I think that's actually really exciting because you can pick up a lot of the wasted cycles now, and that drops right to the bottom line, but please- >> Yeah, exactly. And it's kind of funny, we're still measuring so much with SPEC and rates of CPU performance, and it's like, well, we may actually be measuring the wrong thing. If 80% of the cycles of my app are spent in overhead, then the speed of the CPU doesn't matter as much, it's other functions that (indistinct). >> Right. >> So that's one.
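(A simple, purely illustrative model of why that lever is so big; the 25-80% overhead range comes from the discussion above, while the offload fraction is an assumption.)

```python
# Toy capacity model: if a fraction of CPU cycles is infrastructure overhead
# (marshaling, service mesh, crypto) and an accelerator offloads most of it,
# how much more application work fits on the same server?
def capacity_gain(overhead_fraction, offload_fraction):
    before = 1 - overhead_fraction                           # app cycles today
    after = 1 - overhead_fraction * (1 - offload_fraction)   # app cycles after offload
    return after / before

for overhead in (0.25, 0.50, 0.80):
    gain = capacity_gain(overhead, offload_fraction=0.9)
    print(f"{overhead:.0%} overhead, 90% of it offloaded -> {gain:.1f}x app capacity")
```

At the 80% end of the range, the same server could carry several times the application load in this toy model, which is why the "unsexy" acceleration work is such a big lever.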
The second big one is memory, which is becoming a bigger and bigger issue, and it's memory cost: memory prices used to decline at roughly the same rate that core counts and clock speeds increased, and that's no longer the case. We've run into some physical scaling limits where memory prices are becoming stagnant, and this has become a major pain point for everybody who's building servers. So I think we need to find ways to leverage memory more efficiently, to share memory more efficiently. We have some really cool ideas in that space that we're working on. >> Well, yeah. And Pat, sorry to interrupt, but Pat hinted at that in your big announcement. He talked about system on package, and I think that's what you used to talk about, what I call disaggregated memory and better sharing of that memory resource. And that seems to be a clear benefit, of value creation for the industry. >> Exactly. If for our customers this becomes a larger part of the overall cost, we want to help them address that issue. And the third one is, we're seeing more and more data center operators that are effectively power limited. So we need to reduce the overall power of systems, or maybe to some degree just figure out better ways of cooling these systems. But I think there's a lot of innovation that can be done there to make these data centers both more economical and a little more green. Today data centers have gotten big enough that, if you look at the total amount of energy that we as mankind are spending, a chunk of that is going just to data centers. And if we're spending energy at that scale, I think we have to start thinking about how we can build data centers that are more energy efficient, that do the same thing with less energy in the future. >> Well, thank you for laying those out. You guys have been long-term partners with HP and now of course HPE, and I'm sure Gelsinger is really happy to have you on board, Guido, I would be. Thanks so much for coming to theCUBE. >> It's great to be here, and great to be at the HP show. >> And thanks for being with us for HPE Discover 2021, the virtual version. You're watching theCUBE, the leader in digital tech coverage. Be right back. (soft music)