Udayan Mukherjee, Intel & Manish Singh, Dell Technologies | MWC Barcelona 2023
(soft corporate jingle) >> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat jingle intro) >> Welcome back to Barcelona. We're here live at the Fira. (laughs) Just amazing day two of MWC23. It's packed today. It was packed yesterday. It's even more packed today. All the news is flowing. Check out siliconangle.com. John Furrier is in the studio in Palo Alto breaking all the news. And, we are here live. Really excited to have Udayan Mukherjee, who's the Senior Fellow and Chief Architect of wireless product at Network and Edge for Intel. And, Manish Singh is back. He's the CTO of Telecom Systems Business at Dell Technologies. Welcome. >> Thank you. >> Thank you. >> We're going to talk about greening the network. I wonder, Udayan, if you could just set up why that's so important. I mean, it's obvious that it's an important thing, great for the environment, but why is it extra important in Telco? >> Yeah, thank you. Actually, I'll tell you, this morning I had a discussion with an operator. The first thing he said was that electricity consumption is now more expensive than the total real estate he's spending money on. So, it's like that is the number one thing: if you can change that, bring that power consumption down. And, if you talk about sustainability, look what is happening in Europe, what's happening in all the electricity markets. That's the critical element that we need to address. Whether we are defining chips, platforms, storage systems, that's the number one mantra right now. You know, reduce the power, the electricity consumption, because it's a sustainable planet that we are living on. >> So, you've got CapEx and OpEx. We're talking about the big piece of OpEx is now power consumption? >> Power consumption. >> That's the point. Okay, so in my experience, servers are the big culprit for power consumption, and they're powered by core semiconductors and microprocessors.
So, what's the strategy to reduce the power consumption? You're probably not going to reduce the bill overall. You maybe just can keep pace, but from a technical standpoint, how do you attack that? >> Yeah, there are multiple different ways of addressing it. Obviously the process technology, the micro (indistinct) itself is evolving to make systems more low-power. But, even within the silicon, the servers that we develop, if you look in a CPU, there are a lot of power states. So, if you have a 32-core platform, as an example, on every core you can vary the frequency and the C-states, the power states. So, if you look into any traffic, whether it's a radio access network or a packet core, at any given time the load is not at peak. So, your power consumption, what we are actually drawing from the wall, also needs to vary with that. So, if you look into this, there's a huge savings there. If you go to the Intel booth or the Ericsson booth or anyone's, you will see right now every possible network element, the packet core, the radio access network, everything. They're talking about energy consumption, how they're lowering it. These states, as we call them power states, the C-states and P-states, have been built into Intel chips for a long time. The cloud providers are taking advantage of them. But Telcos, even two generations ago, used to actually switch them off in the BIOS. They'd say no, we need peak. Now, that thing is changing. Now, it's all like, how do I take advantage of the built-in technologies? >> I remember enterprise virtualization, Manish, was a big play. I remember PG&E used to give rebates to customers that would install virtualization software, VMware. >> And SSDs. >> Yeah. And SSDs, you know, yes. Because the spinning disc was a draw, but nowhere near a server's consumption. So, how virtualized is the telco network? And then, what I'm saying is, are there other things, other knobs, you can of course turn? So, what's your perspective on this as a server player? >> Yeah, absolutely.
Let me just back up a little bit and start at the big picture, to build on what Udayan said. Here, day two, in every conversation I've had yesterday and this morning with every operator, every CTO, they're coming in and the first topic they're talking about is energy. And, the reason is, A, it's the right thing to do, sustainability, but it's also becoming a P&L issue. And, the reason it's becoming a P&L issue is because we are in this energy inflationary environment where the energy costs are constantly going up. So, it's becoming really important for the service providers to really drive more efficiency onto their networks, onto their infrastructure. Number one. Two, then to your question on what all the knobs are that need to be turned. So, Udayan talked about, within the Intel silicon, the C-states, P-states and all these capabilities that are being brought up, absolutely important. But again, let's take a macro view of it. First of all, there are opportunities to do an infrastructure audit. What's on, why is it on, does it need to be on right now? Number two, there are opportunities to do an infrastructure upgrade. And, what I mean by that is, as you go from previous generation servers to next generation servers, you get better cooling, better performance. And through all of that you start to gain power usage efficiency inside a data center. And, as you take that out more into the networks, you start to achieve the same outcomes on the network side. Think about it from a cooling perspective: air cooling, but for that matter, even liquid cooling, especially inside the data centers. These are all opportunities around PUE, power usage effectiveness, because improvement on PUE is an opportunity. But, I'll take it even further. Workloads that are coming onto it, core, RAN, these workloads ride on dynamic traffic. Look, if you look at the traffic inside a network, it's not constant, it's varied. As the traffic patterns change, can you reduce the amount of infrastructure you're using?
I.e. reduce the amount of power that you're using, and scale back up when the traffic loads are going up. So, the workloads themselves need to become smarter about that. And last, but not the least, from an orchestration layer, if you think about it, where you are placing these workloads, and depending on what's available, you can start to again drive better energy outcomes. And, not to forget acceleration. Where you need acceleration, can you have the right hardware infrastructure delivering the right kind of acceleration to, again, improve those energy efficiency outcomes? So, it's a complex problem. But, there are a lot of levers, a lot of tools that are in place, where the service providers, the technology builders like us building the infrastructure, and then the workload providers all come together to really solve this problem. >> Yeah, Udayan, Manish mentioned this idea of moving from one generation to a new generation and gaining benefits. Out there on the street, if you will, most of the time it's an N plus 2 migration. It's not just moving from the last generation to this next generation; it's really from a generation before that. So, those significant changes in the dynamics around power density and cooling are meaningful? You talk about where performance should be? We start talking about the edge. It's hard to have a full-blown raised data center floor at the edge everywhere. Do these advances fundamentally change the kinds of things that you can do at the base of a tower? >> Yeah, absolutely. Manish talked about that, the dynamic nature of the workload. So, we are using a lot of this AI/ML to actually predict. Like, for example, the multiple cores in a system. Why is a 32-core system running with all cores on? Think about your traffic profile at night. In the office areas, everybody has gone home at night, and nowadays everybody's working remote anyway. So, why is this thing running full blown, spending the TDP, the thermal design power, at extreme power levels?
You bring it down to different power states, C-states. We talked about it. Deeper C-states or P-states, you bring the frequency down. So, a lot of that automation happens even at the base of the tower. In a lot of our deployments right now, we are doing a whole bunch of massive MIMO deployments, virtual RAN in the Verizon network, all actual cell-site deployments. Those edge sites are very close to the cell site. And, they're doing aggressive power management. So, you don't have to go to a huge data center; even where there's a small rack of systems, four to five, ten systems, you can do aggressive power management. And, you build it up that way. >> Okay. >> If I may just build on what Udayan said. I mean, if you look at the radio access network, right? And, let's start at the bottom of the tower itself. The infrastructure that's going in there, especially with Open RAN, if you think about it, there are opportunities now to do a centralized RAN where you could do more BBU pooling. And, with that, not only on a given tower but across a given coverage area, depending on what the traffic is, you can again get the infrastructure to become more efficient in terms of what the traffic and the needs are, and really start to benefit. The pooling gains are obviously going to give you benefit on the CapEx side, but from an energy standpoint they're going to give you benefits on the OpEx side of things. So that's important. The second thing I will say is we cannot forget, especially on the radio access side of things, that it's not just the bottom of the tower and what's happening there. What's happening on the top of the tower, especially with the radio, that's super important. And, that goes into how do you drive better PA efficiency, how do you drive better DPD in there? This is where, again, applying AI machine learning, there is a significant amount of opportunity to improve the PA performance itself. But then, not only that, looking at traffic patterns.
Can you do sleep modes, from micro sleep modes to deep sleep modes? Turning down the cells themselves, depending on the traffic patterns. So, these are all areas that are now becoming more and more important. And, clearly with our ecosystem of partners we are continuing to work on these. >> So we hear from the operators, it's an OpEx issue. It's hitting the P&L. They're in search of a PUE of one. And, they've historically been wasteful, they go full throttle. And now, you're saying with intelligence you can optimize that consumption. So, where does the intelligence live? Is it in the RIC? Is it all throughout the network? Is it in the silicon? Maybe you could paint a picture as to where those smarts exist. >> I can start. It's across the stack. It starts, we talked about the C-states, P-states. If you want to take advantage of that, that intelligence is in the workload, which has to understand when it can really start to clock things down or turn off the cores. If you really look at it from a traffic pattern perspective, you start to really look at the RIC level, where you can apply power policies. And, we are working with the ecosystem partners who are looking at applying machine learning on that to see what we can really start to turn on, turn off, throttle things down, depending on what the traffic is. So yes, it's across the stack. And lastly, again, I'll go back to, we cannot forget orchestration, where you again have the ability to move some of these workloads and look at where your workload placements are happening, depending on what the infrastructure is and what the traffic needs are at that point in time. So, again, there's no silver bullet. It has to be looked at across the stack. >> And, this is where actually, if I may, in the last two years a sea change has happened. People used to say, okay, there are C-states and P-states, there's silicon, there's every core. The operating system has a governor built in. We rely on that. So, that used to be the way.
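The OS governor Udayan mentions is worth unpacking for readers. Below is a toy sketch of an "ondemand"-style frequency decision, the kind of built-in mechanism he's referring to. The frequency table and thresholds are invented for illustration; the real Linux cpufreq governors, exposed under /sys/devices/system/cpu/*/cpufreq/, are considerably more sophisticated.

```python
# Toy model of an "ondemand"-style governor step. Hypothetical values:
# the P-state frequency table and thresholds below are not from any
# real CPU; they only illustrate the shape of the decision.

AVAILABLE_FREQS_KHZ = [1_200_000, 1_800_000, 2_400_000, 3_000_000]

def ondemand_step(current_khz, utilization_pct,
                  up_threshold=80, down_threshold=30):
    """Jump to max frequency on high utilization; step down one
    notch when utilization is low; otherwise hold steady."""
    idx = AVAILABLE_FREQS_KHZ.index(current_khz)
    if utilization_pct > up_threshold:
        return AVAILABLE_FREQS_KHZ[-1]       # race to peak under load
    if utilization_pct < down_threshold and idx > 0:
        return AVAILABLE_FREQS_KHZ[idx - 1]  # ease down one P-state
    return current_khz

# Overnight lull: a core at 2.4 GHz with 10% utilization eases down
# toward 1.8 GHz, then 1.2 GHz on subsequent steps.
```

The asymmetry (jump straight to maximum, step down gradually) is the classic trade-off between latency on traffic bursts and energy savings in the troughs, which is exactly the tension telco workloads used to resolve by disabling power states in the BIOS.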
Now that applications are getting smarter, if you look at a radio access network or the packet core, on the control plane signaling applications, they're more aware of what underlying silicon power states and sleep states are available. So, every time they find there's not enough traffic in some of these areas, they immediately go through a transition. So, the workload has become more intelligent. The RIC applications we talked about: every possible RIC application right now, the rApps and xApps, most of them are about energy efficiency. How are they using it? So, I think a lot more has happened even in the last two years. >> Can I just say one more thing there, right? >> Yeah. >> We cannot forget the infrastructure as well, right? I mean, that's the most important thing. That's where the energy is really getting drawn in. And, constant improvement on the infrastructure. And, I'll give you some data points, right? If you really look at the power at servers, right? From 2013 to 2023, like a decade: 85% energy intensity improvement, right? So, these gains are coming from performance with better cooling, better technology applications. So, that's super critical, that's important. And, also to just give you another data point. Apart from the infrastructure, what CaaS layers we are running and how much CPU and compute requirements are there, that's also important. So, looking at it from a CaaS perspective, are we optimizing the required infrastructure blocks for radio access versus core? And again, really taking that back to energy efficiency outcomes. So, some of the work we've been doing with Wind River and Red Hat and some of our ecosystem partners around that, for radio access network versus core, really again optimizing for those different use cases, and the outcomes of those start to come in from an energy utilization perspective. >> So, 85% improvement in power consumption. Of course you're doing, I don't know, 200, 300% more work, right?
So, let's say, and I'm just sort of spitballing numbers, but let's say that historically power on the P&L has been, I don't know, single digits, maybe 10%. Now, it's popping up much higher. >> Udayan: Huge. >> Right? >> I mean, I don't know what the number is. Is it over 20% in some cases? Do you have a sense of that? Or let's say it is. The objective, I presume, is you're probably not going to lower the power bill overall, but you're going to be able to lower the percent of cost on the OpEx as you grow, right? I mean, we're talking about 5G networks. So much more data. >> Capacity increasing. >> Yeah, and so, am I right that the best the carriers can hope for is to sort of stay even on that percentage, or maybe somewhat lower that percentage? Or, do you think they can actually cut the bill? What's the goal? What are they trying to do? >> The goal is to cut the bill. >> It is! >> And the way you get started on cutting the bill is, as I said, first of all on the radio side. Start to see where the improvements are, and look, there's not a whole lot there to be done. I mean, the PAs are about as efficient as they can be, but as I said, there are things in DPD and all that still can be improved. But then, sleep modes and all, yes, there are efficiencies in there. But, I'll give you one important, another interesting data point. We did work with ACG Research on our 16G platform, the PowerEdge servers that we have recently launched based on Intel's Sapphire Rapids. And, if you look at the study there: 30% TCO reduction, 10% in CapEx gains, 30% in OpEx gains from moving away from these legacy monolithic architectures to cloud native architectures. And, a large part of that OpEx gain really starts to come from energy, to the point of 800 metric tonnes of carbon reduction, which, if you really translate it, is around 160 homes' electricity use per year, right? So yes, I mean, the opportunity there is to reduce the bill.
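The "cut the bill" argument can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses invented figures (the ACG Research study's actual workload mix and tariffs are not given in this conversation) to show how a 30% energy OpEx gain translates into an annual bill:

```python
# Back-of-the-envelope energy OpEx. All inputs here are hypothetical:
# a 30 kW average draw and a 0.25/kWh tariff are illustrative only.

HOURS_PER_YEAR = 8760

def annual_energy_cost(avg_kw, price_per_kwh):
    """Annual energy bill = average draw x hours per year x tariff."""
    return avg_kw * HOURS_PER_YEAR * price_per_kwh

legacy_bill = annual_energy_cost(30, 0.25)   # 65,700 per year
modern_bill = legacy_bill * (1 - 0.30)       # applying the cited 30% gain
savings = legacy_bill - modern_bill          # 19,710 per year saved
```

Scaled across thousands of cell sites and core data centers, this is why a percentage-point efficiency gain shows up directly on the P&L in an energy-inflationary environment.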
>> Wow, those are big, big goals, guys. We've got to run. But, thank you for informing the audience on the importance and how you get there. So, appreciate that. >> One thing that bears mentioning really quickly before we wrap: a lot of these things we're talking about are happening in remote locations. >> Oh, back to that point of the distributed nature of telecom. >> Yes, we talked about a BBU being at the base of a tower that could be up on a mountain somewhere. >> No, you made the point. You can't just say, oh, hey, we're going to go find ambient air, or going to go... >> They don't necessarily... >> Go next to a waterfall. >> We don't necessarily have the greatest hydro power. >> All right, we've got to go. Thanks, you guys. Alright, keep it right there. Wall-to-wall coverage on day two of theCUBE's coverage of MWC23. Stay right there, we'll be right back. (corporate outro jingle)
SiliconANGLE News | Intel Accelerates 5G Network Virtualization
(energetic music) >> Welcome to the SiliconANGLE News update, Mobile World Congress, theCUBE coverage live on the floor for four days. I'm John Furrier, in the studio here. Dave Vellante, Lisa Martin onsite. Intel in the news: Intel accelerates 5G network virtualization with radio access network boost for Xeon processors. Intel, well known for power and computing, today announced the integration of virtual radio access network acceleration into its latest fourth gen Intel Xeon system on a chip. This move will help network operators gear up their efforts to deliver Cloud native features for next generation 5G core and edge networks. This announcement came today at MWC, formerly known as Mobile World Congress, in Barcelona. Intel is taking the latest step in its mission to virtualize the world's networks, including Core, Open RAN and Edge. Network virtualization is the key capability for communication service providers as they migrate from fixed function hardware to programmable software defined platforms. This provides greater agility and greater cost efficiency. According to Intel, the demand for agile, high performance, scalable networks requires the adoption of fully virtualized, software based platforms that run on general purpose processors. Intel believes that network operators need to accelerate network virtualization to get the most out of these new architectures, and that's where it can make its mark. With Intel vRAN Boost, it delivers twice the capability and capacity gains over its previous generation of silicon within the same power envelope, with 20% power savings that result from integrated acceleration. In addition, Intel announced new infrastructure power manager for 5G core reference software that's designed to work with vRAN Boost. Intel also showcased its new Intel Converged Edge media platform, designed to deliver multiple video services from a shared multi-tenant architecture.
The platform leverages Cloud native scalability to respond to shifting demands. Lastly, Intel announced a range of Agilex 7 Field Programmable Gate Arrays and eASIC N5X structured application-specific integrated circuits designed for cloud, communications, and embedded applications. Intel is targeting both power consumption, which is energy, and more horsepower for chips, which are going to power the industrial internet edge. That's going to be Cloud native. Big news happening at Mobile World Congress. theCUBE is there. Go to siliconangle.com for all the news, the special report, and the live feed on theCUBE.net. (energetic music)
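One plausible way to read the vRAN Boost claims above: doubling capacity within the same power envelope doubles performance per watt, and the 20% power saving compounds on top of that. This is our arithmetic on the stated figures, not Intel's published methodology:

```python
# Interpreting "2x capacity, same power envelope, plus 20% power savings"
# as a compound performance-per-watt gain. This reading is an assumption;
# vendor benchmarks define these ratios in their own specific ways.

def perf_per_watt_gain(capacity_multiplier, power_saving_fraction):
    """Relative perf/watt vs the previous generation."""
    return capacity_multiplier / (1 - power_saving_fraction)

gain = perf_per_watt_gain(2.0, 0.20)  # 2.0 / 0.8 = 2.5x perf per watt
```

Under that reading, the generational jump works out to roughly 2.5x performance per watt, which is the kind of step change that makes fully software-defined RAN competitive with fixed-function hardware on energy.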
David Schmidt, Dell Technologies and Scott Clark, Intel | SuperComputing 22
(techno music intro) >> Welcome back to theCUBE's coverage of SuperComputing Conference 2022. We are here at day three covering the amazing events that are occurring here. I'm Dave Nicholson, with my co-host Paul Gillin. How's it goin', Paul? >> Fine, Dave. Winding down here, but still plenty of action. >> Interesting stuff. We got a full day of coverage, and we're having really, really interesting conversations. We're sort of wrapping things up at Supercomputing 22 here in Dallas. I've got two very special guests with me, Scott from Intel and David from Dell, to talk about, yeah, supercomputing, but guess what? We've got some really cool stuff coming up after this whole thing wraps. So not all of the holiday gifts have been unwrapped yet, kids. Welcome, gentlemen. >> Thanks so much for having us. >> Thanks for having us. >> So, let's start with you, David. First of all, explain the relationship in general between Dell and Intel.
And so if you look at our portfolio, we have some great products here this week, but we also have other platforms, like the XR4000, our shortest rack server ever, that's designed to go into Edge environments but is also built for those Edge AI use cases. It supports GPUs. It supports AI on the CPU as well. And so there are a lot of really compelling platforms that we're starting to talk about, have already been talking about, and it's going to really enable our customers to deliver AI in a variety of ways. >> You mentioned AI on the CPU. Maybe this is a question for Scott. What does that mean, AI on the CPU? >> Well, as David was talking about, we're just seeing this explosion of different use cases. And some of those are on the Edge, some of them in the Cloud, some of them on Prem. But within those individual deployments, there are often different ways that you can do AI, whether that's training or inference. And what we're seeing is that a lot of times the memory locality matters quite a bit. You don't necessarily want to have to pay a cost going across the PCI Express bus, especially with some of our newer products like the CPU Max series, where you can have a huge amount of high bandwidth memory just sitting right on the CPU. Things that traditionally would have been accelerator only can now live on a CPU, and that includes the inference side. We're seeing some really great things with images, where you might have a giant medical image that you need to be able to do extremely high resolution inference on, or even text, where you might have a huge corpus of extremely sparse text that you need to be able to randomly sample very efficiently. >> So how are these needs influencing the evolution of Intel CPU architectures? >> So, we're talking to our customers. We're talking to our partners. This presents both an opportunity and a challenge, with all of these different places that you can put these great products, as well as applications.
And so we're very thoughtfully trying to go to the market, see where their needs are, and then meet those needs. This industry obviously has a lot of great players in it, and it's no longer the case that if you build it, they will come. So what we're doing is we're finding where those choke points are, how we can make the biggest difference. Sometimes there are generational leaps, and I know David can speak to this, that can be huge from one system to the next, just because everything's accelerated on the software side, the hardware side, and the platforms themselves. >> That's right, and we're really excited about that leap. If you take what Scott just described, we've been writing white papers, our team with Scott's team, we've been talking about those types of use cases, doing large image analysis and leveraging system memory, leveraging the CPU to do that; we've been talking about that for several generations now. Right, going back to Cascade Lake, going back to what we would call 14th generation PowerEdge. And so now as we prepare and continue to unveil, kind of, we're in launch season, right, you and I were talking about how we're in launch season. As we continue to unveil and launch more products, the performance improvements are just going to be outstanding, and we'll continue that evolution that Scott described. >> Yeah, I'd like to applaud Dell just for a moment for its restraint. Because I know you could've come in and taken all of the space in the convention center to show everything that you do. >> Would have loved to. >> In the HPC space. Now, worst kept secrets on earth at this point. Vying for number one place is the fact that there is a new Mission Impossible movie coming. And there's also new stuff coming from Intel. I know, I think, allegedly, we're getting close. What can you share with us on that front? And I appreciate it if you can't share a ton of specifics, but where are we going? David just alluded to it.
>> Yeah, as David talked about, we've been working on some of these things for many years. And it's just, this momentum is continuing to build, both with respect to some of our hardware investments. We've unveiled some things here, both on the CPU side and the accelerator side, but also on the software side. oneAPI is gathering more and more traction and the ecosystem is continuing to blossom. Some of our AI and HPC workloads, and the combination thereof, are becoming more and more viable, as well as displacing traditional approaches to some of these problems. And it's this type of thing where it's not linear. It all builds on itself. And we've seen some of these investments that we've made for the better part of half a decade starting to bear fruit, but it's not just a one-time thing. It's just going to continue to roll out, and we're going to be seeing more and more of this. >> So I want to follow up on something that you mentioned. I don't know if you've ever heard the Charlie Brown saying that sometimes the most discouraging thing can be to have immense potential. Because between Dell and Intel, you offer so many different versions of things from a fit-for-function perspective. As a practical matter, how do you work with customers, and maybe this is a question for you, David. How do you work with customers to figure out what the right fit is? >> I'll give you a great example. Just this week, customer conversations, and we can put it in terms of kilowatts per rack, right. How many kilowatts are you delivering at a rack level inside your data center? I've had an answer anywhere from five all the way up to 90. There are some that have been a bit higher, but they probably don't want us to talk about those cases; those are customers we're meeting with very privately. But the range is really, really large, right, and there's a variety of environments. Customers might be ready for liquid today. They may not be ready for it. They may want to maximize air cooling.
Those are the conversations, and then of course it all maps back to the workloads they wish to enable. AI is an extremely overloaded term. We don't have enough time to talk about all the different things that tuck under that umbrella, but for the workloads and the outcomes they wish to enable, we have the right solutions. And then we take it a step further by considering where they are today and where they need to go. And I just love that five-to-90 example, that not every customer has an identical cookie cutter environment, so we've got to have the right platforms, the right solutions, for the right workloads, for the right environments. >> So, I'd like to dive in on this power issue, to give people who are watching an idea. Because we say five kilowatts, 90 kilowatts, people are like, oh wow, hmm, what does that mean? 90 kilowatts is more than 100 horsepower if you want to translate it over, if you think in EV terms. It's a massive amount of power. You know, a hairdryer's around a kilowatt, 1,000 watts, right, so five kilowatts is about five hairdryers. But the point is, 90 kilowatts in a rack, that's insane. That's absolutely insane. The heat that that generates has got to be insane, and so it's important. >> Several houses' worth of power in the size of a closet. >> Exactly, exactly. Yeah, in a rack, I explain to people, you know, it's like a refrigerator. But, so in the arena of thermals, I mean, is that something, during the development of next gen architectures, is that something that's been taken into consideration? Or is it just a race to die size? >> Well, you definitely have to take thermals into account, as well as just the power consumption itself. I mean, people are looking at their total cost of ownership. They're looking at sustainability. And at the end of the day, they need to solve a problem. There's many paths up that mountain, and it's about choosing that right path.
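The rack-power comparisons above are easy to verify with a quick sketch (1 horsepower is about 745.7 W, the standard mechanical horsepower; the annual figure assumes the rack runs flat out all year, which real racks rarely do):

```python
# Sanity-checking the conversation's comparisons. The "flat out all
# year" assumption is ours, chosen only to put an upper bound on the
# annual energy budget of a 90 kW rack.

WATTS_PER_HP = 745.7
HOURS_PER_YEAR = 8760

def rack_horsepower(rack_kw):
    """Express rack power draw in mechanical horsepower."""
    return rack_kw * 1000 / WATTS_PER_HP

def annual_mwh(rack_kw):
    """Upper-bound annual energy if the rack ran at full draw all year."""
    return rack_kw * HOURS_PER_YEAR / 1000

# rack_horsepower(90) is roughly 120.7 hp, so "more than 100 horsepower"
# checks out; annual_mwh(90) is roughly 788 MWh per rack per year.
```

That 788 MWh ceiling for a single dense rack is exactly why the liquid-versus-air cooling question and the five-to-90 kW spread dominate these customer conversations.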
We've talked about this before, having extremely thoughtful partners. We're just not going to combinatorially try every single solution. We're going to try to find the ones that fit the right mold for that customer. And we're seeing more and more people, excuse me, care about this, more and more people wanting to say, how do I do this in the most sustainable way? How do I do this in the most reliable way, given maybe different fluctuations in their power consumption or their power pricing? We're developing more software tools and obviously partnering with great partners to make sure we do this in the most thoughtful way possible. >> Intel put a lot of, made a big investment by buying Habana Labs for its acceleration technology. They're based in Israel. You're based on the West Coast. How are you coordinating with them? How will the Habana technology work its way into more mainstream Intel products? And how would Dell integrate those into your servers? >> Good question. I guess I can kick this off. So Habana is part of the Intel family now. They've been integrated in. It's been a great journey with them, as some of their products have launched on AWS, and they've had some very good wins on MLPerf and things like that. I think it's about finding the right tool for the job, right. Not every problem is a nail, so you need more than just a hammer. And so we have the Xeon series, which is incredibly flexible, can do so many different things. It's what we've come to know and love. On the other end of the spectrum, we obviously have some of these more deep-learning-focused accelerators. And if that's your problem, then you can solve that problem in incredibly efficient ways. The accelerators themselves are somewhere in the middle, so you get that kind of Goldilocks zone of flexibility and power. And depending on your use case, depending on what you know your workloads are going to be day in and day out, one of these solutions might work better for you.
A combination might work better for you. Hybrid compute starts to become really interesting. Maybe you have something that you need 24/7, but then you only need a burst for certain things. There are a lot of different options out there. >> The portfolio approach. >> Exactly. >> And then what I love about the work that Scott's team is doing, customers have told us this week in our meetings, they do not want to spend developers' time porting code from one stack to the next. They want that flexibility of choice. Everyone does. We want it in our lives, in our everyday lives. They need that flexibility of choice, but there's also an opportunity cost when their developers have to choose between porting some code over from one stack to another or spending time improving algorithms and doing things that actually generate, you know, meaningful outcomes for their business or their research. And so they are, you know, desperately searching, I would say, for that solution and for help in that area, and that's what we're working to enable soon. >> And this is what I love about oneAPI, our software stack. It's open first, heterogeneous first. You can take SYCL code, it can run on competitors' hardware. It can run on Intel hardware. It's one of these things where you have to believe, long term, the future is open. Walled gardens, the walls eventually crumble. And we're just trying to continue to invest in that ecosystem to make sure that the end developer at the end of the day really gets what they need to do, which is solving their business problem, not tinkering with our drivers. >> Yeah, I actually saw an interesting announcement that I hadn't been tracking. I hadn't been tracking this area. Chiplets, and the idea of an open standard where competitors of Intel from a silicon perspective can have their chips integrated via a universal standard. And basically you had the top three silicon vendors saying, yeah, absolutely, let's work together. Cats and dogs.
>> Exactly, but at the end of the day, it's whatever menagerie solves the problem. >> Right, right, exactly. And of course Dell can solve it from any angle. >> Yeah, we need strong partners to build the platforms to actually do it. At the end of the day, silicon without software is just sand, and software without silicon is just poorly written prose. But without an actual platform to put it on, it's nothing, it's a box that sits in the corner. >> David, you mentioned that 90% of PowerEdge servers now support GPUs. So how is the growth of high performance computing, the demand, influencing the evolution of your server architecture? >> Great question, a couple of ways. You know, I would say 90% of our platforms support GPUs. 100% of our platforms support AI use cases. And it goes back to the CPU compute stack. As we look at how we deliver different form factors for customers, we go back to that range, that power range I mentioned this week, of how do we enable the right air cooling solutions? How do we deliver the right liquid cooling solutions, so that wherever the customer is in their environment, and whatever footprint they have, we're ready to meet it? That's something you'll see as we go into kind of the second half of launch season and continue rolling out products. You're going to see some very compelling solutions, not just in air cooling, but liquid cooling as well. >> You want to be more specific? >> We can't unveil everything at Supercompute. We have a lot of great stuff coming up here in the next few months, so. >> It's kind of like being at a great restaurant when they offer you dessert, and you're like, yeah, dessert would be great, but I just can't take any more. >> It's a multi-course meal. >> At this point. Well, as we wrap, I've got one more question for each of you. Same question for each of you.
When you think about high performance computing, supercomputing, all of the things that you're doing in your partnership, driving artificial intelligence, at that tip of the spear, what kind of insights are you looking forward to us being able to gain from this technology? In other words, what cool thing, what do you think is cool out there from an AI perspective? What problem do you think we can solve in the near future? What problems would you like to solve? What gets you out of bed in the morning? 'Cause it's not the little, it's not the bits and the bobs and the speeds and the feeds, it's what we're going to do with them, so what do you think, David? >> I'll give you an example. And I think I saw some of my colleagues talk about this earlier in the week, but for me, what we could do in the past two years to enable our customers in a quarantine, pandemic environment, we were delivering platforms and solutions to help them do their jobs, help them carry on in their lives. And that's just one example, and if I were to map that forward, it's about enabling that human progress. And, you know, you ask the 20-years-ago version of me, you know, if you could imagine some of these things, I don't know what kind of answer you would get. And so mapping forward to the next decade, the next two decades, I can go back to that example of, hey, we did great things in the past couple of years to enable our customers. Just imagine what we're going to be able to do going forward to enable that human progress. You know, there's great use cases, there's great image analysis. We talked about some. The images that Scott was referring to had to do with taking CAT scan images and being able to scan them for tumors and other things in the healthcare industry. That is stuff that feels good when you get out of bed in the morning, to know that you're enabling that type of progress. >> Scott, quick thoughts? >> Yeah, and I'll echo that.
It's not one specific use case, but it's really this wave front of all of these use cases, from the very micro of developing the next drug to finding the next battery technology, all the way up to the macro of trying to have an impact on climate change or even the origins of the universe itself. All of these fields are seeing these massive gains, both from the software and the hardware, the platforms that we're bringing to bear on these problems. And at the end of the day, humanity is going to be fundamentally transformed by the computation that we're launching and working on today. >> Fantastic, fantastic. Thank you, gentlemen. You heard it here first, Intel and Dell just committed to solving the secrets of the universe by New Year's Eve 2023. >> Well, next Supercompute, let's give us a little time. >> The next Supercompute convention. >> Yeah, next year. >> Yeah, SC23, we'll come back and see what problems have been solved. You heard it here first on theCube, folks. By SC23, Dell and Intel are going to reveal the secrets of the universe. From here at SC22, I'd like to thank you for joining our conversation. I'm Dave Nicholson, with my co-host Paul Gillin. Stay tuned to theCube's coverage of Supercomputing Conference 22. We'll be back after a short break. (techno music)
Jen Huffstetler, Intel | HPE Discover 2022
>> Announcer: theCube presents HPE Discover 2022, brought to you by HPE. >> Hello and welcome back to theCube's continuous coverage of HPE Discover 2022, from Las Vegas, the former Sands Convention Center, now the Venetian. John Furrier and Dave Vellante here, excited to welcome in Jen Huffstetler, who's the Chief Product Sustainability Officer at Intel. Jen, welcome to theCube, thanks for coming on. >> Thank you very much for having me. >> You're really welcome. So you dial back, I don't know, the last decade, and nobody really cared about it, some people gave it lip service, but corporations generally weren't as in tune. What's changed? Why has it become so top of mind? >> I think in the last year we've noticed, as we all were working from home, that we had a greater appreciation for the balance in our lives and the impact that climate change was having on the world. So I think across the globe there's regulations, industry, and even personally, everyone is really starting to think about this a little more, and corporations specifically are trying to figure out how they are going to continue to do business in these new regulated environments. >> And IT leaders generally weren't in tune 'cause they weren't paying the power bill. For years it was the facilities people, but then they started to come together. How should leaders in technology, business tech leaders, IT leaders, CIOs, how should they be thinking about their sustainability goals? >> Yeah, I think for IT leaders specifically, they really want to be looking at the footprint of their overall infrastructure. So whether that is their on-prem data center or their cloud instances, what can they do to maximize the resources and lower the footprint that they contribute to their company's overall footprint? So IT really has a critical role to play, I think, because as you'll find in IT, the carbon footprint of the data center, of those products in use, is actually fairly significant. So having a focus there will be key.
You know, compute has always been one of those things where, you know, Intel makes chips, so, you know, heat is important in compute. What are Intel's current goals? Give us an update on where you guys are at. What's the ideal goal in the long term? Where are you now? You guys have always had a focus on this for a long, long time. Where are we now? 'Cause I won't say the goalposts have changed, but they're changing the definitions of what this means. What's the current state of Intel's carbon footprint and overall goals? >> Yeah, no, thanks for asking. As you mentioned, we've been invested in lowering our environmental footprint for decades. In fact, without action otherwise, you know, we've already lowered our carbon footprint by 75%. So we're really in that last mile. And that is why we recently announced a very ambitious goal, Net-Zero 2040, for our Scope 1 and 2 for manufacturing operations. This is really an industry-leading goal, partly because the technology doesn't even exist yet, right? For the chemistries and for making the sand into silicon into, you know, computer chips. And so by taking this bold goal, we're going to be able to lead the industry, partner with academia, partner with consortia, and that drive is going to have ripple effects across the industry and all of the components in semiconductors. >> Is there a changing definition of Net-Zero, what that means? 'Cause some people say they're Net-Zero, and maybe in one area they might be, but maybe not holistically across the company. As it becomes more of a broader mandate, society, employees, partners, Wall Street are all putting pressure on companies. Has the Net-Zero conversation changed a little bit, or what's your view on that? >> I think we definitely see it changing with changing regulations, like those coming forth from the SEC here in the US and in Europe. Net-Zero can't just be lip service anymore, right? It really has to be real reductions in your footprint.
And we say "than otherwise," even including in our supply chain goals, where we've taken new goals to reduce, but our operations are growing. So I think everybody is going through this realization that, you know, with the growth, how do we keep it lower than it would've been otherwise, keep focusing on those reductions, and have not just renewable credits that could have been bought in one location and applied to a different geographical location, but real, credible offsets for where the product's manufactured or the compute's deployed. >> Jen, when you talk about you've reduced already by 75%, you're on that last mile. We listened to Pat Gelsinger very closely. Up until recently he was the most frequently featured guest on theCube. He's been busy, I guess. But as you apply that discipline to where you've been, your existing business, and now Pat's laid out this plan to increase the Foundry business, how does that affect your... Are you able to carry through that reduction to, you know, the new foundries? Do you have to rethink that? How does that play in? >> Certainly. Well, the Foundry expansion of our business with IDM 2.0 is going to include the existing factories that already have the benefit of those decades of investment and focus. And then, you know, we have clear goals for our new factories in Ohio and in Europe to achieve goals as well. That's part of the overall plan for Net-Zero 2040. It's inclusive of our expansion into Foundry, which means that many, many, many more customers are going to be able to benefit from the leadership that Intel has here. And then as we onboard acquisitions, as any company does, we need to look at the footprint of the acquisition and see what we can do to align it with our overall goals. >> Yeah, so sustainable IT, I don't know, for some reason was always an area of interest to me.
And when we first started, even before I met you, John, we worked with PG&E to help companies get rebates for installing technologies that would reduce their carbon footprint. >> Jen: Very forward thinking. >> And it was a hard thing to get, you know, but compute was the big deal. And there were technologies, and I remember virtualization at the time was one, and we would go in and explain to the PG&E engineers how that all worked, 'cause they had metrics that they wanted to see, but anyway, so virtualization was clearly one factor. What are the technologies today that people should be paying attention to? Flash storage was another one. >> John: AI's going to have a big impact. >> Reduce the spinning disk, but what are the ones today that are going to have an impact? >> Yeah, no, that's a great question. We like to think of the built-in acceleration that we have, including some of the early acceleration for virtualization technologies, as foundational. So built-in accelerated compute is green compute, and it allows you to maximize the utilization of the transistors that you already have deployed in your data center. This compute is sitting there and it is ready to be used. What matters most is what you were talking about, John, that real-world workload performance. It's not just, you know, a lot of specsmanship around synthetic benchmarks, but AI performance. With the built-in acceleration that we have in Xeon processors, with Intel DL Boost, we're able to achieve 4X the AI performance per watt without, you know, doing that otherwise. You think about the consolidation you were talking about that happened with virtualization. You're basically effectively doing the same thing with these built-in accelerators that we have continued to add over time, and have even more coming in our Sapphire Rapids generation. >> And you call that green compute? Or what does that mean, green compute? >> Well, you are greening your compute. >> John: Okay, got it.
>> By increasing utilization of your resources. If you're able to deploy AI, utilize the telemetry within the CPU that already exists. We have a customer, KDDI in Japan, that has a great proof point they already announced on their 5G data center: they lowered their data center power by 20%. That is real bottom-line impact, as well as carbon footprint impact, by utilizing all of those built-in capabilities. So, yeah. >> We've heard some stories earlier in the event here at Discover where there were some cooling innovations, moving the heat to power towns and cities. So you start to see, and you guys have been following this data center space and been part of the whole, okay, you have hot climates, you have cold climates, and there are new ways to recycle energy. Where's that at? 'Cause that sounds very sci-fi to me, that, oh yeah, the whole town runs on the data center exhaust. So there's now systems thinking around compute. What's your reaction to that? What's the current view on re-engineering a system to take advantage of that energy, or recycling? >> I think when we look at our vision of sustainable compute over this horizon, it's going to be required, right? We know that compute helps to solve society's challenges, and the demand for it is not going away. So how do we take new innovations, looking at a systems level, as compute gets further deployed at the edge? How do we make it efficient? How do we ensure that that compute can be deployed where there is air pollution, right? So some of these technologies that you have, they not only enable reuse, but they also enable some, you know, closing in of the solution to make it more robust for edge deployments. It'll allow you to place your data center wherever you need it. It no longer needs to reside in one place.
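The CPU telemetry mentioned here is exposed on Linux through the RAPL energy counters, cumulative microjoule readings in `energy_uj` files under `/sys/class/powercap/intel-rapl`. As a rough sketch of how two such cumulative readings turn into a power figure (the readings are supplied directly so the function stays self-contained; this is an illustration, not the KDDI implementation):

```python
# Sketch: average package power from two cumulative energy readings, the
# form Linux RAPL counters take (microjoules, monotonically increasing).

def average_power_watts(energy_start_uj: int, energy_end_uj: int,
                        interval_s: float) -> float:
    """Average power over an interval, from cumulative microjoule counters."""
    delta_joules = (energy_end_uj - energy_start_uj) / 1_000_000  # uJ -> J
    return delta_joules / interval_s

# e.g. a counter that advanced 45,000,000 uJ over one second -> 45 W
print(average_power_watts(1_000_000_000, 1_045_000_000, 1.0))  # 45.0
```

In practice you would sample the counter twice, a known interval apart, and feed the readings in; power-management software can then throttle or consolidate workloads based on the result.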
And then that's going to allow you to have those energy reuse benefits, either into district heating if you're in, you know, Northern Europe, or there are examples with folks putting greenhouses right next to a data center to start growing food in what were previously food deserts. So I don't think it's science fiction. It is how we need to rethink as a society, to utilize everything we have, the tools at our hand. >> There's a commercial on the radio, on the East Coast anyway, I don't know if you guys have heard of it, it's like, "What's your one thing?" And the gentleman comes on, he talks about things that you can do to help the environment. And he says, "What's your one thing?" So what's the one thing, or maybe it's not just one, that IT managers should be doing to affect carbon footprint? >> The one thing to affect their carbon footprint, there are so many things. >> Dave: Two, three, tell me. >> I think if I was going to pick the one most impactful thing that they could do in their infrastructure, it's back to John's comment. Imagine if the world deployed AI, all the benefits, not only in business outcomes, you know, the revenue, lowering the TCO, but also lowering the footprint. So I think that's the one thing they could do. If I could throw in a baby second, it would be: really consider how you get renewable energy into your computing ecosystem. And then, you know, at Intel, when we're 80% renewable power, our processors are inherently low carbon because of all the work that we've done; others have less than 10% renewable energy. So you want to look for products that have low carbon by design, any Intel-based system, and where you can get renewables from your grid, ask for it, run your workload there. And even the next step, to get to sustainable computing, it's going to take everyone, including every enterprise, to think differently and really, you know, consider: what would it look like to bring renewables onto my site?
If I don't have access through my local utility, and many customers are really starting to evaluate that. >> Well, Jen, it's great to have you on theCube. Great insight into the current state of the art of sustainability and carbon footprint. My final question for you is more about the talent out there. The younger generation coming in, I'll say the pressure, people want to work for a company that's mission-driven, we know that. The Wall Street impact is going to be the financial business model, and then save-the-planet kind of pressure. So there's a lot of talent coming in. Is there awareness at the university level? Is there a course where, can people get degrees in sustainability? There's a lot of people who want to come into this field. What are some of the talent backgrounds of people learning or who might want to be in this field? What would you recommend? How would you describe how to onboard into the career if they want to contribute? What are some of those factors? 'Cause it's not new, new, but it's going to be globally aware. >> Yeah, well, there certainly are degrees with focuses on sustainability, maybe to look holistically at the enterprise, but where I think the globe is really going to benefit, we didn't really talk about the software inefficiency. As we delivered more and more compute over the last few decades, basically the programming languages got more inefficient. So there's at least 35% inefficiency in the software. So being a software engineer, even if you're not an AI engineer, so AI would probably be the highest impact, being a software engineer to focus on building new applications that are going to be efficient applications, that are well utilizing the transistor, that are not leaving zombie, you know, services running that aren't being utilized. So I actually think-- >> So we've got to program in assembly? (all laughing) >> (indistinct), would get really offended. >> Get machine language. I have to throw that in, sorry. >> Maybe not that bad.
(all laughing) >> That's funny, just a joke. But the question is, what's my career path? What's a hot career in this area? Sustainability, AI, totally see that. Anything else, any other career opportunities you see, or hot jobs or hot areas to work on? >> Yeah, I mean, just really, I think it takes every architect, every engineer to think differently about their design, whether it's the design of a building or the design of a processor or a motherboard. We have a whole low-carbon architecture, you know, set of actions that are underway that we'll take to the ecosystem. So it could really span from any engineering discipline, I think. But it's a mindset with which you approach that customer problem. >> John: That systems thinking, yeah. >> Yeah, sustainability designed in. Jen, thanks so much for coming back on theCube. It's great to have you. >> Thank you. >> All right, Dave Vellante for John Furrier, we're sustaining theCube. We're winding down day three, HPE Discover 2022. We'll be right back. (upbeat music)
Ajay Mungara, Intel | Red Hat Summit 2022
>> Mhm. Welcome back to Boston. This is theCube's coverage of the Red Hat Summit 2022, the first Red Hat Summit we've done face to face in at least two years; 2019 was our last one. We're kind of rounding the far turn, you know, coming up for the home stretch. My name is Dave Vellante, here with Paul Gillin. Ajay Mungara is here; he's a senior director in the IoT group for developer solutions and engineering at Intel. Ajay, thanks for coming on theCube. >> Thank you so much. >> We heard your colleague this morning in the keynote talking about the Dev Cloud. I feel like I need a Dev Cloud. What's it all about? >> So, um, we've been, uh, working with developers and the ecosystem for a long time, trying to build edge solutions. A lot of times people think about IoT solutions as, like, just compute at the edge. But what really it is, is you've got to have some component of the cloud, there is a network, and there is the edge. And edge is complicated because of the variety of devices that you need. And when you're building a solution, you've got to figure out, like, where am I going to push the compute? How much of the compute am I going to run in the cloud? How much of the compute am I going to push at the network, and how much do I need to run at the edge? A lot of times what happens for developers is they don't have one environment where all of the three come together. And so what we said is, um, today the way it works is you have all these edge devices that customers buy, they install them, they set it up, and they try to do all of that. And then they have a cloud environment they do their development in. And then they figure out how all of this comes together. And all of these things, it's only when they are integrating it at the customer, at the solution space, is when they try to do it. So what we did is we took all of these edge devices, put them in the cloud, and gave one environment for cloud to the edge to build your complete solution. >> Essentially simulates it.
>> No, it's not simulating. >> Spans. So it spans the cloud, the centralised cloud, out to the edge. >> You know, what we did is we took all of these edge devices that will theoretically get deployed at the edge, like, we took all this variety of devices and put it in a cloud environment. So these are non-rack-mountable devices that you can buy in the market today, that you just have, like, we have about 500 devices in the cloud, from Atom to Core to Xeon to FPGAs to accelerator cards to graphics. All of these devices are available to you. So in one environment you have, like, you can connect to any of the clouds, the hyperscalers, you can connect to any of these network devices. You can define your network topology. You can bring in any of your sources that are sitting in a Git repository, or Docker containers that may be sitting somewhere in a cloud environment, or they could be sitting on Docker Hub. You can pull all of these things together, and we give you one place where you can build it, where you can test it, you can performance benchmark it, so you can know, when you're actually going to the field to deploy it, what type of sizing you need. >> Let me see if I understand. If I want to test, uh, an actual edge device using 100-gig Ethernet versus MPLS versus 5G, you can do all that without virtualizing? >> So all the edge devices are there today, and the network part of it we are building with Red Hat together, where we are putting everything in this environment. So the network part of it is not quite yet solved, but that's what we want to solve. But the goal here is, let's say you have five cameras, or you have 50 cameras with different types of resolutions, and you want to do some AI inference type of workloads at the edge. What type of compute do you need? What type of memory do you need? How many devices do you need? And where do you want to push the data? Because security is very important at the edge.
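The sizing questions here (how many cameras, what compute, what memory) ultimately reduce to how much data the inference pipeline has to keep up with. A rough sketch of that arithmetic, with illustrative per-stream numbers, not figures from the interview:

```python
# Sketch: aggregate throughput for a bank of camera streams -- the kind
# of sizing question the Dev Cloud lets you test against real hardware.

def aggregate_pixels_per_second(streams: int, width: int, height: int,
                                fps: int) -> int:
    """Total pixels per second the inference pipeline must process."""
    return streams * width * height * fps

# 50 cameras of 1080p video at 15 fps (illustrative numbers):
pps = aggregate_pixels_per_second(50, 1920, 1080, 15)
print(pps)  # 1555200000 -- about 1.56 billion pixels/s
```

Numbers like this are what drive the choice of device class: a load of this size might call for an accelerator, while a handful of low-resolution streams could run on a small CPU, and an environment with real hardware lets you benchmark rather than guess.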
So you've got to really figure out, like, I've got to secure the data in flight, I want to secure the data at rest, and how do you do the governance of it? How do you do service governance, so that all the services, the different containers that are running on the edge device, are behaving well? You don't have one container hogging up all the memory or hogging up all the compute, or, at certain points in the day, you might have priority for certain containers. So all of these models... where do you run them? We have an environment where you can run all of that. >> Okay, so take that example of AI inferencing at the edge. So I've got an edge device and I've developed an application, and I'm going to say, okay, I want you to do the AI inferencing in real time. There's some kind of streaming data coming in, and I want you to persist, uh, every hour on the hour... I want to save that timestamp. Or if some event happens... if a deer runs across the headlights, I want you to persist that data, send that back to the cloud. And you can develop that, test it, benchmark it. >> Right, and then you can say, okay, look, in this environment I have, like, five cameras at different angles, and you want to kind of try it out. And what we have is a product called OpenVINO, which is an open source product that does all of the optimizations you need for edge inference. So, to recognize the deer in your example, I develop the training model somewhere in the cloud. Okay, so I have developed it, with everything annotated across the different video streams, and I know that I'm recognizing a deer. Now, when the deer is coming and you want to immediately take an action, you don't want to send all of your video streams to the cloud. It's too expensive; bandwidth costs a lot. So you want to compute that inference at the edge. Okay.
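The bandwidth-saving pattern Ajay describes here, running inference locally and only sending detected events upstream, can be sketched in a few lines. This is an illustrative toy, not Intel's API: the `detect` stub stands in for a real edge model (e.g., one optimized with a toolkit like OpenVINO), and all names are hypothetical.

```python
def detect(frame):
    """Stub stand-in for an edge inference model. Returns a label
    and a confidence score for one video frame."""
    # Hypothetical: frames flagged "has_deer" are the events we care about.
    if frame["has_deer"]:
        return "deer", 0.9
    return "background", 0.2

def process_stream(frames, threshold=0.8):
    """Run inference at the edge; only 'upload' frames whose detection
    clears the threshold, instead of streaming everything to the cloud."""
    uploaded = []
    for frame in frames:
        label, conf = detect(frame)
        if label == "deer" and conf >= threshold:
            uploaded.append(frame["ts"])  # persist the event's timestamp
    return uploaded

frames = [{"ts": t, "has_deer": (t == 3)} for t in range(6)]
print(process_stream(frames))  # only the single event frame goes upstream
```

Out of six frames, only the one containing the event is sent back, which is the whole economic argument for edge inference.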
In order to do that inference at the edge, you need some environment where you're able to do it. And to build that solution: what type of edge device do you really need? What type of compute do you need? How many cameras are you computing on? And you're not only recognizing a deer; you're probably recognizing some other objects too... you could do all of that. In fact, one thing that happened was I took my nephew to the San Diego Zoo, and he was very disappointed that he couldn't see the chimpanzees, uh, that were there, right, the gorillas and other things. So he was very sad. So I said, all right, there should be a better way. I saw there was a stream of the camera feed that was there. So what we did is we did an edge inference, and we did some logic to say, at this time of the day the gorillas get fed, so the likelihood of you actually seeing the gorilla is very high. So you just go at that point, and that's when you see it, you capture it. That's what you do, and you want to develop that entire solution... based on weather, based on other factors. You need to bring all of these services together and build a solution, and we offer an environment that allows you to do it. >> Will you customize the edge configuration for the developer? If they want 50 cameras... you don't have 50 cameras available, right? >> It's all camera data. What we do is we have a streaming capability that we support, so you can upload all your videos. And you can say, I want to now simulate 50 streams, or I want to simulate 30 streams, or just two or three videos that you want to pull in. And you want to be able to do the inference simultaneously, running different algorithms at the edge. All of that is supported. And the bigger challenge at the edge is this: developing the solution is fine, but then you go to actual deployment and post-deployment monitoring, maintenance, making sure that you're managing it. It's very complicated.
What we have seen is that over 50%, 51% to be precise, of developers have developed some kind of cloud native application recently, right? So we believe that if you bring that type of cloud native development model to the edge, then your scaling problem, your maintenance problem, your "how do I actually deploy it" problem... all of these challenges can be better managed. Um, and you run all of that with an orchestration layer on Kubernetes, and we run everything on top of OpenShift, so you have a deployment-ready solution right there. Everything is containerized; you have it as Helm charts, Docker Compose; you have it all tested in this environment, and now you take that to deployment. And if it is on any standard Kubernetes environment or on OpenShift, you can just straight away deploy your application. >> What does that edge architecture look like? What's Intel's and Red Hat's philosophy around it? You know, what's programmable and what's different? I know you can run SAP in a data center; you guys have got that covered. What does the edge look like? What's that architecture of silicon and middleware? Describe that for us. >> So at the edge, think about it, right? It can run traditional workloads... in an industrial PC you have a lot of Windows environments, and you have a lot of Linux there now, in an edge environment. Quite a few of these devices... I'm not talking about the far edge, where there are tiny microcontrollers; I'm talking about those devices that connect to those far edge devices, collect the data, do some analytics, do some compute, that type of thing. The far edge devices could be a camera, could be a temperature sensor, could be, like, a weighing scale, could be anything at that far edge. And then all of that data, instead of pushing all the data to the cloud...
In order for you to do the analysis, you're going to have some type of edge set of devices collecting all this data, making some decisions close to the data, doing some analysis there, all of that stuff, right? So you need some analysis tools, you need certain other things. And let's say that you want to run, uh, RHEL or any of these operating systems at the edge; then you have the ability to manage all of that using a control node. The control node can also sit at the edge. In some cases, like in a smart factory, you have a little data center in the smart factory, or even in a retail store, behind a closet, you have a bunch of devices that are sitting there, correct? And those devices can all be managed and clustered in one environment. So now the question is, how do you deploy applications to that edge? How do you collect all the data that is coming through the camera and other sensors, and process it close to where the data is being generated to make immediate decisions? So the architecture would look like this: you have some cloud, which does some management of these edge devices, management of the applications, some type of control; you have some network, because you need to connect to that; and then you have the whole plethora of edge, starting from a hybrid environment where you have an entire mini data center sitting at the edge, down to one or two devices that are just collecting data from the sensors and processing it. That is the heart of the other challenge: the architecture varies across verticals, from smart cities to retail to healthcare to industrial. They have all these different variations. They need to worry about the different environments they are going to operate under, uh, different regulations they have to look into, different security protocols they need to follow. So your solution...
Maybe it is just recognizing people and identifying whether they are wearing a helmet in a coal mine, right, whether they are wearing safety gear or not. That solution, versus: you are riding a bike in traffic, and for safety reasons we want to identify whether the person is wearing a helmet or not. Very different use cases, very different environments, different ways in which you are operating. Similar algorithms are used, by the way, but how you deploy them varies quite a bit, and that's what the developer needs to understand. >> But the DevCloud, let me make sure I understand it. You talked about a retail store, a great example. But that's general purpose infrastructure that's now customized through software for that retail environment. Same thing with Telco, same thing with the smart factory, you said. Not the far edge, right? But that's coming in the future? Or is that... >> That extends to the far edge, putting everything in one cloud environment. We did try it, right? In fact, I put some cameras on some iPads and laptops, and we could stream different videos, did all of that. But a data center is a boring environment, right? What are you going to see? A bunch of racks and servers. So putting far edge devices there didn't make sense. So what we did is give you an easy ability to stream, or connect, or upload the far edge data that gets generated at the far edge. Like, say, time series data... you can take some of the sensor data, but it's mostly camera data, videos. So you upload those videos, and that is as good as streaming those videos, right? And that means you are generating that data, and then you're developing your solution with the assumption that the camera is observing whatever is going on. And then you do your edge inference, you optimize it, you make sure that you size it, and then you have a complete solution. >> Are you supporting all manner of microprocessors at the edge, including non-Intel?
>> Um, today it is all Intel, but because we are really promoting the whole open ecosystem and things like that, in the future, yes... that is what we're really talking about, so we want to be able to do that in the future. But today we were trying to address the customers that we are serving today, and they needed an environment where they could do all of this. For example, under what circumstances would you use an i5 versus an i9, versus putting an algorithm on integrated graphics, versus running it on a CPU, or running it on a Neural Compute Stick? It's hard, right? You need to buy all those devices; you need to experiment with your solutions on all of that. It's hard. So with everything available in one environment, you can compare and contrast to see what makes best sense for your workload. >> But it's not just x86? >> Not just x86. It's a portfolio: FPGAs, graphics... we have all of what Intel supports today, and in the future we would want to open it up. >> So how do developers get access to this cloud? >> It is all free. You just have to go sign up and register, and, uh, you get access to it. It is devcloud.intel.com. You go there, and the container playground is all available for free for developers. And you can bring in container workloads there, or even bare metal workloads. Um, and, uh, yes, all of it is available for you. >> But you need to reserve the endpoint devices? >> Correct. That is where there is some interesting technology. >> To govern this. >> Correct. So what we did was we built a kind of queuing system, a scheduler. You develop your application on a control node, and you only need the edge device when you're scheduling that workload. Okay? So we have these scheduling systems... we use Kafka and other technologies to do the scheduling... in the container workload environment, with all the optimized operators that are available in an OpenShift, um, environment.
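The reservation model Ajay sketches, where each physical device serves one workload at a time and everyone else queues, can be illustrated with a toy scheduler. This is a simplified sketch, not the actual DevCloud implementation (which, per the discussion, uses Kafka and OpenShift operators); all class and device names here are made up.

```python
from collections import deque

class DeviceScheduler:
    """Toy queue-based scheduler: each physical device is dedicated to
    one workload at a time; later submissions wait their turn."""
    def __init__(self, devices):
        self.free = deque(devices)
        self.waiting = deque()
        self.running = {}  # workload -> device it holds exclusively

    def submit(self, workload):
        if self.free:
            device = self.free.popleft()
            self.running[workload] = device
            return device      # exclusive access granted immediately
        self.waiting.append(workload)
        return None            # queued until a device frees up

    def finish(self, workload):
        device = self.running.pop(workload)
        if self.waiting:       # hand the device straight to the next job
            nxt = self.waiting.popleft()
            self.running[nxt] = device
        else:
            self.free.append(device)

sched = DeviceScheduler(["i7-box"])
print(sched.submit("job-a"))   # i7-box
print(sched.submit("job-b"))   # None (queued behind job-a)
sched.finish("job-a")
print(sched.running["job-b"])  # i7-box
```

The point of the exclusivity is what comes next in the interview: while a workload holds a device, its telemetry (memory, power, compute) reflects that workload alone.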
So we leveraged those operators and installed them. What happens is you take your workload and you run it, let's say, on an i7 device. When you're running that workload on an i7 device, that device is dedicated to you. Okay? And we've instrumented each of these devices with telemetry, so we can see, while your workload is running on that particular device, what the memory looks like, what the power looks like, how hard the device is running, what the compute looks like. So we capture all those metrics. Then what you do is you take it and run it on an i9, or run it on graphics, or run it on an FPGA. Then you compare and contrast, and you say, huh, okay, for this particular workload this device makes best sense. In some cases, I'll tell you, right, uh, developers have come back and told me, "I don't need a bigger processor; I need bigger memory." >> Yeah, sure. >> Right. And in some cases they've said, look, I want to prioritize accuracy over performance, because in a healthcare setting accuracy is more important. In some cases they have optimized for the size of the device, because it needs to fit in the right environment, in the right place. So what you optimize for in each use case is up to the solution, up to the developer, and we give you the ability to do that. >> What kind of folks are you seeing? You've got hardware developers, software developers... who's coming in? >> We have a lot of system integrators; we have enterprises that are coming in. We are seeing a lot of, uh, software solution developers, independent software developers. We also have a lot of students coming in... it's a free environment for them to play with, instead of them having to buy all of these devices. We're seeing those people. Um, I mean, we are pulling through a lot of developers in this environment currently, and, uh, we're getting, of course, feedback from the developers. We are just getting started here.
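That compare-and-contrast step, picking a device from benchmark telemetry under different priorities (accuracy, latency, memory footprint), is simple to express. A minimal sketch with made-up numbers; the device names and metrics are hypothetical illustrations, not real DevCloud telemetry output:

```python
# Hypothetical benchmark records for one workload run on several devices.
runs = [
    {"device": "i7",   "latency_ms": 40, "accuracy": 0.92, "peak_mem_mb": 900},
    {"device": "i9",   "latency_ms": 25, "accuracy": 0.92, "peak_mem_mb": 1400},
    {"device": "fpga", "latency_ms": 15, "accuracy": 0.88, "peak_mem_mb": 600},
]

def pick(runs, priority):
    """Pick the best device for a stated priority, mirroring the
    accuracy-vs-performance-vs-size trade-offs described above."""
    if priority == "accuracy":
        # Highest accuracy; break ties by lower latency.
        return max(runs, key=lambda r: (r["accuracy"], -r["latency_ms"]))
    if priority == "latency":
        return min(runs, key=lambda r: r["latency_ms"])
    if priority == "memory":
        return min(runs, key=lambda r: r["peak_mem_mb"])
    raise ValueError(priority)

print(pick(runs, "accuracy")["device"])  # i9 (ties i7 on accuracy, faster)
print(pick(runs, "latency")["device"])   # fpga
print(pick(runs, "memory")["device"])    # fpga
```

The same telemetry yields different "best" devices depending on what the solution values, which is exactly the healthcare-accuracy-versus-throughput point made in the interview.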
We are continuing to improve our capabilities. We are adding virtualization capabilities. We are working very closely with Red Hat to showcase all the goodness that's coming out of Red Hat OpenShift and other innovations, right? We heard, uh, like, you know, in one of the OpenShift sessions, they were talking about MicroShift, they were talking about HyperShift, talking about a lot of these innovations, operators, everything that is coming together. But where do developers play with all of this? If you spend half your time trying to configure it, install it, buy the hardware, trying to figure it out, you lose patience and you lose time. Time is what matters, and it's complicated, right? How do you set it up, especially when it involves the cloud, the network, and the edge? You need all of that set up. So what we have done is set up everything for you. You just come in. And by the way, not just that. What we realized when we go talk to customers is that they don't want to listen to all our processor optimizations and all that. They want to say: I am here to solve my retail problem. I want to count the people coming into my store, right? I want to see if there are any spills, so I can recognize them and go clean up before a customer complains about it. Or I have a brain tumor segmentation problem where I want to identify whether the tumor is malignant or not, right? Or I want telehealth solutions. They're really talking about these use cases. So what we did is we built many of these use cases by talking to customers, we open sourced them, and we made them available on DevCloud for developers to use as a starting point, so that they have this retail starting point or this healthcare starting point... all these use cases, so that they have all the code, and we've shown them how to containerize it.
The biggest problem is that developers still don't know, at the edge, how to take a legacy application and make it cloud native. So they just wrap it all into one Docker container and say, okay, now I'm containerized... but there's a lot more to do. So we tell them how to do it, right? We train these developers, we give them an opportunity to experiment with all these use cases, so that they get closer and closer to what the customer solutions need to be. >> Yeah, we saw that a lot with the early cloud, where they wrapped their legacy apps in a container and shoved it into the cloud. It was really just hosting legacy apps, is all it was. It didn't take advantage of the cloud. Now people have come around. It sounds like a great free developer resource. To take advantage of it, where do they go? >> It's devcloud.intel.com. >> devcloud.intel.com. Check it out. It's a great freebie. Ajay, thanks very much. >> Thank you very much. I really appreciate your time. >> All right, keep it right there. This is Dave Vellante for Paul Gillin. We'll be right back, covering theCUBE at Red Hat Summit 2022.
Does Intel need a Miracle?
(upbeat music) >> Welcome everyone, this is Stephanie Chan with theCUBE. Recently, analyst Dave Vellante released a Breaking Analysis entitled "Pat Gelsinger Has the Vision. Intel Just Needs Time, Cash and a Miracle," where he highlights why he thinks Intel is years away from reversing its position in the semiconductor industry. Welcome, Dave. >> Hey thanks, Stephanie. Good to see you. >> So, Dave, you've been following the company closely over the years. If you look at the Wall Street Journal, most analysts are saying to hold onto Intel. Can you tell us why you're so negative on it? >> Well, you know, I'm not a stock picker, Stephanie, but I've seen the data. There are some buys, some sells, but most of the analysts are on a hold. Who knows, maybe they're just hedging their bets; they don't want to make a strong, controversial call and are kind of sitting on the fence. But look, Intel is still an amazing company. They've got tremendous resources, they're an icon, and they pay a dividend. So there's definitely an investment case to be made to hold onto the stock. But I would generally say that investors had better be ready to hold onto Intel for a long, long time. I mean, Intel is just not the dominant player that it used to be, and the challenges have been mounting for a decade. Competitively, Intel's fighting a five-front war. They've got AMD in both PCs and the data center; the entire Arm ecosystem; Nvidia coming after them with the whole move toward AI and GPUs, where they're dominating; Taiwan Semiconductor, by far the leading fab in the world in terms of output; and I would say even China is kind of the fifth leg of that stool, long term. So, a lot of hurdles to jump competitively. >> So what are other sources of Intel's troubles, besides what you just mentioned? >> Well, I think they started when PC volumes peaked. David Floyer of Wikibon wrote back in 2011, 2012 that if Intel didn't make some moves, it was going to face some trouble.
So, even though PC volumes have bumped up with the pandemic recently, they pale in comparison to the wafer volumes coming out of the Arm ecosystem and the TSMC and Samsung factories. The volumes of the Arm ecosystem, Stephanie, dwarf the output of Intel by probably 10x in semiconductors. And volume in semiconductors is everything, because that's what drives costs down, and Intel is just not the low-cost manufacturer anymore. In my view, they may never be again... not without a major change in volume strategy, which of course Gelsinger is doing everything he can to effect. But they're years away, and they're going to have to spend north of a hundred billion dollars trying to get there. It's all about volume in the semiconductor game, and Intel just doesn't have it right now. >> So you mentioned Pat Gelsinger. He became the new CEO last January. He's a highly respected CEO who has been in the industry more than four decades, with deep knowledge and experience, including 30 years at Intel, where he began his career. What's your opinion on his performance thus far, besides the volume and semiconductor industry position of Intel? >> Well, I think Gelsinger is an amazing executive. He's a technical visionary, he's an execution machine, and he's doing all the right things. I mean, he was at the State of the Union address, looking good in a suit, he's saying all the right things, he's spending time with EU leaders. He's just a very clear thinker and a super strong strategist. But you can't change physics. The thing about Pat is he's known all along what's going on with Intel. I'm sure he's watched it from not so far away, because I think it's always been his dream to run the company. So he's made a lot of moves: he's bringing in new management, he's clearing out some of the dead wood at Intel, and he's relaunched, if you will, the foundry business.
And I think they're serious about it this time around. They're spinning out Mobileye, an acquisition they made years ago, to throw off some more cash to pay for the fabs. They've announced things like fabs in Ohio, in the heartland, which strikes all the right chords with the various politicians. So again, he's doing all the right things. He's channeling his best Andrew Grove, I like to say... of course, the iconic CEO of Intel for many, many years. But again, you can't change physics. He can't compress the cycle any faster than the cycle wants to go. So he's doing all the right things; it's just going to take a long, long time. >> And you said that the competition is better positioned. Could you elaborate on why you think that, and who are the main competitors at this moment? >> Well, it's the five-front war I talked about. I mean, you see what's happened with Arm. It changed everything. Intel, remember, passed on the iPhone; they didn't think they could make enough money on smartphones. And that opened the door for Arm, which was eager to take Apple's business. And because of the consumer volumes, the semiconductor industry changed permanently, just like PC volume changed the whole minicomputer business. The smartphone changed the economics of semiconductors as well. Very few companies can afford the capital expense of building semiconductor fabrication facilities, and even fewer can make cutting-edge chips like five nanometer, three nanometer and beyond. So companies like AMD and Nvidia don't make chips; they design them and then ship them to foundries like TSMC and Samsung to manufacture. And because TSMC has such huge volumes, thanks in large part to Apple, it's further down, or up, I guess, the experience curve, and experience means everything in terms of cost. And they're leaving Intel behind.
I mean, the best example I can give you is Apple. Look at the A-series chips, and now the M1 and the M1 Ultra. Think about the traditional Moore's Law curve we all talk about: 2x transistor density every two years. Intel's lucky today if it can keep that pace up; let's assume it can. But meanwhile, look at Apple's Arm-based M1 to M1 Ultra transition. It occurred in less than two years... more like 15 or 18 months. And it went from 16 billion transistors on a package to over 100 billion. So we're talking about the competition, Apple in this case, using Arm standards, improving six to seven x inside of a two-year period, while Intel's running at 2x. And that says it all. Intel is on a curve that's more expensive and slower than the competition. >> Well, recently Intel acquired Tower Semiconductor for 5.4 billion dollars, so it can make more chips for other companies... that was last February, I think the middle of February. What do you think of that strategic move? >> Well, it was designed to help with foundry. And again, I left that out of my list of things Intel is doing; it's a long list, actually, and there are many more. I think... it's an Israeli-based company, but they're a global company, which is important. One of the things Pat stresses is having a presence in Western countries, and I think that's super important. He'd like to get the percentage of semiconductors coming out of Western countries back up, maybe not to where it was previously, but by the end of the decade, much more competitive. That's what that acquisition was designed to do. It's a good move, but again, it doesn't change physics. >> So Dave, you've been putting a lot of content out there and following Intel for years. What can Intel do to get back on track? >> Well, I think first it needs great leadership, and Pat Gelsinger is providing that. As we talked about, he's doing all the right things. He's manifesting his best
Andrew Grove, as I said earlier. Splitting out the foundry business is critical, because we all know Moore's Law, but it's Wright's Law that talks about volume in any business, not just semiconductors... and it's crucial in semiconductors. So splitting out a separate foundry business to make chips is important, and he's going to do that. Of course, he's going to ask Intel's competitors to allow Intel to manufacture their chips, which they may very well want to do, because there's such a shortage of supply right now and they need those kinds of manufacturers. So the hope is that that drives the volume necessary for Intel to compete cost-effectively. And there's the CHIPS Act, and its EU cousin, where governments are possibly going to put money into semiconductor manufacturing to make the West more competitive. It's a key initiative that Pat has put forth, and a challenge, and it's a good one. He's also making a lot of moves on the design side and committing tons of CapEx to these new fabs, as we talked about. But maybe his best chance is, again, the fact that, first of all, the market's enormous... it's a trillion-dollar market... and secondly, there's a very long-term shortage in play here in semiconductors. I don't think it's going to be cleared up in 2022 or 2023. Demand is just going to keep exploding, whether it's automobiles or factory devices or cameras. I mean, virtually every consumer device and edge device is going to use huge numbers of semiconductor chips. So I think that's in Pat's favor. But honestly, Intel is so far behind, in my opinion, that the best I can hope for is that by the end of this decade it's going to be in a position... maybe a stronger number two position in volume behind TSMC, maybe number three behind Samsung. Maybe Apple will throw Intel some foundry business over time, perhaps under pressure from the US government, and they can maybe win that account back. But that's still years away from a design-cycle standpoint.
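The M1-to-M1-Ultra comparison made a bit earlier can be checked with quick arithmetic. This is just a back-of-the-envelope sketch: it assumes the commonly cited figure of roughly 114 billion transistors for the M1 Ultra and an 18-month transition, per the discussion.

```python
# Implied annual transistor-count growth: Moore's-Law pace (2x every
# 2 years) vs. Apple's M1 -> M1 Ultra jump (~16B -> ~114B in ~18 months).
moore_annual = 2 ** (1 / 2)               # 2x over 2 years, per year
apple_factor = 114e9 / 16e9               # ~7.1x total improvement
apple_annual = apple_factor ** (1 / 1.5)  # per-year rate over 18 months

print(round(moore_annual, 2))  # 1.41
print(round(apple_annual, 2))  # 3.7
```

On these assumptions, Apple's annualized growth rate comes out at well over double the Moore's Law pace, which is the "six to seven x versus 2x" gap Dave describes.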
And so again, maybe in the 2030s Intel can compete for top-dog status. But that, in my view, is the best we can hope for from this national treasure called Intel. >> Got it. So we've got to leave it right there. Thank you so much for your time, Dave. >> You're welcome, Stephanie. Good to talk to you. >> You can check out Dave's Breaking Analysis on theCUBE.net each Friday. This is Stephanie Chan for theCUBE. We'll see you next time. (upbeat music)
Breaking Analysis: Pat Gelsinger has the Vision Intel Just Needs Time, Cash & a Miracle
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is "Breaking Analysis" with Dave Vellante. >> If it weren't for Pat Gelsinger, Intel's future would be a disaster. Even with his clear vision, fantastic leadership, deep technical and business acumen, and amazing positivity, the company's future is in serious jeopardy. It's the same story we've been telling for years. Volume is king in the semiconductor industry, and Intel no longer is the volume leader. Despite Intel's efforts to change that dynamic with several recent moves, including making another go at its Foundry business, the company is years away from reversing its lagging position relative to today's leading foundries and design shops. Intel's best chance to survive as a leader, in our view, will come from a combination of a massive market, continued supply constraints, government money, and luck, perhaps in the form of a deal with Apple in the midterm. Hello, and welcome to this week's "Wikibon CUBE Insights, Powered by ETR." In this "Breaking Analysis," we'll update you on our latest assessment of Intel's competitive position and unpack nuggets from the company's February investor conference. Let's go back in history a bit and review what we said in the early 2010s. If you've followed this program, you know that our David Floyer sounded the alarm for Intel as far back as 2012, the year after PC volumes peaked. Yes, they've ticked up a bit in the past couple of years, but they pale in comparison to the volumes that the ARM ecosystem is producing. The world has changed from people entering data into machines, and now it's machines that are driving all the data. Data volumes in Web 1.0 were largely driven by keystrokes and clicks. Web 3.0 is going to be driven by machines entering data: sensors, cameras, and other edge devices are going to drive enormous data volumes and processing power to boot. 
Every windmill, every factory device, every consumer device, every car will require processing at the edge to run AI, facial recognition, inference, and data intensive workloads. And the volume of this space compared to PCs, and even the iPhone itself, is about to be dwarfed with an explosion of devices. Intel is not well positioned for this new world, in our view. Intel has to catch up on process, Intel has to catch up on architecture, Intel has to play catch up on security, Intel has to play catch up on volume. The ARM ecosystem has cumulatively shipped 200 billion chips to date and is shipping 10x Intel's wafer volume. Intel has to have an architecture that accommodates much more diversity, and while it's working on that, it's years behind. All that said, Pat Gelsinger is doing everything he can, and more, to close the gap. Here's a partial list of the moves that Pat is making. A year ago, he announced IDM 2.0, a new integrated device manufacturing strategy that opened up Intel's world to partners for manufacturing and other innovation. Intel has restructured, reorganized, and many executives have boomeranged back in, many previous Intel execs. They understand the business and have a deep passion to help the company regain its prominence. As part of the IDM 2.0 announcement, Intel created, recreated if you will, a Foundry division and recently acquired Tower Semiconductor, an Israeli firm that is going to help it in that mission. It's opening up partnerships with alternative processor manufacturers and designers. And the company has announced major investments in CapEx to build out Foundry capacity. Intel is going to spin out Mobileye, a company it had acquired for 15 billion in 2017. Will it try and get a $50 billion valuation? Mobileye is about $1.4 billion in revenue and is likely going to be worth more like 25 to 30 billion. We'll see. 
But Intel is going to maybe get $10 billion in cash from that spin out, that IPO, and it can use that to fund more fabs and more equipment. Intel is leveraging its 19,000 software engineers to move up the stack and sell more subscriptions and high margin software. He's got to sell what he's got. And finally, Pat is playing politics beautifully, announcing, for example, fab investments in Ohio, which he dubbed Silicon Heartland. Brilliant! Again, there's no doubt that Pat is moving fast and doing the right things. Here's Pat at his investor event in a T-shirt that says "torrid," bringing back the torrid pace and discipline that Intel is used to. And on the right is Pat at the State of the Union address, looking sharp in shirt and tie and suit. And he has said, "a bet on Intel is a hedge against geopolitical instability in the world." That's just so good. To that statement, he showed this chart at his investor meeting. Basically it shows that whereas semiconductor manufacturing capacity has gone from 80% of the world's volume to 20%, he wants to get it back to 50% by 2030 and reset supply chains in a market that has become as important as oil. Again, just brilliant positioning and pushing all the right hot buttons. And here's a slide underscoring that commitment, showing manufacturing facilities around the world with new capacity coming online in the next few years in Ohio and the EU, and mentioning the CHIPS Act in his presentation, in the US and Europe, as part of a public private partnership. No doubt, he's going to need all the help he can get. Now, we couldn't resist the chart on the left here, which shows wafer starts and transistor capacity growth for Intel over time. It speaks to its volume aspirations. But we couldn't help notice that the shape of the curve is somewhat misleading, because it shows a two-year (mumbles) and then widens the aperture to three years to make the curve look steeper. Fun with numbers. 
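The uneven-aperture trick called out above can be shown with a toy illustration (the 20% annual growth rate here is made up purely for demonstration; it is not a figure from the chart):

```python
# Toy illustration: a constant 20%/year growth rate, sampled first across a
# two-year step and then across a three-year step. The second bar-to-bar
# jump is larger, so the curve "looks steeper" even though nothing
# accelerated.
annual = 1.20
points = {0: annual**0, 2: annual**2, 5: annual**5}

jump_two_year = points[2] / points[0] - 1    # growth shown across 2 years
jump_three_year = points[5] / points[2] - 1  # growth shown across 3 years

print(f"2-year step: +{jump_two_year:.1%}, 3-year step: +{jump_three_year:.1%}")
```

The widening interval alone inflates the apparent jump, which is why consistent axis spacing matters when judging a capacity ramp.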
Okay, maybe a little nitpick, but these are some of the telling nuggets we pulled from the investor day, and they're important. Another nitpick: in our view, wafers would be a better measure of volume than transistors. It's like a company saying we shipped 20% more exabytes or MIPS this year than last year. Of course you did, and your revenue shrank. Anyway, Pat went through a detailed analysis of the various Intel businesses and promised mid to high double digit growth by 2026, half of which will come from Intel's traditional PC, data center, and network edge businesses, and the rest from advanced graphics, HPC, Mobileye, and Foundry. Okay, that sounds pretty good. But it has to be taken in context: relative to the balance of the semiconductor industry, this would be a pretty competitive growth rate, in our view, especially for a 70 plus billion dollar company. So kudos to Pat for sticking his neck out on this one. But again, the promise is several years away, at least four years away. Now we want to focus on Foundry, because that's the only way Intel is going to get back into the volume game, and volume is necessary for the company to compete. Pat built this slide showing the baby blue for today's Foundry business, just under a billion dollars, and adding in another $1.5 billion for Tower Semiconductor, the Israeli firm that it just acquired. So a few billion dollars in the near term future for the Foundry business. And then by 2026, this really fuzzy blue bar. Now remember, TSM is the new volume leader and is a $50 billion company and growing. So there's definitely a market there that Intel can go after. And adding ARM processors to the mix, and, you know, opening up and partnering with the ecosystems out there, can only help volume, if Intel can win that business, which, you know, it should be able to, given the likelihood of long term supply constraints. But we remain skeptical. 
This is another chart Pat showed, which makes the case that Foundry and IDM 2.0 will allow expensive assets to have a longer useful life. Okay, that's cool. It will also solve the cumulative output problem highlighted in the bottom right. We've talked at length about Wright's Law. That is, for every cumulative doubling of units manufactured, cost will fall by a constant percentage. You know, let's say around 15% in the semiconductor world, which is vitally important to accommodate next generation chips, which are always more expensive at the start of the cycle. So you need that 15% cost buffer to jump curves and make any money. So let's unpack this a bit. You know, does this chart at the bottom right address our Wright's Law concerns, i.e. that Intel can't take advantage of Wright's Law because it can't double cumulative output fast enough? Now note the decline in wafer starts, then the slight uptick, and then the flattening. It's hard to tell what years we're talking about here. Intel is not going to share the sausage making because it's probably not pretty, but you can see on the bottom left the flattening of the cumulative output curve in IDM 1.0, otherwise known as the death spiral. Okay, back to the power of Wright's Law. Now, assume for a second that wafer density doesn't grow. It does, but just work with us for a second. Let's say you produce 50 million units per year, just making a number up. That gets your cumulative output to 100 million units in the second year, so it takes you two years to get to that 100 million. In other words, it takes two years to lower your manufacturing cost by, let's say, roughly 15%. Now, assuming you can keep wafer volumes flat, which that chart showed, with good yields, you're at 150 million in year three, 200 in year four, 250 in year five, 300 in year six. Now, that's four years before you can take advantage of Wright's Law again. 
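That back-of-envelope schedule can be sketched in a few lines of code. This is a minimal sketch using the transcript's own illustrative numbers (flat output of 50 million units a year, 15% cost decline per cumulative doubling), not actual Intel figures:

```python
def wrights_law_schedule(units_per_year, learning_rate=0.15, doublings=3):
    """Years until each cumulative doubling at flat annual volume, with the
    unit-cost multiplier Wright's Law predicts at that point."""
    schedule = []
    cost = 1.0
    cumulative_target = units_per_year  # cumulative output at end of year one
    for k in range(1, doublings + 1):
        cumulative_target *= 2                      # next doubling threshold
        years = cumulative_target / units_per_year  # flat volume: years = cumulative / annual
        cost *= 1.0 - learning_rate                 # constant % decline per doubling
        schedule.append((k, years, cost))
    return schedule

# Doubling 1 arrives at year 2, doubling 2 at year 4, doubling 3 at year 8:
# each successive doubling takes as long as all production before it.
for k, years, cost in wrights_law_schedule(50e6):
    print(f"doubling {k}: year {years:.0f}, unit cost x{cost:.3f}")
```

At flat volume the doubling intervals stretch from two years to four to eight, which is exactly the death-spiral argument: cost relief arrives more and more slowly unless volume grows.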
You keep going at that flat wafer start and that simplifying assumption we made of 50 million units a year, and, well, you get the point. It's now eight years before the next benefit of Wright's Law kicks in, and, you know, by then you're cooked. But now you can grow the density of transistors on a chip, right? Yes, of course. So let's come back to Moore's Law. The graphic on the left says that all the growth is in the new stuff. Totally agree with that. Huge TAM that Pat presented. Now he also said that until we exhaust the periodic table of elements, Moore's Law is alive and well, and Intel is the steward of Moore's Law. Okay, that's cool. The chart on the right shows Intel going from 100 billion transistors today to a trillion by 2030. Hold that thought. So Intel is assuming that it'll keep up with Moore's Law, meaning a doubling of transistors every, let's say, two years, and I believe it. So bring that back to Wright's Law. In the previous chart, it means that with IDM 2.0, Intel can get back to enjoying the benefits of Wright's Law every two years, let's say, versus IDM 1.0, where they were failing to keep up. Okay, so Intel is saved, yeah? Well, let's bring into this discussion one of our favorite examples, Apple's M1 ARM-based chip. The M1 Ultra is a new architecture, and you can see the stats here: 114 billion transistors on a five nanometer process, and all the other stats. The M1 Ultra has two chips. They're bonded together, and Apple put an interposer between the two chips. An interposer is a pathway that allows electrical signals to pass through it onto another chip. It's a super fast connection. You can see 2.5 terabytes per second. But the brilliance is that the two chips act as a single chip, so you don't have to change the software at all. The way Intel's architecture works is it takes two different chips on a substrate, and then each has its own memory. The memory is not shared. 
Apple shares the memory for the CPU, the NPU, the GPU. All of it is shared, meaning it needs no change in software, unlike Intel. Now Intel is working on a new architecture, but Apple and others are way ahead. Now let's make this really straightforward. The original Apple M1 had 16 billion transistors per chip. And you could see in that diagram, the recently launched M1 Ultra has 114 billion per chip. Now if you take into account the size of the chips, which is increasing, and the increase in the number of transistors per chip, that's a factor of around 6x growth in transistor density per chip in 18 months. Remember, Intel, assuming the results in the two previous charts were achievable, is running at 2x every two years, versus 6x in 18 months for the competition. And AMD and Nvidia are close to that as well, because they can take advantage of TSM's learning curve. So in the previous chart, with Moore's Law alive and well, Intel gets to a trillion transistors by 2030. The Apple ARM and Nvidia ecosystems will arrive at that point years ahead of Intel. That means lower costs and significantly better competitive advantage. Okay, so where does that leave Intel? The story is really not resonating with investors, and hasn't for a while. On February 18th, the day after its investor meeting, the stock was off. It's rebounded a little bit, but investors are, you know, probably prudent to wait, unless they have a really long term view. And you can see Intel's performance relative to some of the major competitors. You know, Pat talked about five nodes in four years. He made a big deal out of that, and he shared proof points with Alder Lake and Meteor Lake and other nodes. But Intel just delayed Granite Rapids last month; that pushed it out from 2023 to 2024. And it told investors that it's going to have to boost spending to turn this ship around, which is absolutely the case. 
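The timing gap in that trillion-transistor race can be made concrete with a quick calculation. This is only a sketch extrapolating the growth rates quoted above (2x every two years for Intel, 6x in 18 months for the Apple-pace competition, both from a 100 billion transistor starting point); the arrival times are illustrative, not forecasts:

```python
import math

def years_to_reach(start, target, growth_factor, period_years):
    """Years of compound growth needed to go from start to target."""
    periods = math.log(target / start) / math.log(growth_factor)
    return periods * period_years

intel_years = years_to_reach(100e9, 1e12, growth_factor=2, period_years=2.0)
apple_pace_years = years_to_reach(100e9, 1e12, growth_factor=6, period_years=1.5)

print(f"Intel pace: {intel_years:.1f} years to a trillion transistors")
print(f"Apple pace: {apple_pace_years:.1f} years")
```

At those quoted rates, the faster-compounding ecosystem crosses the trillion-transistor line years earlier, which is the cost and competitive point being made.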
And with that delay in chips, I feel like the first disappointment won't be the last. But as we've said many times, it's very difficult, actually, it's impossible, to quickly catch up in semiconductors, and Intel will never catch up without volume. So we'll leave you by reiterating our scenario that could save Intel, and that's if its Foundry business can eventually win back Apple to supercharge its volume story. It's going to be tough to wrestle that business away from TSM, especially as TSM is setting up shop in Arizona, with US manufacturing that's going to placate the US government. But look, maybe the government cuts a deal with Apple, says, hey, maybe we'll back off with the DOJ and FTC, and as part of the CHIPS Act, you'll have to throw some business at Intel. Would that be enough, when combined with other Foundry opportunities Intel could theoretically produce? Maybe. But from this vantage point, it's very unlikely Intel will gain back its true number one leadership position. If Intel had been really paranoid back when David Floyer sounded the alarm 10 years ago, yeah, that might have made a pretty big difference. But honestly, the best we can hope for is that Intel's strategy and execution allow it to get to competitive volumes by the end of the decade, and that this national treasure survives to fight for its leadership position in the 2030s, because it would take a miracle for that to happen in the 2020s. Okay, that's it for today. Thanks to David Floyer for his contributions to this research. Always a pleasure working with David. Stephanie Chan helps me do much of the background research for "Breaking Analysis" and works with our CUBE editorial team, Kristen Martin and Cheryl Knight, to get the word out. And thanks to SiliconANGLE's editor in chief, Rob Hof, who comes up with a lot of the great titles that we have for "Breaking Analysis" and gets the word out to the SiliconANGLE audience. Thanks, guys. Great teamwork. 
Remember, these episodes are all available as podcasts wherever you listen. Just search "Breaking Analysis Podcast." You'll want to check out ETR's website at etr.ai. We also publish a full report every week on wikibon.com and siliconangle.com. You can always get in touch with me on email, david.vellante@siliconangle.com, or DM me @dvellante, and comment on my LinkedIn posts. This is Dave Vellante for "theCUBE Insights, Powered by ETR." Have a great week. Stay safe, be well, and we'll see you next time. (upbeat music)
Rick Echevarria, Intel | Splunk .conf21
>> Well, hi everybody. I'm John Walls here, and welcome back to theCUBE's continuing coverage of Splunk .conf21. We've talked a lot about data, obviously, and about a number of partnerships and the pooling of resources that's going on in this space. And certainly a very valuable partnership that Splunk has right now is the one with Intel. With me to talk a little bit more about that is Rick Echevarria, who is vice president of the sales and marketing group at Intel. Rick, good to see you today. Thanks for joining us on theCUBE. >> Good to see you, John, and thanks for having us. >> You bet. Glad to have you as part of the .conf coverage as well. Well, first off, just for folks at home who would like to learn more about this relationship, the Splunk-Intel partnership, if you would, give us the 30,000 foot picture of it right now, in terms of how it began and how it's evolved to the point where it resides today. >> Yeah, sure. Glad to do that. You know, Splunk has for many years been positioned as one of the world's best security information and event management platforms. So, just like many customers in the cybersecurity space, they're probably trying to retire their technical debt, and one of the areas of important focus is the SIM space, right? The SIM segment within cybersecurity. And so the initial engagement between Intel and Splunk started with the information security group at Intel looking to, again, retire the technical debt and bring in next generation SIM technology. And that started the engagement with Splunk, again, to go solve those cybersecurity challenges. One of the things that we quickly learned is that Splunk offers a great platform, you know, from a SIM point of view. As you know, in the cybersecurity segment, the surface area of attack and the number of attacks were increasing. And we quickly realized that this needed to be a collaboration, in order for us to be able to work together to optimize our infrastructure, so it could scale, it could be performant, it could be reliable, to protect Intel's business. And as we started to work with Splunk, we realized, hey, this is a great opportunity. Intel is benefiting from it. Why don't we start working together and create a reference architecture, so that our joint customers also benefit from the collaboration that we have in the cybersecurity space, as we were building the Intel cybersecurity infrastructure platform? So that was really the beginning of the collaboration I described here, and a little bit more. >> Right. So, you had this good working relationship and said, hey, why don't we get together? Let's get the band together and see what we can do for our joint clients down the road. So, what about those benefits? Because now you've got this almost a force multiplier, right? Intel's experience, and then what Splunk has been able to do in the data analytics world. What kind of value is being derived, do you think, from that partnership? >> Well, obviously we feel much better about our cybersecurity posture. And what's sort of interesting, John, is that we realized that what started out as a conversation on SIM really turned out to be an opportunity for us to look at Splunk as a data platform. And, you know, in the technology world, you sometimes hear people talk about the horizontal capabilities and then the vertical usages. Really, the security, the SIM technology (sorry about the noise in the background) became one of several vertical applications. And then we realized that we could apply this platform to some other usages. 
And in addition to that, you know, when you think about cybersecurity and what we use for SIM, that tends to be part of your core IT systems. We started to explore what we could do with other data types for different types of applications. We realized that we could integrate, that Splunk could ingest, other types of data. And that started a second collaboration around our OpenVINO technology and our AI capabilities at the edge, with the ingestion and machine learning capabilities of Splunk, so that we can take things like visual data and start creating dashboards for, for example, managing the flow of people, you know, especially in a COVID environment, and understanding the utilization of spaces. So it really started with SIM, it moved to the edge, and now we realize that there's a continuum in this data platform that we can build other usages around. >> What was that learning curve like when you went out to the edge? Because a lot of people are talking about it, right? And there was a lot of banter about, this is where we have to be. But you guys put your money where your mouth was. You went out, you explored that frontier. And so what was that like? And, I guess, being early in, what advantage do you think that has given you as that process has matured a little bit? >> Well, it's really interesting, John, because what really accelerated our engagement with Splunk in that space was the pandemic. In 2020, Intel announced the Pandemic Response Technology Initiative, where we decided we were going to invest $50 million in accelerating technologies and solutions and partnerships to go solve some of the biggest challenges that the pandemic was presenting to the world at large. And one of those areas was around companies trying to figure out how to manage spaces, how to manage, you know, the number of people that are in a particular space, and social distancing, and things of that nature. And, you know, we ended up engaging with Splunk in this collaboration, again, to start looking at visual data, integrating that with our OpenVINO platform and, again, their machine learning and algorithms, and then creating what you would call more operational technology types of applications based on visual data. Now, these have other applications: they could be used for security usages, for social distancing, for understanding the utilization of assets. But the pandemic, and that program that Intel launched, is really what became the catalyst for our collaboration with Splunk that allowed us to expand into that space. >> Right. And you've done a tremendous amount of work in the healthcare space, I mean, especially in the last year and a half with the pandemic. Can you give just a couple of examples of that, maybe the variety of uses and the variety of processes that you've had an influence in? Because I think it's pretty impressive. >> Yeah, there's quite a bit of breadth in the types of solutions we've deployed as part of the pandemic response, John. You can think of some of the, I wouldn't call these things basic, but you think about telehealth, and improving the telehealth experience, all the way to creating solutions for privacy-sensitive usages, where you're doing things like getting multiple institutions to share their data with the right privacy protections, which, you know, goes back to security and privacy. With the right protections for that data, you're allowing organization A and organization B to partner together, use data, and create algorithms that both organizations benefit from. 
An example of that is work we've done around X-ray, using X-rays to detect COVID in certain populations. So we've gone from those, you know, data protection and algorithm development types of solutions, to work that we've done in telehealth, and a lot of other solutions in between. Obviously, in the high-performance space, we've invested in high-performance computing to help the researchers find cures for the current pandemic and then look ahead to future pandemics. So it's been quite a breadth of solutions, and it's really a testament to the breadth of Intel's portfolio and partnerships, to be able to enable so much in such a short amount of time. 
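The edge-to-Splunk pipeline described earlier (an OpenVINO model counting people in a space, with the counts landing in Splunk dashboards) can be illustrated with a minimal sketch. The conversation doesn't describe the actual integration, so the endpoint URL, token, and field names below are hypothetical placeholders around Splunk's standard HTTP Event Collector format:

```python
import json
import time
import urllib.request

# Hypothetical values for illustration only; a real deployment would use
# its own HEC endpoint and token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_occupancy_event(camera_id, person_count, ts=None):
    """Wrap an edge inference result (e.g. an OpenVINO person-detection
    count) in the HEC event envelope."""
    return {
        "time": ts if ts is not None else time.time(),
        "host": camera_id,
        "sourcetype": "edge:occupancy",  # made-up sourcetype name
        "event": {"camera": camera_id, "person_count": person_count},
    }

def send_to_splunk(event):
    # Requires a reachable HEC endpoint; shown for completeness.
    req = urllib.request.Request(
        HEC_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    )
    return urllib.request.urlopen(req)

example = build_occupancy_event("cam-lobby", 12, ts=1000)
print(json.dumps(example))
```

Once events like this are indexed, a dashboard over `person_count` by `host` gives the kind of space-utilization view described in the conversation.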
And that is going to be, to give you the flexibility for the types of solutions that you're wanting to apply. And that's really what this blog, uh, collaboration with Intel is going to do. It's, it's just a great example, John, uh, of the strategy that our CEO, pat Gelsinger recently talked about the importance of software to our business. >>This plump collaboration is right in the center of that. They have capabilities in SIM in it observability, uh, in many other areas that his whole world is turning data into, you know, into outcomes into results. But that has to be done on an infrastructure that again, will scale with your business, just like what's the case with Intel and our cybersecurity platform, right? We need to collaborate to make sure that this was going to scale with the demand demands of our business, and that requires close integration of, of hardware and software. The other point that I will make is that the, what started out as a collaboration with between Intel and Splunk, it's also expanding to other partners in the ecosystem. So I like to talk to you a little bit on a work stream that we have ongoing between Intel Splunk, HPE and the Lloyd. >>And why is that important is because, uh, as customers are deploying solutions, they're going to be deploying applications and they're going to have data in multiple environments on premise across multiple clouds. And we have to give, uh, these customers the ability to go gather the data from multiple sources. And that's part of the effort that we're developing with HPE and the Lloyd's will allow people to gather data, perform their analytics, regardless, regardless of their where their data is and be able to deploy the Splunk platform across these multiple environments, whether it's going to be on prem or it's going to be in a pure cloud environment, or it's going to be in a hybrid with multiple clouds, and you're willing to give our customers the most flexibility that we can. 
And that's where that collaboration with Deloitte and HP is going to come into play. >>Right. And you understand Splunk, right? You will get the workload. I mean, it's, it's totally, there's great familiarity there, which is a great value for that customer base, because you could apply that. So, so, um, obviously you're giving us like multiple thumbs up about the partnership. What excites you the most about going forward? Because as you know, it's all about, you know, where are we going from here? Yes. Now where we've been. So in terms of where you're going together in that partnership, well, what excites you about that? >>Well, first of all, we're excited because it's just a great example of the value that we can deliver to customers when you really understand their pain points and then have the capability to integrate solutions that encompass software and hardware together. So I think that the fact that we've been able to do the work on, on that core SIM space, where we now have a reference architecture that shows how you could really scale and deliver that a Splunk solution for your cybersecurity needs in a, in a scale of one reliable and with high levels of security, of course. And the fact that we then also been able to co-develop fairly quickly solutions for the edge, allows customers now to have that data platform that can scale and can access a lot of different data types from the edge to the cloud. That is really unique. I think it provides a lot of flexibility and it is applicable to a lot of vertical industry segments and a lot of customers >>And be attractive to a lot of customers. That's for sure rec edge of area. We appreciate the time, always a good to see you. And we certainly appreciate your joining us here on the cube to talk about.com for 21. And your relationship with the folks at Splunk. >>Yeah. Thank you, John. >>You bet. Uh, talking about Intel spot, good partnership. 
A long-time partnership that has great plans going forward. But we continue our coverage here of .conf21. You're watching theCUBE.
Guido Appenzeller, Intel | HPE Discover 2021
>> Please welcome back to HPE Discover 2021, the virtual version. My name is Dave Vellante and you're watching theCUBE, and we're here with Guido Appenzeller, who's the CTO of the Data Platforms Group at Intel. Guido, welcome to theCUBE. Come on in. >> Thanks, Dave. I appreciate it. It's great to be here today. >> So I'm interested in your role at the company. Let's talk about that. You're brand new. Tell us a little bit about your background. What attracted you to Intel, and what's your role here? >> Yeah, so you know, I grew up in the startup ecosystem of Silicon Valley, came out of my PhD and never left, and, you know, built software companies, worked at software companies, worked at VMware for a little bit. And I think my initial reaction when the Intel recruiter called me was like, you've got the wrong phone number, right? I'm a software guy, that's probably not who you're looking for. And, you know, we had a good conversation. I think at Intel there's a realization that you need to look at what Intel builds more as an overall system, from an overall systems perspective, right? The software stack and the hardware components are getting more and more intricately linked, and you need the software to basically bridge across the different hardware components that Intel is building. So I'm here now as the CTO for the Data Platforms Group, the group that builds the data center products here at Intel. And it's a really exciting job. These are exciting times at Intel; with Pat, you've got a fantastic CEO at the helm. I worked with him before at VMware. So a lot of things to do, but I think a very exciting future. >> Well, I mean, the data center is the wheelhouse of Intel. I mean, of course your ascendancy was a function of the PCs and the great volume and how you changed that industry.
But really the data center is where, I mean, I remember the days when people said Intel will never be in the data center, it's just a toy, and of course you're the dominant player there now. So your initial focus here is really defining the vision, and I'd be interested in your thoughts on the future, what the data center looks like in the future, where you see Intel playing a role. What are you seeing as the big trends there? You know, Pat Gelsinger talks about the waves. He says if you don't ride the waves you're going to end up being driftwood. So what are the waves you're driving? What's different about the data center of the future? >> That's right. You want to surf the waves, right? That's the way to do it. So look, I like to look at this in terms of major macro trends, and I think the biggest thing that's happening in the market right now is the cloud revolution, and I think we're about halfway through, or something like that. This is the transition from the classic client-server-type model, with enterprises running their own data centers, to more of a cloud model, where something is run by hyperscale operators, or it may be run by an enterprise themselves; there's a variety of different models, but the provisioning models have changed. It's much more of a turnkey-type service. And when we started out on this journey, I think we built data centers the same way that we built them before, although the way to deliver IT had really changed; it's moved to more of a service model. And we're really now starting to see the hardware diverge, the actual silicon that we need to build to address these use cases diverge. And so I think one of the things that is most interesting for me is really to think through: how does Intel in the future build silicon that's built for clouds?
You know, like on-prem clouds, edge clouds, hyperscale clouds, but basically built for these new use cases that have emerged. >> So, just kind of a quick aside: I mean, to me the definition of cloud is changing. It's evolving. It used to be this set of remote services in a hyperscale data center. Now that experience is coming on-prem, it's connecting across clouds, it's moving out to the edge, it's supporting all kinds of different workloads. How do you see that evolving cloud? >> Yeah, I think the biggest difference to me is that a cloud starts with this idea that the infrastructure operator and the tenant are separate, right? And that actually has major architectural implications. This is maybe not a perfect analogy, but if I build a single-family home, where everything is owned by one party, I want to be able to walk from the kitchen to the living room pretty quickly, if that makes sense. My house here actually has an open kitchen; it's the same room, essentially. If you're building a hotel, where your primary goal is to have guests, you pick a completely different architecture, right? The kitchen for your restaurants, where the cooks are busy preparing the food, and the dining room, where the guests are sitting, are separate. The hotel staff has a dedicated place to work and the guests have dedicated places to mingle, but they don't overlap, typically. I think it's the same thing with architecture in the clouds. Initially the assumption was it's all one thing, and now suddenly we're starting to see a much cleaner separation of these different areas. I think a second major influence is that the type of workloads we're seeing is evolving incredibly quickly. I mean, 10 years ago, things were mostly monolithic today.
You know, most new workloads are microservice based, and that has a huge impact on where CPU cycles are spent, where we need to put in accelerators, how we build silicon for that. To give you an idea, there's some really good research out of Google and Facebook where they ran the numbers. For example, if you just take a standard system and you run a microservice-based application, written in a microservice-based architecture, you can spend anywhere from, I want to say, 25% to in some cases over 80% of your CPU cycles just on overhead, right? Just on marshaling and demarshaling the protocols, the encryption and decryption of the packets, and your service mesh that sits in between all these things. So that creates a huge amount of overhead. If 80% goes into these overhead functions, really our focus suddenly needs to be: how do we enable that kind of infrastructure? >> Yeah, so let's talk a little bit more about workloads if we can. I mean, beyond the overhead, as the data center becomes software defined, thanks to your good work at VMware, there are a lot of cores supporting that software-defined data center, and then- >> That's exactly right. >> As well, you mentioned microservices, container-based applications, but AI is coming into play too. And AI is this kind of amorphous thing, but it's really data-oriented workloads versus kind of general-purpose ERP and finance and HCM. So those workloads are exploding, and then we can maybe talk about the edge. How are you seeing the workload mix shift, and how is Intel playing there? >> Look, I think the trend you're talking about is definitely right. We're getting more and more data centric; shifting the data around becomes a larger and larger part of the overall workload in the data center. And AI is getting a ton of attention, right?
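The 25-80% marshaling overhead Guido quotes above is easy to get a feel for with a toy benchmark. The sketch below is purely illustrative (the JSON payload shape and iteration count are our own assumptions, not anything from the Google or Facebook studies he cites): it times a marshal/demarshal round trip against the trivial "business logic" it wraps.

```python
import json
import time

def overhead_fraction(payload: dict, iters: int = 2000) -> float:
    """Estimate the share of time a toy service hop spends on
    marshaling/demarshaling versus on its actual work."""
    # Overhead: serialize to the wire format, then parse it back.
    t0 = time.perf_counter()
    for _ in range(iters):
        json.loads(json.dumps(payload))
    marshal_time = time.perf_counter() - t0

    # "Useful" work: the computation the service actually exists to do.
    t0 = time.perf_counter()
    for _ in range(iters):
        sum(payload["values"])
    work_time = time.perf_counter() - t0

    return marshal_time / (marshal_time + work_time)

payload = {"request_id": 42, "values": list(range(500))}
frac = overhead_fraction(payload)
```

For a hop whose real work is this cheap, the marshaling share lands well above half, the same order as the range quoted above; real services shift the ratio, but the overhead never disappears, which is the case for accelerating it in silicon.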
Look, if I talk to most operators, AI is still an emerging category, right? I mean, we're seeing, I'd say, five, maybe 10% of workloads being AI. It's growing, and they're very high-value workloads right now, very challenging workloads, but it's still a smaller part of the overall mix. Now, edge is two things: it's big, and it's complicated. Because the way I think about edge is, it's not just one homogeneous market, it's really a collection of separate sub-markets, right? It's very heterogeneous, it runs on a variety of different hardware. It can be everything from a little server that's fanless, strapped to a telephone pole with an antenna on top of it, up to a microcell. Or it can be something that's running inside a car; modern cars have a small little data center inside. It can be something that runs on an industrial factory floor, or with the network operators. There's a pretty broad range of verticals that all look slightly different in their requirements. And I think it's really interesting; it's one of those areas that really creates opportunities for vendors like HPE to really shine and address this heterogeneity with a broad range of solutions. Very excited to work together with them in that space. >> Yeah, I'm glad you brought HPE into the discussion, because we're here at HPE Discover, I want to connect that. But so my question is, what's the role of the data center in this world of edge? How do you see it? >> Yeah, look, I think in a sense what the cloud revolution is doing is that it's leading to a polarization of the classic data center into edge and cloud, if that makes sense, right? It's splitting. Before, this was all mingled a little bit together.
If my data center is in my basement anyway, what's the edge and what's the data center? It's the same thing, right? Now I'm moving some workloads into the cloud, I don't even know where they're running anymore, and some other workloads have to have a certain sense of locality, I need to keep them close. And there are some workloads you just can't move into the cloud, right? I mean, if I'm generating a lot of video data that I have to process, it's financially completely unattractive to ship all of that to a central location. I want to do this locally, right? Will I ever connect my smoke detector with my sprinkler system via the cloud? No, I won't, right? Because if things go bad, they may not work anymore. So I need something that does this locally. So there are many reasons why you want to keep something on premises, and I think it's a growing market, right? It's very exciting. We're doing some very good stuff with our friends at HPE. They have the ProLiant DL110 Gen10 Plus server with our latest third-generation Xeons in them for open RAN, which is the radio access network for the telco space, and the HPE Edgeline servers, also with the third-generation Xeons. There are really nice products there that I think can really help address enterprises, carriers, a number of different organizations on these edge use cases. >> Can you explain? You mentioned open RAN, vRAN. So we essentially think of that as kind of the software-defined telco? >> Yeah, exactly. It's software-defined cellular, right? I mean, actually, I learned a lot about that over the recent months. You know, when I was taking these classes at Stanford, these things were still done in analog, right? Basically a radio signal would be processed in an analog way and digested.
And today, typically the radio signal is immediately digitized, and all the processing of the radio signal happens digitally, and it happens on servers, right? Including HPE servers. It's a really interesting use case where we're basically now able to do something in a much, much more efficient way by moving it to a digital, more modern platform. And it turns out you can actually virtualize these servers and run a number of different cells inside the same server, right? It's really complicated, because you have to have fantastic real-time guarantees and a very sophisticated software stack, but it's a really fascinating use case. >> You know, a lot of times we have these debates, and it may be somewhat academic, but I'd love to get your thoughts. The debate is about, okay, how much of the data that is processed and inferred at the edge is actually going to come back to the cloud? Most of the data is going to stay at the edge, a lot of it's not even going to be persisted, and that's sort of the negative for the data center. But the counter to that is, there's going to be so much data that even a small percentage of all the data we're going to create means so much more data back in the cloud, back in the data center. What's your take on that? >> Look, I think there are different applications that are easier to do in certain places, right? I mean, look, going to a large cloud has a couple of advantages. You have a very complete software ecosystem around you, lots of different services, and it's great if you need very specialized hardware. If I want to run a big learning task where I suddenly need 1,000 machines, and this runs for a couple of days and then I don't need to do that for another month or two, for that the cloud is really great. There's on-demand infrastructure, right?
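The "digitize immediately, then do everything in software" idea from the open RAN discussion above can be sketched in a few lines. This is a toy, nothing like a real RAN physical layer: an ADC-style quantizer followed by a one-bin DFT tone detector, standing in for the kind of signal processing that now runs on general-purpose servers.

```python
import math

def digitize(signal, bits=12):
    """Quantize 'analog' samples in [-1, 1] to signed integers,
    as an RF front end's ADC would before software takes over."""
    scale = 2 ** (bits - 1) - 1
    return [round(max(-1.0, min(1.0, s)) * scale) for s in signal]

def tone_energy(samples, freq_hz, sample_rate):
    """One-bin DFT: correlate the samples against a complex
    exponential to measure energy at a single frequency."""
    acc = 0j
    for i, s in enumerate(samples):
        angle = 2 * math.pi * freq_hz * i / sample_rate
        acc += s * complex(math.cos(angle), -math.sin(angle))
    return abs(acc) / len(samples)

rate = 8000                     # samples per second
analog = [math.sin(2 * math.pi * 1000 * i / rate) for i in range(800)]
samples = digitize(analog)      # everything after this line is "software radio"

on_air = tone_energy(samples, 1000, rate)   # energy at the transmitted tone
off_air = tone_energy(samples, 2500, rate)  # energy at an empty frequency
```

`on_air` comes out orders of magnitude above `off_air`. A real vRAN stack does vastly more (filtering, channel estimation, decoding) under tight deadlines, but it follows the same digitize-then-compute pattern, which is what makes the real-time guarantees Guido mentions so hard.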
Having all this capability up there, at the same time it costs money to send the data up there, right? If I just look at the hardware cost, it's much, much cheaper to build it myself, in my own data center or at the edge. So I think we'll see customers picking and choosing what they want to do where, and there's a role for both, absolutely. And so, at the end of the day, why do I absolutely need to have something at the edge? I think there are three primary reasons. One is simply bandwidth, right? Where I'm saying, okay, with my video data, if I have a hundred 4K video cameras with 60-frames-a-second feeds, there's no way I'm going to move that into the cloud. It's just cost prohibitive; I'd have a hard time getting a line that allows me to do that. Then there's latency, right? If I want to reliably react in a very short period of time, I can't do that in the cloud, I need to do this locally with me. I can't even do it in my data center; this has to be very, very closely coupled. And then there's this idea of fate sharing: if I want to make sure that when things go wrong, the system is still intact, anything that's an emergency backup, an emergency-type procedure, I can't rely on there being a good internet connection, I need to handle things locally. That's the smoke detector and sprinkler system, right? And so for all of these, there are good reasons why we need to move things close to the edge. So I think there'll be a creative tension between the two, right?
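The bandwidth reason is easy to make concrete with back-of-the-envelope arithmetic. The 25 Mbps per encoded 4K/60 stream below is our own illustrative assumption (codecs and quality settings vary widely, and raw video would be roughly two orders of magnitude more); only the hundred-camera example comes from the conversation.

```python
def fleet_uplink_gbps(cameras: int = 100, mbps_per_stream: float = 25.0) -> float:
    """Sustained uplink needed to backhaul every camera stream to the cloud."""
    return cameras * mbps_per_stream / 1000.0

def monthly_terabytes(gbps: float, days: int = 30) -> float:
    """Data volume if that uplink runs flat out for a month."""
    gigabytes = gbps / 8.0 * days * 24 * 3600  # gigabits/s -> gigabytes
    return gigabytes / 1000.0

uplink = fleet_uplink_gbps()        # 100 cameras x 25 Mbps = 2.5 Gbps sustained
volume = monthly_terabytes(uplink)  # ~810 TB shipped upstream every month
```

A sustained multi-gigabit uplink plus hundreds of terabytes of monthly transfer is exactly the "financially completely unattractive" case; processing the streams locally and shipping only events or summaries upstream changes the economics entirely.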
But both are huge markets, and I think there are great opportunities for HPE ahead to work on these two cases. >> Yeah, for sure. Top brand in that compute business. So before we wrap up today, thinking about your role, I mean, part of your role is the trend spotter, right? You're kind of driving innovation, surfing the waves, as you said, skating to where the puck's going, all- >> I've got my perfect crystal ball right here. Yeah, got all the cliches. >> Right? Yes, yeah. Puts a little pressure on you. But so what are some of the things that you're overseeing, that you're looking towards in terms of innovation projects, particularly obviously in the data center space? What's really exciting you? >> Look, there's a lot of them, and pretty much all the interesting ideas I get from talking to customers, right? You talk to the sophisticated customers, you try to understand the problems they're trying to solve that they can't solve right now, and that gives you ideas. Just to pick a couple: one area I'm thinking about a lot is how we can build, in a sense, better accelerators for the infrastructure functions. So no matter if I run an edge cloud or a big public cloud, I want to find ways to reduce the amount of CPU cycles I spend on microservice marshaling and demarshaling, service mesh, storage acceleration, things like that. Clearly, if this is a large chunk of the overall cycle budget, we need to find ways to shrink that, to make it more efficient. So this basic infrastructure function acceleration probably sounds as unsexy as any topic could sound, but I think it's actually a really, really interesting area, one of the big levers we have right now in the data center. >> I would agree.
I think that's actually really exciting, because you can pick up a lot of those wasted cycles now, and that drops right to the bottom line. But- >> Exactly. I mean, it's kind of funny. We're still measuring so much with SPEC rates of CPUs, performance numbers like that. Well, we may actually be measuring the wrong thing, right? If 80% of the cycles of my app are spent on overhead, then the speed of the CPU doesn't matter as much; it's the other functions that do. So that's one. The second big one is that memory is becoming a bigger and bigger issue, right? And it's memory cost, because memory prices used to decline at the same rate that our core counts and clock speeds increased. That's no longer the case; we've run into some physical scaling limits where memory prices are becoming stagnant, and this is becoming a major pain point for everybody who's building servers. So I think we need to find ways to leverage memory more efficiently, to share memory more efficiently. We have some really cool ideas in that space that we're working on. >> Yeah, let me just, sorry to interrupt, but Pat hinted at that in your big announcement. I mean, you talk about system on package, I think is what he used, to talk about what I call disaggregated memory and better sharing of that memory resource. And I mean, that seems to be a clear benefit of value creation for the industry. >> Exactly, right. I mean, if this becomes a larger part of the overall cost for our customers, we want to help them address that issue. And then the third one is, we're seeing more and more data center operators effectively power limited. So we need to reduce the overall power of systems, or maybe, to some degree, just figure out better ways of cooling these systems.
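The "measuring the wrong thing" remark above falls straight out of Amdahl's law. A small sketch (the 80% overhead share and the 10x accelerator factor are illustrative, echoing the ballpark numbers from the conversation rather than any published measurement):

```python
def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
    """Overall speedup when only a fraction of the cycles is sped up."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

# Offload the 80% infrastructure-overhead share to a 10x-faster accelerator:
offload = amdahl_speedup(0.8, 10.0)    # ~3.6x overall

# Versus doubling raw CPU speed on only the useful 20% of cycles:
faster_cpu = amdahl_speedup(0.2, 2.0)  # ~1.1x overall
```

That asymmetry is the whole argument for infrastructure-function accelerators: when overhead dominates, the big lever is shrinking the overhead, not a faster general-purpose core.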
But I think there's a lot of innovation that can be done there, both to make these data centers more economical, and also to make them a little more green. Today, data centers have gotten big enough that if you look at the total amount of energy that we're spending in this world as mankind, a chunk of that is going just to data centers. And if we're spending energy at that scale, I think we have to start thinking about how we can build data centers that are more energy efficient, that do the same thing with less energy in the future. >> Well, thank you for laying those out. I mean, you guys have been long-term partners with HP and now, of course, HPE. I'm sure Gelsinger's really happy to have you on board, Guido. I would be. And thanks so much for coming on theCUBE. >> It's great to be here. Great to be at the HPE show. Thanks. >> And thank you for being with us for HPE Discover 2021, the virtual version. You're watching theCUBE, the leader in digital tech coverage. Right back.
Guido Appenzeller, Intel | HPE Discover 2021
(soft music) >> Welcome back to HPE Discover 2021, the virtual version. My name is Dave Vellante and you're watching theCUBE, and we're here with Guido Appenzeller, who is the CTO of the Data Platforms Group at Intel. Guido, welcome to theCUBE, come on in. >> Aww, thanks Dave, I appreciate it. It's great to be here today. >> So I'm interested in your role at the company, let's talk about that, you're brand new, tell us a little bit about your background. What attracted you to Intel and what's your role here? >> Yeah, so I grew up in the startup ecosystem of Silicon Valley, I came out of my PhD and never left, and built software companies, worked at software companies, worked at VMware for a little bit. And I think my initial reaction when the Intel recruiter called me was like, hey, you got the wrong phone number, I'm a software guy, that's probably not who you're looking for. But we had a good conversation, and I think at Intel there's a realization that you need to look at what Intel builds more as this overall system, from an overall systems perspective. The software stack and the hardware components are all getting more and more intricately linked, and you need the software to basically bridge across the different hardware components that Intel is building. So I'm now the CTO for the Data Platforms Group, which builds the data center products here at Intel. And it's a really exciting job, and these are exciting times at Intel, with Pat, a fantastic CEO at the helm. I've worked with him before at VMware. So a lot of things to do, but I think a very exciting future. >> Well, I mean, the data center is the wheelhouse of Intel. Of course your ascendancy was a function of the PCs and the great volume and how you changed that industry, but really data centers is where, I remember the days people said Intel will never be in the data center, it's just a toy. And of course, you're the dominant player there now.
So your initial focus here is really defining the vision, and I'd be interested in your thoughts on the future, what the data center looks like in the future, where you see Intel playing a role. What are you seeing as the big trends there? Pat Gelsinger talks about the waves, he says if you don't ride the waves you're going to end up being driftwood. So what are the waves you're driving? What's different about the data center of the future? >> Yeah, that's right. You want to surf the waves, that's the way to do it. So look, I like to look at this in terms of major macro trends, and I think the biggest thing that's happening in the market right now is the cloud revolution, and I think we're well past halfway through, or something like that. And this transition from the classic client-server-type model, with enterprises running their own data centers, to more of a cloud model, where something is run by hyperscale operators, or maybe run by an enterprise themselves, there's a variety of different models, but the provisioning models have changed. It's much more of a turnkey-type service. And when we started out on this journey, I think we built data centers the same way that we built them before, although the way to deliver IT had really changed, it's moved to more of a service model. And we're really now starting to see the hardware diverge, the actual silicon that we need to build to address these use cases diverge. And so I think one of the things that is probably most interesting for me is really to think through how Intel in the future builds silicon that's built for clouds, like on-prem clouds, edge clouds, hyperscale clouds, basically built for these new use cases that have emerged.
>> So just a quick, kind of a quick aside: to me the definition of cloud is changing, it's evolving. It used to be this set of remote services in a hyperscale data center; now that experience is coming on-prem, it's connecting across clouds, it's moving out to the edge, it's supporting all kinds of different workloads. How do you see that sort of evolving cloud? >> Yeah, I think the biggest difference to me is that a cloud starts with this idea that the infrastructure operator and the tenant are separate, and that actually has major architectural implications. This is maybe not a perfect analogy, but if I build a single-family home, where everything is owned by one party, I want to be able to walk from the kitchen to the living room pretty quickly, if that makes sense. So, my house here actually has an open kitchen, it's the same room, essentially. If you're building a hotel, where your primary goal is to have guests, you pick a completely different architecture. The kitchen for your restaurants, where the cooks are busy preparing the food, and the dining room, where the guests are sitting, are separate. The hotel staff has a dedicated place to work and the guests have dedicated places to mingle, but they don't overlap, typically. I think it's the same thing with architecture in the clouds. So, initially the assumption was it's all one thing, and now suddenly we're starting to see a much cleaner separation of these different areas. I think a second major influence is that the type of workloads we're seeing is evolving incredibly quickly: 10 years ago, things were mostly monolithic; today most new workloads are microservice based, and that has a huge impact on where CPU cycles are spent, where we need to put in accelerators, how we build silicon for that. To give you an idea, there's some really good research out of Google and Facebook where they ran the numbers.
And for example, if you just take a standard system and you run an application in a microservice-based architecture, you can spend anywhere from, I want to say, 25% to, in some cases, over 80% of your CPU cycles just on overhead: just on marshaling and demarshaling the protocols, the encryption and decryption of the packets, and your service mesh that sits in between all of these things. That creates a huge amount of overhead. So if 80% might go into these overhead functions, really our focus needs to be on how do we enable that kind of infrastructure. >> Yeah, so let's talk a little bit more about workloads if we can, the overhead. There's also, as the data center becomes software-defined, thanks to your good work at VMware, a lot of cores that are supporting that software-defined data center. And then- >> That was at VMware, yeah. >> And as well, you mentioned microservices, container-based applications, but AI is coming into play too. And AI is just kind of amorphous, but it's really data-oriented workloads versus kind of general-purpose ERP and finance and HCM. So those workloads are exploding, and then we can maybe talk about the edge. How are you seeing the workload mix shift, and how is Intel playing there? >> I think the trends you're talking about are definitely right. We're getting more and more data-centric; shifting the data around becomes a larger and larger part of the overall workload in the data center. And AI is getting a ton of attention. Look, if I talk to most operators, AI is still an emerging category. We're seeing, I'd say, five, maybe 10% of workloads being AI. It's growing, they're very high-value workloads, and they're very challenging workloads, but it's still a smaller part of the overall mix.
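The overhead range quoted above lends itself to quick arithmetic. A minimal sketch of what that "infrastructure tax" leaves for actual application work; the 25%-80% overhead figures come from the conversation, while the cluster size is an arbitrary example:

```python
# Back-of-envelope: how much useful work is left once microservice
# overhead (marshaling/demarshaling, service mesh, encryption) eats
# a given fraction of CPU cycles. Overhead fractions are the range
# quoted in the conversation; the 1000-core cluster is illustrative.

def useful_capacity(total_cores: int, overhead_fraction: float) -> float:
    """Cores left for application logic after infrastructure overhead."""
    if not 0.0 <= overhead_fraction < 1.0:
        raise ValueError("overhead_fraction must be in [0, 1)")
    return total_cores * (1.0 - overhead_fraction)

cluster_cores = 1000
for tax in (0.25, 0.50, 0.80):
    print(f"{tax:.0%} overhead -> "
          f"{useful_capacity(cluster_cores, tax):.0f} cores of real work")
```

At the top of the quoted range, only a fifth of the cluster does application work, which is why offloading these functions to accelerators is such a large lever.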
Now edge is big, and edge is two things: it's big and it's complicated, because the way I think about edge is that it's not just one homogeneous market, it's really a collection of separate sub-markets. It's very heterogeneous, it runs on a variety of different hardware. Edge can be everything from a little fanless server that's strapped to a telephone pole with an antenna on top of it, to a microcell, or it can be something that's running inside a car; a modern car has a small little data center inside. It can be something that runs on an industrial factory floor, or with the network operators. There's a pretty broad range of verticals that all look slightly different in their requirements. And I think it's really interesting, it's one of those areas that really creates opportunities for vendors like HPE to really shine and address this heterogeneity with a broad range of solutions. I'm very excited to work together with them in that space. >> Yeah, so I'm glad you brought HPE into the discussion, 'cause we're here at HPE Discover, I want to connect that. So when I think about HPE strategy, I see a couple of opportunities for them. Obviously Intel is going to play in every part of the edge, the data center, the near edge and the far edge, and I gather HPE does as well, with Aruba. Aruba is going to go to the far edge. I'm not sure at this point, anyway it's not yet clear to me, how far HPE's traditional server business goes. To the inside of automobiles? We'll see, but it certainly will be at, let's call it, the near edge as a consolidation point- >> Yeah. >> Et cetera. And look, the edge can be a racetrack, it could be a retail store, it could be defined in so many ways. Where does it make sense to process the data? So my question is, what's the role of the data center in this world of edge? How do you see it?
>> Yeah, look, I think in a sense what the cloud revolution is doing is that it leads to a polarization of the classic data center into edge and cloud, if that makes sense. It's splitting. Before, this was all mingled a little bit together: if my data center is my basement anyway, what's the edge, what's the data center? It's the same thing. The moment I'm moving some workloads to the clouds, I don't even know where they're running anymore, while some other workloads that have to have a certain sense of locality, I need to keep close. And there are some workloads you just can't move into the cloud. If I'm generating lots of video data that I have to process, it's financially completely unattractive to ship all of that to a central location; I want to do this locally. And will I ever connect my smoke detector with my sprinkler system via the cloud? No I won't, because if things go bad, that may not work anymore. So I need something that does this locally. So I think there are many reasons why you want to keep something on premises. And I think it's a growing market, it's very exciting. We're doing some very good stuff with friends like HPE. They have the ProLiant DL110 Gen10 Plus server with our latest 3rd Generation Xeons on it, for Open RAN, which is the radio access network in the telco space. HPE Edgeline servers also carry 3rd Generation Xeons. There are some really nice products there that I think can really help enterprises, carriers and a number of different organizations address these edge use cases. >> Can you explain, you mentioned Open RAN, vRAN, should we essentially think of that as kind of the software-defined telco? >> Yeah, exactly. It's software-defined cellular. I actually learned a lot about that over the recent months.
When I was taking these classes at Stanford, these things were still done in analog, meaning a radio signal would be processed in an analog way. Today, typically, the radio signal is immediately digitized, and all the processing of the radio signal happens digitally. And it happens on servers, some of them HPE servers. And it's a really interesting use case where we're basically now able to do something in a much, much more efficient way by moving it to a digital, more modern platform. And it turns out you can actually virtualize these servers and run a number of different cells inside the same server. And it's really complicated, because you have to have fantastic real-time guarantees with a sophisticated software stack. But it's a really fascinating use case. >> A lot of times we have these debates, and it's maybe somewhat academic, but I'd love to get your thoughts on it. And the debate is about how much of the data that is processed and inferred at the edge is actually going to come back to the cloud. Most of the data is going to stay at the edge, a lot of it's not even going to be persisted; that's the negative for the data center. But the counter to that is there's going to be so much data that even a small percentage of everything we create is going to mean so much more data back in the cloud, back in the data center. What's your take on that? >> Look, I think there are different applications that are easier to do in certain places. Going to a large cloud has a couple of advantages. You have a very complete software ecosystem around you, lots of different services. First, if you need very specialized hardware, say I wanted to run a big machine learning task where I needed a 1000 machines, and this runs for a couple of days, and then I don't need to do that for another month or two, for that the cloud is really great.
There's on-demand infrastructure, having all this capability up there. At the same time, it costs money to send the data up there. If I just look at the hardware cost, it's much, much cheaper to build it myself, in my own data center or at the edge. So I think we'll see customers picking and choosing what they want to do where, and there's a role for both, absolutely. And so I think there are certain categories. At the end of the day, why do I absolutely need to have something at the edge? There are a couple of good use cases. Actually, let me rephrase a little bit: I think there are three primary reasons. One is simply bandwidth, where I'm saying, my video data, like if I have a 100 4K video cameras with 60 frames per second feeds, there's no way I'm going to move that into the cloud. It's just cost-prohibitive- >> Right. >> I'd have a hard time even getting (indistinct). Then there's latency: if I want to reliably react in a very short period of time, I can't do that in the cloud, I need to do this locally with me. I can't even do this in my data center; this has to be very closely coupled. And then there's this idea of fate sharing. If I want to make sure that when things go wrong the system is still intact, anything that's sort of an emergency backup, an emergency-type procedure, if things go wrong, I can't rely on the big good internet connection. I need to handle things locally; that's the smoke detector and the sprinkler system. So for all of these, there are good reasons why we need to move things close to the edge. I think there'll be a creative tension between the two, but both are huge markets. And I think there are great opportunities for HPE ahead to work on all these use cases. >> Yeah, for sure, a top brand in that compute business. So before we wrap up today, thinking about your role, part of your role is a trend spotter.
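The bandwidth reason above is easy to make concrete. A rough sketch of the camera example: the 100 cameras and 60 fps come from the talk, while the 30 Mbps-per-camera compressed bitrate is an assumption (a typical ballpark for a 4K60 H.265 stream, not a figure from the conversation):

```python
# Rough math behind "100 4K cameras at 60 fps, no way I move that to
# the cloud." Cameras and frame rate are from the talk; the per-camera
# compressed bitrate is an assumed ballpark for a 4K60 stream.

CAMERAS = 100
MBPS_PER_CAMERA = 30  # assumption: compressed 4K60 stream

total_mbps = CAMERAS * MBPS_PER_CAMERA
bytes_per_day = total_mbps * 1e6 / 8 * 86_400  # bits/s -> bytes/day
tb_per_day = bytes_per_day / 1e12

print(f"Aggregate stream: {total_mbps / 1000:.1f} Gbps")
print(f"Data per day:     {tb_per_day:.1f} TB")
```

A sustained multi-gigabit uplink shipping tens of terabytes per day to a central location is exactly the "financially unattractive" case that argues for local processing.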
You're kind of driving innovation, right, surfing the waves as you said, skating to the puck, all the- >> I've got my perfect crystal ball right here, yeah. >> Yeah, all the cliches. (Dave chuckles) Puts a little pressure on you, but, so what are some of the things that you're overseeing, that you're looking towards in terms of innovation projects, particularly obviously in the data center space? What's really exciting you? >> Look, there are a lot of them, and pretty much all the interesting ideas I get from talking to customers. You talk to the sophisticated customers, you try to understand the problems that they're trying to solve and can't solve right now, and that gives you ideas. Just to pick a couple: one area I'm probably thinking about a lot is how can we build, in a sense, better accelerators for the infrastructure functions. So no matter if I run an edge cloud or a big public cloud, I want to find ways to reduce the amount of CPU cycles I spend on microservice marshaling and demarshaling, service mesh, storage acceleration, things like that. Clearly, if this is a large chunk of the overall cycle budget, we need to find ways to shrink that, to make it more efficient. So this basic infrastructure function acceleration probably sounds as unsexy as any topic could sound, but I think it's actually a really, really interesting area and one of the big levers we have right now in the data center. >> Yeah, I would agree Guido, I think that's actually really exciting because you can pick up a lot of the wasted cycles now, and that drops right to the bottom line, but please- >> Yeah, exactly. And it's kind of funny, we're still measuring so much with SPEC and raw CPU performance; it's like, well, we may actually be measuring the wrong thing.
If 80% of the cycles of my app are spent in overhead, then the speed of the CPU doesn't matter as much; it's other functions that (indistinct). >> Right. >> So that's one. >> The second big one is that memory is becoming a bigger and bigger issue, and it's memory cost, 'cause memory prices used to decline at the same rate that our core counts and clock speeds increased, and that's no longer the case. So we've run into some scaling limits, some physical scaling limits, where memory prices are becoming stagnant. And this has become a major pain point for everybody who's building servers. So I think we need to find ways to leverage memory more efficiently, share memory more efficiently. We have some really cool ideas in that space that we're working on. >> Well, yeah. And Pat, sorry to interrupt, but Pat hinted at that in your big announcement. He talked about system on package, and I think that's what you used to talk about, what I call disaggregated memory, and better sharing of that memory resource. And that seems to be a clear benefit and value creation for the industry. >> Exactly. If for our customers this becomes a larger part of the overall costs, we want to help them address that issue. And the third one is, we're seeing more and more data center operators that are effectively power-limited. So we need to reduce the overall power of systems, or maybe to some degree just figure out better ways of cooling these systems. But I think there's a lot of innovation that can be done there, both to make these data centers more economical and also to make them a little more green. Data centers today have gotten big enough that if you look at the total amount of energy that we're spending as mankind, a chunk of that is going just to data centers.
And so if we're spending energy at that scale, I think we have to start thinking about how we can build data centers that are more energy efficient, that do the same thing with less energy, in the future. >> Well, thank you for laying those out. You guys have been long-term partners with HP, and now of course HPE. I'm sure Gelsinger is really happy to have you on board, Guido; I would be. And thanks so much for coming to theCUBE. >> It's great to be here, and great to be at the HPE show. >> And thanks for being with us for HPE Discover 2021, the virtual version. You're watching theCUBE, the leader in digital tech coverage. Be right back. (soft music)
Breaking Analysis with Dave Vellante: Intel, Too Strategic to Fail
>> From theCUBE Studios in Palo Alto and in Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> Intel's big announcement this week underscores the threat that the United States faces from China. The US needs to lead in semiconductor design and manufacturing, and that lead is slipping, because Intel has been fumbling the ball over the past several years. A mere two months into the job, new CEO Pat Gelsinger wasted no time in setting a new course for perhaps the most strategically important American technology company. We believe that Gelsinger has only shown us part of his plan. This is the beginning of a long and highly complex journey. Despite Gelsinger's clear vision, his deep understanding of technology and his execution ethos, in order to regain its number one position, Intel, we believe, will need help from partners, competitors and, very importantly, the US government. Hello everyone, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis we'll peel the onion on Intel's announcement this week and explain why we're perhaps not as sanguine as Wall Street on Intel's prospects. And we'll lay out what we think needs to take place for Intel to once again become top gun, and for us to gain more confidence. By the way, this is the first time we're broadcasting Breaking Analysis live. We're broadcasting on the CUBE handles on Twitch, Periscope and YouTube, and going forward we'll do this regularly as a live program, and we'll bring the community perspective into the conversation through chat. Now, you may recall that in January we kind of dismissed analysis that said Intel didn't have to make any major strategic changes to its business when it brought on Pat Gelsinger. Rather, we said the exact opposite. Our view at the time was that the root of Intel's problems could be traced to the fact that it was no longer the volume leader.
Because mobile volumes dwarf those of x86. As such, we said that Intel couldn't go up the learning curve for next-gen technologies as fast as its competitors, and it needed to shed its dogma of being highly vertically integrated. We said Intel needed to more heavily leverage outsourced foundries. But more specifically, we suggested that in order for Intel to regain its volume lead, it needed to, we said at the time, spin out its manufacturing and create a joint venture with a volume leader, leveraging Intel's US manufacturing presence. This we still believe, with some slight refreshes to our thinking based on what Gelsinger has announced, and we'll talk about that today. Now, specifically, there were three main pieces and a lot of details to Intel's announcement. Gelsinger made it clear that Intel is not giving up its IDM, or integrated device manufacturing, ethos. He called this IDM 2.0, which comprises Intel's internal manufacturing, leveraging external foundries, and creating a new business unit called Intel Foundry Services. Gelsinger said, "We are not giving up on integrated manufacturing." However, we think this is somewhat nuanced. Clearly Intel can't, won't and shouldn't give up on IDM. However, we believe Intel is entering a new era where it's giving designers more choice. This was not explicitly stated, but we feel like Intel's internal manufacturing arm will have increased pressure to serve its designers in a more competitive manner. We've already seen this with Intel finally embracing EUV, or extreme ultraviolet lithography. Gelsinger basically said that Intel didn't lean into EUV early on, and that created more complexity in its 10 nanometer process, which dominoed into seven nanometer and, as you know the rest of the story, Intel's delays. But since mid last year it's embraced the technology. Now, as a point of reference, Samsung started applying EUV for its seven nanometer technology in 2018, and it began shipping in early 2020.
So as you can see, it takes years to get this technology into volume production. The point is that Intel realizes it needs to be more competitive, and we suspect it will give more freedom to designers to leverage outsourced manufacturing. But Gelsinger clearly signaled that IDM is not going away. The really big news, though, is that Intel is setting up a new division, with a separate P&L, that's going to report directly to Pat. Essentially it's hanging out a shingle and saying, we're open for business to make your chips. Intel is building two new fabs in Arizona and investing $20 billion as part of this initiative. Now, Intel has tried this before, earlier last decade. Gelsinger says that this time we're serious and we're going to do it right. We'll come back to that. This organizational move, while not a spin-out or a joint venture, is part of the recipe that we saw as necessary for Intel to be more competitive. Let's talk about why Intel is doing this. Look, lots has changed in the world of semiconductors. When you think about it, back when Pat was at Intel in the '90s, Intel was the volume leader. It crushed the competition with x86, and the competition at the time was coming from RISC chips. Then Apple changed the game with the iPod, iPhone and iPad, and the volume equation flipped to mobile. And that led to big changes in the industry. Specifically, the world started to separate design from manufacturing. We now see firms going from design to tape-out in 12 months versus taking three years. A good example is Tesla and its deal with ARM and Samsung. And what's happened is Intel has gone from number one in foundry, in terms of clock speed, wafer density, volume, lowest cost, highest margin, to falling behind TSMC, Samsung and alternative processor competitors like NVIDIA. Volume is still the maker of kings in this business. That hasn't changed, and it confers advantage in terms of cost, speed and efficiency. But ARM wafer volumes, we estimate, are 10x those of x86.
That's a big change since Pat left Intel more than a decade ago. There's also a major chip shortage today, but this time it feels a little different from the typical semiconductor boom and bust cycles. Semiconductor consumption is entering a new era, with new use cases emerging from automobiles to factories to every imaginable device, piece of equipment and infrastructure; silicon is everywhere. But the biggest threat of all is China. China wants to be self-sufficient in semiconductors by 2025. It's putting approximately $60 billion into new chip fabs, and there's more to come. China wants to be the new economic leader of the world, and semiconductors are critical to that goal. Now, there are those who pooh-pooh the China threat. This recent article from Scott Foster lays out some really good information, but the one thing that caught our attention is a statement that China's semiconductor industry is nowhere near being a major competitor in the global market, let alone an existential threat to the international order and the American way of life. I think Scotty is stuck in the engine room and can't see the forest for the trees. Wake up. Sure, you can say China is way behind. Let's take an example: NAND. Today China is at about 64 3D layers, whereas Micron is at 172. By 2022 China's going to be at 128; Micron is going to be well over 200. So what's the big deal? We say, talk to us in 2025, because we think China will be at parity. And that's just one example. Now, the type of thinking that says don't worry about China in semis reminds me of the epic lecture series that Clay Christensen gave as a visiting professor at Oxford University on the history of, and the economics of, the steel industry. If you haven't watched this series, you should. Basically Christensen took the audience through the dynamics of steel production, and he asked the question, "Who told the steel manufacturers that gross margin was the number one measure of profitability? Was it God?"
he joked. His point was, when new entrants came into the market in the '70s, they were bottom feeders going after the low-margin, low-quality, easiest-to-make rebar sector. And the incumbents pulled back; their mix shifted to higher-margin products, their gross margins went up, and life was good. Until they lost the next layer. And then the next, and then the next, until it was game over. Now, one of the things that got lost in Pat's big announcement on the 23rd of March was that Intel guided the street below consensus on revenue and earnings. But the stock went up the next day. When asked about gross margin in the Q&A segment of the announcement (yes, gross margin is a, if not the, key metric in semis for measuring profitability), Intel CFO George Davis explained that with the uptick in PCs last year there was a product shift to the lower-margin PC sector, and that put pressure on gross margins. It was a product mix thing. And revenue, because PC chips are less expensive than server chips, was affected, as were margins. Now, we shared this chart in our last Intel update showing spending momentum over time for Dell's laptop business from ETR. And you can see in the inset the unit growth and the market data from IDC. Yes, Dell's laptop business is growing; everybody's laptop business is growing. Thank you, COVID. But you see the numbers from IDC, Gartner, et cetera. Now, as we pointed out last time, PC volumes peaked in 2011, and that's when the long arm of Wright's Law began to eat into Intel's dominance. Today ARM wafer production, as we said, is far greater than Intel's, and, well, you know the story. Here's the irony: the very bucket that conferred volume advantages to Intel, PCs, had a slight uptick last year, which was great news for Dell. But according to Intel it pulled down its margins. The point is, Intel is loving the high end of the market because it's higher margin and more profitable.
I wonder what Clay Christensen would say to that. Now, there's more to this story. Intel's CFO blamed supply constraints for Intel's revenue and profit pressures, yet AMD's revenue and profits are booming, and so are TSMC's. Only Intel can't seem to thrive when there's this massive chip shortage. Now let's get back to Pat's announcement. Intel is, for sure, going forward investing $20 billion in two new US-based fabrication facilities. This chart shows Intel's investments in US R&D, US CapEx and the job growth created as a result, as well as R&D and CapEx investments in Ireland and Israel. Now, we added the bar on the right-hand side from a Wall Street Journal article that compares TSMC CapEx, in the dark green, to that of Intel, in the light green. You can see TSMC surpassed the CapEx investment of Intel in 2015, then Intel took the lead back again in 2017, and they were about even in 2018. But last year TSMC took the lead again, and it appears to be widening that lead quite substantially. Leading us to our conclusion that this will not be enough. These moves by Intel will not be enough. They need to do more. And a big part of this announcement was partnerships and packaging. Okay, so here's where it gets interesting. Intel, as you may know, was late to the party with SoC, system on a chip, and it's going to use its packaging prowess to try and leapfrog the competition. SoC bundles things like GPUs, NPUs, DSPs, accelerators and caches on a single chip, to better use the real estate, if you will. Now Intel wants to build system on package, which will disaggregate memory from compute. Remember, today memory is very poorly utilized. What Intel is going to do is create a package with literally thousands of nodes comprising small processors, big processors, alternative processors, ARM processors, custom silicon, all sharing a pool of memory. This is a huge innovation, and we'll come back to this in a moment.
Now, as part of the announcement, Intel trotted out some big-name customers, prospects, and even competitors that it wants to turn into prospects and customers: Amazon, Google, Satya Nadella gave a quick talk from Microsoft, Cisco. All those guys are designing their own chips, as does Ericsson, and look, even Qualcomm is on the list, a competitor. Intel wants to earn the right to make chips for these firms. Now, many on the list, like Microsoft and Google, would be happy to do so because they want more competition. And Qualcomm, well, look, if Intel can do a good job and be a strong second source, why not? Well, one reason is that they compete aggressively with Intel, and they may not like Intel so much, but it's very possible. But the two most important partners on this slide are, one, IBM, and two, the US government. Now, many people are going to gloss over IBM in this announcement, but we think it's one of the most important pieces of the puzzle. Yes, IBM and semiconductors. IBM actually has some of the best semiconductor technology in the world. It's got great architecture and is two to three years ahead of Intel with POWER10. Yes, POWER. IBM is the world's leader in terms of disaggregating compute from memory, with the ability to scale to thousands of nodes. Sound familiar? IBM leads in power density and efficiency, and it can put more stuff closer together. And it's looking now at a 20x increase in AI inference performance. We think Pat has been thinking about this for a while, asking, how can I leapfrog system on chip? And we think he said, I'll use our outstanding process manufacturing and I'll tap IBM as a partner for R&D and architectural chops, to build the next generation of systems that are more flexible and performant than anything that's out there. Now look, this is super high-end stuff. And guess who needs really high-end, massive supercomputing capabilities? The US military.
Pat said straight up, "We've talked to the government and we're honored to be competing for the government/military chips foundry." I mean, look, Intel in my view would have to fall down on its face to not win this business. And by making the commitment to Foundry Services, we think they will get a huge contract from the government, as large perhaps as $10 billion or more, to build a secure government foundry and serve the military for decades to come. Now, Pat was specifically asked in the Q&A section, is this foundry strategy that you're embarking on viable without the help of the US government? Kind of implying that it was a handout or a bailout. And Pat, of course, said all the right things. He said, "This is the right thing for Intel, independent of the government. We haven't received any commitment or subsidies or anything like that from the US government." Okay, cool. But they have had conversations, and I have no doubt, and Pat confirmed this, that those conversations were very, very positive, that Intel should head in this direction. Well, we know what's happening here. The US government wants Intel to win. It needs Intel to win, and its participation greatly increases the probability of success. But unfortunately, we still don't think it's enough for Intel to regain its number one position. Let's look at that in a little more detail. The headwinds for Intel are many. Look, it can't just flick a switch and catch up on manufacturing leadership. It's going to take four years, and lots can change in that time. Intel's market momentum, as we pointed out earlier, is headed in the wrong direction from a financial perspective. Moreover, where is the volume going to come from? It's going to take years for Intel to catch up with ARM's volumes, if it ever can, and it's going to have to fight to win that business from its current competitors. Now, I have no doubt it will fight hard under Pat's excellent leadership. But the foundry business is different.
Consider this: Intel's annual CapEx, divided by its yearly revenue, comes out to about 20% of revenue. TSMC spends 50% of its revenue each year on CapEx. This is a different animal, very service-oriented. So look, we're not pounding the table saying Intel's worst days are over. We don't think they are. Now, there are some positives; I'm showing those on the right-hand side. Pat Gelsinger was born for this job. He proved that the other day, even though we already knew it. I have never seen him more excited and more clear-headed. And we agree that the chip demand dynamic is going to have legs in this decade and beyond, with digital, edge, AI, and new use cases that are going to power that demand. And Intel is too strategic to fail, and the US government has huge incentives to make sure that it succeeds. But it's still not enough, in our opinion, because like the steel manufacturers, Intel's real advantage today is increasingly in the high-end, high-margin business. And without volume, China is going to win this battle. So we continue to believe that a new joint venture is going to emerge. Here's our prediction. We see a triumvirate emerging in a new joint venture that is led by Intel. Intel brings x86 and the volume associated with it. It brings cash, manufacturing prowess, R&D. It brings global resources, so much more than we show in this chart. IBM, as we laid out, brings architecture, R&D, and longstanding relationships. It brings deal flow; it can funnel its business to the joint venture, as of course can parts of Intel. We see IBM getting a nice license deal from Intel and/or the JV. It has to get paid for its contribution, and we think it'll also get a sweet deal on the manufacturing fees from this Intel foundry. But it's still not enough to beat China. Intel needs volume. And that's where Samsung comes in. It has the volume with ARM, it has the experience, and it has a complete offering across products.
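As an aside, the capital-intensity comparison above is simple arithmetic; here is a quick sketch, where the dollar figures are hypothetical round numbers chosen only to reproduce the 20% and 50% ratios, not actual financials.

```python
def capex_intensity(capex_billions, revenue_billions):
    """CapEx as a share of revenue: a rough gauge of how
    capital-intensive (foundry-like) a business model is."""
    return capex_billions / revenue_billions

# Hypothetical round numbers for illustration only:
intel_like = capex_intensity(capex_billions=15, revenue_billions=75)
tsmc_like = capex_intensity(capex_billions=17, revenue_billions=34)
print(f"{intel_like:.0%} vs {tsmc_like:.0%}")  # 20% vs 50%
```

The gap in that ratio is the point of the "different animal" remark: a pure-play foundry plows back a far larger share of every revenue dollar into fabs than an integrated device maker does.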
We also think that South Korea is a more geographically appealing spot on the globe than Taiwan, with its proximity to China. Not to mention that TSMC doesn't need Intel. It's already number one. Intel can get a better deal from number two, Samsung. And together, these three, we think, in this unique structure, could have a chance to become number one by the end of the decade or early in the 2030s. Our take on what's happening is that Intel is going to fight hard to win that government business, put itself in a stronger negotiating position, and then cut a deal with a supplier. We think Samsung makes more sense than anybody else. Now finally, we want to leave you with some comments and some thoughts from the community. First, I want to thank David Floyer. His decade-plus of work and knowledge of this industry, along with his collaboration, made this work possible. His fingerprints are all over this research, in case you didn't notice. And next, I want to share comments from two of my colleagues. The first is Sarbjeet Johal. He sent this to me last night. He said, "We are not in our grandfather's compute era anymore. Compute is getting spread into every aspect of our economy and lives. The use of processors is getting more and more specialized and will intensify with the rise in edge computing, AI inference, and new workloads." Yes, I totally agree with Sarbjeet, and that's the dynamic on which Pat is betting, and betting big. But the bottom line is summed up by my friend and former IDC mentor, Dave Moschella. He says, "This is all about China. History suggests that there are very few second acts, you know, other than Microsoft and Apple. History also will say that the antitrust pressures that enabled AMD to thrive are the very ones that starved Intel's cash. Microsoft made the shift; its PC software cash cows proved impervious to competition.
The irony is the same government that attacked Intel's monopoly now wants to be Intel's protector because of China. Perhaps it's a cautionary tale to those who want to break up big tech." Wow. What more can I add to that? Okay, that's it for now. Remember, I publish each week on wikibon.com and siliconangle.com. These episodes are all available as podcasts; all you've got to do is search for the Breaking Analysis podcast. And you can always connect with me on Twitter @dvellante or email me at david.vellante@siliconangle.com. As always, I appreciate the comments on LinkedIn, and on Clubhouse please follow me so that you're notified when we start a room and start riffing on these topics. And don't forget to check out etr.plus for all the survey data. This is Dave Vellante for theCUBE Insights powered by ETR. Be well, and we'll see you next time. (upbeat music)
Breaking Analysis: Pat Gelsinger Must Channel Andy Grove and Recreate Intel
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Much of the discussion around Intel's current challenges is focused on manufacturing issues and its ongoing market share skirmish with AMD. Of course, that's very understandable. But the core issue Intel faces is that it has lost the volume game forever. And in silicon, volume is king. As such, incoming CEO Pat Gelsinger faces some difficult decisions. I mean, on the one hand, he could take some logical steps to shore up the company's execution, maybe outsource a portion of its manufacturing, make some incremental changes that would unquestionably please Wall Street and probably drive shareholder value when combined with the usual stock buybacks and dividends. On the other hand, Gelsinger could make much more dramatic moves, shedding its vertically integrated heritage and transforming Intel into a leading designer of chips for the emerging multi-trillion-dollar markets that are highly fragmented and generally referred to as the edge. We believe Intel has no choice. It must create a deep partnership, in our view, with a semiconductor manufacturer with aspirations to manufacture on US soil, and focus Intel's resources on design. Hello, everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we'll put forth our prognosis for what Intel's future looks like and lay out what we think the company needs to do, not only to maintain its relevance but to regain the position it once held as perhaps the most revered company in tech. Let's start by looking at some of the fundamental factors that we've been tracking, factors that have shaped and are shaping Intel and our thinking around Intel today. First, it's really important to point out that new CEO Gelsinger is walking into a really difficult situation. Intel's ascendancy and its dominance were created by PC volumes.
And by the development of the ecosystem that the company created around the x86 instruction set. In semiconductors, volume is everything. The player with the highest volumes has the lowest manufacturing costs, and the math around learning curves is very clear and compelling. It's based on Wright's law, named after Theodore P. Wright, an aeronautical engineer who discovered that for every cumulative doubling of units manufactured, costs fall by a constant percentage. Now, in semiconductor wafer manufacturing, that cost decline is roughly 22%. And when you consider the economics of manufacturing a next-generation technology, for example going from 10 nanometers to seven nanometers, this becomes huge. Because the cost of making seven-nanometer tech, for example, is much higher relative to 10 nanometers. But if you can fit more circuits on a chip, your wafer costs can drop by 30% or even more. Now, this learning curve benefit is why volume is so important. If the time it takes to double volume is elongated, then the learning curve benefit gets elongated as well, and you become less competitive from a cost standpoint. And that's exactly what is happening to Intel. You see, x86 PC volumes peaked in 2011, and that marked the beginning of the end of Intel's dominance from a manufacturing and cost standpoint. You know, ironically, HDD hard disk drive volumes peaked around the same time, and you're seeing a similar fundamental shift in that market relative to flash. Now, because Intel has a vertically integrated model, its designers are limited by the constraints in the manufacturing process. What used to be Intel's ace in the hole, its process manufacturing, has become a hindrance, frustrating Intel's chip designers and really ceding advantage to a number of competitors, including AMD, ARM, and Nvidia.
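For readers who want the arithmetic behind Wright's law, it can be sketched in a few lines of code. This is an illustrative sketch (the function name and the $100 starting cost are ours), assuming the roughly 22% per-doubling cost decline cited above for wafer manufacturing.

```python
import math

def wright_unit_cost(first_unit_cost, cumulative_units, learning_rate=0.22):
    """Wright's law: every cumulative doubling of units produced lowers
    unit cost by a constant fraction (the learning rate).
    Equivalently: C(n) = C(1) * n ** log2(1 - learning_rate)."""
    return first_unit_cost * cumulative_units ** math.log2(1 - learning_rate)

# One doubling (2 cumulative units): cost falls to 78% of the first unit.
print(round(wright_unit_cost(100.0, 2), 1))     # 78.0

# Ten doublings (1024x cumulative volume) compound to ~8.3% of the original.
print(round(wright_unit_cost(100.0, 1024), 1))  # 8.3
```

The argument in the transcript falls out of the exponent: a player that doubles cumulative volume quickly rides down this curve far faster than one whose doublings stretch out over years.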
Now, during this time we've seen high-profile innovators adopting alternative processors: companies like Apple, which chose its own design based on ARM for the M1. Tesla is a fascinating case study, where Intel was really not in the running. AWS, probably Intel's largest customer, is developing its own chips. You know, it threw Intel a little bone at the recent re:Invent: it announced its use of Intel's Habana chips in practically the same sentence in which it talked about how it was developing a similar chip that would provide even better price performance. And just last month it was reported that Microsoft, Intel's monopoly partner in the PC era, was developing its own ARM-based chips for the Surface PCs and for its servers. Intel's zenith was marked by those peak PC volumes that we talked about. Now, to stress this point, this chart shows x86 PC volumes over time. The red highlighted area shows the peak years. Now, volumes actually grew in 2020, in part due to COVID, which is not really reflected in this chart, but the volume game was lost for Intel. It has been widely reported that in 2005, Steve Jobs approached Intel, as Apple was replacing IBM microprocessors with Intel processors for the Mac, and asked Intel to develop the chip for the iPhone. Intel passed, and the die was cast. Now, to the earlier point, PC markets are actually quite good if you're Dell. Here's some ETR data that shows Dell's laptop Net Score, a measure of spending momentum, for 2020 and into 2021. Dell's client business has been very good and profitable, and frankly, it's been a pleasant surprise. You know, PCs are doing well, and as you can see in this chart, Dell has momentum. There were approximately 275 million, maybe as high as 300 million, PC units shipped worldwide in 2020, up double digits by some estimates. However, ARM chip units shipped exceeded 20 billion units worldwide last year. And it's not apples to apples; we're comparing x86-based PCs to ARM chips.
So this excludes x86 servers, but the wafer volume for ARM dwarfs that of x86, probably by a factor of 10 times. Back to Wright's law: how long is it going to take Intel to double wafer volumes? It's not going to happen. And trust me, Pat Gelsinger understands this dynamic probably better than anyone in the world, and certainly better than I do. And as you look out to the future, the story for Intel and its vertically integrated approach gets even tougher. This chart shows Wikibon's 2020 forecast for ARM-based compared to x86-based PCs. It also includes some other devices, but as you can see, what happens by the end of the decade is that ARM really starts to eat into x86. As we've seen with the M1 at Apple, ARM is competing in PCs and is in a much better position for the emerging devices that support things like video and virtual reality systems. And we think it will even start to eat into the enterprise. So again, the volume game is over for Intel, period. They're never going to win it back. Well, you might ask, what about revenue? Intel still dominates in the data center, right? Well, yes, and that is much higher revenue per unit, but we still believe that revenue from ARM-based systems is going to surpass that of x86 by the end of the decade. ARM compute revenue is shown in the orange area in this chart, with x86 in the blue. This means to us that Intel's last moat is going to be its position in the data center. It has to protect that at all costs. Now, the market knows this. It knows something's wrong with Intel, and you can see that reflected in the valuations of semiconductor companies. This chart compares the trailing-12-month revenue and the market valuations for Intel, Nvidia, AMD, and Qualcomm. And you can see that at a trailing-12-month revenue multiple of about 3x, compared to about 22x for Nvidia and about 10x for AMD and Qualcomm, Intel is lagging behind in the street's view. And Intel, as you can see here, is now considered a cheap stock by many.
Here's a graph that shows the performance over the past 12 months compared to the NASDAQ, and you can see that major divergence. The NASDAQ has been powered in part by COVID, all the new tech, and the work from home. The stock reacted very well to the appointment of Gelsinger. That's no surprise. The questions people are asking are: what's next for Intel? How will Pat turn the company's fortunes around? How long is it going to take? What moves can he and should he make? How will they be received by the market and, very importantly, internally, within Intel's culture? These are big, chewy questions, and people are split on what should be done. I've heard everything from "Pat should just clean up the execution issues and not make any major strategic moves; this is very workable," all the way to "Intel should do a hybrid outsourced model," to "Intel should aggressively move out of manufacturing." Let me read some things from Barron's and some other media: "Intel has fallen behind rivals and the rest of tech." "Intel is replacing Bob Swan. Investors are cheering the move." "Intel would likely turn to Taiwan Semiconductor for chips. Here's who benefits most." So let's take a look at some of the opinions inside these articles. The first one I'm going to pull out: Intel has indicated a willingness to try new things, and investors expect the company to announce a hybrid manufacturing approach in January. Now, quoting CEO Swan: "What has changed is that we have much more flexibility in our designs. And with that type of design we have the ability to move things in and move things out. And that gives us a little more flexibility about what we will make and what we might take from the outside." So let's unpack that a little bit. Intel, we know, has a highly vertically integrated workflow from design to manufacturing production.
But to me, the designers are the artists, and the flexibility, you would think, would come from outsourcing manufacturing to give designers more freedom to take advantage of, say, seven-nanometer or five-nanometer process technologies versus having to wait for Intel to catch up. It used to be that Intel's process was the industry's best, and it could supercharge a design or even mask certain design challenges so that Intel could maintain its edge, but that's no longer the case. Here's a sentiment from an analyst, Daniel Donnelly at Citi. Donnelly is confident that Intel's decision to outsource more of its production won't result in the company divesting its entire manufacturing segment, and he cited three reasons. One, it would take roughly three years to bring a chip to market. Two, Intel would have to share IP. And three, it would hurt Intel's profit margins: he said it would negatively impact gross margins by 10 points and would cause a 25% decline in EPS. Now, I don't know about this. To that I would say: one, Intel needs to reduce its current cycle time from design to production, from the three to four years where it is today to at least two years, maybe even less. Second, what good is intellectual property if it's not helping you win in the market? And three, I think profitability is nuanced. So here's another take, from a UBS analyst named Timothy Arcuri. He says, quote, "We see no option but for Intel to aggressively pursue an outsourcing strategy." He wrote that Intel could be 80% outsourced by 2026, and that just by going to 50% outsourcing, the company would save $4 billion annually in CapEx, 25% of which would drop to free cash flow. So look, maybe Gelsinger has to sacrifice some gross margin and EPS for the time being.
Reduce the cost of goods sold by outsourcing manufacturing, lower CapEx, and fund innovation in design with free cash flow. Here's our take: Pat Gelsinger needs to look in the mirror and ask, what would Andy Grove do? You know, Grove's quote that only the paranoid survive is famous. Less well known are the words that preceded it: success breeds complacency, and complacency breeds failure. Intel, in our view, is headed on a path to a long, drawn-out failure if it doesn't act aggressively. It simply can't compete on cost as an integrated manufacturer because it doesn't have the volume. So what will Pat Gelsinger do? You know, we've probably done 30 Cube interviews with Pat, and I just don't think he's taking the job to make some incremental changes at Intel to get the stock price back up. Why would that excite Pat Gelsinger? Trends, markets, people, society: he's a dot connector, and he loves Intel deeply. And he's a legend at the company. Here's what we strongly believe: we think Intel has to do a deal with TSMC, or maybe Samsung, perhaps some kind of joint venture or other innovative structure that both protects its IP and secures its future. You know, both of these manufacturers would love to have a stronger US presence in markets where Intel has many manufacturing facilities. They may even be willing to take a loss to get this started and deeply partner with Intel for some period of time. This would allow Intel to better compete on a cost basis with AMD. It would protect its core data center revenue and allow it to fight the fight in PCs with better cost structures, maybe even gain some share that could count for, you know, another $10 billion to the top line. Intel should focus on reducing its cycle times and unleashing its designers to create new solutions. Let a manufacturing partner who has the learning curve advantages enable Intel designers to innovate and extend ecosystems into new markets.
Autonomous vehicles, factory-floor use cases, military security, distributed cloud, the coming telco explosion with 5G, AI inferencing at the edge. Bite the bullet, give up on yesterday's playbook, and reinvent Intel for the next 50 years. That's what we'd like to see, and that's what we think Gelsinger will conclude when he channels his mentor. What do you think? Please comment on my LinkedIn posts. You can DM me @dvellante or email me at david.vellante@siliconangle.com. I publish weekly on wikibon.com and siliconangle.com. These episodes, remember, are also available as podcasts for your listening pleasure; just search for the Breaking Analysis podcast. Many thanks to my friend and colleague David Floyer, who contributed to this episode, has done great work over the better part of the last decade, and has really thought through some of the cost factors that we talked about today. Also, don't forget to check out etr.plus for all the survey action. Thanks for watching this episode of Cube Insights powered by ETR. Be well, and we'll see you next time. (upbeat music)
Rebecca Weekly, Intel Corporation | AWS re:Invent 2020
>>from around the globe. It's the Cube with digital coverage of AWS reinvent 2020 sponsored by Intel, AWS and our community partners. Welcome back to the Cubes Coverage of 80 Bus Reinvent 2020. This is the Cube virtual. I'm your host, John Ferrier normally were there in person, a lot of great face to face, but not this year with the pandemic. We're doing a lot of remote, and he's got a great great content guest here. Rebecca Weekly, who's the senior director and senior principal engineer at for Intel's hyper scale strategy and execution. Rebecca. Thanks for coming on. A lot of great news going on around Intel on AWS. Thanks for coming on. >>Thanks for having me done. >>So Tell us first, what's your role in Intel? Because obviously compute being reimagined. It's going to the next level, and we're seeing the sea change that with Cove in 19, it's putting a lot of pressure on faster, smaller, cheaper. This is the cadence of Moore's law. This is kind of what we need. More horsepower. This is big theme of the event. What's what's your role in intel? >>Oh, well, my team looks after a joint development for product and service offerings with Intel and A W s. So we've been working with AWS for more than 14 years. Um, various projects collaborations that deliver a steady beat of infrastructure service offerings for cloud applications. So Data Analytics, ai ml high performance computing, Internet of things, you name it. We've had a project or partnership, several in those the main faces on thanks to that relationship. You know, today, customers Committee choose from over 220 different instance types on AWS global footprint. So those feature Intel processors S, P. J s ai accelerators and more, and it's been incredibly rewarding an incredibly rewarding partnership. >>You know, we've been covering Intel since silicon angle in the Cube was formed 10 years ago, and this is what we've been to every reinvent since the first one was kind of a smaller one. 
Intel's always had a big presence. You've always been a big partner, and we really appreciate the contribution of the industry. Um, you've been there with with Amazon. From the beginning, you've seen it grow. You've seen Amazon Web services become, ah, big important player in the enterprise. What's different this year from your perspective. >>Well, 2020 has been a challenging here for sure. I was deeply moved by the kinds of partnership that we were able to join forces on within telling a W s, uh, to really help those communities across the globe and to address all the different crisis is because it it hasn't just been one. This has been, ah, year of of multiple. Um, sometimes it feels like rolling crisis is So When the pandemic broke out in India in March of this year, there were schools that were forced to close, obviously to slow the spread of the disease. And with very little warning, a bunch of students had to find themselves in remote school out of school. Uh, so the Department of Education in India engaged career launcher, which is a partner program that we also sponsor and partner with, and it really they had to come up with a distance learning solutions very quickly, uh, that, you know, really would provide Children access to quality education while they were remote. For a long as they needed to be so Korean launcher turned to intel and to a W s. We helped design infrastructure solution to meet this challenge and really, you know, within the first, the first week, more than 100 teachers were instructing classes using that online portal, and today it serves more than 165,000 students, and it's going to accommodate more than a million over the fear. Um, to me, that's just a perfect example of how Cove it comes together with technology, Thio rapidly address a major shift in how we're approaching education in the times of the pandemic. Um, we also, you know, saw kind of a climate change set of challenges with the wildfires that occurred this year in 2020. 
So we worked with a partner, Roman, as well as a partner who is a partner with AWS end until and used the EEC Thio C five instances that have the second Gen Beyond available processors. And we use them to be able to help the Australian researchers who were dealing with that wildfire increase over 60 fold the number of parallel wildfire simulations that they could perform so they could do better forecasting of who needed to leave their homes how they could manage those scenarios. Um, and we also were able toe work with them on a project to actually thwart the extinction of the Tasmanian Devils. Uh, in also in Australia. So again, that was, you know, an HPC application. And basically, by moving that to the AWS cloud and leveraging those e c two instances, we were able to take their analysis time from 10 days to six hours. And that's the kind of thing that makes the cloud amazing, right? We work on technology. We hope that we get thio, empower people through that technology. But when you can deploy that technology a cloud scale and watch the world's solve problems faster, that has made, I would say 2020 unique in the positivity, right? >>Yeah. You don't wanna wish this on anyone, but that's a real upside for societal change. I mean, I love your passion on that. I think this is a super important worth calling out that the cloud and the cloud scale With that kind of compute power and differentiation, you gets faster speed to value not just horsepower, but speed to value. This is really important. And it saved lives that changes lives. You know, this is classic change. The world kind of stuff, and it really is on center stage on full display with Cove. I really appreciate, uh, you making that point? It's awesome. Now with that, I gotta ask you, as the strategist for hyper scale intel, um, this is your wheelhouse. You get the fashion for the cloud. What kind of investments are you making at Intel To make more advancements in the clock? 
You take a minute, Thio, share your vision and what intel is working on? >>Sure. I mean, obviously were known more for our semiconductor set of investments. But there's so much that we actually do kind of across the cloud innovation landscape, both in standards, open standards and bodies to enable people to work together across solutions across the world. But really, I mean, even with what we do with Intel Capital, right, we're investing. We've invested in a bunch of born in the cloud start up, many of whom are on top of AWS infrastructure. Uh, and I have found that to be a great source of insights, partnerships, you know, again how we can move the needle together, Thio go forward. So, in the space of autonomous learning and adopt is one of the start ups we invested in. And they've really worked to use methodologies to improve European Health Co network monitoring. So they were actually getting a ton of false positive running in their previous infrastructure, and they were able to take it down from 50 k False positive the day to 50 using again a I on top of AWS in the public cloud. Um, using obviously and a dog, you know, technology in the space of a I, um we've also seen Capsule eight, which is an amazing company that's enabling enterprisers enterprises to modernize and migrate their workloads without compromising security again, Fully born in the cloud able to run on AWS and help those customers migrate to the public cloud with security, we have found them to be an incredible partner. Um, using simple voice commands on your on your smartphone hypersonic is another one of the companies that we've invested in that lets business decision makers quickly visualized insects insight from their disparate data sources. So really large unstructured data, which is the vast majority of data stored in the world that is exploding. Being able to quickly discern what should we do with this. How should we change something about our company using the power of the public cloud? 
I'm one of the last ones that I absolutely love to cover kind of the wide scope of the waves. That cloud is changing the innovation landscape, Um, Model mine, which is basically a company that allows people thio take decades of insights out of the mainframe data and do something with it. They actually use Amazon's cloud Service, the cloud storage service. So they were able Teoh Teik again. Mainframe data used that and be able to use Amazon's capabilities. Thio actually create, you know, meaningful insights for business users. So all of those again are really exciting. There's a bunch of information on the Intel sponsor channel with demos and videos with those customer stories and many, many, many more. Using Amazon instances built on Intel technology, >>you know that Amazon has always been in about startup born in the cloud. You mentioned that Intel has always been investing with Intel Capital, um, generations of great investments. Great call out there. Can you tell us more about what, uh, Amazon technology about the new offerings and Amazon has that's built on Intel because, as you mentioned at the top of the interview, there's been a long, long standing partnership since inception, and it continues. Can you take a minute to explain some of the offerings built on the Intel technology that Amazon's offering? >>Well, I've always happened to talk about Amazon offerings on Intel products. That's my day job. You know, really, we've spent a lot of time this year listening to our customer feedback and working with Amazon to make sure that we are delivering instances that are optimized for fastest compute, uh, better virtual memory, greater storage access, and that's really being driven by a couple of very specific workloads. So one of the first that we are introducing here it reinvents is the n five the n instant, and that's really ah, high frequency, high speed, low Leighton see network variants of what was, you know, the traditional Amazon E. C two and five. 
Um, it's powered by second-gen Intel Xeon Scalable processors, the Cascade Lake processors, and really these have the highest all-core turbo CPU performance from the Xeon Scalable processors in the cloud, with a frequency up to 4.5 gigahertz. That is really exciting for HPC workloads, uh, for gaming, for financial applications, simulation, modeling applications. These are ones where, you know, automation, um, in the automotive space, in the aerospace industries, energy, telecom, all of them can really benefit from that super low latency, high frequency. So that's really what the M5zn is all about. Um, and the R5b is another one that we've introduced here today, and that can utilize up to 60 gigabits per second of Amazon Elastic Block Store bandwidth, and really, again, that bandwidth and the 260,000 IOPS that it can deliver is great for large relational databases, so the database file systems kind of workload. This is really where we are super excited. And again, this is built on Cascade Lake, the second-gen Xeon. Yeah, and it takes advantage of many different aspects of how we're optimizing in that processor. So we were excited to partner with customers, again, using EBS as well as various other solutions to ensure that data ingestion times for applications are reduced, and they can see the delivery of what you were mentioning before, right, time to results. It's all about time to results. And the last one is D3en. D3en is really the new D3 instance. It's again on the, uh, Cascade Lake. We offer those for high density, with high-density local hard drive storage, so very cost-optimized, but really allowing you to have significantly higher network speed and disk throughput. So, very cost-optimized for storage applications, with seven times more storage capacity and 80% lower cost per terabyte of storage compared to the previous D2 instances.
So we will really find that that would be ideal for workloads in distributed and clustered file systems, big data and analytics, and, of course, where you need a lot of capacity, high-capacity data lakes. You know, normally you want to optimize a data lake for performance, but if you need tons of capacity, you need to walk that line, and I think the D3en really will help you do that. And, of course, I would be absolutely remiss to not mention that last month we announced the Amazon Web Services partnership with us on an Intel Select Solution, which makes AWS the first, you know, cloud service provider to really launch an Intel Select Solution there. Um, and it's in the HPC space. So this is really about high performance computing, where developers can spend weeks or months researching, you know, to manage compute, storage, network, and software configuration options. It's not a field that has gone fully cloud native by default, and those recipes are still coming together. So this is where the AWS ParallelCluster solution comes in. It's an Intel Select Solution for simulation and modeling on top of AWS. We're really excited about how it's going to make it easier for scientists and researchers like the ones I mentioned before, but also IT administrators, to deploy and manage and just automatically scale those high performance computing clusters in the AWS Cloud. >> Wow, that's a lot. A lot of purpose-built, I mean. You guys are really nailing it. I mean, low latency, you got storage, you got density. I mean, these are use cases where there are real workloads that require that kind of specialty, I mean, beyond general purpose. Now you're kind of purpose-built for the use case. This is what cloud does. This is amazing. Um, final comments this year. I want to get your thoughts, because you mentioned cloud service provider. You mentioned the Select program, which is an elite thing, right? Okay, we're anticipating more cloud service providers.
We're expecting more innovation around chips and silicon and software. This is just getting going. It feels to me like the pulse is different this year. It's faster; the cadence has changed. As a strategist, what are your final comments? Where is this all going? Because this is pretty different. It's not what it was pre-COVID, but I feel like this is going to continue transforming and getting faster. What are your thoughts? >> Absolutely. I mean, the cloud has been one of the biggest winners in a time of, you know, incredible crisis for our world. I don't think anybody has come out of this time without understanding remote work, you know, uh, remote retail, and certainly that business transformation is inevitable and required to deliver in a disaster recovery, kind of business continuity environment. So the cloud will absolutely continue on, continue to grow, as we enable more and more people to come to it. Um, I personally couldn't be more excited than to be able to leverage a long-term partnership, the incredible strength of that Intel and AWS partnership, and these partnerships with key customers across the ecosystem. We do so much with ISVs, OSVs, SIs, MSPs, you know, name your favorite flavor of acronym, uh, to help end users experience that digital transformation effectively, whatever it might be. And as we learn, we try and take those learnings into any environment. We don't care where workloads run; we care that they run best on our architecture, and that's really what we're designing to. And when we partner between the software, the algorithms, and the hardware, that's really where we enable the best end-user experience, and the end user's time to insight and time to market. Um, so that's really what I'm most excited about; that's obviously what my team does every day, so that's, of course, what I'm gonna be most excited about. Um, but that's certainly the future that we see, and I think it is a bright and rosy one.
Um, you know, I won't say things I'm not supposed to say, but certainly do be sure to tune in to the Cube interview with Eitan and, you know, also David Dahan, who's the CEO of Habana, and obviously Habana is here at AWS, as they talk about some exciting new projects in the AI space. Because I think that is, when we talk about the software, the algorithms, and the hardware coming together, the specialization of compute, where it needs to go to help us move forward, but also the complexity of managing that heterogeneity at scale, and what that will take, and how much more we need to do as an industry and as partners to make that happen. Um, that is the next five years of managing, you know, how we are exploding in specialized hardware. I'm excited about that. >> Rebecca, thank you for your great insight there, and thanks for mentioning the Cube interviews. And we've got some great news coming; we'll be breaking that as it gets announced. The chips and the Habana Labs work will be great stuff. I would be remiss if I didn't call out the Intel, um, work hard, play hard philosophy. Amazon has a similar approach. You guys do sponsor the Replay party every year, which is not gonna happen this year, so we're gonna miss that. I think they're gonna have some goodies, as Andy Jassy says, planned. But, um, you guys have done a great job with the chips and the performance in the cloud, and I know you guys have a great partner and customer in Amazon. It's a great showcase. Congratulations. >> Thank you so much. I hope you all enjoy all of re:Invent, even as you adapt to new times. >> Rebecca Weekly here, senior director and senior principal engineer, Intel's hyperscale strategy and execution, here in theCUBE breaking down the Intel partnership with AWS. A lot of good stuff happening under the covers in compute. I'm John Furrier, your host of theCUBE. We are theCUBE Virtual. Thanks for watching.
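The instance positioning Rebecca walks through above reduces to a simple mapping. Here is a rough selector sketch; the selection rules, function name, and workload labels are illustrative assumptions rather than an AWS or Intel tool, and only the family characteristics (M5zn's high all-core turbo frequency, R5b's EBS bandwidth and IOPS, D3en's dense low-cost storage) come from the interview:

```python
# Illustrative only: maps workload traits from the interview to the EC2
# families discussed. Family notes reflect the conversation (M5zn: up to
# 4.5 GHz all-core turbo; R5b: up to 60 Gbps of EBS bandwidth and 260K IOPS;
# D3en: dense, low-cost-per-terabyte local storage). The rules are hypothetical.

def pick_instance_family(workload: str) -> str:
    """Suggest an EC2 family for a workload type named in the interview."""
    frequency_bound = {"hpc", "gaming", "financial", "simulation"}
    ebs_bound = {"relational-database", "database-filesystem"}
    storage_bound = {"clustered-filesystem", "big-data", "data-lake"}
    if workload in frequency_bound:
        return "m5zn"  # high all-core turbo frequency, low latency
    if workload in ebs_bound:
        return "r5b"   # EBS-optimized bandwidth and IOPS
    if workload in storage_bound:
        return "d3en"  # dense local storage, low cost per terabyte
    return "m5"        # general-purpose fallback
```

Under these made-up rules, `pick_instance_family("gaming")` returns `"m5zn"`, while a data-lake workload lands on `"d3en"`.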
Trish Damkroger, Intel | AWS re:Invent 2020
>> From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners. Everyone, welcome back to theCUBE's coverage of AWS re:Invent, Amazon Web Services' annual conference. theCUBE is normally there in person; this year we can't be, it's a virtual event. This is theCUBE Virtual. I'm your host for theCUBE, John Furrier. Trish Damkroger, VP and GM of the high performance computing team at Intel, is here in theCUBE. Intel, a big part of theCUBE every year. Trish, thank you for coming on. We're remote, we can't be in person. Um, good to see you. >> Good to see you. >> I'm really impressed. re:Invent has grown from kind of a small show eight years ago to now kind of a bellwether, and every year it's the same story: more scale, more performance, lower prices. This is kind of the Intel cadence that we've seen from Intel over the years. But high performance computing, which has been around for a while, has gotten much more mainstream thinking, because it's applying now to scale. So I want to get your thoughts and just set the table real quick. What does high performance computing mean these days for Intel? And how does that relate to what people are experiencing? >> Yeah, high performance computing, um, yes, it's been traditionally known as something that's, you know, in the labs and the government, you know, not used widely. But high performance computing is truly just changing the world with what you can do. COVID is a great example, where they've taken high performance computing to speed up the discovery of drugs and vaccines for COVID-19. They use it every day, you know, whether it's making Pampers or Clorox bottles, so that when you drop them, they don't break, um, to designing airplanes and designing, um, Caterpillar tractors. So it is pervasive throughout.
And, um, sometimes people don't realize that high performance computing infrastructure is kind of that basis that you use whenever you need to do something with dense compute. >> So what are some examples of workloads, can you just share? I mean, obviously the Xeon processor, we've covered that many times, but from a workload standpoint, what kind of workloads is high performance computing related to, or enabling, or ideal for, that's out there? >> Right. Xeon Scalable processors are the foundation for high performance computing. If you look at what most people run high performance computing on, it's Xeon, and I think that it's so broad. So if you look at seismic processing, or molecular dynamics for the drug discovery type work, or if you think about, um, OpenFOAM for fluid dynamics, or, um, you know, different financial services, you know, high-frequency trading, or low, I can't even think of that word, but anyway, trading, it's very common to use high performance computing. I mean, it's just used pervasively throughout. >> Yeah, and you're seeing the cloudification of that. I want to get your thoughts. The next question is, you know, it's not just Intel hardware. You mentioned Xeon, but HPC and AWS, we're here at re:Invent. Can you share how that plays out? What's your take on that? Because it's not just hardware. Can you take a minute to explain the relationship? >> Right. So we definitely have seen the growth of high performance computing in the cloud over the last couple of years. We've talked about this for, you know, probably a decade, and we've definitely seen that shift. And with AWS, we have this wonderful partnership where Intel is not only bringing the hardware, like you say, the Xeon Scalable processors, but we're also bringing accelerators, and then that whole software ecosystem, where we work closely with our ISV and OSV partners.
And when we bring, um, not only compilers but also analyzers in our full tool suite, people can move between an on-prem situation to a public cloud like AWS, um, seamlessly. >> So talk about the developer impact. As I say, it's that learning show, re:Invent. There's a lot of developers here, obviously mainstream. You're seeing, you know, obviously the born-in-the-cloud, but now you're seeing large-scale enterprises and big businesses. You mentioned financial services, from high-frequency trading to oil and gas. Every vertical has a need for cloud and what used to be traditionally on-premises compute. So you're kind of connecting those dots here with AWS. Um, what is the developer angle here? Because they're in the cloud too; they want to develop. How does the developer, um, engage with you guys on HPC in Amazon? >> Right. Well, there's a couple of ways. I mean, so we do work with some of our partners so that they can help move those workloads to the cloud. So an example is Six Nines, which recently helped a customer successfully port a customized version of the NCAR Model for Prediction Across Scales. They chose the c5n.18xlarge instance type, because this is what really delivered the highest performance and the lowest price-to-compute ratio. Another great example is Peak AI, which is a partner out of the UK that has worked with our customers to implement AI in retail and other segments, running on Intel instances on EC2. So I think these are just so you can have people help you migrate your workloads into the cloud. But then also, one of the great things I would like to talk about is, um, AWS has come out with ParallelCluster, which is an Intel Select Solution, which really helps, um, ease that transition from on-prem to cloud. >> That's awesome. Um, let's get into that ParallelCluster, and you mentioned the Intel Select Solution program. There's been some buzz on that. Can you take a minute to explain what that is?
I mean, HPC has a reputation of being hard, and the whole philosophy behind the Intel Select Solution is to make it easier for our customers to run HPC workloads in the cloud or on-prem. And with the Intel Select Solution, it's also about scaling your job across a large number of nodes. So we've made a significant investment into the full stack, so this is from the silicon level all the way up to the application level, so that we ensure that your application runs best on Intel, and we bring together everything that you need into, basically, a reference design. So it's a recipe, where we jointly created it with our ISV and OSV partners and our open source environment, for all the different relevant workloads. And so Amazon Web Services is the first cloud service provider to actually verify a service as an Intel Select Solution, and this is amazing, because this truly means that somebody can say, it works today on-prem, and I know it will work exactly the same in the AWS Cloud. >> That's huge. And I want to just call that out, because I think it's worth noting. You guys just don't throw this around, like, in the industry, like, doing these kinds of partnerships. Intel's been pretty hardcore on the quality, and so having a cloud service provider kind of go through the thing is really notable. You mentioned ParallelCluster, um, what is it? Can you just tie that together? Because if I get this right, the Intel, uh, Select Solution with the cloud service provider Amazon is a reference design for how to go from on-premise, or edge, or wherever it is, to cloud, in and out of cloud. How does this ParallelCluster project fit into all this? Can you just unpack that a little bit? >> Right. So ParallelCluster, basically, um, it's a ParallelCluster Intel Select Solution, and there are three instances that we're featuring with the Intel Xeon Scalable processor, which gives you a variety of compute characteristics.
So the Select Solution gives you the compute, the storage, the memory, the networking that you need. You know, it sets the specifications for what you need to run in an optimal way. And then AWS has allowed us to take some of the C5n, or some of the instances, and we are on three different instances. We're on the C5n instance, and that's for your compute-optimized workloads. We're on the M5n instance, and that's really for a balance, with a higher memory-per-core ratio. And then you have your R5n instance at AWS, and that's really targeted for those memory-intensive workloads. And so all of these are accessible within the single AWS ParallelCluster environment, and it's at scale. And you really choose what you want to take and do. And then, on top of that, they're enabled with the next-generation AWS Nitro System, which delivers 100 gigabits of networking for the HPC workloads. So that is huge for HPC. >> I was gonna get to the Nitro; that's one of my top questions. Thanks for clarifying that. You know, I'm old enough to remember the old days when you had the Intel Inside the PC, a shell of a box, and it created all that great productivity value. But with cloud, it's almost like we're seeing that again. You just hit on some key points. This is HPC: it's like memory, storage, you've got networking, and compute, all these things kind of working together, if I get that right. You just kind of laid that out there. And it doesn't have to be Intel everything; you're Intel Inside the cloud now and on-premise, which is, well, there is no on-premise anymore, it's cloud operations, if I get this right, because you're essentially bridging the two worlds with the chips. You bring on-premise, which could be edge, a big edge or small edge, and cloud. Is that right? I mean, is this kind of where this is going? >> Yeah. So, I mean, when I think about it...
The usage for HPC in the cloud is burst capacity. Most HPC centers are, well, not 100%, because they have to do maintenance, but 95% utilized, so there is no more space. And so when you have a need to do a larger run, or you need to, you know, have something done quickly, you burst to the cloud. That's just what you need to do now. I mean, or you want to try out different instances. So you want to see whether maybe that memory-intensive workload would work better, maybe, in kind of that R5n instance, and that gives you that opportunity to see, and also, you know, maybe what you want to purchase. So truly, we're entering this hybrid cloud model, where, um, the demand for high performance computing is so large that you've got to be able to burst to the cloud. >> I think you guys got it right. I'm really impressed, and I like what I'm seeing. And I think you talked about, earlier at the top of the interview, government labs and whatnot. I think those are the early adopters, because when they need more power, and they usually don't have a lot of big budgets, they'll max out and then go to the cloud. Whether it's, you know, computing, you know, what's going on in the ocean, and climate change, these are all things that they work on that need massive compute and power. That's a precursor to enterprise. So if you connect the dots, you're kind of right in line with what we're seeing. So, super impressive. Thanks for sharing that. Final thoughts on this: it's about performance. So okay, the next question is, okay, all great. You're looking good off the tee, or looking down the road, clear path to success in the future. How does the performance compare in the cloud versus on-premise? >> It compares well, and that's one of the great things about the Intel Select Solution, because we have optimized that reference design so that you can get the performance you're used to on-prem in the AWS Cloud.
And so that is what's so cool, honestly, about this opportunity. So we can help, you know, that small and medium business that maybe doesn't have the resources, or even those industries that do. They know they're already using that modeling and simulation reference design, and they can now just burst to the cloud and it will work, with the performance they expect. >> Trish, great to have you on, great insight. Thanks for sharing all the great goodness from Intel, and the AWS final thoughts on the partnership. We're not in person, and by the way, Intel usually has a huge presence. The booth is usually right behind the Cube stage, which you guys sponsor. Thank you very much, great to always partner with you. Great party, too; you sponsor the Replay, which is always great, and it's always a great party and a great partnership. Good content. We're not there this year. What's the relationship like? Can you take a minute to explain your final thoughts on Amazon Web Services and Intel? >> Yeah, you know, we have, ah, a long-term partnership, a 14-plus-year partnership, with AWS. And I mean, I think, with, um, taking the Intel Select Solution forward, it's going to be an even richer partnership we're gonna have in the future. So I'm thrilled that I have the opportunity to talk about it, and really talk about how excited I am to be able to bring more HPC into the world. It's all about the democratization of HPC, because HPC changes the world. >> Well, Trish, congratulations on the Select program with AWS, and the first cloud service provider really is a nice directional indicator of what's gonna happen. The future's laid out, and of course, Intel's in front. Thank you for coming on. I appreciate it. >> Oh, thank you, John. >> Okay, that's theCUBE's virtual coverage. theCUBE Virtual, we're not in person; AWS re:Invent 2020 is virtual, three weeks. Over the next three weeks, we're gonna bring you coverage. Of course, theCUBE live in studio in Palo Alto will be covering a lot of the news.
Stay with us for more coverage after this short break. Thank you.
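Trish's burst-capacity point above, on-prem HPC centers running around 95% utilized with overflow going to the cloud, can be sketched as a toy capacity-planning helper. Everything here is a hypothetical illustration (the function name and the node-hour split); only the roughly 95% utilization figure comes from the interview:

```python
# Toy sketch of the "burst to the cloud" decision from the interview: an
# on-prem HPC center near capacity sends overflow work to a cloud cluster.
# The function name and the node-hour arithmetic are illustrative assumptions.

def plan_node_hours(on_prem_capacity, utilization, requested):
    """Split requested node-hours between on-prem headroom and cloud burst."""
    headroom = int(on_prem_capacity * (1.0 - utilization))  # free node-hours
    on_prem = min(requested, headroom)
    return {"on_prem": on_prem, "cloud_burst": requested - on_prem}
```

With a 1,000-node-hour center at the roughly 95% utilization Trish cites, a 200-node-hour job would run 50 node-hours on-prem and burst 150 to the cloud under this sketch.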
Bill Pearson, Intel | CUBE Conversation, August 2020
>> Narrator: From theCUBE studios in Palo Alto in Boston, connecting with our leaders all around the world. This is theCUBE conversation. >> Welcome back everybody. Jeff Frick here with theCUBE we are in our Palo Alto studios today. We're still getting through COVID, thankfully media was a necessary industry, so we've been able to come in and keep a small COVID crew, but we can still reach out to the community and through the magic of the internet and camera's on laptops, we can reach out and touch base with our friends. So we're excited to have somebody who's talking about and working on kind of the next big edge, the next big cutting thing going on in technology. And that's the internet of things you've heard about it the industrial Internet of Things. There's a lot of different words for it. But the foundation of it is this company it's Intel. We're happy to have joined us Bill Pearson. He is the Vice President of Internet of Things often said IoT for Intel, Bill, great to see you. >> Same Jeff. Nice to be here. >> Yeah, absolutely. So I just was teasing getting ready for this interview, doing a little homework and I saw you talking about Internet of Things in a 2015 interview, actually referencing a 2014 interview. So you've been at this for a while. So before we jump into where we are today, I wonder if you can share, you know, kind of a little bit of a perspective of what's happened over the last five or six years. >> I mean, I think data has really grown at a tremendous pace, which has changed the perception of what IoT is going to do for us. And the other thing that's been really interesting is the rise of AI. And of course we need it to be able to make sense of all that data. So, you know, one thing that's different is today where we're really focused on how do we take that data that is being produced at this rapid rate and really make sense of it so that people can get better business outcomes from that. >> Right, right. 
But the thing that's so interesting on the things part of the Internet of Things and even though people are things too, is that the scale and the pace of data that's coming off, kind of machine generated activity versus people generated is orders of magnitude higher in terms of the frequency, the variety, and all kind of your classic big data meme. So that's a very different challenge then, you know, kind of the growth of data that we had before and the types of data, 'cause it's really gone kind of exponential across every single vector. >> Absolutely. It has, I mean, we've seen estimates that data is going to increase by about five times as much as it is today, over the next, just a couple years. So it's exponential as you said. >> Right. The other thing that's happened is Cloud. And so, you know, kind of breaking the mold of the old mold roar, all the compute was either in your mini computer or data center or mainframe or on your laptop. Now, you know, with Cloud and instant connectivity, you know, it opens up a lot of different opportunities. So now we're coming to the edge and Internet of Things. So when you look at kind of edge in Internet of Things, kind of now folding into this ecosystem, you know, what are some of the tremendous benefits that we can get by leveraging those things that we couldn't with kind of the old infrastructure and our old way kind of gathering and storing and acting on data? >> Yeah. So one of the things we're doing today with the edge is really bringing the compute much closer to where all the data is being generated. So these sensors and devices are generating tons and tons of data and for a variety of reasons, we can't send it somewhere else to get processed. You know, there may be latency requirements for that control loop that you're running in your factory or there's bandwidth constraints that you have, or there's just security or privacy reasons to keep it onsite. 
And so you've got to process a lot of this data onsite and maybe some estimates or maybe half of the data is going to remain onsite here. And when you look at that, you know, that's where you need compute. And so the edge is all about taking compute, bringing it to where the data is, and then being able to use the intelligence, the AI and analytics to make sense of that data and take actions in real time. >> Right, right. But it's a complicated situation, right? 'Cause depending on where that edge is, what the device is, does it have power? Does it not have power? Does it have good connectivity? Does it not have good connectivity? Does it have even the ability to run those types of algorithms or does it have to send it to some interim step, even if it doesn't have, you know, kind of the ability to send it all the way back to the Cloud or all the way back to the data center for latency. So as you kind of slice and dice all these pieces of the chain, where do you see the great opportunity for Intel, where's a good kind of sweet spot where you can start to bring in some compute horsepower and you can start to bring in some algorithmic processing and actually do things between just the itty-bitty sensor at the itty-bitty end of the chain versus the data center that's way, way upstream and far, far away. >> Yeah. Our business is really high performance compute and it's this idea of taking all of these workloads and bringing them in to this high performance compute to be able to run multiple software defined workloads on single boxes, to be able to then process and analyze and store all that data that's being created at the edge, do it in a high performance way. And whether that's a retail smart shelf, for example, that we can do realtime inventory on that shelf, as things are coming and going, or whether it's a factory and somebody's doing, you know,real time defect detection of something moving across their textile line. 
So all of that comes down to being able to have the compute horsepower to make sense of the data and do something with it. >> Right, right. So in your shelf example, the compute might be done there at the local store or some aggregation point, beyond just that actual, you know, kind of sensor that's underneath that one box of Tide, if you will. >> Absolutely. Yeah, you could have that on-prem, a big box that does multiple shelves, for example. >> Okay, great. So there's a great example, and you guys have the software development kit, you have a lot of resources for developers, and one of the case studies that I just wanted to highlight before we jump into the dev side, I think Audi was the customer. And it really illustrates a point that we talked about a lot in kind of the big data meme, which is, you know, people used to take action on a sample of data after the fact. And I think in this case we're talking about running 1,000 cars a day through this factory. They're doing so many welds, 5 million welds a day, and they would pull one at the end of the day, sample a couple of welds, and ask: did we have a good day or not? Versus what they're doing now with your technology is actually testing each and every weld as it's being welded, based on data that's coming off the welding machine, and they're inspecting every single weld. So I just love that. You've been at this for a long time. When you talk to customers, what is possible from a business point of view when you go from after the fact with a sample of data, to in real time with all the data? How does that completely change your view and ability to react to your business? >> Yeah. I mean, it makes people able to make better decisions in real time.
You know, as you've got cameras on things like textile manufacturers or footwear manufacturers, or even these realtime inventory examples you mentioned, people can make decisions in real time about how to stock that shelf, what to order, what to pull off the line, am I getting a good product or not? And this has really changed, as you said, we don't have to go back and sample anymore. You can tell right now, as that part is passing through your manufacturing line, or as that item is sitting on your shelf, what's happening to it. It's really incredible. >> So let's talk about developers. So you've got a lot of resources available for developers, and everyone knows Intel obviously, historically, in PCs and data centers. And you would do what they call design wins back when I was there, many moons ago, right? You try to get a design win and then, you know, they're going to put your microprocessors and a bunch of other components in a device. When you're trying to work with kind of cutting-edge developers in kind of new fields and new areas, this feels like a much more direct touch to the actual people building the applications than the people that are really just designing the systems of which Intel becomes a core part. I wonder if you could talk about, you know, the role of developers, and really Intel's outreach to developers, and how you're trying to help them, you know, kind of move forward in this new crazy world. >> Yeah, developers are essential to our business. They're essential to IoT. Developers, as you said, create the applications that are going to really make the business possible. And so we know the value of developers and want to make sure that they have the tools and resources that they need to use our products most effectively.
We've done some things around the OpenVINO toolkit, as an example, to really try and simplify, democratize AI applications so that more developers can take advantage of this and, you know, take the ambitions that they have to do something really interesting for their business, and then go put it into action. And, you know, our whole purpose is making sure we can actually accomplish that. >> Right. So let's talk about OpenVINO. It's an interesting topic. So I actually found out what OpenVINO means: Open Visual Inference and Neural network Optimization toolkit. So it's a lot about computer vision. And, you know, computer vision is an interesting early AI application that I think a lot of people are familiar with through Google Photos or other things where, you know, suddenly they're putting together little highlight movies for you, or they're pulling together all the photos of a particular person or a particular place. So computer vision is pretty interesting. Inference is a special subset of AI. So I wonder, you know, you guys are the ones behind OpenVINO. Where do you see the opportunities in visualization? What are some of the instances that you're seeing with the developers out there doing innovative things around computer vision? >> Yeah, there's a whole variety of use cases with computer vision. You know, one that we talked about earlier here was looking at defect detection. There's a company that we work with that has a 360 degree view. They use cameras all around their manufacturing line. And from there, they know what a good part looks like, and using inference and OpenVINO, they can tell when a bad part goes through or there's a defect in their line, and they can go and pull that and make corrections as needed. We've also seen, you know, use cases like smart shopping, where there's point-of-sale fraud detection, we call it: you know, is the item being scanned the same as the item that is actually going through the line?
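As a rough sketch of the point-of-sale check just described (the function name and threshold here are hypothetical illustrations, not Intel's or OpenVINO's actual API): a vision model classifies the item passing the scanner, and the transaction is flagged when that label confidently disagrees with the barcode.

```python
def check_scan(barcode_label, vision_label, confidence, min_conf=0.8):
    # Flag the transaction as possible fraud when the camera's
    # classification is confident and disagrees with the barcode.
    if confidence >= min_conf and vision_label != barcode_label:
        return "flag"
    return "ok"

print(check_scan("potatoes", "vodka", 0.95))     # prints "flag"
print(check_scan("potatoes", "potatoes", 0.97))  # prints "ok"
```

A low-confidence disagreement is deliberately left as "ok" here, since acting on an unsure model would stop too many legitimate checkouts.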
And so we can be much smarter about understanding retail. One example that I saw was a customer who was trying to detect if it was vodka or potatoes that was being scanned in an automated checkout system. And again, using cameras and OpenVINO, they can tell the difference. >> We haven't talked about text yet, we're still sticking with computer vision and natural language processing. I know one of the areas you're interested in, and it's only going to increase in importance, is education. Especially with what's going on, I keep waiting for someone to start rolling out some national, you know, best practice education courses for kindergartners and third graders and sixth graders. And you know, all these poor teachers that are learning to teach on the fly from home. You guys are doing a lot of work in education. I wonder if you can share, I think you're doing some work with Udacity. What are you doing? Where do you see the opportunity to apply some of this AI and IoT in education? >> Yeah, we launched the Nanodegree with Udacity, and it's all about OpenVINO and Edge AI, and the idea is, again, get more developers educated on this technology, take a leader like Udacity, partner with them to make the coursework available, and get more developers understanding, using, and building things with Edge AI. And so we partnered with them as part of their million developer goal. We're trying to get as many developers as possible through that. >> Okay. And I would be remiss if we talked about IoT and I didn't throw 5G into the conversation. So 5G is a really big deal. I know Intel has put a ton of resources behind it and has been talking about it for a long, long time. You know, I think the huge value in 5G is a lot around IoT, as opposed to my handset going faster, which is funny, because they're actually releasing 5G handsets out there.
But when you look at 5G combined with the other capabilities in IoT, again, how do you see 5G being this kind of step function in the ability to do real time analysis and make real time business decisions? >> Well, I think it brings more connectivity certainly, and bandwidth, and reduces latency. But the cool thing about it is when you look at the applications of it, you know, we talked about factories. A lot of those factories may want to have private 5G networks running inside the factory, running all the machines or robots or things in there. And so, you know, it brings capabilities that actually make a difference in the world of IoT and the things that developers are trying to build. >> That's great. So before I let you go, you've been at this for a while. You've been at Intel for a while. You've seen a lot of big sweeping changes kind of come through the industry, you know, as you sit back with a little bit of perspective. And it's funny, even IoT, like you said, you've been talking about it for five years, and 5G, we've been waiting for it, but the waves keep coming, right? That's kind of the fun of being in this business. As you sit there where you are today, you know, kind of looking forward to the next couple of years, maybe four or five years, you know, what has just surprised you beyond compare, and what are you still kind of surprised is still a little bit lagging, where you would have expected to see a little bit more progress at this point? >> You know, to me the incredible thing about the computing industry is just the insatiable demand that the world has for compute. It seems like our customers always come up with more and more uses for this compute power. You know, as we've talked about data and the exponential growth of data, and now we need to process and analyze and store that data.
It's impressive to see developers just constantly thinking about new ways to apply their craft and, you know, new ways to use all that available computing power. And, you know, I'm delighted, 'cause I've been at this for a while, as you said, and I just see this continuing to go as far as the eye can see. >> Yeah, yeah. I think you're right. There's no shortage of opportunity. I mean, the data explosion is kind of funny. The data has always been there, we just weren't keeping track of it before. And the other thing, as I look at your Internet of Things kind of toolkit, is you guys have such a broad portfolio now, where a lot of times people think of Intel pretty much as a CPU company, but as you mentioned, you've got FPGAs and VPUs and vision solutions, so Intel has really done a good job in terms of broadening the portfolio to go after, you know, kind of this disparate, or kind of sharded, if you will, set of all these different types of compute applications that have very different demands in terms of power and bandwidth and crunching utilization (indistinct). >> Yeah. Absolutely, the various compute architectures are really there to help our customers with their needs, whether it's high performance or low power, or a mixture of both, being able to use all of those heterogeneous architectures with a tool like OpenVINO, so you can program once, write once, and then run your application across any of those architectures. It helps simplify the life of our developers, but also gives them the compute performance the way that they need it. >> Alright Bill, well keep at it. Thank you for all your hard work. And hopefully it won't be five years before we're checking in to see how far this IoT thing has come. >> Hopefully not, thanks Jeff. >> Alright Bill. Thanks a lot. He's Bill, I'm Jeff. You're watching theCUBE. Thanks for watching, we'll see you next time. (upbeat music)
Nayaki Nayyar, Ivanti and Stephanie Hallford, Intel | CUBE Conversation, July 2020
(calm music) >> Announcer: From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation. >> Welcome to this CUBE Conversation. I'm Lisa Martin, and today, I'm talking to Ivanti again, and Intel, some breaking news. So please welcome two guests, the EVP and Chief Product Officer of Ivanti, Nayaki Nayyar. She's back, and we've also got the VP and GM of Business Client Platforms for Intel, Stephanie Hallford. Nayaki and Stephanie, it's great to have you on the program. >> It's great to be back here with you Lisa, and Stephanie, glad to have you here with us, thank you. >> Thank you, we're excited. >> Yeah, you guys are going to break some news for us, so let's go ahead and start. Nayaki, hot off the presses is Ivanti's announcement of its new hyper-automation platform, Ivanti Neurons, helping organizations now in this new next normal of so much remote work. Now, just on the heels of that, you're announcing a new strategic partnership with Intel. Tell me about that. >> So Lisa, like we announced, our Ivanti Neurons platform is helping our customers and all the IT organizations around the world to deal with this explosive growth of remote workers, the devices that those workers use, the data that it's getting from those devices, and also the security challenges, and Neurons really helps address what we call: discover all the devices, manage those devices, self-heal those devices, and self-secure those devices, and with this partnership with Intel, we are extremely excited about the potential for our customers and the benefits that customers can get. Intel is offering what they call Device as a Service, which includes both the hardware and software, and with this partnership, we are announcing the integration between Intel's vPro platform and Ivanti's Neurons platform, which is what we are so excited about.
Our joint customers, joint enterprises that are using both the products, can now benefit from this out of the box integration to take advantage of this Device as a Service combined offer. >> So Stephanie, talk to us from Intel's perspective. This is an integration of Intel's Endpoint Management Assistant with Ivanti Neurons. How does this drive up the value for the EMA solution for your customers who are already using it? >> Right, well, so vPro, just to step everyone back, vPro is the number one enterprise platform, trusted now for over 14 years. We are in a vast majority of enterprises around the world, and that's because vPro is essentially our best performing CPUs, our highest level of security, our highest level of manageability, which is our EMA or "Emma" manageability solution, which Ivanti is integrating, and also stability. So that is the promise to IT managers of a stable platform, the Intel Stable Image Platform, and what that allows is IT managers to know that we will keep as much stability as possible and push through any fixes as quickly as possible on those vPro devices, because we understand that IT networks usually qual, you know, not all at one time, but sequentially. So vPro is our number one enterprise, built-for-business, validated, enabled platform, and we're super excited today because we're taking that remote manageability solution that comes with vPro, and we are marrying it with Ivanti's top class endpoint management solution, and Ivanti is a world leader in managing and protecting endpoints, and today more than ever, because IT is remote. At Intel, for instance, our IT over one weekend had to figure out how to support a hundred thousand remote workers, so the ability for Ivanti to now have our remote manageability in band, out of band, on-prem, in the cloud, it really rounds out Ivanti's already fantastic world-class solution, so it's a fantastic start to what I foresee is going to be a great partnership. >> And probably a big target install base.
Now, can you talk to me a little bit about COVID as a catalyst for this partnership? So many companies had to do the same thing; Stephanie talked about a great example of Intel pivoting over a weekend for a hundred thousand people. We're hearing so many different numbers of an explosion of devices, but also experts and even C-suite from tech companies projecting maybe only 30 to 40% of the workforce will go back, so talk to me about COVID as really driving the necessity for organizations to benefit from this type of technology. >> Yeah, so Lisa, like Stephanie said, right, as Intel had to take a hundred thousand employees remote over a weekend, that is true for pretty much every company, every organization, every enterprise independent of industry vertical, that they had to take all their workforce and move them to be primarily remote workers, and the stats show that what used to be, I would say, three to four percent remote working before COVID is, post-COVID or during COVID as we say, going to be around 30, 40, 50%, and this is a conversation and a challenge for every IT organization, every C-level exec, and, in most cases, I'm also seeing this become a board conversation, that they're trying to figure out not just how to support remote workers for a short time, but for a longer time as this becomes the new normal or the next normal, whatever you call that, Lisa, and really helping employees through this transition and providing what we call a seamless experience as employees are working from home or on the move or location agnostic, being able to provide an experience, a service experience, that understands what employees' preferences are, what their needs are. Providing that consumer-like experience is what this joint offering between Intel and Ivanti really brings together for our joint customers.
>> So you talked about this being elevated to a board-level conversation, you know, and this is something that we're hearing a lot of: suddenly there's so much more visibility and focus on certain parts of businesses, and survival is at stake, so many businesses are at risk. Stephanie, I'd like to get your perspective on this joint solution with Intel and Ivanti: do you see this as an opportunity to give your customers not just a competitive advantage, but, for maybe some of those businesses that might be in jeopardy, a survival strategy? >> Absolutely, I mean, you know, while both Ivanti and Intel have our own IT challenges and we support our workers directly, we are broadly experienced in supporting many, many companies that frankly, perhaps, weren't planning for these types of instances, remote manageability overnight, security and cyber threats getting more and more sophisticated, but, you know, tech companies like Ivanti, like Intel, we have been thinking about this and experiencing and planning for these things and bringing them out in our products for some time, and so I think it is a great opportunity when we come together and we bring that, you know, IP expertise and IT expertise, both IP technical and that IT insight, and we bring it to customers who are of all industries, whether it be healthcare or financial or medium businesses who are increasingly being managed by service providers who can utilize this type of device as a service and endpoint manageability. Most companies, and certainly all IT managers, will tell you they're overwhelmed.
They are traditionally squeezed on budget, and they have the massive requirement to take their companies entirely to the cloud, or maybe a hybrid of cloud and on-prem, and they really would prefer to leave network security and network management to experts, and that's where we can come in with our platform, with our intelligence; we work hard to continue to build that product roadmap to stay ahead of cyber threats. Our vPro platform, for instance, has what we call Intel Hardware Shield, a set of technologies that actually protects against cyber attacks, even below the OS, so if the OS is down or there's a cyber attack around the OS, we actually can lock down the BIOS and the firmware and alert the OS and have that communication, which allows the system to protect those areas that need to be protected, or lock down or encrypt those areas, so this is the type of thing we bring to the party, and then Ivanti has that absolute endpoint management credibility, so there's just, I think, ease. So if IT managers are worried about moving to the cloud and getting workers remote and, you know, managing cyber threats, they really would prefer to leave this management and security of their network to experts like Ivanti, and so we're thrilled to kind of combine that expertise and give IT managers a little bit of peace of mind. >> I think it's even more than giving IT managers peace of mind. So talk to me, Nayaki, about how these technologies work together. So for example, when we talked about Neurons and the hyper-automation platform that you just announced, you were talking about the discovery, the self-healing, self-securing of all these devices within an organization that they may not even know they have: edge devices, on-prem, cloud. Talk to me about how these two technologies work together. Is it discovering all these devices first, then self-securing, self-healing? How does EMA then come into play? >> So let me give an analogy in our consumer world, Lisa.
We are all used to, or getting used to, cars that automatically heal themselves. I have a car sitting in my garage that I haven't taken to a workshop for the last four years since I bought it, so it's almost a similar experience that this combined offering brings to our customers for all these endpoints. Like Stephanie said, we are, I would say, one of the leading providers in endpoint management; today, Ivanti supports over 40 million endpoints for our customers, and combining that with a strong vPro platform from Intel, that combined offering is what we call Device as a Service, so that the IT departments or the enterprises don't have to really worry about how we are discovering all of those devices, managing those devices. Self-healing, like if there's any performance issues, configuration drift issues, if there are any security vulnerabilities, anomalies on those devices, it automatically heals them. I mean, that is the beauty of it, where IT doesn't have to worry about trying to do it reactively. These Neurons detect and self-heal those devices automatically in the background, almost augmenting IT with what I call these automation bots that are constantly running in the background on these devices, self-healing and self-securing those devices. So that's a benefit every organization, every company, every enterprise, every IT department gets from this joint offering, and if I were on their side, on the other side, I could really sleep at night knowing those devices are now not just being managed, but are secure, because now we are able to auto-heal or auto-secure those devices in the background continuously.
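As a rough illustration of those background "automation bots" (the checks and remediations below are hypothetical stand-ins, not Ivanti Neurons' real interfaces): a loop evaluates each device against a set of health checks and returns the matching remediations to apply automatically.

```python
# Hypothetical self-healing sketch: each named check pairs a health
# test with the remediation to run when that test fails.
CHECKS = {
    "disk_full": (lambda d: d["disk_free_pct"] < 10, "purge_temp_files"),
    "stale_av": (lambda d: d["av_age_days"] > 7, "update_av_signatures"),
}

def self_heal(device):
    # Return the remediations this device currently needs,
    # in the order the checks are defined.
    return [fix for name, (test, fix) in CHECKS.items() if test(device)]

device = {"disk_free_pct": 4, "av_age_days": 12}
print(self_heal(device))  # both checks fire for this device
```

A healthy device simply yields an empty list, so running the loop continuously is cheap when nothing is wrong.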
>> Let's talk about speed, 'cause that's one of the things, speed and scale, we talk about with every different technology, but right now there's so much uncertainty across the globe. So for joint customers, Stephanie talked about the, you know, the large install base of customers on the vPro platform, how quickly would they be able to leverage this joint solution to really get those endpoints under management and start dialing down some of the risks like device sprawl and security threats? >> So the joint offering is available today, and the integration between both platforms is being released with this announcement, so companies that have both of our platforms and solutions can start implementing it and really getting the benefit out of it. They don't have to wait for another three months or six months. Right after this release, they should be able to integrate the two platforms, discover everything that they have across their entire network, manage those, secure those devices, and use these Neurons to automatically heal and service those endpoints. >> So this is something that could get up and running pretty quickly? >> It's an out-of-the-box connection and integration that Stephanie's team and my team have been working on very closely for months now, and, yeah, this is an exciting announcement not just from the product perspective, but also the benefit it gives our customers: the speed, the accuracy, and the service experience that they can provide to their end users, employees, customers, and consumers. I think that's super beneficial for everyone. >> Absolutely, and then that 360 degree view. Stephanie, we'll wrap it up with you. Talk to us about how this new strategic partnership is a facilitator or an accelerant of Intel's device as a service vision. >> Well, you know, first off, I wanted to commend Nayaki's team because our engineers were so impressed.
They, you know, felt like they were working with the PhD-advanced version of so many other engineering partners they'd ever come across, so I think we have a very strong engineering culture between our two companies, and the speed at which we were able to integrate our solutions, and at the same time start thinking about what we may be able to do in the future, should we put our heads together and start doing a joint product roadmap on opportunities in the future, network connectivity, wifi connectivity, all sorts of ideas. So huge congratulations to the engineering teams, because the speed at which we were able to integrate and get a product offering out was impressive. But, you know, secondarily, on to your question on device as a service, this is going to be by far where the future moves. We know that companies will tend to continue to look for ways to have sustainability in their environments, and so when you have Device as a Service, you're able to do things like end-to-end supporting that device, from its start on a network to when you end-of-life a device, and how you end-of-life that device has, you know, some sustainability and cost complexities, and if we're able to manage that device from end to end and provide servicing to alert IT managers and self-heal before problems happen, that helps obviously not only with business models and, you know, protecting data, but it also helps in keeping systems running and being alert to when systems begin to degrade, or if there are issues, or if it's time to refresh because the hardware is not new enough to take advantage of the new software capabilities. Then you're able to end-of-life that device in a sustainable way, in a safe way, and, even to some degree, provide some opportunity for remediation of data and, you know, remote erase, and continue to provide that security all the way into the end, so when we look at device as a service, it's more than just one aspect.
It's really taking a device and being responsible for the security, the manageability, the self-healing from beginning to end, and I know that all IT managers need that, appreciate that, and frankly don't have the time or skillsets to be able to provide that in their own house. So I think there's the beginnings today, and I think we have a huge upside to what we can do in the future. I look at Intel's strengths in enterprise and how long we have been, you know, operating in enterprises around the world. Ivanti's, you know, in the vast majority of Fortune 100s, and when you've got kind of engineering powerhouses that are coming together and brainstorming, I think it's a great partnership for relieving customer pain points in the future, of which unfortunately there are probably going to be more. >> And this is just the beginning. >> I think that's one thing we can guarantee. It's what, sorry? >> Yeah, and it's just the beginning. This partnership is just the beginning. You will see a lot more happening between both the companies as we define the roadmap into the future, so we are super excited about all the work, the joint teams, and, Stephanie, I want to take this opportunity to thank you, your leadership, and your entire organization for helping us with this partnership. >> We're excited by it, we are, we know it's just the beginning of great things to come. >> Well, just the beginning means we have to have more conversations. The cultural fit really sounds like it's really there, and there's tight alignment with Ivanti and Intel. Ladies, thank you so much for joining me. Nayaki, great to have you back on the program. >> Thank you, thank you, Lisa. Thank you for hosting us, and, Stephanie, it's always a pleasure talking to you, thank you. >> Likewise, looking forward to the launch and all the customer reactions. >> Absolutely. >> Yes, all right, thanks Nayaki, thanks Stephanie. For my guests, I'm Lisa Martin. You're watching this CUBE Conversation. (calm music)
Monica Livingston, Intel | HPE Discover 2020
>> Narrator: From around the globe, it's theCUBE! Covering HPE Discover Virtual Experience, brought to you by HPE. >> Artificial Intelligence, Monica Livingston, hey Monica, welcome to theCUBE! >> Hi Lisa, thank you for having me. >> So, AI is a big topic, but let's just get an understanding, Intel's approach to artificial intelligence? >> Yeah, so at Intel, we look at AI As a workload and a tool that is becoming ubiquitous across all of our compute solutions. We have customers that are using AI in the Cloud, in the data center, at the Edge, so our goal is to infuse as much performance as we can for AI into our base platform and then where acceleration is needed we will have accelerator solutions for those particular areas. An example of where we are infusing AI performance into our base platform is the Intel Deep Learning Boost feature set which is in our second generation Intel Xeon Scalable Processors and this feature alone provides up to 30x performance improvement for Deep Learning Inference on the CPU over the previous generation and we are continuing infusing AI into our base platform with the third generation Intel Xeon Scalable Processors which are launching later this month. Intel will continue that leadership by including support for bfloat16. Bfloat16 is a new format that enables Deep Learning training with similar accuracy but essentially using less data so it increases AI throughput. 
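To make the bfloat16 point concrete, here is a small, self-contained sketch (illustrative only, not Intel code): bfloat16 keeps float32's sign bit and full 8-bit exponent but only 7 mantissa bits, so it preserves float32's dynamic range while halving the bytes moved per value, which is where the training throughput gain comes from.

```python
import struct

def to_bfloat16(x: float) -> float:
    """Round a float32 value to bfloat16 precision: keep the sign,
    the full 8-bit exponent, and only 7 mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    # Round-to-nearest-even, then zero the low 16 bits of the float32 encoding.
    bits = (bits + 0x7FFF + ((bits >> 16) & 1)) & 0xFFFF0000
    return struct.unpack(">f", struct.pack(">I", bits))[0]

print(to_bfloat16(3.14159265))  # 3.140625: only ~2-3 decimal digits survive
print(to_bfloat16(1e38))        # still finite: same exponent range as float32
```

Note the contrast with IEEE float16, which would overflow at 1e38; bfloat16 trades precision for range, which deep-learning training tolerates well.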
Another example is memory, so both inference and training require quite a bit of memory and with Intel Optane for system memory, customers are able to expand large pools of memory closer to the CPU, and where that's particularly relevant is in areas where data sets are large, like imaging, with lots of images and lots of high resolution images, like medical diagnostic or seismic imaging, we are able to perform some of these models without tiling, and tiling is where, if you are memory-constrained, you essentially have to take that picture and chop it up in little pieces and process each piece and then stitch it back together at the end, and that loses a lot of context for the AI model, so if you're able to process that entire picture, then you are getting a much better result and that is the benefit of having that memory accessible to the compute. So, when you are buying the latest and greatest HPE servers, you will have built-in AI performance with Intel Xeon Scalable and Optane for system memory. >> A couple things that you said that piqued my interest are 30x improvement in performance, if you talk about that with respect to Deep Learning Boost, 30x is a huge factor, and you also said that your solution from a memory perspective doesn't require tiling, and I heard context. Context is key to have in the data, to be able to understand and interpret and make inferences, so, talk to me about some of those big changes that you're releasing, what were some of the customer-compelling events or maybe industry opportunities that drove Intel to make such huge performance gains in second generation.
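The tiling workaround Monica describes, chopping a large image into pieces because the whole thing will not fit in memory, can be sketched as follows (an illustration, not Intel's implementation). Notice that each tile is processed with no view of its neighbors, which is exactly the lost context she mentions:

```python
import numpy as np

def tile_image(img: np.ndarray, tile_h: int, tile_w: int) -> list:
    """Chop a 2D image into non-overlapping tiles for piecewise processing."""
    tiles = []
    for r in range(0, img.shape[0], tile_h):
        for c in range(0, img.shape[1], tile_w):
            tiles.append(img[r:r + tile_h, c:c + tile_w])
    return tiles

img = np.arange(16, dtype=np.float32).reshape(4, 4)
tiles = tile_image(img, 2, 2)
print(len(tiles))  # 4 tiles of shape (2, 2)
# Each tile sees only its own pixels; with enough memory near the CPU,
# the model processes `img` whole and keeps the full spatial context.
```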
>> Right, so second generation, these are the processors that are out now, so these are features that our customers are using today, third generation is coming out this month but for second generation, Deep Learning Boost, what's really important is the software optimization and the fact that we're able to use the hooks that we've built into the hardware but then use software to make sure that we are optimizing performance on those platforms and it's extremely relevant to talk about software in the AI space because AI solutions can get super expensive, you can easily pay 2 to 3x what you should be paying if you don't have optimized software because then what you do is you're just throwing more and more compute, more and more hardware at the problem, but it's not optimized and so what's really impactful is being able to run a vast number of AI applications on your base platform, that essentially means that you can run that in a mixed workload environment together with your other applications and you're not standing up separate infrastructure. 
Now, of course, there will be some applications that do need separate infrastructure, that do need appliances and accelerators, and for that, we will have a host of accelerators, we have FPGAs today for real time low latency inference, we have Movidius VPU for low-power vision applications at the Edge, but by and large, if you're looking at classical machine learning, if you're looking at analytics, Deep Learning inference, that can run on a base platform today and I think that's what's important in ensuring that more and more customers are able to run AI at scale, it's not just a matter of running a POC in a back lab, you do that on the infrastructure that you have available, not an issue, but when you are looking to scale, the cost is going to be significantly important and that's why it's important for us to make sure that we are building in as much performance as is feasible into the base platform and then offering software tools to allow customers to see that performance. >> Okay, so talking about the technology components, performance, memory, what's needed to scale on the technology side, I want to then kind of look at the business side, because we know a lot of customers in any industry undertake AI projects and they run into pitfalls where they're not able to even get off the ground, so converse to the technology side, what is it that you're seeing, what are the pitfalls that customers can avoid on the business side to get these AI projects designed and launched? >> Yeah, so on the business side, I mean you really have to start with a very solid business plan for why you're doing AI and it's even less about just the AI piece, but you have to have a very solid business plan for your solution as a whole. If you're doing AI just to do AI because you saw that it's a top trend for 2020 so you must do AI, that's likely going to not result in success.
You have to make sure that you're understanding why you're doing AI, if you have a workload that could be easily solved, or a problem that could be easily solved with data analytics, use data analytics, AI should be used where appropriate, as a way to provide true benefit, and I think if you can demonstrate that, you're a long way in getting your project off the ground, and then there's several other pitfalls like data, do you have enough data, is it close enough to your compute in order to be accessible and feasible, do you have the resources that are skilled in AI that can get your solution off the ground, do you have a plan for what to do after you've deployed your solution, because these models need to be maintained on a regular basis, so some sort of maintenance program needs to be in place, and then infrastructure, cost can be prohibitive a lot of times if you're not able to leverage a good amount of your base infrastructure and that's really where we spend a lot of time with customers in trying to understand what their model is trying to do and can they use their base infrastructure, can they reuse as much of what they have, what is their current utilization, do they maybe have cycles in off times if their utilization is diurnal and during the night they have low utilization, can you train your models at night rather than putting up a whole new set of infrastructure that likely will not be approved by management, let's be honest. >> And I imagine that that is all part of the joint go-to-market strategy that HPE and Intel have together, to have conversations like that with customers, to help really build a robust business plan. >> Yeah, so HPE's fantastic at consulting with customers from beginning to end, looking at solutions and they've got a whole suite of storage solutions as well which are crucial for AI and Intel works together with HPE to create reference architectures for AI and then we do joint training as well.
But yes, talking to your HPE rep and leveraging your ecosystem I think is incredibly important because the ecosystem is so diverse and there are a lot of resources available from ISVs to hardware providers to consulting companies that are able to support with AI. >> So Monica, the ecosystem is incredibly important, but how do you work with customers, HPE and Intel together, to help the customer, whether it's in biotech or manufacturing, to build an ecosystem or partnership that can help the customer really define the business plan of what they want to do, to get that cross-functional collaboration and buy-in and support, and launch a successful AI project? >> Yeah it really does take a village, but both Intel and HPE have an extensive partner network, these are partners that we work with to optimize their solution, in HPE's case, they validate their solutions on HPE hardware to ensure that it runs smoothly and for our customers, we have the ability to match-make with partners in the ecosystem and generally, the way it works, is in specific segments, we have a list of partners that we can draw from and we introduce those to the customer, the customer generally has a couple of meetings with them to see which one is a better fit, and then they go from there, but essentially, it is just making sure that solutions are validated and optimized and then giving our customers a choice of which partners are the best fit for them. >> Last question for you, Monica, we are in the middle of COVID-19 and we see things on the news every day about contact tracing, for example, social distancing, and a lot of the things that are talked about on the news are human contact tracers, people being involved in manual processes, what are some of the opportunities that you see for AI to really help drive some of these, because time is of the essence, yet, there's the ethics issue with AI, right?
>> Yes, yes, and the ethics issue is not something that AI can solve on its own, unfortunately, the ethics conversation is something we need to have more broadly as a society, and from a privacy perspective, how are we going to be mindful and respectful while also being able to use some of the data to protect society, especially in a situation like this, so, contact tracing is extremely important, this is something that, in areas that have a wide system of cameras installed, is doable from an algorithmic perspective and there's several partners of ours that are looking at that, and actually, the technology itself, I don't think, is as insurmountable as the logistical aspect and the privacy and the ethical aspect and regulation around it, making sure that it's not used for the wrong purposes, but certainly with COVID, there is a new aspect of AI use cases, and contact tracing is obviously one of them, the others that we are seeing is, essentially, companies are adapting a lot of their existing AI solutions, or solutions that use AI, to accommodate or to account for COVID, like, companies that do observation, and so if they were doing facial recognition either in metro stations or stadiums or banks, they now are adding features to their systems to detect social distancing, for example, or detect if somebody is wearing a mask.
The technology, again, itself is not that difficult, but in the implementation and the use and the governance around it, I think, is a lot more complex, and then, I would be remiss not to mention remote learning which is huge now, I think all of our children are learning remote at this point and being able to use AI in curriculums and being able to really pinpoint where a child is having a hard time understanding a concept and then giving them more support in that area is definitely something that our partners are looking at and it's something that (webcam scrambles) with my children and the tools that they're using and so instead of reading to their teacher for their reading test, they're reading to their computer and the computer's able to pinpoint some very specific issues that maybe a teacher would not see as easily and then of course, the teacher has the ability to go back with you and listen and make sure that there weren't any issues with dialects or anything like that, so it's really just an interesting reinforcement of the teacher/student learning with the added algorithmic impact as well. >> Right, a lot of opportunity is going to come out of COVID, some maybe more accelerated than others because as you mentioned, it's very complex. Monica, I wish we had more time, this has been a really fascinating conversation about what Intel and HPE are doing with respect to AI. Glad to have you back 'cause this topic is just too big, but we thank you so much for your time. >> Thank you. >> For my guest Monica Livingston, I'm Lisa Martin, you're watching theCUBE's coverage of HPE Discover 2020, thanks for watching.
Evaristus Mainsah, IBM & Kit Ho Chee, Intel | IBM Think 2020
>> Announcer: From theCUBE studios in Palo Alto and Boston, it's theCUBE, covering IBM Think brought to you by IBM. >> Hi, there, this is Dave Vellante. We're back at the IBM Think 2020 Digital Event Experience, our socially responsible and distant coverage. I'm here in the studios in Marlborough, our team in Palo Alto. We've been going wall to wall with coverage of IBM Think. Kit Chee here is the Vice President and general manager of Cloud and Enterprise sales at Intel. Kit, thanks for coming on. Good to see you. >> Thank you, Dave. Thank you for having me on. >> You're welcome, and Evaristus Mainsah is here. He is the general manager of the IBM Cloud Pak Ecosystem for the IBM Cloud. Evaristus, it's good to see you again. Thank you very much, I appreciate your time. >> Thank you, Dave. Thank you very much. Thanks for having me. >> You're welcome, so Kit, let me start with you. How are you guys doing? You know, there's this pandemic, never seen it before. How're things where you are? >> Yeah, so we were quite fortunate. Intel's had an epidemic leadership team for about 15 years now, we have a team consisting of medical, safety and operational professionals, and this same team, who has navigated us across several other health issues like bird flu, Ebola, Zika and H1N1, is navigating us at this point with this pandemic. Obviously, our top priority, as it would be for IBM, is protecting the health and well being of employees while keeping the business running for our customers. The company has taken the following measures to take care of its direct and indirect workforce, Dave, and to ensure business continuity throughout the developing situation. They range from areas like work from home policies, keeping hourly workers home and reimbursing for daycare, elderly care, helping with WiFi policies.
So that's been what we've been up to. Intel's manufacturing and supply chain operations around the world are working hard to meet demand and we are collaborating with supply chains of our customers and partners globally as well. And more recently, we have about $16 million to support communities, from frontline health care workers to technology initiatives like online education, telemedicine and compute needs for research. So that's what we've been up to, to date. Pretty much, you know, busy. >> You know, Evaristus, coming to you, I have to say my entire career has been in the technology business and, you know, sometimes you hear negativity toward big tech, but, but I got to say, just as Kit was saying, big tech has really stepped up in this crisis. IBM has been no different and, you know, tech for good, and actually I'm really proud. How are you doing in New York City? >> Evaristus: No, thank you, Dave, for that, you know, we are, we're doing great and, and our focus has been absolutely the same, so obviously, because we provide services to clients. At a time like this, your clients need you even more, but we need to focus on our employees to make sure that their health and their safety and their well being is protected. And so we've taken this really seriously, and actually, we have two ways of doing this. One of them is just down to our purpose as a, as a company and our clients, but the other is trying to activate the ecosystem, because problems of this magnitude require you to work across a broad ecosystem to, to bring forth solutions that are long lasting, for example, we have a Call for Code, where we go out and we ask developers to use their skills and open source technologies to help solve some technical problems. This year, the focus was on COVID-related initiatives around computing resources, how you track the Coronavirus, and other services that are provided free of charge to our clients.
Let me give you a bit more color, so, so IBM recently formed the high performance computing consortium, made up of the federal government, industry and academic leaders, focused on providing high performance computing to solve the COVID-19 problem. So currently we have 33 members, now we have 27 active projects, deploying something like 400 petaflops of compute to solve the problem. >> Well, it certainly is challenging times, but at the same time, you're both in the, in the sweet spot, which is Cloud. I've talked to a number of CIOs who have said, you know, this is really, we had a cloud strategy before but we're really accelerating our cloud strategy now and, and we see this as sort of a permanent effect. I mean, Kit, you guys, big, big on ecosystem, you, you want frankly, a level playing field, the more optionality that you can give to customers, you know, the better, and Cloud has really been exploding and you guys are powering, you know, all the world's Clouds. >> We are, Dave and honestly, that's a huge responsibility that we undertake. Before the pandemic, we saw the market through the lens of four key mega trends and the experiences we are all having currently now deepen our belief in the importance of addressing these mega trends, but specifically, we see marketplace needs around key areas of cloudification of everything. Case in point: the amount of online activity that has spiked just in the last 60 days is a testimony to that. Pervasive AI is the second big area that we have seen and we are now resolute on investments in that area, 5G network transformation and the edge build out.
Applications run the business and we know enterprise IT faces challenges when deploying applications that require data movement between Clouds, and Cloud native technologies like containers and Kubernetes will be key enablers in delivering end to end data analytics, AI, machine learning and other critical workloads in Cloud environments and at the edge. Pairing Intel's data centric portfolio, including Intel's Optane SSDs, with Red Hat OpenShift and IBM Cloud Paks, enterprises can now break through storage bottlenecks and have unconstrained data availability in hybrid and multicloud environments, so we're pretty happy with the progress we're making there together with IBM.
I think what we're seeing with clients is that there's increasing focus on and, and really an acceptance, that the best way to take advantage of the Cloud is through a hybrid cloud strategy, infused with data, so it's not just the Cloud itself, but actually what you need to do to data in order to make sure that you can really, truly transform yourself digitally, to enable you to, to improve your operations, and in use your data to improve the way that you work and improve the way that you serve your clients. And what we see is and you see studies out there that say that if you adopt a hybrid cloud strategy, instead of 2.5 times more effective than a public cloud only strategy, and Why is that? Well, you get thi6ngs such as you know, the opportunity to move your application, the extent to which you move your applications to the Cloud. You get things such as you know, reduction in, in, in risk, you, you get a more flexible architecture, especially if you focus on open certification, reduction and certification reduction, some of the tools that you use, and so we see clients looking at that. The other thing that's really important, especially in this moment is business agility, and resilience. Our business agility says that if my customers used to come in, now, they can't come in anymore, because we need them to stay at home, we still need to figure out a way to serve them and we write our applications quickly enough in order to serve this new client, service client in a new way. 
And well, if your applications haven't been modernized, even if you've moved to the Cloud, you don't have the opportunity to do that and so many clients that have made that transformation, figure out they're much more agile, they can move more easily in this environment, and we're seeing the whole for clients saying yes, I do need to move to the Cloud, but I need somebody to help improve my business agility, so that I can transform, I can change with the needs of my clients, and with the demands of competition and this leads you then to, you know, what sort of platform do you need to enable you to do this, it's something that's open, so that you can write that application once you can run it anywhere, which is why I think the IBM position with our ecosystem and Red Hat with this open container Kubernetes environment that allows you to write application once and deploy it anywhere, is really important for clients in this environment, especially, and the Cloud Paks which is developed, which I, you know, General Manager of the Cloud Pak Ecosystem, the logic of the Cloud Paks is exactly that you'll want plans and want to modernize one, write the applications that are cloud native so that they can react more quickly to market conditions, they can react more quickly to what the clients need and they, but if they do so, they're not unlocked in a specific infrastructure that keeps them away from some of the technologies that may be available in other Clouds. So we have talked about it blockchain, we've got, you know, Watson AI, AI technologies, which is available on our Cloud. We've got the weather, company assets, those are key asset for, for many, many clients, because weather influences more than we realize, so, but if you are locked in a Cloud that didn't give you access to any of those, because you hadn't written on the same platform, you know, that's not something that you you want to support. 
So Red Hat's platform, which is our platform, which is open, allows you to write your application once and deploy it anywhere, particularly for our customers in this particular environment, together with the data pieces that come on top of that, so that you can scale, scale because, you know, you've got six people, but you need 600 of them. How do you scale them, or how do they use data and AI in it? >> Okay, this must be music to your ears, this whole notion of, you know, multicloud, because, you know, Intel's pervasive and so, because the more Clouds that are out there, the better for you, better for your customers, as I said before, the more optionality. Can you talk a little bit about the relationship today between IBM and Intel, because it's obviously evolved over the years, PC, servers, you know, other collaboration, and the Cloud is, you know, the latest and probably the most relevant, you know, part of your, your collaboration, but, but talk more about what that's like, what you guys are doing together that's, that's interesting and relevant. >> You know, IBM and Intel have had a very rich history of collaboration starting with the invention of the PC. So for those of us who may take a PC for granted, that was an invention over 40 years ago, between the two companies, all the way to optimizing leadership IBM software like Db2 to run the best on Intel's data center products today, right? But what's more germane today is the Red Hat piece of the story and how that plays into the partnership with IBM going forward. Intel was one of Red Hat's earliest investors back in 1998, again, something that most people may not realize, that we were an early investor in Red Hat. And we've been a longtime pioneer of open source.
In fact, Navin Shenoy, Intel's Executive Vice President of Data Platforms Group, was part of Paul Cormier's keynote at Red Hat Summit just last week, you should definitely go listen to that session, but in summary, together Intel and Red Hat have made commercial open source viable in enterprise and worldwide computing. Basically, it's now used by nearly every vertical and horizontal industry. We are bringing our customers choice, scalability and speed of innovation for key technologies today, such as security, Telco NFV, and containers, or even at the edge, and most recently Red Hat OpenShift. We're very excited to see IBM Cloud Paks, for example, standardized on top of OpenShift, as that builds the foundation for IBM chapter two, and allows for Intel's value to scale to the Cloud Paks and ultimately IBM customers. Intel began partnering with IBM on what is now called Paks over two years ago and we are committed to that success and scaling that, via the ecosystem, hardware partners, ISVs and our channel. >> Yeah, so theCUBE, by the way, covered Red Hat Summit last week, Stu Miniman and I did a detailed analysis. It was awesome, like if we do say so ourselves, but awesome in the sense of, it allowed us to really sort of unpack what's going on at Red Hat and what's happening at IBM. Evaristus, so I want to come back to you on this Cloud Pak, you got, it's, it's the kind of brand that you guys have, you got Cloud Paks all over the place, you got Cloud Paks for applications, data, integration, automation, multicloud management, what do we need to know about Cloud Paks? What are the relevant components there?
>> Evaristus: I think the key components is, so, think of this as, you know, software that is designed, that is Cloud native, designed for specific core use cases, and it's built on Red Hat Enterprise Linux with the Red Hat OpenShift container Kubernetes environment, and then on top of that, so you get a set of common services that look right across all of them, and then on top of that, you've got specific, both open source and IBM software, that deals with specific client situations. So if you're dealing with applications, for example, the open source and IBM software would be the runtimes that you need to write and, and to deploy applications. If you're dealing with data, then you've got Cloud Pak for Data. The foundation is still Red Hat Enterprise Linux, sitting with the Red Hat OpenShift container Kubernetes environment on top of that, providing you with a set of common services, and then you'll get a combination of IBM's own software as well as open source, and third party software, that sits on top of that, as well as all of our AI infrastructure that sits on top of that, and machine learning, to enable you to do everything that you need to do with data, to get insights out of it. You've got automation, to speed up and to enable us to do work more efficiently, more effectively, to make your smart workers better, to make management easier, to help management manage work and processes, and then you've got multicloud management that allows you to see from a single pane all of your applications that you've deployed in the different Clouds, because the idea here, of course, is that it's not all sitting in the same Cloud. Some of it is on prem, some of it is in other Clouds, and you want to be able to see and deploy applications across all of those. And then you've got the Cloud Pak for Security, which has a combination of third party offerings, as well as ISV offerings, as well as AI offerings.
Again, the structure is the same, RHEL, Red Hat OpenShift, and then you've got the software that enables you to manage all aspects of security and to deal with incidents when, when they arise. So that gives you data, applications, and then there's integration, as every time you start writing an application, you need to integrate, you need to access data securely from someplace, you need to bring two parts together for them to communicate, and we use the Cloud Pak for Integration to allow us to do that. You can open up APIs and expose those APIs so others writing applications can gain access to those APIs. And again, this idea of resilience, this idea of agility, so you can make changes and you can adapt things as you need to. So that's what the Cloud Paks provide for you, and Intel has been an absolutely fantastic partner for us. One of the things that we do with Intel, of course, is to, to work on the reference architectures to help our certification program for our hardware OEMs, so that we can scale that process, get many more OEMs to adopt and be ready for the Cloud Paks, and then we work with them and some of the ISV partners right upfront. >> Got it, let's talk about the edge. Kit, you mentioned 5G. I mean it's a really exciting time, (laughs) You got windmills, you got autonomous vehicles, you got factories, you got ships, you know, shipping containers. I mean, everything's getting instrumented, data everywhere and so I'm interested in, let's start with Intel's point of view on the edge, how that's going to evolve, you know, what it means to Cloud. >> You know, Dave, it's, it's definitely the future and we're excited to partner with IBM here. In addition to enterprise edge, the communication service providers, think of the Telcos, can take advantage of running standardized open software at the Telco edge, enabling a range of new workloads via scalable services, something that, you know, didn't happen in the past, right?
Earlier this year, Intel announced new second-generation Xeon Scalable and Atom-based processors targeting the 5G radio access network, so this is a new area for us in terms of investments going into the 5G RAN. By deploying these new technologies with Cloud native platforms like Red Hat OpenShift and IBM Cloud Paks, comm service providers can now make full use of their network investments and bring new services such as artificial intelligence, augmented reality, virtual reality and gaming to the market. We've only touched the surface when it comes to 5G and Telco, but with IBM, Red Hat and Intel compute together, I would say, you know, this space is super, super interesting as it develops; we're just getting started. >> Evaristus, what do you think this means for Cloud and how that will evolve? Is this sort of a new Cloud that will form at the edge? Obviously, a lot of data is going to stay at the edge, probably new architectures are going to emerge, and again, to me, it's all about data: you can create more data, push more data back to the Cloud so you can model it. Some of the data is going to have to be processed in real time at the edge, but it just really extends the network to new horizons. >> Evaristus: It does exactly that, Dave, and we think of it that way, which is why the platform looks the same, right? You wouldn't be surprised to see that the platform is based on open containers and the Kubernetes container environment provided by Red Hat, and so whether your data ends up living at the edge, or your data lives in a private data center, or it lives in some public Cloud, and however it flows between all of them, we want to make it easy for our clients to be able to do that. So this is very exciting for us. We just announced IBM Edge Application Manager, which allows you to basically deploy and manage applications at endpoints of all these devices. So we're not talking about 20 or 30, we're talking about thousands or hundreds of thousands.
And in fact, we're working with Intel's device onboarding technology, which will enable us to onboard devices very, very easily at scale; if you get that combined with IBM Edge Application Manager, then it helps you onboard the devices and provision them. So we think this is really important. We see lots of workloads moving onto edge devices; many of these devices and endpoints now have sufficient compute to be able to run them. But right now, if they are IoT devices, the data gets transferred hundreds of miles away to some data center to be processed, at enormous cost, and then only 1% of that is actually useful, right? 99% of it gets thrown away. Some of that data actually has residency requirements, so you may not be able to move it to process it. So why wouldn't you just process the data where the data is created, and run your analytics where the data sits? Or you have situations that are disconnected as well, so you can't actually do that; you don't want the till to stop in the supermarket because you've lost connectivity with your data center. And so, the importance of being able to work offline: IBM Edge Application Manager actually allows you to do all of this without using lots of people, because the process is all sort of automated, and you can work whether you're connected or you're disconnected, and then you get replication when you reconnect, which gets really, really powerful. >> All right, I think the developer model is going to be really interesting here. There are so many new use cases and applications. Of course, Intel's always had a very strong developer ecosystem. You know, IBM understands the importance of developers. Guys, we've got to wrap up, but I wonder if you could each, maybe start with Kit.
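The pattern Evaristus describes, processing data where it is created, keeping only the small useful fraction, and buffering work while disconnected, can be sketched in a few lines of Python. This is a toy illustration of the store-and-forward idea, not the actual IBM Edge Application Manager API; the class, threshold, and readings are invented for the example.

```python
from collections import deque

class EdgeAgent:
    """Toy edge agent: filters readings locally and buffers uploads
    while the link to the data center is down (a sketch of the
    store-and-forward pattern, not any specific IBM or Intel API)."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.buffer = deque()      # anomalies awaiting upload
        self.uploaded = []         # stands in for the data center
        self.connected = True

    def ingest(self, reading):
        # Process where the data is created: drop normal readings,
        # keep only the fraction that is actually useful.
        if reading >= self.threshold:
            self.buffer.append(reading)
        self.flush()

    def flush(self):
        # Replication resumes automatically once connectivity returns.
        while self.connected and self.buffer:
            self.uploaded.append(self.buffer.popleft())

agent = EdgeAgent(threshold=90)
agent.connected = False              # e.g. the supermarket till goes offline
for r in [10, 95, 20, 99, 30]:
    agent.ingest(r)                  # work continues while disconnected
offline_backlog = len(agent.buffer)  # 2 anomalies held locally
agent.connected = True
agent.flush()
print(agent.uploaded)                # [95, 99]
```

The same shape scales from a single till to thousands of endpoints: local filtering bounds the backhaul traffic, and the queue makes a loss of connectivity a non-event.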
Give us your sense as to where you want to see this partnership go. What can we expect over the next, you know, two to five years and beyond? >> I think it's the area of, you know, 5G and how that plays out in terms of the edge build-out that we just touched on. I think that's a really interesting space; what Evaristus said is spot on, you know, the processing and the analytics at the edge are still fairly nascent today, and that's growing. So that's one area. Building out the Cloud for the different enterprise applications is the other one, and obviously it's going to be a hybrid world; it's not just a public Cloud world or an on-prem world. So the whole hybrid build-out is key, and the work that both of us, IBM and Intel, need to do will be critical to ensure that, you know, enterprise IT has solutions across the hybrid spectrum. >> Great. Evaristus, give us the last word, bring us home. >> Evaristus: And I would agree with that as well, Kit. I will say the work that you do around Intel's Market Ready Solutions, right, where we can bring our ecosystem together to do even more on the edge, some of these use cases, and the work that we're doing around blockchain, which I think, you know, is again another important piece of work. And I think what we really need to do is to focus on helping clients, because many of them are working through those early use cases right now: identify use cases that work, and with our commitment to open standards, using exactly the same standards, like what you've got in your Open Retail Initiative, I think is going to be really important to help scale out. But I wanted to just add one more thing, Dave, if you permit me. >> Yeah.
>> Evaristus: In this COVID era, one of the things that we've been able to do for customers, which has been really helpful, is providing free technology for 90 days to enable them to work in an offline situation, to work away from the office. One example is the ability to transfer files when bandwidth is an issue, because the parents and the kids are all working from home: we have a protocol, IBM Aspera, which we'll make available to customers for 90 days at no cost. You don't need to give us your credit card; just log on and use it to improve the way that you work, so your bandwidth feels as if you were in the office. We have a Watson Assistant that is now helping clients in more than 18 countries do the same kind of thing, basically providing COVID information. So those are all available. There's a slew of offerings that we have. We just want listeners to know that they can go on the IBM website, access those offerings, and deploy and use them now. >> That's huge. I knew about the 90-day program; I didn't realize Aspera was part of that, and that's really important, because you're like, okay, how am I going to get this file there? And so thank you for sharing that. And guys, great conversation. You know, hopefully next year we could be face to face, even if we still have to be socially distant, but it was really a pleasure having you on. Thanks so much. Stay safe, and good stuff. I appreciate it. >> Evaristus: Thank you very much, Dave. Thank you, Kit. Thank you. >> Thank you, thank you. >> All right, and thank you for watching, everybody. This is Dave Vellante for theCUBE, our wall-to-wall coverage of the IBM Think 2020 Digital Event Experience. We'll be right back after this short break. (upbeat music)
Lisa Spelman, Intel | Red Hat Summit 2020
From around the globe, it's theCUBE, with digital coverage of Red Hat Summit 2020, brought to you by Red Hat. >> Welcome back to theCUBE's coverage of Red Hat Summit 2020. Of course, this year, rather than all coming to San Francisco, we are talking to Red Hat executives, their partners and their customers where they are around the globe. Happy to welcome back one of our CUBE alumni, Lisa Spelman, who's a corporate vice president and general manager of the Intel Xeon and memory group. Lisa, thanks so much for joining us, and where are you joining us from? >> Well, thank you for having me, and I'm a little further north than where the conference was going to be held, so I'm in Portland, Oregon right now. >> Excellent. Yeah, we've had, you know, customers from around the globe as part of theCUBE coverage here, and of course you're near the mothership of Intel. So Lisa, let's start, of course, with the Red Hat partnership. You know, I've seen Intel executives on the keynote stage for many years, so to start us off, talk about the Intel-Red Hat partnership as it stands today in 2020. >> Yeah, you know, on the keynote stage for many years, and then actually again this year. So despite the virtual nature of the event that we're having, we're trying to still show up together and demonstrate together to our customers and our developer community, really give them a sense for all the work that we're doing across the important transformations that are happening in the industry. So we view this partnership and this event as important ways for us to connect and make sure that we have a chance to really share where we're going next, and gather feedback on where our customers and that developer community need us to go together, because it is a, you know, rich, long history of partnership, of the combination of our hardware work and the open-source software work that we do with Red Hat, and we see that every year increasing in value as we expand to more workloads and more market segments that we can help with
our technology. >> Yeah. Well, Lisa, you know, we've seen on theCUBE for many years Intel's strong partnerships across the industry, from the data center to the cloud. I think we're going to talk a little bit about edge for this discussion too, though: edge and 5G. I think about all the hard work that Intel does, especially with its partnerships. You know, I think back to the early days of Red Hat, the operating system, the things that were done as virtualization rolled out, the accelerations that have gone through. So when it comes to edge and 5G, obviously big mega waves that we spend a lot of time talking about, what's Intel's piece? Obviously we know Intel chips go everywhere, but when it comes to kind of the engineering work that gets done, what are some of the pieces that Intel works on? >> Yeah, and that's a great example, actually, of what we are seeing: this expansion of areas of workloads and investment and opportunity that we face. So as we move forward into 5G becoming not the theoretical next thing, but actually the thing that is starting to be deployed and transformed, you can see a bunch of underlying work that Intel and Red Hat have done together in order to make that a reality. So you look at the move from a very proprietary, ASIC-based type of workload, with a single function running on it, and what we've done is drive to have the virtualization capabilities that took over and provided so much value in the cloud data center also apply to the 5G network. So the move to network function virtualization and software-defined networking, and a lot of value being derived from the opportunity to run that on open source standards, and have that open source community really come together to make it easier and faster to deploy those technologies, and also to get good SLAs and quality of service while you're driving down your overall total cost of ownership. So we've spent years working on that together in the 5G space, and the network space in general, and now it's really
starting to take off. And that is very well connected to the edge. So if you think about the edge as this point of content creation, of where the action is happening, you start to think through how much of the compute, or the value, can I get out at the edge without everything having to go all the way back to the data center. You start to again see how those open standards, in very complex environments, can help people manage their total cost of ownership and the complexity. >> All right, Lisa, so when you're talking about edge solutions: when I've been talking to Red Hat, their first deployments have really been with the service providers; really, I've seen it as an extension of what you were talking about, network functions virtualization. You know, everybody talks about edge; there are a lot of different edges out there, the service providers being the first place we see things, but, you know, all the way out even to the consumer edge and the device edge, where Intel may or may not have, you know, some devices. So help us understand, you know, where you're sitting and where we should be looking as these technologies evolve. >> You know, it's a great point. We see the edge being developed by multiple types of organizations. So yes, the service providers are obviously there, insomuch as they already even own the location points out there; if you think of all the myriad of poles with the base stations and everything that's out there, that's a tremendous asset to capitalize on. You also see our cloud service provider customers moving towards the edge as well, as they think of new developer services and capabilities, and of course you see the enterprise edge coming in, if you think of factory-type utilization methodologies in manufacturing. All of those are very enterprise-based and are really focused not on that consumer edge, but on the B2B edge, or, you know, the infrastructure edge is what you might think of it as, but they're working through how do they add
efficiency, capability, automation, all into their existing work, but making it better. So at Intel, the way that we look at that is that these are all opportunities to provide the right foundation. So when we look at the silicon products that we develop, we gather requirements from that entire landscape, and then we work through our silicon portfolio. You know, we have our portfolio really focused on the movement, the storage and the processing of data, and we try to look at that in a very holistic way and decide where a capability will best serve that workload. So you do have a choice at times whether some new feature or capability goes into the CPU, the Xeon engine, or whether that would be better served by being added into a SmartNIC type of capability. And so those are just small examples of how we look at the entirety of the data flow at the edge, and at what the use case is, and then we utilize that to inform how we improve the silicon and where we add features. >> Well, Lisa, as you were going through this, it makes me also think about one of the other big mega waves out there: artificial intelligence. So, lots of discussion, as you were saying, about what goes where and how we think about it: cloud, edge, devices. So how does AI intersect with this whole discussion of edge that we were just having? >> Yeah, and you're probably going to have to cut me off, because I could go on for a long time on this one, but AI is such an exciting capability that is coming through everywhere, literally from the edge through the core network into the cloud, and you see it infiltrating every single workload across the enterprise, across cloud service providers, across the network service providers. So it is truly on its way to being completely pervasive, and so again, that presents the same opportunity for us. So if you look at your silicon portfolio, you need to be able to address artificial intelligence all the way from the edge to the cloud, and that can mean adding silicon capabilities that can handle
milliwatts, like ruggedized, super low power, super long life out at the edge, and then all the way back to the data center, where you're going for much higher power and higher capability for training of the models. So we have built out a portfolio that addresses all of that. One of the interesting things about the edge is people always think of it as a low-compute area: they think of it as data collection, but more and more, that data collection is also having a great benefit from being able to do an amount of compute and inference out at the edge. So we see a tremendous amount of actual Xeon product being deployed out at the edge, because of the need to actually deliver quite high-powered compute right there, and that's improving customer experiences and changing use cases, through, again, healthcare, manufacturing, automotive; you see it in all the major fast-mover edge industries. >> Yeah, those are really good points you make there, Lisa. We all got used to, you know, limitless compute in the cloud, and therefore, you know, let's put everything there, but of course we understand there's this little thing called the speed of light that means that much of the information that is collected at the edge can't go beyond it. You know, I saw a great presentation last year talking about geosynchronous satellites: they collect so much information, and you know, you can't just beam it back and forth, so I'd better have some compute there. So, you know, we've known for a long time that the challenge of our day has been distributed architectures, and edge just, you know, changes the landscape and the surface area that we need to touch so much more. When I think about all those areas, obviously security is an area that comes up. So how do Intel and its partners make sure that no matter where my data is, and you talk about the various memory, that, you know, security is still considered at each aspect of the environment? >> Oh, it's a huge focus
because, if you think of it, people used to say phrases like, oh, we've got to have the fat pipe or the dumb pipe to get, you know, data back, and there is no such thing as a dumb pipe anymore. Everything is smart the entire way through the lifecycle, and so with that smartness, you need to have security embedded from the get-go into that workflow. And what people need to understand, as they undergo their edge deployments and start that work, is that your obligation for the security of that data begins the moment you collect that data; it doesn't start when it's back in the cloud or back in the data center. So you own it and need to be on it from the beginning. So we work across our silicon portfolio, and then our software ecosystem, to think through it in terms of that entire pipeline of the data movement, and making sure that there are no breakdowns in each of the handoff chains. It's a really complex problem, and it is not one that Intel is able to solve alone, nor any individual silicon or software vendor along the way. And I will say that some of the security work over the past couple of years has led to a bringing together of the industry to address problems together, whether they be, on any other given day, a friend or a foe; when it comes to security, I feel like I've seen just an amazing increase over the past two and a half years in the collaboration to solve these problems together, and ultimately I think that leads to a better experience for our users and for our customers. So we are investing in it, not just in the new features from the silicon perspective, but also in understanding newer and more advanced threat or attack surfaces that can happen inside of the silicon or the software components. >> All right, so Lisa, final question I have for you. I want to circle back to where we started: it's Red Hat Summit, a week-long event of partnerships. As I mentioned, we see Intel at all the cloud shows; you partner with all the hardware and software providers and the like. So the big message from Red Hat is the open
hybrid cloud, so talk about how that fits in with everything that Intel is doing. >> It's an area of really strong interconnection between us and Red Hat, because we have a vision of that open hybrid cloud that is very well aligned, and the best part about it is that it is rooted not just in "here's my feature, here's my feature" from either one of us; it's rooted in what our customers need and what we see our enterprise customers driving towards: that desire to utilize the cloud to improve their capabilities and services, but also maintain that capability inside their own house as well, so that they have really viable workload transformation, they have opportunities for their total cost of ownership, and they can fundamentally use technology to drive their business forward. >> All right, well, Lisa Spelman, thank you so much for all the updates from Intel, and we definitely look forward to seeing the breakouts, the keynotes and the like. >> Yes, me too. >> All right, lots more coverage here from theCUBE at Red Hat Summit 2020. I'm Stu Miniman, and thanks, as always, for watching. [Music]
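Lisa's point that the security obligation begins the moment the data is collected, and must survive every handoff in the pipeline, can be sketched with a simple integrity tag carried along with the data. This is a minimal illustration using an HMAC from the Python standard library; a real deployment would anchor the key in a hardware root of trust and encrypt the payload as well, and every name and value here is hypothetical.

```python
import hashlib
import hmac

# Shared key provisioned to the edge device and the data center out of
# band; a fixed test value here, a hardware-backed secret in practice.
KEY = b"demo-key-not-for-production"

def collect(payload: bytes) -> dict:
    """Edge side: tag the reading with an integrity code at collection,
    so the security obligation starts where the data is created."""
    tag = hmac.new(KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def handoff(record: dict) -> dict:
    """Every hop re-verifies before passing data on, so a breakdown at
    any point in the handoff chain is caught, not silently forwarded."""
    expect = hmac.new(KEY, record["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expect, record["tag"]):
        raise ValueError("integrity check failed at handoff")
    return record

record = collect(b"temp=21.5C")
record = handoff(record)            # edge gateway -> core network
record = handoff(record)            # core network -> data center
print("verified:", record["payload"].decode())

# A modified payload with a stale tag is rejected at the next hop.
tampered = {"payload": b"temp=99.9C", "tag": record["tag"]}
tamper_detected = False
try:
    handoff(tampered)
except ValueError:
    tamper_detected = True
print("tamper detected:", tamper_detected)
```

The point of the sketch is the shape, not the crypto: integrity travels with the data through each handoff rather than being bolted on once it reaches the data center.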
Casimir Wierzynski, Intel | RSAC USA 2020
>> Live from San Francisco, it's theCUBE, covering RSA Conference 2020 San Francisco. Brought to you by SiliconANGLE Media. >> Hello and welcome back to theCUBE's coverage here in San Francisco, the Moscone Center, for RSA Conference 2020, for all the coverage, for three days. I'm John, host of theCUBE. You know, as cybersecurity goes to the next level, as cloud computing continues to go more enterprise, large-scale AI and machine learning have become critical to managing the data. We've got a great guest here from Intel, Casimir Wierzynski, senior director in the AI Products Group at Intel. Thanks for joining us. >> Oh, thanks. >> So data is a huge, huge problem when it comes down to cybersecurity, and generally across the enterprise. Now it's well known, well documented, but you're here giving a talk about machine learning privacy, because everyone wants to know who the bad guys are. So do the bad guys deserve privacy? Okay, we'll get to that later. But first, tell us about your talk here at RSA.
>>What are some of the major inflection points? One of these that we've been looking at for the last couple of years is this kind of collision course between the need for data to train machine learning systems to unlock all the power of AI, but still the need to keep data private. Yeah, and I think that's generally consistent with our editorial in our research, which is the confluence of cloud native, large scale cloud computing, multi-cloud and AI or machine learning, all kinds of coming together. Those are multigenerational technologies that are coming. So that's, this wave is big. That's right. And I think one thing that's kind of maybe underappreciated about machine learning, especially in production is it's almost always a multi-party interaction. So you'll have, let's say one party that owns data and other party may own a model. They're running a system on somebody else's hardware. So because of the nature of digital data, if you want to share things, you have to worry about what other parties may be doing with those data. >>Because you bring up a great point I want to get your reaction and thoughts on is that, is that it's multidisciplinary. Now as people aren't breaking into the field. I mean people are really excited about AI. I mean you talk to someone who's 12 years old, they see a Tesla, they see software, they see all these things, they see all this cool stuff. So machine learning, which powers AI is very enticing to anyone that's got kind of technical or nerdy background and social attracting a lot of young people. So it's not just getting a computer science degree. There's so much more to AI because you talk about why, what someone needs to be successful too. And to engage in the AI wave. You don't need to just be a code or you could be outside the scope because it's an integrated model or is it's very much, so my group at Intel is better, very heterogeneous. >>So what have got a, you know, kind of mathematicians, but I also have coders. 
I have, uh, an attorney who's a public policy expert. I have cryptographers. Uh, I think there's a number of ways to get involved in, in meaning my, my background is actually a neuroscience. So, um, it makes sense. Good. Stitch it all together. Yeah. Well, societal changes has to be the, the algorithm needs training they need to learn. So having the most diverse input seems to me to be a, a posture the industry is taking and what's, is that right? Is that the right way to think about it? How should we be thinking about how to make AI highly effective versus super scary? Right. Well, one of the efforts that we're making, part of my message here is that to make these systems better, generally more data helps, right? If you can expand the availability of data, that's always going to help machine learning systems. >>And so we're trying to unlock data silos that may exist across countries, across the organizations. So for example, you know, in healthcare you could have multiple hospitals that have patient data. If somehow they could pool all their data together, you would get much more effective models, much better patient outcomes, but for very good privacy reasons, they're not allowed to do that. So there's these interesting ideas like federated learning where you could somehow decentralize the machine learning process so that you can still respect privacy but get the statistical power. That's a double down on that for a second cause I want to explore that. I think this is the most important story that's not being talked about. It's nuance a little bit. Yeah. You know, healthcare, you had HIPAA, which was built for all the right reasons back then, but now when you start to get into much more of a cross pollination of data, you need to manage the benefit of why it existed with privacy. >>So encryption, homomorphic encryption for instance, data and use. Yes. Okay. When it's being used, not just in flight or being arrested becomes, now you have the three triads of data. 
Yes. This is now causing a new formula for encryption privacy. What is some of the state-of-the-art thinking around how to make data open and usable, yet secure, encrypted or protected? >> That's right. So it's kind of this paradox of: how do I use the data but not actually get the data? You mentioned homomorphic encryption. So this is one of the most leading-edge techniques in this area, where there are ways of doing math on the data while it stays encrypted, and the answer that comes out is still encrypted, and it's only the actual owner of the data who can reveal the answer. So it seems like magic, but with this capability you enable all kinds of new use cases that wouldn't be possible before, where third parties can act on, you know, your sensitive data without ever being exposed to it in any way. >> So discovery and leverage of the data is what you're getting at in terms of the benefits. I mean, use cases. So stay on that: the use cases of this new idea, discovery and usage. How would that work? >> Well, when we talked about federated learning and pooling across hospitals, that's one set of techniques. Homomorphic encryption would be, for example: suppose that some AI system has already been trained, but I'd like to use it on sensitive data. How do I do that in such a way that the third-party service isn't exposed to it? You know, this is what makes, I think, machine learning different from other types of data security problems: with machine learning, you have to operate on the data. You're not just storing it, you're not just moving it around. >> So how do you... yeah, and this is a key thing.
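The "math on encrypted data" property is easiest to see in a textbook additively homomorphic scheme. Below is toy Paillier with deliberately tiny, insecure parameters (real keys are thousands of bits long); it shows a third party multiplying two ciphertexts so that the hidden plaintexts add, while only the key holder can decrypt. This is a sketch of the general idea, not Intel's HE Transformer.

```python
# Textbook Paillier with tiny, insecure parameters, just to show the
# homomorphic property: multiplying ciphertexts adds the plaintexts.
p, q = 11, 13
n = p * q                     # public modulus (143)
n2 = n * n
g = n + 1                     # standard choice of generator
lam = 60                      # lcm(p-1, q-1), part of the private key
mu = pow(lam, -1, n)          # modular inverse of lambda mod n (Python 3.8+)

def encrypt(m, r):
    # r must be coprime to n; the data owner picks it at random.
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # Only the key holder, who knows lam and mu, can do this.
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

a, b = encrypt(5, r=7), encrypt(37, r=23)
# A third party computes on the ciphertexts without ever seeing 5 or 37:
c_sum = (a * b) % n2
print(decrypt(c_sum))  # 42
```

Multiplying ciphertexts adds plaintexts, which is exactly the primitive that lets an untrusted service aggregate values it can never read, with the answer coming out still locked to the owner's key.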
So it's a frontline issue with Amazon, for instance, with facial recognition: oh my God, it's evil. Yeah. So there are a lot of scared people who might not be informed. Yeah. How should companies invest in machine learning and AI, in your opinion? How should they think about the next 10-year trajectory starting today: how to invest, what's the right way to think about it, how to build a team? What are your thoughts on that? Because this is the number one challenge right now. Yeah. Well, I think some of the scary issues that you mentioned are legitimately scary. >>They're going to have to be resolved not by companies, but probably by society and our delegates, so lawmakers and regulators. Part of what we're trying to do at the technical level is give society and regulators a more flexible set of tools with which you can slice and dice data privacy and so on, so that it's not just all or none. Right. I think that's kind of my main goal as an organization. I think, again, there's this idea of having a heterogeneous set of talents: you're going to need policy experts and applied mathematicians and linguists and, you know, neuroscientists. So diversity is a huge opportunity. Very much so. Not just diversity of people, but diverse data, diverse kinds of mindsets, approaches to problems that are hard but very promising. Okay. Let's flip to the other side of the spectrum, which is: what should people not do? >>What's a failure formula? One-dimensional thinking? What's an identification of something that may not go the right way? Well, you know, one distinguishing feature of the machine learning field, and it's kind of a cultural thing, but it's given it a lot of traction, is that it's fundamentally been a very open culture. So there's a lot of sharing of methods. It's a very collaborative academic field.
So I think within a company you want to be part of that culture too. Every company is going to have its secret sauce, things that it needs to keep proprietary, but it's very important for companies to engage this broader community of researchers. So you're saying, which I would agree with, but I'll just say it: >>you can agree or disagree, but to be successful, you've got to be open. If you're data-driven, you've got to be open. That's right. More data equals better models: more data, more approaches to data, more eyes on the problem. But you can definitely keep your proprietary pieces. It kind of forces organizations to think about what our core strengths are that we really want to keep proprietary, but then other things, let's open up. All right. So what's the coolest thing you're working on right now? What are some of the fun projects you guys are digging into? You've got a great job; it sounds like you're excited about it. I mean, AI I think is the most exciting thing. I wish I could be 20 again, in computer science or whatever field, because I think AI is more than a multigenerational thing. >>It's super exciting as a technical person. But what are you working on that you're excited about? So I'm very excited about taking some of these things like homomorphic encryption and making them much more available to developers and data scientists, because it's asking too much for a data scientist to also be a post-quantum crypto expert. So we've written an open source package called HE-Transformer, HE for homomorphic encryption. It allows the data scientists to do their normal data science in Python or whatever they're used to, but then they flick a switch and suddenly their model is able to run on encrypted data. Can you just take a minute to explain why the homomorphic encryption trend right now is really important?
I mean, give a peek into the why, because this is something that is now becoming much more real. >>Yeah. The data-in-use kind of philosophy. Why now? Why is it so important right now? Well, I think because of the power of cloud, and the fact that data are collected in one place and possibly processed in another place, your data are moving around and they're being operated on. So if you can know that, as long as my data are moving around and people are operating on it, it stays encrypted the whole time, not just in transit, that gives a much higher level of comfort, and the applications are going to probably be onboarded. I mean, you can almost imagine new applications will emerge from this: application discovery, cataloging, and API integration points. You can almost imagine the trust will go up, and you can also end up with these different business models, where you have entities that compete in some spheres but may decide to collaborate in other ways. >>So for example, banks could compete on lending and so on under normal activities. But in terms of fraud detection, they may decide, hey, maybe we can make some alliance where we cross-check with each other's models on certain transactions, but I'm not actually giving you any transaction data. So that's maybe okay, right? So that's very powerful. It's really interesting. I think the compute power has allowed it; the overhead used to be the problem, because people were working on this in the eighties and nineties, I remember. Yes. But the overhead was just so expensive. That's right. Yeah. So you bring up a great point here, and this is one of the areas where Intel, and my team, is really pushing. These techniques have been around for 20 years. Initially they were maybe 10 million times slower than real time.
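The bank alliance idea, cross-checking fraud models without sharing transaction data, can be sketched like this. Everything here (the bank names, the scoring rule, the threshold) is hypothetical; the point is only that scores cross organizational boundaries, never the underlying records.

```python
class Bank:
    """Hypothetical bank: keeps its transaction history private and
    exposes only a risk score for a candidate account."""
    def __init__(self, name, known_fraud_accounts):
        self.name = name
        self._fraud = set(known_fraud_accounts)   # private data, never shared

    def risk_score(self, account_id):
        # Stand-in for a real fraud model: 1.0 if this bank has seen
        # fraud on the account, else 0.0. Only the score leaves the bank.
        return 1.0 if account_id in self._fraud else 0.0

def alliance_check(banks, account_id, threshold=0.5):
    """Cross-check an account across banks using scores only."""
    avg = sum(b.risk_score(account_id) for b in banks) / len(banks)
    return avg >= threshold

banks = [Bank("A", {"acct-9"}), Bank("B", {"acct-9", "acct-4"}), Bank("C", set())]
print(alliance_check(banks, "acct-9"))  # True
print(alliance_check(banks, "acct-1"))  # False
```

In a real deployment the scores themselves might still leak information, which is where the homomorphic-encryption and secure-aggregation techniques discussed in this conversation would come in.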
>>So people thought, okay, this is interesting mathematically, but not practical. There have been massive improvements just in the last two years, where now things are running maybe a hundred times slower than unencrypted math. But still, that means something that would take 50 milliseconds now takes five seconds. That's still not unreasonable for many uses. You're my new friend now, my best friend on AI. And I've got a business to run, and I'm going to ask you: what should I do? I really want to leverage machine learning and AI in my business. Okay, I'm investing in more tech, I've got cloud, and I'm building my own software. How should I be investing? How do I build out a great machine learning and AI team, and then ultimately capabilities? How should I do that? Okay, well, I would start with a team that has a research mindset, not because you want them to come in and write research papers, but because the path from research into production is so incredibly short in AI. >>You know, you have things that are papers one year and they're going into production at Google Search within a year. So you kind of need that research mindset. I think another thing is that you're going to require a very close collaboration between this data science team and your CIO and your systems people. A lot of the challenges around AI are not just coming up with the model, but how you actually scale it up and go to production with it. And that's interesting, about the research. I totally agree with you. I think you can almost call that product management, kind of newfangled product management, because if it's applied research, you kind of have your eye on a market generally, but you're not making hardcore product decisions. You're researching it, you're iterating on it, so you've got to do the homework. You know, dream it before you can build it.
>>Well, I'm just saying that the field is moving so fast that you're going to need, on your team, people who can consume the latest papers. Oh, you're saying consume the research as well. Yeah, and if they can contribute, that's great too. I mean, I think this is that kind of open culture where people consume, they find some improvement, and they can then publish it at the next year's conference. It's just been this incredibly healthy ecosystem. Acceleration's a big part of the cloud. Awesome. Well, I really appreciate your insight. This is a great topic; I could go for an hour. One of my favorite things. I love the homomorphic encryption. I think that's going to be a game changer, and I think we're going to start to see some interesting discoveries there. Give a quick plug for Intel. What are you working on now? >>What are you looking to do? What are your plans? Hiring, doing more research, what's going on? Well, we think that this intersection of privacy and AI is at the core of Intel's data-centric mission. So we're trying to figure out whatever it takes to enable the community, whether it's optimized software libraries, custom silicon, or even services. We really want to listen to customers and figure out what they need. Moore's law is always going to be around; the next wave is going to have more compute. It's never going away. More storage, more data. It just gets better and better. Yeah. Thanks for coming on, Casimir. Thanks for having me. We have Intel inside theCUBE, breaking down the future of AI. Really exciting stuff on the technology front: security and data, all happening at large scale. Of course, it's theCUBE, bringing you all the data here at RSA. I'm John Furrier. Thanks for watching.
RSA Conference 2020, San Francisco.
Bobby Allen, CloudGenera & William Giard, Intel | AWS re:Invent 2019
>>Live from Las Vegas, it's theCUBE, covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >>Welcome back to theCUBE. We are in Las Vegas, Lisa Martin with John Walls. I'm very excited that we're kind of color coordinated. >>Yeah, we didn't compare notes to begin with, but certainly the pink thing works. >>You like it? You complete me. >>Oh, thank you. Really, John, I don't hear that very often. My wife says that. >>You can tell that we're at the end of day one of the coverage of AWS re:Invent. Good day, though. Yes, it has been very exciting. We have a couple of guests joining us for our final segment. Please welcome Bill Giard, CTO of Digital Transformation and Scale Solutions at Intel. Bill, welcome to our show. >>Thank you very much. Happy to be here. >>And one of our friends, no stranger to theCUBE, one of our former hosts, Bobby Allen, the CEO of CloudGenera. Bobby. >>Thank you. Thank you for having us. >>Guys, here we are. There has not been a lull in the background noise all day here at re:Invent day one. But Bobby, I want to start with you. Talk to our audience about CloudGenera. Who are you guys? What do you do? And what's different about what you're delivering? >>One of the first things that's different about CloudGenera is where we're located. We're in Charlotte, which I call Silicon South, so we're kind of representing the East Coast. We're a company that focuses on helping with workload placement and transformation. So when you don't know whether something should go on-prem or off-prem, or if you put it in Amazon, which services it should consume, which licensing models, which pricing models, we help you make data-driven decisions, right? So you're not just going based on opinion; you're going based on fact.
And that's challenging because, as John Furrier would say, in Cloud 1.0, which was compute, network, storage, it was the easy, I shouldn't say easy, but the lift-and-shift applications that enterprises knew should go to the cloud. Now we have what's left over, and that's challenging for organizations. Some of the legacy ones can't move. How do you help, from a consultative standpoint, customers evaluate workloads? What data are they running? What value does that data have? And are they able to move some of those more challenging applications? So part of the framework for us, Lisa, is we want to make sure we understand what people are willing and able to change, right? Because sometimes it's not just about lower costs. Sometimes it's about agility, flexibility, deploying to a different region. So what we often start with is: what does better look like, what does success look like for your organization? And so then, based on that, we analyze the applications with an objective, data-driven framework and then make sure the apps land where they're supposed to go. We're not selling any SKU or product. We're selling advice, to give you insight about what you should do.
This is where you know, the partnership that we have with call General. Really? You know that data driven, intelligent, based planning is super important, right? We want to really fundamentally health organizations move the right workloads, make sure they get the right results and not have to redo it. Right? And so part of that, you know, move when you're either past scars or not used to what you're doing. Give him the data and the information to be able to do that intelligently and make that as fast as they can. And you know, at the right, you know, experience in performance from a capability perspective. >>So so many businesses these days, if they're not legacy if they're not looking in the rear view mirror, what is the side mirror site? Objects are closer than they appear, even for Amazon. Right? For all of these companies, there are smaller organizations that might be born in a cloud compared to the legacy two words. And if they're not looking at, we have to transform from the top down digitally, truly transform. Their business may not be here in a year or two, so the choice and I think they need to pack a lunch and a hip flask for this because it's quite the journey. But I'm curious with the opportunity that cloud provides. When you have these consultation conversations, what are This? Could be so transformative not just to a business, but to a do an entire industry. Bill talked to us from your perspective about some of the things that you've seen and how this next generation of cloud with a I machine learning, for example, can can really transfer like what's the next industry that you think is prime to be really flipped upside down? >>Well, the good news is I think most of the industries in the segment that we talked to have realized they need to some level of transformation. So doing the business as usual really isn't an option to really grow and drive in the future. 
But I do think the next evolution really does center on what's happening in a I and analytics. Whether it's, you know, moving manufacturing from video based defect detection, supply chain integrity. You know what's happening from a retail was really the first in that evolution, but we see it in health care in Federal Data Center modernization, and it's really moving at a faster pace and adopting those cloud technologies wherever they needed, both in their data center in the public, cloud out of the edge. And we'll start to see a real shift from really consolidation in tow. Large hyper converts, data centers to distributed computing where everything again. And that's where we're excited about the work we're doing with the Amazon, the work we're doing with Eyes V partners to be at the capability where they need it, but I think it will be really the next. Evolution of service is everywhere. >>Never talk us through an example or use case of a customer that you're working with, a cloud genera with intel and and a W S. What does that trifecta look like for, say, a retailer or financial service is organization >>so that that looks like this? ELISA. When we when we talk about workload placement, we think that most companies look at that as a single question. It's at least a five fold question. Right there is the venue. There's the service. There's the configuration, the licensing model and the pricing model. You need to look at all five of those things. So even if you decided on a DBS is your strategic partner, we're not done yet. So we have a very large financialservices customer that I can't name publicly. But we've collaborated with them to analyze tens of thousands of workloads, some that go best off from some that go best on for him. And they need guidance and coaching on things like, Are you paying for redhead twice your pay for licensing on him? Are you also paying for that in the cloud? There are things that maybe should be running an RT s database as a service. 
Here's your opportunity to cut down on labor and shift some of the relationships tohave, toe re index and databases is not glamorous or differential to value for your business. Let's take advantage of what a TBS does well and make this better for your company. One of the things that I want to kind of introduce to piggyback on your question. We lean on people process technology as kind of the three, the three legged horse in the Enterprise. I want to change that people process product or people process problem. We're falling in love with the tech and getting lazy. Technology should be almost ubiquitous or under the covers to make a product better or to solve a problem for the customer. >>Well, maybe on that, I mean automation concern to come in and make a big play here because we're taking all these new tasks if you could automate them that you free your people, your developers to do their thing right. So you raise an interesting point on that about being lazy and relying on things. But yet you do want off put our offload some of these nasty not to free up that creativity and free up the people to do what they're supposed to be doing. It's a delicate balance, though, isn't it? It is. It is. This >>is where I think the data driven, you know, informed decisions important. We did a lot of research with Cloud Jenner and our customers, and there's really four key technical characteristics when evaluating workload. The 1st 1 of course, is the size of the data. Where is the created words They use Words that consumed the 2nd 1? Is the performance right? Either performance not only to other systems around it or the end user, but the performance of the infrastructure. What do you need out of the capability? The level of integration with other systems? And then, of course, security. We hear that time and again, right? Regulatory needs. What are we having from top secret data to company sensitive data? 
Really Getting that type of information to drive those workload placement decision becomes at the forefront of that on getting, you know, using cloud gender to help understand the number of interfaces in and out the sides of the data. The performance utilization of the system's really helps customers understand how to move the right workload. What's involved and then how to put that in the right eight of us instance, and use the right ideas capabilities, >>and you and you both have hit on something here because the complexity of this decision, because it's multi dimensional, you talked about the five points a little bit ago. Now you talked about four other factors. Sue, this is not a static environment, No, and to me that as you're making a decision, that point is what's very difficult for, I would assume for the people that you're interfacing with on the company level. Yes, because it's a moving target for them, right? They just it's it's dynamic and changing your data flows exponentially. Increasing capabilities are changing. How do you keep them from just breaking down? >>I don't want to jump in on that, because again, I'm going to repeat this again. That my thesis is often technology is the easy part. We need to have conversations about what we want to do. And so I had a conversation earlier today. Think of Amazon like a chef. They could make anything I want, but I need to decide what I want to eat. If I'm a vegan and he wants steak. That's not Amazons fault. If they can't cook something, that's a mismatch of a bad conversation. We need to communicate. So what I'm finding is a lot of executives are worried about this. There were Then you're going to give me the right the wrong answer to the right question. The reality is you may have the wrong question. First of all right, the question is usually further upstream, so the worry that you're gonna give me the wrong answer to the right question. 
But often you need to worry that you're getting your starting with the wrong question. You're gonna get the right answer asked the right question first. And then you got a chance to get to the final destination. But >>and then he in this multi cloud world that many organizations live in, mostly not My strategy could be by Emma A could be bi developer preference for different solutions. A lot of Serios air telling us we've inherited a lot of this multi cloud and technical debt. Exactly. So does not just compound the problem because to your point, I mean you think of one way we hear so many different stats about the number of clouds that on average enterprises using is like 5 to 9. That whole world. That's a reality for organizations. So in terms of how the business can be transformed by what you guys are doing together, it seems like there's a tremendous opportunity there. But to your point, Bobby, where do you start? How do you help them understand what? That right first question is at the executive level so that those four technical points that Bill talked about Tek thee you know, the executive staff is all on board with Yes, this is the question we're asking then will understand it. The technology is right. Sold >>it. It's got to start with, Really? What? The company's business imperatives, right? It can't start with an I t objective. It's it's Are we moving into new markets? Do we need thio deploy capabilities faster? Are we doing a digital customer experience? Transformation? Are we deploying new factories, new products into new regions, and so really the first areas? What's the core company strategy, imperatives of the business objectives? And >>then how >>does I t really help them achieve that? In some cases, it may be we have to shift and reduce our data center footprints way have to move capabilities to where we have a new region. Deployments, right? We've got to get him over to Europe. We don't have capabilities in Europe. We're going to Asia. 
I've got a mobile sales force now where I need to get that customer, meet the customer where they're doing, you know, in the retail store, and >>that >>really then leads quite simply, too. What are the capabilities that we have in house that we're using? >>How are >>they being utilized? And he's using them, and then how do we get them to where they need to be? Some cases accost, imperative. Some cases and agility, Time to market and another's and we're seeing this more often is really what are the new sets of technologies? A. I service is training in forgetting that we're not experience to do and set up, and we don't want to spend the time to go train our infrastructure teams on the technology. So we'll put our data scientists in there figuring out the right set of workloads, the right set of technology, that we can then transform and move our applications to utilize it really starts, I think with the business conversation, or what's the key inflection point that they're experiencing? >>And have you seen that change in the last few years that now it's where you know, cloud not cloud. What goes on Cloud was an I t conversation to your point, Bill. And then the CEO got involved in a little bit later. But now we're we're seeing and hearing the CEO has got to be involved from a business imperative perspective. >>Share some data, right? Uh, so, you know, a couple of years ago, everybody was pursuing cloud largely for cost. Agility started to become primary, and that's still very important. A lot of the internal enterprise data modernizations were essentially stalled a bit because they were trying to figure how much do we move to the the public cloud, right. We want to take advantage of those modern service is at that time, we did a lot of research with our partners. He was roughly 56% of enterprise workload for in their own data center. You know, the rest of them Republic Cloud. 
And then we saw really the work, the intelligent workload discussion that says we've had some false starts. Organizations now really consistently realize they need both, you know, their own infrastructure and public cloud, and we've actually seen on increase of infrastructure modernization. While they're moving more and more stuff to the cloud, they're actually growing there on centre. It's now roughly 59% on Prem today for that same business, and that's largely because they're using more. Cloud service is that they're also even using Maur on premise, and they're realizing it's a balance and not stalling one or starving one and then committing to the other the committing to both and really just growing the business where it needs to go. >>Strategic reasons. All right? >>Yes, well, there should be four strategic reasons. There aren't always back to your question about which question asked. One of the questions I often ask is, What do you think the benefits will be if you go to cloud? And part of what happens is is not a cloud capability? Problem is an expectation problem. You're not gonna put your GOP system in the cloud and dropped 30% costs in a month, and so that's where we need to have a conversation on, You know, let's iterating on what this is actually gonna look like. Let's evolve the organization. Let's change our thinking. And then the other part of this and this were clouded or an intel come in. Let's model with simulation looks like. So we're gonna take those legacy work clothes unless model containers. Let's model Micro Service is so before you have to invest in transformation to may not make sense. Let's see what the outcome's look like through simulation through a through M l and understand. Where does it make sense to apply? The resource is, you know, to double click on that solution that will help the business. >>I was gonna finish my last question, Bobby, with you saying, Why, Cloud General? But I think you just answered that. 
So last question for you, though, from from an expectation perspective, give me one of your favorite examples of customer whatever kind of industry there and that you've come in and helped them really level, set their expectations and kick that door wide open. >>That's tough, many >>to choose from. >>Yeah, let me let me try to tackle that one quickly. Store's computer databases. Those are all things that people look at I think what people are struggling with the most in terms of kind of expectations is what they're willing and able to change. So this is kind of what I leave on. Bill and I talked about this earlier today. A product is good, a plan is better. A partnership is best. Because with the enterprises of saying is, we're overwhelmed. Either fix it for me or get in there with me and do it right. Be in this together. So what we've learned is it's not about were close applications. It's all kind of the same. We need help. We're overwhelmed. I want a partner in telling Claude Juncker the get in this thing with me. Help me figure this out because I told you this cloud is at best a teenager. They just learned how to drive is very capable, but it needs some guard rails. >>I love that. Thanks you guys So much for explaining with Johnny what you guys are doing together and how you're really flipping the model for what customers need to be evaluated and what they need to be asking. We appreciate your time. >>Thank you for having us >>our pleasure. Thank you. for John Wall's I'm Lisa Martin. You've been watching the Cube at Reinvent 19 from Vegas. Wants to go tomorrow.
Naveen Rao, Intel | AWS re:Invent 2019
>> Announcer: Live from Las Vegas, it's theCUBE! Covering AWS re:Invent 2019. Brought to you by Amazon Web Services and Intel, along with its ecosystem partners. >> Welcome back to the Sands Convention Center in Las Vegas everybody, you're watching theCUBE, the leader in live tech coverage. My name is Dave Vellante, I'm here with my cohost Justin Warren, this is day one of our coverage of AWS re:Invent 2019, Naveen Rao here, he's the corporate vice president and general manager of artificial intelligence, AI products group at Intel, good to see you again, thanks for coming to theCUBE. >> Thanks for having me. >> Dave: You're very welcome, so what's going on with Intel and AI, give us the big picture. >> Yeah, I mean actually the very big picture is I think the world of computing is really shifting. The purpose of what a computer is made for is actually shifting, and I think from its very conception, from Alan Turing, the machine was really meant to be something that recapitulated intelligence, and we took sort of a divergent path where we built applications for productivity, but now we're actually coming back to that original intent, and I think that hits everything that Intel does, because we're a computing company, we supply computing to the world, so everything we do is actually impacted by AI, and will be in service of building better AI platforms, for intelligence at the edge, intelligence in the cloud, and everything in between. >> It's really come full circle, I mean, when I first started this industry, AI was the big hot topic, and really, Intel's ascendancy was around personal productivity, but now we're seeing machines replacing cognitive functions for humans, that has implications for society. But there's a whole new set of workloads that are emerging, and that's driving, presumably, different requirements, so what do you see as the sort of infrastructure requirements for those new workloads, what's Intel's point of view on that? 
>> Well, so maybe let's focus that on the cloud first. Any kind of machine learning algorithm typically has two phases to it, one is called training or learning, where we're really iterating over large data sets to fit model parameters. And once that's been done to a satisfaction of whatever performance metrics that are relevant to your application, it's rolled out and deployed, that phase is called inference. So these two are actually quite different in their requirements in that inference is all about the best performance per watt, how much processing can I shove into a particular time and power budget? On the training side, it's much more about what kind of flexibility do I have for exploring different types of models, and training them very very fast, because when this field kind of started taking off in 2014, 2013, typically training a model back then would take a month or so, those models now take minutes to train, and the models have grown substantially in size, so we've still kind of gone back to a couple of weeks of training time, so anything we can do to reduce that is very important. >> And why the compression, is that because of just so much data? >> It's data, the sheer amount of data, the complexity of data, and the complexity of the models. So, very broad or a rough categorization of the complexity can be the number of parameters in a model. So, back in 2013, there were, call it 10 million, 20 million parameters, which was very large for a machine learning model. Now they're in the billions, one or two billion is sort of the state of the art. To give you bearings on that, the human brain is about a three to 500 trillion model, so we're still pretty far away from that. So we got a long way to go. 
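The two phases Naveen distinguishes can be sketched in a few lines: training iterates over a data set to fit parameters, while inference is a single cheap pass with those parameters frozen. This is a toy linear model in plain Python, nothing like a production training stack, just the shape of the two phases.

```python
# Toy illustration of training vs. inference on a noiseless linear task.
data = [(x, 2.0 * x + 1.0) for x in [i / 10 for i in range(-20, 21)]]

# --- Training phase: repeated passes over the data set to fit parameters ---
w, b = 0.0, 0.0
lr = 0.05
for epoch in range(200):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x   # gradient of squared error w.r.t. w
        b -= lr * err       # gradient of squared error w.r.t. b

# --- Inference phase: deploy the frozen parameters ---
def infer(x):
    return w * x + b

print(round(infer(3.0), 2))  # ~7.0, since the underlying function is y = 2x + 1
```

Note the asymmetry Naveen calls out: training cost scales with data volume times epochs, while each inference call is one arithmetic pass, which is why the two phases have such different hardware requirements.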
>> Yeah, so one of the things about these models is that once you've trained them, that then they do things, but understanding how they work, these are incredibly complex mathematical models, so are we at a point where we just don't understand how these machines actually work, or do we have a pretty good idea of, "No no no, when this model's trained to do this thing, "this is how it behaves"? >> Well, it really depends on what you mean by how much understanding we have, so I'll say at one extreme, we trust humans to do certain things, and we don't really understand what's happening in their brain. We trust that there's a process in place that has tested them enough. A neurosurgeon's cutting into your head, you say you know what, there's a system where that neurosurgeon probably had to go through a ton of training, be tested over and over again, and now we trust that he or she is doing the right thing. I think the same thing is happening in AI, some aspects we can bound and say, I have analytical methods on how I can measure performance. In other ways, other places, it's actually not so easy to measure the performance analytically, we have to actually do it empirically, which means we have data sets that we say, "Does it stand up to all the different tests?" One area we're seeing that in is autonomous driving. Autonomous driving, it's a bit of a black box, and the amount of situations one can incur on the road are almost limitless, so what we say is, for a 16 year old, we say "Go out and drive," and eventually you sort of learn it. Same thing is happening now for autonomous systems, we have these training data sets where we say, "Do you do the right thing in these scenarios?" And we say "Okay, we trust that you'll probably "do the right thing in the real world." 
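The scenario-based training Naveen describes, rewarding the right action in each situation and repeating over many trials, is the core of reinforcement learning. Below is a toy sketch using tabular Q-learning on a 4x4 grid as a drastically simplified stand-in for something like a DeepRacer track; none of this is DeepRacer's actual stack, and all constants are invented.

```python
import random

random.seed(7)

SIZE, GOAL = 4, (3, 3)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
Q = {}                                          # (state, action) -> learned value

def step(state, action):
    r = max(0, min(SIZE - 1, state[0] + action[0]))   # clamp to the grid
    c = max(0, min(SIZE - 1, state[1] + action[1]))
    nxt = (r, c)
    reward = 1.0 if nxt == GOAL else -0.01            # small cost per move
    return nxt, reward, nxt == GOAL

def best_action(state):
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

# Training: many episodes of trial and error, reinforcing good moves.
for _ in range(800):
    state = (0, 0)
    for _ in range(100):                              # cap episode length
        a = random.choice(ACTIONS) if random.random() < 0.3 else best_action(state)
        nxt, reward, done = step(state, a)
        old = Q.get((state, a), 0.0)
        target = reward + 0.9 * max(Q.get((nxt, b), 0.0) for b in ACTIONS)
        Q[(state, a)] = old + 0.5 * (target - old)
        state = nxt
        if done:
            break

# "Race day": follow the learned policy greedily, no exploration.
state, path = (0, 0), []
for _ in range(20):
    state, _, done = step(state, best_action(state))
    path.append(state)
    if done:
        break
print(path[-1])   # after training, the greedy policy should end at the goal
```

The empirical-testing point from the conversation shows up directly here: you judge the trained policy by rolling it out and checking it reaches the goal, not by inspecting the Q-table.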
>> But we know that Intel has partnered with AWS around autonomous driving with their DeepRacer project, and I believe it's on Thursday is the grand final, I think it was announced on theCUBE last year, and there's been a whole bunch of competitions running all year, basically training models that run on this Intel chip inside a little model car that drives around a race track, so speaking of empirical testing of whether or not it works, lap times gives you a pretty good idea, so what have you learned from that experience, of having all of these people go out and learn how to use these ML models on a real live race car and race around a track? >> I think there's several things, I mean one thing is, when you turn loose a number of developers on a competitive thing, you get really interesting results, where people find creative ways to use the tools to try to win, so I always love that process, I think competition is how you push technology forward. On the tool side, it's actually more interesting to me, is that we had to come up with something that was adequately simple, so that a large number of people could get going on it quickly. You can't have somebody who spends a year just getting the basic infrastructure to work, so we had to put that in place. And really, I think that's still an iterative process, we're still learning what we can expose as knobs, what kind of areas of innovation we allow the user to explore, and where we sort of lock it down to make it easy to use. So I think that's the biggest learning we get from this, is how I can deploy AI in the real world, and what's really needed from a tool chain standpoint. >> Can you talk more specifically about what you guys each bring to the table with your collaboration with AWS? >> Yeah, AWS has been a great partner.
Obviously AWS has a huge ecosystem of developers, all kinds of different developers, I mean web developers are one sort of developer, database developers are another, AI developers are yet another, and we're kind of partnering together to empower that AI base. What we bring from a technological standpoint are of course the hardware, our CPUs are AI-ready now, with a lot of software that we've been putting out in open source. And then other tools like OpenVINO, which make it very easy to start using AI models on our hardware, and so we tie that into the infrastructure that AWS is building for something like DeepRacer, and then help build a community around it, an ecosystem around it of developers.
>> It's the opposite in social media, most people. >> Maybe that's the opposite, let's not go down that path. >> I quite like Dr. Kate Darling's analogy from MIT lab, which is we already we have AI, and we're quite used to them, they're called dogs. We don't fully understand how a dog makes a decision, and yet we use 'em every day. In a collaboration with humans, so a dog, sort of replace a particular job, but then again they don't, I don't particularly want to go and sniff things all day long. So having AI systems that can actually replace some of those jobs, actually, that's kind of great. >> Exactly, and think about it like this, if we can build systems that are tireless, and we can basically give 'em more power and they keep going, that's a big win for us. And actually, the dog analogy is great, because I think, at least my eventual goal as an AI researcher is to make the interface for intelligent agents to be like a dog, to train it like a dog, reinforce it for the behaviors you want and keep pushing it in new directions that way, as opposed to having to write code that's kind of esoteric. >> Can you talk about GANs, what is GANs, what's it stand for, what does it mean? >> Generative Adversarial Networks. What this means is that, you can kind of think of it as, two competing sides of solving a problem. So if I'm trying to make a fake picture of you, that makes it look like you have no hair, like me, you can see a Photoshop job, and you can kind of tell, that's not so great. So, one side is trying to make the picture, and the other side is trying to guess whether it's fake or not. We have two neural networks that are kind of working against each other, one's generating stuff, and the other one's saying, is it fake or not, and then eventually you keep improving each other, this one tells that one "No, I can tell," this one goes and tries something else, this one says "No, I can still tell." 
The one that's trying with a discerning network, once it can't tell anymore, you've kind of built something that's really good, that's sort of the general principle here. So we basically have two things kind of fighting each other to get better and better at a particular task. >> Like deepfakes. >> I use that because it is relevant in this case, and that's kind of where it came from, is from GANs. >> All right, okay, and so wow, obviously relevant with 2020 coming up. I'm going to ask you, how far do you think we can take AI, two part question, how far can we take AI in the near to mid term, let's talk in our lifetimes, and how far should we take it? Maybe you can address some of those thoughts. >> So how far can we take it, well, I think we often have the sci-fi narrative out there of building killer machines and this and that, I don't know that that's actually going to happen anytime soon, for several reasons, one is, we build machines for a purpose, they don't come from an embattled evolutionary past like we do, so their motivations are a little bit different, say. So that's one piece, they're really purpose-driven. Also, building something that's as general as a human or a dog is very hard, and we're not anywhere close to that. When I talked about the trillions of parameters that a human brain has, we might be able to get close to that from a engineering standpoint, but we're not really close to making those trillions of parameters work together in such a coherent way that a human brain does, and efficient, human brain does that in 20 watts, to do it today would be multiple megawatts, so it's not really something that's easily found, just laying around. Now how far should we take it, I look at AI as a way to push humanity to the next level. Let me explain what that means a little bit. Simple equation I always sort of write down, is people are like "Radiologists aren't going to have a job." 
No no no, what it means is one radiologist plus AI equals 100 radiologists. I can take that person's capabilities and scale it almost freely to millions of other people. It basically increases the accessibility of expertise, we can scale expertise, that's a good thing. It makes, solves problems like we have in healthcare today. All right, that's where we should be going with this. >> Well a good example would be, when, and probably part of the answer's today, when will machines make better diagnoses than doctors? I mean in some cases it probably exists today, but not broadly, but that's a good example, right? >> It is, it's a tool, though, so I look at it as more, giving a human doctor more data to make a better decision on. So, what AI really does for us is it doesn't limit the amount of data on which we can make decisions, as a human, all I can do is read so much, or hear so much, or touch so much, that's my limit of input. If I have an AI system out there listening to billions of observations, and actually presenting data in a form that I can make better decisions on, that's a win. It allows us to actually move science forward, to move accessibility of technologies forward. >> So keeping the context of that timeframe I said, someday in our lifetimes, however you want to define that, when do you think that, or do you think that driving your own car will become obsolete? >> I don't know that it'll ever be obsolete, and I'm a little bit biased on this, so I actually race cars. >> Me too, and I drive a stick, so. >> I kind of race them semi-professionally, so I don't want that to go away, but it's the same thing, we don't need to ride horses anymore, but we still do for fun, so I don't think it'll completely go away. Now, what I think will happen is that commutes will be changed, we will now use autonomous systems for that, and I think five, seven years from now, we will be using autonomy much more on prescribed routes. 
It won't be that it completely replaces a human driver, even in that timeframe, because it's a very hard problem to solve, in a completely general sense. So, it's going to be a kind of gentle evolution over the next 20 to 30 years. >> Do you think that AI will change the manufacturing pendulum, and perhaps some of that would swing back to, in this country, anyway, on-shore manufacturing? >> Yeah, perhaps, I was in Taiwan a couple of months ago, and we're actually seeing that already, you're seeing things that maybe were much more labor-intensive before, because of economic constraints are becoming more mechanized using AI. AI as inspection, did this machine install this thing right, so you have an inspector tool and you have an AI machine building it, it's a little bit like a GAN, you can think of, right? So this is happening already, and I think that's one of the good parts of AI, is that it takes away those harsh conditions that humans had to be in before to build devices. >> Do you think AI will eventually make large retail stores go away? >> Well, I think as long as there are humans who want immediate satisfaction, I don't know that it'll completely go away. >> Some humans enjoy shopping. >> Naveen: Some people like browsing, yeah. >> Depends how fast you need to get it. And then, my last AI question, do you think banks, traditional banks will lose control of the payment systems as a result of things like machine intelligence? >> Yeah, I do think there are going to be some significant shifts there, we're already seeing many payment companies out there automate several aspects of this, and reducing the friction of moving money. 
Moving money between people, moving money between different types of assets, like stocks and Bitcoins and things like that, and I think AI, it's a critical component that people don't see, because it actually allows you to make sure that first you're doing a transaction that makes sense, when I move from this currency to that one, I have some sense of what's a real number. It's much harder to defraud, and that's a critical element to making these technologies work. So you need AI to actually make that happen. >> All right, we'll give you the last word, just maybe you want to talk a little bit about what we can expect, AI futures, or anything else you'd like to share. >> I think it's, we're at a really critical inflection point where we have something that works, basically, and we're going to scale it, scale it, scale it to bring on new capabilities. It's going to be really expensive for the next few years, but we're going to then throw more engineering at it and start bringing it down, so I start seeing this look a lot more like a brain, something where we can start having intelligence everywhere, at various levels, very low power, ubiquitous compute, and then very high power compute in the cloud, but bringing these intelligent capabilities everywhere. >> Naveen, great guest, thanks so much for coming on theCUBE. >> Thank you, thanks for having me. >> You're really welcome, all right, keep it right there everybody, we'll be back with our next guest, Dave Vellante for Justin Warren, you're watching theCUBE live from AWS re:Invent 2019. We'll be right back. (techno music)
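The "does this transaction make sense" check Naveen alluded to in the payments discussion is, at its simplest, an anomaly test against an account's own history. A toy sketch follows; real payment systems use far richer models, and the data, function name, and threshold here are all invented for illustration.

```python
import statistics

def is_suspicious(history, amount, z_threshold=3.0):
    """Flag a transfer whose amount sits far outside the account's history."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (amount - mean) / stdev          # how many standard deviations away?
    return z > z_threshold

past_transfers = [120.0, 95.0, 130.0, 110.0, 105.0, 90.0, 125.0]
print(is_suspicious(past_transfers, 115.0))    # False - in line with history
print(is_suspicious(past_transfers, 5000.0))   # True  - far outside the norm
```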
Bob Ghaffari, Intel Corporation | VMworld 2019
>> Live from San Francisco, celebrating 10 years of high tech coverage, it's theCUBE, covering VMworld 2019. Brought to you by VMware and its ecosystem partners. >> Welcome back, we're here at VMworld 2019. You're watching theCUBE, our 10th year of coverage at the event. I'm Stu Miniman, and my co-host this afternoon is Justin Warren. And happy to welcome back to the program Bob Ghaffari, who's the general manager of the Enterprise and Cloud Networking division at Intel. Bob, welcome back. >> Great. Great to be here. Thank you. >> So, uh, you know, it's interesting. I think that last year I felt like every single show that I went to, there was an Intel executive up on the stage. You know, the way we talked about it, the tick-tock of the industry is something that drove things. So last year, a lot going on. We haven't seen Intel quite as much, but we know that doesn't mean that you and your team aren't really busy. You know, a lot of things going on here at VMworld. Give us the update since last we spoke. >> Well, you know, um, so I think we have to just go back a little bit in terms of how Intel has been involved in terms of really driving this whole network transformation. I want to say it started about a decade ago, when we were really focused on trying to, you know, get a lot of the capabilities on to more of a standard architecture, right? In the past, you know, people were encumbered by challenging architectures, you know, using proprietary kinds of network processors. We were able to bring this together on Intel architecture, we open-sourced DPDK, which is really this fast packet processing, you know, library that we basically enabled the industry on. And with that, there's basically been this, I want to say, this revolution in terms of how networking has come together.
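The fast-packet-processing idea behind DPDK that Bob credits, poll-mode processing that pulls packets off a queue in bursts rather than taking an interrupt per packet, can be caricatured in a few lines. This is conceptual only: real DPDK is a C library that runs in userspace and bypasses the kernel, and nothing below is DPDK's actual API.

```python
from collections import deque

BURST_SIZE = 32

def rx_burst(queue, max_pkts=BURST_SIZE):
    """Pull up to one burst of packets off the RX queue without blocking."""
    burst = []
    while queue and len(burst) < max_pkts:
        burst.append(queue.popleft())
    return burst

def process(pkt):
    """Trivial 'forwarding' decision: keep only packets addressed to us."""
    return pkt["dst"] == "10.0.0.1"

# Synthetic RX queue: every other packet is addressed to us.
rx_queue = deque({"dst": "10.0.0.1" if i % 2 == 0 else "10.0.0.9", "id": i}
                 for i in range(100))

forwarded = 0
while rx_queue:                       # poll loop: spin, don't wait on interrupts
    for pkt in rx_burst(rx_queue):    # amortize per-packet overhead over a burst
        if process(pkt):
            forwarded += 1
print(forwarded)  # 50 of the 100 packets were addressed to us
```

The design point the sketch gestures at is amortization: fixed per-wakeup costs are paid once per burst of 32 packets instead of once per packet, which is where the throughput wins come from.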
And so what we've seen since last year is, you know, how NSX, VMware NSX itself, has really grown up and been able to sort of get to these newer, interesting usage models. And so for us, you know, what really gets us excited is being really involved with enabling hybrid cloud, multi-cloud from a network perspective. And that's just what really gets me out of bed every day. >> Yeah. And SDN has, I think, gone from those early days where it was all a bit scary and new, and people weren't quite sure that they wanted to have that, whereas now it is the thing, people are quite happy and comfortable to use it. It's now a very accepted way of doing networking. What have you noticed about that change, where people have gone, well, actually, it's accepted now? What is that enabling customers to do with SDN? >> You know, I mean, I think what SDN really does is it gives a lot of the enterprise customers and cloud customers, and a lot of others, really the flexibility to be able to do what you really need to do much better. And so if you can imagine, the first stage, we had to go get a lot of the functions virtualized, right? So we did that over the last 10 years, getting the functions virtualized, getting them optimized, and making sure that the performance is there as a virtual function. The next step here is really trying to make sure that, you know, we can enable customers to be able to do what they need to in their microservices, or do this in a micro-segmented kind of view, and also being in a scenario where we don't have to trombone the traffic, you know, off to where it's inspected or, you know, load balanced, but bringing that capability, in a distributed fashion, to where the workloads need to happen. >> Yeah, you mentioned micro segmentation there, and that's something which has been spoken about again for quite a while. What's the state of play with micro segmentation?
Because some customers have been trying to use it and found it a little bit tricky, and so we're seeing lots of vendors who come in and say, we'll help you manage that. What's the state of play with micro segmentation from your perspective? >> You know, I would say the way I would categorize it is micro segmentation has definitely become a very important usage model in terms of how to really contain, you know, policies within certain segments, right? So, one, you're able to sort of get to a better way of managing your environments, and you're also getting to a better way of containing any kind of threats. And so the fact that you can somehow, you know, segment off areas, and if you basically get some kind of attack or some kind of exploit, it's not going to go out of that segmented area, to some extent that simplifies how you look at your environment. But you want to be able to do it in a fashion that, you know, helps, ultimately, the enterprises manage what they've got in their environments. >> So, Bob, one of the things that really struck me last year was the messaging that VMware had around networking, specifically around multi-cloud. It really harkened back to what I had heard from the Nicira acquisition. And of course, now VMware's extending that with VMware Cloud in, you know, AWS, the partnerships they've also extended with Azure, with Google, and on-premises with Dell EMC and others. And a big piece of that message is we're going to be able to have the same stack on both sides. Could you kind of explain, where does Intel fit in there? How does Intel's networking multi-cloud story dovetail with what we're hearing from VMware? >> Right, so I think the first thing is that Intel has been very involved in terms of being in, um, any on-prem or public clouds, we get really involved there.
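The segment-and-contain model described in the micro-segmentation exchange above can be sketched as a default-deny policy table: traffic passes only if its segment pair and port are explicitly allowed, so a compromised host can't reach everything on a flat network. The segment names, ports, and rules below are invented for illustration and are not NSX's (or anyone's) actual policy format.

```python
# Toy micro-segmentation policy: default-deny with an explicit allow list.
ALLOWED_FLOWS = {
    ("web", "app"):      {443},        # web tier may call the app tier over TLS
    ("app", "database"): {5432},       # app tier may reach Postgres
    ("ops", "web"):      {22, 443},    # ops jump hosts may SSH to the web tier
}

def is_allowed(src_segment, dst_segment, port):
    """A flow passes only if its (src, dst) pair and port match a rule."""
    return port in ALLOWED_FLOWS.get((src_segment, dst_segment), set())

print(is_allowed("web", "app", 443))        # True  - whitelisted flow
print(is_allowed("web", "database", 5432))  # False - web can't skip the app tier
print(is_allowed("app", "database", 22))    # False - right pair, wrong port
```

The containment property Bob describes falls out of the default-deny lookup: an exploit on a web host still can't open a path to the database segment, because no rule for that pair exists.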
What we're really trying to do, and what my team does, is really focus on the networking aspects. And so, for us, it's not only making sure that if you're running something on-prem you get the best experience on-prem, but also the consistency of having a lot of the key instruction sets in any cloud, and being able to manage that holistically, especially when you're looking at a hybrid cloud environment where you're basically trying to communicate between one cloud, which could be on-prem, and another cloud that might be somewhere else. Having a consistent way of managing through encrypted tunnels, and making sure you're getting the kind of performance that you need to address that, I think these are the kinds of things that we really focus on. And for us, it's not only really bringing this out but also improving our instruction set architectures. Most recently, we launched our second-generation Xeon Scalable processors, which came out in April, and for us that really takes it to the next level. We get some really interesting new instruction sets, things like AVX-512, and we also get more inference and analytics capabilities with things like DL Boost that really bring things together, so you can be more effective and efficient in terms of how you look at your workloads and what you need to do with them, making sure they're secure but also giving you the insights that you need to make the kind of decisions you want from an enterprise perspective. >> Stu, it always amuses me how much Intel is involved in all of this cloud stuff, when we're told we don't care about hardware anymore, it's all terribly abstracted. >> Come on, Justin, there is no cloud. It's just someone else's computer, and there's a reasonable chance there's an Intel component or two inside, right?
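To make the inference acceleration mentioned above concrete: DL Boost (VNNI) speeds up neural-network inference by doing 8-bit integer multiply-accumulates in a single fused instruction. The plain-Python sketch below only illustrates the quantize-then-integer-dot-product idea behind int8 inference; it is not the instruction set itself, and the `quantize` scale factor is a simplifying assumption:

```python
def quantize(values, scale=127.0):
    # Map floats in [-1, 1] to int8, the representation int8 inference uses.
    return [max(-128, min(127, round(v * scale))) for v in values]

def int8_dot(a, b):
    # Integer multiply-accumulate into a wide accumulator: the core
    # operation that a VNNI instruction fuses into one step per vector.
    return sum(x * y for x, y in zip(a, b))

weights = quantize([0.5, -0.25, 0.75])
activations = quantize([0.2, 0.4, -0.1])
acc = int8_dot(weights, activations)  # 32-bit-style accumulator result
```

Beyond the fused instruction itself, doing the arithmetic in int8 also quarters the memory traffic relative to fp32, which is where much of the practical inference speedup comes from.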
>> It's Intel inside. And the fact that Intel comes out and is continuing to talk to customers, coming to these kinds of events, and showing that it's still relevant, and how the technology that it's creating ties into what's happening in cloud and in networking, I think is an amazing credit to Intel's ability to adapt. >> You know, it's definitely been very exciting. Not only have we really been focused on how we expand our processor franchise, really getting the key capabilities we need so that any time, anywhere you're doing any kind of compute, we're doing the best for our customers possible; but in addition to that, we've been able to round out our platform capabilities from a solution perspective, to really bring out not only what has historically been a very strong franchise with what we call our foundational NICs, or network interface cards, but also to expand that to bring better capabilities no matter what you're trying to do. So let's say, for example, you are a customer that wants to do something unique, and you want to accelerate your own specific networking functions or virtual switches. Well, we have the ability to do that, and with the Intel FPGA N3000 card, as an example, you get that capability to expand what you would traditionally do from a platform-level perspective. >> I want to talk about the edge, but before we go there, there's a topic that's a hot conversation here, and one I've been talking to Intel about for a lot of years: containerization in general, and Kubernetes more specifically. Where does that fit into your group?
I mention it just because the last time an Intel Developer Forum happened, a friend of mine who works for Intel gave a presentation just talking about how much was going on in that space, and I made a comment back then, a few years ago: we just spent over a decade fixing all the networking and storage issues with virtualization; aren't we going to have to do that again with containerization? And of course, we know we are having to solve some of those things again. >> So, you know, for us, as you guys probably know, Intel has been really involved, and one of the biggest things that is sometimes kept as a secret is that we're probably one of the bigger employers of software engineers. So Intel was really, really involved; we have a lot of people that started off with open source and Linux and being involved there, and of course containers are sort of the evolution of that. For us, being involved and making sure that we can bring the capabilities that are needed from our instruction set architectures to do containers and Kubernetes efficiently and effectively is definitely key to what we want to get done. >> All right, so that was the setup I wanted for edge computing, because there are a lot of different architectures we're going to be dealing with when we get to the edge. We're starting to see a little bit of that at this show, but it's an overall piece of that multi-cloud architecture that we're starting to build out. Where's your play? >> Well, for us, the way that we look at it is that we think it all starts, obviously, with the network. When you are really trying to do things, oftentimes the edge is the closest to where that data is being realized. And so, for us, it's making sure that we have the right kind of platform-level capabilities that can take this data.
And then you have to do something with this data, so there's a compute aspect to it, and then you have to be able to ship it somewhere else, right? It's basically going to go to another cloud, or it might go to another micro data center somewhere else. And so, for us, what really sets the foundation is having a scalable platform, sort of this thick-to-thin kind of concept that says, depending on what you're trying to do, you have something that can be molded to that. So for us, having a scalable platform that can go from our biggest Xeons down to an Atom processor is really important. And then what we've also been doing is working with the ecosystem to make sure that the network functions and software-defined WAN set a foundation for how you want to live in this multi-cloud world. But starting at the edge, you want to make sure that it is really effective and efficient; we can basically provide this in a very efficient way, because there are some areas where it's going to be very price sensitive. So we think we have this awesome capability here with our Atom processors. In fact, yesterday was really interesting: we had Tom Burns and Tom Gillis basically get on stage and talk about how Dell and VMware are collaborating on this. This basically revolves around platforms based on the Atom processor, and that can scale up to our Xeon D processors and above, so it depends on what you're trying to do, and we've been working with our partners to make sure that these functions start off network-optimized, and you can do as much or as little compute as you want on that edge. >> Of the customers who are starting to use edge, because it's kind of new, but it's also kind of not. It's been around for a while; we just used to call it other things, like ROBO. For the customers who are using edge at the moment,
What's the most surprising thing that you've seen them do with your technology? >> You know, what is interesting is that we sometimes get surprised by this ourselves. Some customers say, well, we really need low cost, because all we really care about is the low end: we want to be able to deploy this into a cafe, and we don't think you're going to be at the price point. They automatically think that all Intel does is big Xeons, and we do a great job with those. But what is really interesting is that with the Atom processors we get to these very interesting solutions that are cost effective and yet give you the scalability of what you might want to do. So, for example, we've seen customers that say, yeah, we want to start off with this, and networking is it. But you know what? We have this plan, and maybe it's a 90-day plan, or it could be up to a two-year plan, in terms of how they want to bring more capabilities to that branch. They want to be able to do more: compute more, make more decisions, give their customers at that location a much better experience. So we think we have a really good position here with our platforms, giving you this mix-and-match capability and the ability to easily scale up and do what our customers want. >> Great. Bob, when I think about this space in general, we haven't talked about 5G yet, and 5G and Wi-Fi 6 are expected to have a significant impact on networking. We've talked a little bit about edge; it's going to play in that environment. What do you hear from customers? How much is that involved in the activities you're working through? >> You know, it's definitely really interesting. 5G is definitely getting a lot of hype, and we're very, very involved.
We've been working on this for a while; Intel is at the forefront of enabling 5G, especially as it relates to network infrastructure, which is one of the key focus areas for us. The way that we look at this on the edge is that a lot of enterprises are going to be leading, especially for cases where latency is really important: you want to be able to make decisions really rather quickly, and you want to be able to process the data right there. 5G is going to be one of these interesting technologies, and we're already starting to see it enable these newer use cases, so we're definitely really excited about that. We're already starting to see the in-stadium experience being enabled by 5G and by what we're doing on the edge. Those are the kinds of experiences we really get excited about when we're part of them, and where we're really able to provide this model of enabling new usage models. So for us, the connectivity aspect of 5G is important. Of course, we're going to see a lot of workloads use 4G as basically the predominant option for a while, and, of course, the standard wired connectivity of IP MPLS and other things. >> I want to give you the final word. Obviously, Intel and VMware have a long partnership; as we know, current CEO Pat Gelsinger spent a good part of the early part of his career at Intel. Give us the takeaway for Intel and VMware from VMworld 2019. >> You know, we've had a long partnership here between Intel and VMware, and we definitely value the partnership. For us, it started off with virtualized servers a while back; now we've been working on networking, and the partnership has been incredible. We continue to be able to work together. Of course, we continue to see challenges as we go into hybrid cloud and multi-cloud, and we are very excited about how we can take this to the next level.
And, you know, we're very happy to be great partners with them. >> All right. Well, Bob Ghaffari, thank you for giving us the Intel networking update. We go up the stack, down the stack, multi-cloud, out to the edge, IoT, and all the applications. For Justin Warren, I'm Stu Miniman. We'll be back for our continuing coverage of VMworld 2019. Thanks for watching theCUBE.
Rose Schooler, Intel | Cisco Live US 2019
>> Announcer: Live from San Diego, California, it's theCUBE, covering Cisco Live US 2019. Brought to you by Cisco and its ecosystem partners. >> Welcome back to theCUBE's coverage, day two of Cisco Live from San Diego. I'm Lisa Martin with Stu Miniman, and Stu and I are pleased to welcome to theCUBE Rose Schooler, Corporate Vice President of data center sales at Intel. Rose, welcome to theCUBE. >> It's great to meet you guys. Pleasure to meet you. >> Great to have you here too. Thank you for joining us in the very popular, buzzy DevNet zone. >> It's crazy. The vibe here has been really, really good the last couple of days. >> It's been amazing, amazing. So you've been at Intel for quite a long time. >> 30 years. >> It's also 30 years since Cisco started doing a customer and partner event; a lot has happened in 30 years. I'd love to understand your role at Intel, and also, as we're here talking about all of these waves of innovation, from 5G and Wi-Fi 6 to GPUs everywhere to mobile and edge, what are some of the things that you're hearing from customers when it comes to modernizing their data centers? >> That's a great question. In terms of modernization of infrastructure, there are some really interesting trends occurring, and I think the one that's getting a lot of buzz is really edge computing. What we're finding is that, depending on the use case, it can be an enterprise application, where you're trying to get localization of your data; it could be an IoT application, where it's really critical for latency or bandwidth to keep compute and data close to the thing, if you will; or it could be mobile edge computing, where you want to do something like analytics and AI on a video stream before you tax the bandwidth of the cellular infrastructure with that data stream. So across the board, I think edge is super exciting, and you can't talk about edge without, like I said, talking about artificial intelligence.
Another big trend is AI itself: whether it's running native or running with an accelerator or an FPGA, I think we're seeing a myriad of use cases in that space. >> Yeah, I'd love to dig in a little bit to the edge piece if we could, because you're right, it's super exciting there. There have got to be some different requirements for what's happening at the edge as opposed to what's happening at the hyperscalers or the service providers or the traditional enterprise. We've always seen over the years that Intel will make things into the chip; they'll do their magic to make sure it's there. What are the requirements you're hearing from customers that are helping drive the next innovations that will come from Intel at the edge? >> So, yeah, let me break it into a couple of domains. We'll talk first about technology, and then we'll talk about the go-to-market. Is that okay? >> Perfect. >> From a technology perspective, you've got to look at the environment in which you're deploying the edge. If you're in the enterprise, your IO is pretty traditional: you're going to use ethernet and that sort of thing. You move over into the cellular space, mobile edge computing, and you have different backhaul technologies that are leveraged. But what gets super interesting is when you go to IoT, because in IoT there's a tremendous amount of fragmentation. The IO fragmentation and the protocol fragmentation are pretty pervasive, so one of the big challenges is how you standardize and really open up the development of those protocols to enable what I like to call frictionless data. You don't want any impediments to the data getting to the compute, where you can run your analytics and really start to take data and transition it to business intelligence.
So, fragmentation is really an interesting problem statement from a technology perspective, and there are some interesting concepts around how you use 5G, because of the variability in the bandwidth and the different lanes, protocols, and SLAs that exist in that multilane highway, if you will, of 5G. I think it presents some interesting technology options to master that fragmentation. From a go-to-market perspective, you can see a number of situations that we would call snowflake implementations: you build them once, and you sell them once. We had people that were very excited about the number of proofs of concept they were doing: hey, we just implemented and deployed our 150th or 200th proof of concept. And then when you'd get them at dinner at night, I'd ask, how much money are you making? Yeah, it's not really revenue generating. So we found that by really pulling together solutioning with our ecosystem partners, and Cisco is a critical partner in this space, we've been able to create solutions that we co-market and co-sell that allow for scale. So we put a program in place, and we're starting to see some really interesting results from that perspective. >> So let's talk about 5G. One of the things that Chuck Robbins was talking about yesterday was this expansion in 5G: lots of opportunities, but you're saying that brings challenges to customer environments. How are Intel and Cisco going to help customers start to unlock the value of 5G once that expansion becomes a reality? >> Yeah, I think that's a great question, and we have to look at how we align around standards, and how we align around really creating the initial use cases. I think that's everybody's big question, right? The $64,000 question with 5G is: what are our initial use cases beyond LTE offload? Okay, so we have our use case around LTE offload, but one of the areas where we see a lot of excitement, and I mentioned it earlier, is video.
So how can we partner with Cisco? Look at what that implementation looks like, not only on a streetlamp but also in a digital sign. It might be an interesting Trojan horse, if you will, for a small cell, and what does that back-end implementation look like? And once we understand either the video use case or the public safety and security use case, how do we then create those solutions to really bring the technology to market? I think that's an excellent opportunity for Intel and Cisco to work on together. >> All right, Rose, let's bring it into your core, the data center group. >> Sure, let's talk about data center. >> The DevNet zone's been really buzzing the whole time, but I tell you, the takeover this morning on ACI, really that core networking group, is overwhelming me. I practically had people on our set here, because they were all coming to get inside. So what's the latest? Any new announcements in the last couple of months, or updates for-- >> So, like I said, I've been at Intel 30 years, and I've been to a lot of launch events, but at the beginning of April we had our first data-centric launch. We had seven different product launches. It ranged from the edge to the network to cloud and multi-cloud, and covered compute to storage and memory to connectivity like ethernet. It was the most pervasive, exciting launch we've done in at least my 30 years, especially on the data center side. And like I said, we were able to show the platform ingredients, and the centerpiece of that was obviously the second generation of the Intel Xeon Scalable architecture. That is the cornerstone, and supplementing it with how we process data on that platform, how we move data with some of our new ethernet technologies, and how we store data with our Intel Optane DC persistent memory made it really an end-to-end, exciting launch event. We had 95 world records that we broke. Yeah, it was really an exciting day, very exciting day.
And again: move, process, store. Talking about it from a value proposition and customer perspective, how do you really propel insights? Simply said, how do you use workload-specific capabilities to accelerate business intelligence? We had announcements around instruction sets, specifically in the CPU, for artificial intelligence, with, as an example, a 14X improvement over a standard CPU by using that instruction set. We had how you create business resiliency with the hardware security features that we're integrating, and really how you accelerate new services with some technologies around hardware and software and the orchestration of the two. So some really great, great announcements; it was fun. >> To say it was meaty sounds like a massive understatement, but tell us a little bit about how customers are reacting. Because one of the stats Chuck Robbins also talked about yesterday was that organizations are getting value out of less than 1% of their data, so this data-centric launch was super meaty. How is it going to position Intel and your partner ecosystem to help your customers start turning that dial up, so that they're actually getting more value out of that data they know is gold, but is dark? >> Yeah, I think that's an excellent question, and you can look at IoT use cases like we talked about. In an IoT use case, it's end to end: you have to compute at the edge, in the network, and then in the multi-cloud, and you have to move the data, whether it be over wireless or wireline technology. So I think that's a pretty standard answer and approach to that question. But we also look at trends that we're seeing in the market, and we see things like how pervasive video content is really becoming, and the data challenges that that creates.
And you look at technologies like Intel Optane DC persistent memory, and you have a new technology that solves a lot of problems around capacity, capacity at an affordable price point. You have a technology that solves problems around persistence: how do I minimize my downtime from minutes to seconds, and what does that mean in terms of value? So we can sit here, and Intel is really good about talking about features and new instruction sets, but at the end of the day it comes down to: how do I make more money, how do I save more money, and how do I offer the new services that my customers are asking for on a real-time basis? And I think the portfolio of that launch really solidified that value proposition. >> Rose, I was out living on the vendor side when Cisco launched UCS. There were many in the marketplace that said, why is Cisco getting into this business? Compute's kind of played out; they're all pretty much the same; Intel's dominant in that position; just choose your favorite flavor and it's there. We look at where we are today, and there's a lot going on in the compute space, with Cisco and the rest of your partners and the like. It's going to be a pretty exciting time to be working in this, and you've seen some of those ebbs and flows over the years. >> I have, and I think I should start by wishing them a happy 10-year anniversary; I think it's the 10-year anniversary of UCS. But what's fascinating is the evolution that compute has gone through. I think when we started down the journey together with Cisco on UCS, it was: I'm going to make servers, and that's kind of the playing field for the technology. But when you think about the evolution of storage and software-defined storage, and you think about the evolution of networking and the transition to network function virtualization, you now have a common platform that can host a myriad of different applications.
So to answer your question, the evolution went from "I'm going to build a server," and I think their initial position was really strong in blades, which leveraged a lot of their existing customer relationships, to now "I have a compute platform for server, storage, and network; I have the ability to play in multiple spaces and leverage my channel relationships." So I think the transformation has been amazing, and then when you think about how they take those assets and leverage them and bring them to the edge, that's evolving into nothing but opportunity for both Intel and Cisco. >> So here we are, in the middle of Cisco's fourth quarter of fiscal '19. They had a very strong FY18, and the Q3 results that they released just last month were really good: strong growth across infrastructure platforms, applications, security. As we look at this continuing partnership between Cisco and Intel, what are you excited about, with all the momentum that you clearly talked about, going into Cisco's fiscal year 2020? >> Well, I think we both have an awesome opportunity. We have just launched, as I said, the Intel second-generation Xeon Scalable processors. We are very well aligned with Cisco on that technology, so what do we need to do? We have that; we have Optane data center persistent memory. So we have a great core set of ingredients, not to mention ethernet and SSDs, and our job in their fiscal year 2020 is to really drive adoption: work together on co-marketing, work together on co-selling. And since we have such a collective, strong value proposition, I think there's a tremendous opportunity in refresh, new customer accounts, et cetera, to really drive mutual growth, which is the foundation of the best types of relationships, when we're growing together. >> Awesome. Well, Rose, it's been a pleasure to have you on the program this afternoon. >> Thank you so much.
Your energy and your excitement after being at Intel for 30 years is remarkable! >> (laughs) Still there, it's a great company. >> And it's contagious, so thank you for sharing that with us today. >> Thank you so much for the time. >> Our pleasure. For Stu Miniman, I'm Lisa Martin. You're watching theCUBE live from Cisco Live, San Diego. Thanks for watching. (upbeat techno music)
Kaustubh Das, Cisco & Laura Crone, Intel | Cisco Live US 2019
>> Announcer: Live from San Diego, California, it's theCUBE, covering Cisco Live US 2019. Brought to you by Cisco and its ecosystem partners. >> Welcome back. It's theCUBE here at Cisco Live, San Diego 2019. I'm Stu Miniman, and my co-host is Dave Vellante. First, I want to welcome back Kaustubh Das, "KD," who is the Vice President of Product Management with Cisco Compute. We talked with him a lot about HyperFlex Anywhere in Barcelona. And I want to welcome to the program first-time guest Laura Crone, who's a Vice President of sales and marketing in NSG at Intel. Laura, thanks so much for joining us. All right, so since KD has been on our program, let's start with you. You know, we've watched Cisco UCS and that compute since it rolled out about a decade ago, and, you know, Intel is always up on stage with Cisco talking about the latest enhancements. Everywhere I go this year, people are talking about Optane and how technologies like NVMe are baking into the environment, with storage-class memories coming there. So let's start with kind of the Intel view. What's happening in your world and your activities at Cisco Live? >> Great. So I'm glad to hear you've heard a lot about Optane, because I have some marketing in my organization. So Optane is the first new memory architecture in over 25 years, and it is different than NAND, right? You can write data to the silicon, and it programs faster and has greater endurance. So when you think of Optane, it's fast like DRAM, but it's persistent, like 3D NAND. And it has some industry-leading combinations of capabilities, such as high throughput, high endurance, high quality of service, and low latency. And for a storage device, what could be better than having fast performance and high consistency? >> Laura, as you say, it's been 25 years since this move.
You know, I remember when I started working with Dave, it was, you know, how do we get out of the horrible SCSI stack we had lived on for decades. And finally, now it feels like we're coming through the clearing, and there's just going to be wave after wave of new technologies that are freed to get us high performance, low latency, and the like. >> Yeah. And I think the other big part of that, which is part of Cisco's HyperFlex All-NVMe, is the NVMe standard. So, you know, we've lived in a world of legacy SATA controllers, which created a lot of bottlenecks in the performance. Now that the industry is moving to NVMe, that opens it up even more. And so, as we were developing Optane, we knew we had to go move the industry to a new protocol; otherwise, that pairing was not going to be very successful. >> All right, so KD, all-NVMe, tell us more. >> So we come here and we talk about all the cool innovations we do within the company, and then sometimes we come here and talk about all the cool innovation we do with our partners, our technology partners, Intel being a fantastic technology partner; obviously, being in the server business, you've got to partner with Intel. And we've really gone at it across the walls of the two organizations to bring this to life, right? So Cisco HCI, HyperFlex, is one of the products we've talked about in the past. HyperFlex All-NVMe uses Intel's Optane technology, as well as Intel's 3D NAND all-NVMe devices, to power really the fastest workloads that customers want to put on this device. So you talked about 3D NAND NVMe: pricing is getting to a point where it becomes that much more accessible to use these for powering databases, for workloads that require those latency characteristics and require those IOPS. That's what we've enabled with Cisco HyperFlex, collaborating with Intel's NVMe portfolio.
>> Remember when I started in the business, somebody was sharing with me, to educate me, the pyramid. Think of the pyramid as a storage hierarchy. And at the top of it was actually an Intel solid-state device, which back then was volatile, right? So you had to put, you know, backup power supplies on it. But at any rate, with all this memory architecture coming, and flash storage, people have been saying, well, it's going to flatten that pyramid. But now, with Optane, you're seeing the reemergence of the tiers of that pyramid. So help us understand sort of where it fits, from a supplier standpoint, an OEM, and the ultimate customer. Because if I understand it, Optane is faster than NAND but it's going to be more expensive, and it's slower than DRAM but it's cheaper, right? So where does it fit? What are the use cases? Where does it fit in that hierarchy? >> Yeah. So if you think about the hierarchy, at the very top is DRAM, which is going to be your fastest, lowest-latency product. But right below that is Optane persistent memory, the DIMMs, and you get greater density, because that's one of the challenges with DRAM: it's not dense enough, nor affordable enough, right? And so that creates a new tier in the storage hierarchy. Go below that and you have Optane SSDs, which bring even more density; we go up to 1.5 terabytes in an Optane SSD, and you now get performance for your storage and memory expansion. Then you have 3D NAND, and even below that you have 3D NAND QLC, which gives you cost-effective, high-density capacity. And then below that is the old-fashioned hard disk drive, and then magnetic tape. You start inserting all these tiers, and that gives architects, in both hardware and software, an opportunity to rethink how they want to do storage.
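The tiering Laura walks through can be summarized in a small sketch. The latency figures below are rough, commonly cited orders of magnitude for each class of device, not Intel specifications, and the tier list itself is just an illustration of the pyramid she describes:

```python
# Illustrative storage/memory pyramid: fastest, most expensive tier at the
# top; densest, cheapest at the bottom. Latency figures are rough orders of
# magnitude commonly cited for each device class, NOT vendor specifications.
TIERS = [
    ("DRAM", "~100 ns"),
    ("Optane persistent memory (DIMM)", "~350 ns"),
    ("Optane SSD", "~10 us"),
    ("3D NAND SSD", "~100 us"),
    ("3D NAND QLC SSD", "~100 us, cheaper per GB"),
    ("Hard disk drive", "~10 ms"),
    ("Magnetic tape", "seconds"),
]

def hierarchy_lines(tiers):
    """Render the pyramid top-down, one indented line per tier."""
    return [" " * depth + f"{name} ({latency})"
            for depth, (name, latency) in enumerate(tiers)]

for line in hierarchy_lines(TIERS):
    print(line)
```

The point of the exercise is simply that each new tier (Optane DIMMs, Optane SSDs, QLC) slots between two existing ones, giving architects another cost/latency trade-off to place data against.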
>> So the demand for this granularity is obviously coming from your buyers, your direct buyers and your customers. So what does it do for you, and specifically your customers? >> Yeah. So the name of the game is performance, and the ability, in a landscape where things are not very predictable, to support anything that your end customers may throw at you if you're an IT department. That may mean an internal data scientist team, or a traditional architect of a traditional application. Now, what Intel and Cisco can do together is truly unique, because we control all parts of the stack: everything from the server itself, to the storage devices, to the distributed file system that sits on top of it. So, for example, in HyperFlex we're using Optane as a caching tier, and because we write the distributed file system, we can strike a balance between what we put in the caching tier and how we move data out to the non-caching 3D NAND tier. As Intel came out with their latest processors that support storage-class memory, we support that. Now we can engineer this whole system end to end, so that we can deliver to customers the innovation Intel is bringing to the table in a way that's consumable by them. One more thing I'll throw out there: technology is great, but it needs to be resilient, because IT departments will occasionally yank out the wrong wire, or yank out the wrong drive. One of the things we worked on together with Intel is, how do we productize this? How do we deal with reliability, availability, serviceability? How do we protect against accidental removal or accidental insertion? Some of those co-innovations have led to getting out in the market a HyperFlex system that uses these technologies in a way that's really usable by teams at our customers. >> I'd love to double click on that in the context of NVMe.
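The caching design KD describes above, a small fast Optane tier absorbing writes in front of a larger 3D NAND capacity tier, follows a common write-tiering pattern. The sketch below is a toy illustration of that pattern only, not Cisco's distributed file system; the class name, sizes, and destage policy are invented for the example:

```python
class TieredStore:
    """Toy write cache: hot writes land in a small fast tier and are
    destaged to a large capacity tier when the fast tier fills.
    Illustrative only; real systems add replication, logging, dedupe, etc."""

    def __init__(self, cache_limit=4):
        self.cache = {}       # fast tier (e.g. Optane)
        self.capacity = {}    # capacity tier (e.g. 3D NAND)
        self.cache_limit = cache_limit

    def write(self, key, value):
        self.cache[key] = value
        if len(self.cache) > self.cache_limit:
            # Destage the oldest cached entry to the capacity tier.
            # (dicts preserve insertion order, so the first key is oldest
            # for this toy, which only ever inserts new keys.)
            oldest = next(iter(self.cache))
            self.capacity[oldest] = self.cache.pop(oldest)

    def read(self, key):
        # Serve from the fast tier when possible, else fall through.
        if key in self.cache:
            return self.cache[key]
        return self.capacity[key]
```

The balance KD mentions, what stays in the caching tier versus what gets moved out, is exactly the destage policy; owning the file system means Cisco can tune that policy against the actual media characteristics.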
What you guys were talking about: you mentioned the horrible storage stack, I think he called it the horrible SCSI stack. And Laura, you were talking about, you know, the cheap-and-deep now is the spinning disk. So my understanding is that you've got a lot of overhead in the traditional SCSI protocol, but nobody ever noticed, because you had this mechanical device. Now, with flash storage, it all becomes exposed, and NVMe allows, it's like a bat phone, right? Okay, so correct me where I got that wrong, but maybe you could give us the perspective: why is NVMe important from your standpoint, and how are you guys using it? >> Yeah, I think NVMe is just a much faster protocol, and you're absolutely right. We have a graph that we show of the old world and how much overhead there is, all the way down to when you have Optane in a DIMM solution with no overhead. The Optane SSD still has a tiny bit, but there's a graph that shows all of that latency is removed when you deploy Optane. So NVMe gives you much greater bandwidth, right? The CPU is not bottlenecked, and you get greater CPU efficiency when you have a faster interface like NVMe. >> And HyperFlex is taking advantage of this how? >> Yeah, let me give you a couple of examples. So on performance, the first thing that comes to mind is databases. For those kinds of workloads, this system gets about 25% better performance. The next thing that comes to mind is that people really don't know what they're going to put on the system. Sometimes they put databases, sometimes mixed workloads. So when we look at mixed workloads, we get about 65% or so better IOPS, and we get 37% better latencies. So even in a mixed environment, where you may have databases, you may have a web tier, you may have other things, this thing is definitely resilient enough to handle the workload. It just opens up the spectrum of use cases.
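Laura's point about protocol overhead can be made concrete with a toy latency model: the host-stack overhead was a rounding error when the media was a millisecond-class spinning disk, and becomes a dominant share once the media is microsecond-class flash. All numbers below are illustrative orders of magnitude, not measurements of any real stack:

```python
# Toy model: end-to-end latency = media access time + host stack overhead.
# Overhead figures are illustrative, NOT measurements of SCSI or NVMe.

def total_latency_us(media_us, stack_us):
    """End-to-end I/O latency in microseconds."""
    return media_us + stack_us

def overhead_share(media_us, stack_us):
    """Fraction of end-to-end latency spent in the host stack."""
    return stack_us / total_latency_us(media_us, stack_us)

# Legacy stack (~25 us assumed) over a ~10 ms spinning disk:
# the overhead is invisible.
hdd_legacy = overhead_share(media_us=10_000, stack_us=25)    # ~0.2%

# Same legacy stack over ~100 us flash: the overhead is suddenly a large
# share of every I/O, which is what motivates a leaner protocol like NVMe.
flash_legacy = overhead_share(media_us=100, stack_us=25)     # ~20%
flash_nvme = overhead_share(media_us=100, stack_us=5)        # leaner path

print(f"HDD+legacy: {hdd_legacy:.1%}, flash+legacy: {flash_legacy:.1%}, "
      f"flash+NVMe: {flash_nvme:.1%}")
```

The same logic explains why Optane DIMMs push even NVMe's remaining overhead into view: the faster the media, the more any fixed software cost matters.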
D ram has consumer applications, as does Flash Anand was obtained. Have similar consumer applications can achieve that volume so that the prices, you can come down, not free, but continue to sort of drive the curves. >> Eso When we look at the overall tam, we see the tam growing out over time. I don't know exactly when it crosses. Over the volume are the bits of the ram, but we absolutely see it growing over time. And as a technology ramps, it'll have a you know, it costs ramping curves. Well, >> it'll follow that curve. Okay, good. >> Yeah, Just Katie. Give us a little bit. Broad view of hyper flex here. Att? The show, people, you know, play any labs with the brand new obtained pieces or what? What other highlights that you and the team have this week? >> Yeah, absolutely. So in in Barcelona, we talked about high, perfect for all that is live today. So in the show floor, people can look at the hyper flex at the edge combined with S t one. How do you control How did deploy thousands of edge locations from a centralized location to the part of the inner side which cloud based management too? So that whole experience is unable. Now, at the other end of the spectrum is how do we drive even more performance. So we were always, always the performance leader. Now we're comparing ourselves to ourselves to behavior 35% better than our previous all flash. With the innovation Intel is bringing to the table, some of the other pieces are actually use cases. So there's a big hospital chain where my kids go toe goto, get treated and look and see the doctor. There are lots of medical use cases which require epic the medical software company to power it, whether it is the end terminals or it is the back and database. So that epic hyperspace and happy cachet those have been out be invalidated on hyper flex, using the technology that we just talked about around update on doll in via me that can get me there is that much more power. 
That means that when my doctor and the nurse pull up the records, not only do the records show up fast, but all the medical records, all of those other high-performance-seeking applications, also run that much more streamlined. So I would encourage people to look at our solution. We've got a tremendous set of demos out there, so go up there and check us out. >> And there's a great white paper out on this, right? ESG? >> ESG is one of the companies that has been benchmarking HyperFlex. >> So elaborate. Where do they do a lab report, or... >> What they do is they benchmark different hyperconverged infrastructure vendors. So they did this the first time around, and they said, well, you can pack that much more VMs on a HyperFlex with rotating drives. And then they did it again, and they said, well, now that you've got all-flash, you've got the performance and the latency leadership. And then they did it again, and they said, well, hang on, you've kind of left the competition behind; that's not going to make a pretty chart when we compare your All-NVMe against the others. When you get that good, you compare against yourselves. We've been the performance leader, and ESG has been doing the benchmarking. >> The next generation added Optane, and this is with a database workload. Okay, so now you're bringing Optane into the latest report. >> It measures Optane against our all-flash, and then it also measures across vendors. >> So where can I get this? Is it at some partner website, or... >> All of this is off the Cisco HyperFlex website on cisco.com. But ESG is there for the companies that want to go directly to them about getting more. >> I guess the final question for you is, you know, I think back to the early days of UCS. It was the memory enhancements they had that allowed the densest virtualization in the industry back when it started.
It sounds like we're just taking that to the next level with this next generation of solutions. What else would you call out about the relationship between Cisco and Intel? >> So, Intel and Cisco have worked together for years, right, on innovation around the CPU and the platform, and it's super exciting to be expanding our relationship to storage. And I'm even more excited that the Cisco HyperFlex solution is endorsing Intel Optane and 3D NAND, and we're seeing great examples of real workloads where our end customers can benefit from this technology. >> KD, Laura, thanks so much for the update. Congratulations on the progress that you've made so far. For Dave Vellante, I'm Stu Miniman, and we'll be back with more coverage here. It's Cisco Live 2019 in San Diego. Thanks for watching theCUBE. (theme music)
SUMMARY :
Live from San Diego, California, it's theCUBE covering So you know, So when you think of Optane it's fast like DRAM But it's You know, I remember when I when I started working with Dave, it was, you know, how do we get out of you So, you know, we've lived in a world of legacy So Cisco HCI HyperFlex is one of the products So you talked about 3D NAND NVMe. So you had to put, you know, backup power supplies on it. Persistent memory, the DIMMs and you get greater density So what does it do for you and specifically your customers? One of the things that we work And Laura, you were talking about the You know, of that latency is removed when you deploy Optane so NVMe gives and HyperFlex is taking advantage of this how. So on performance, the first thing that comes to mind is databases. prices, you can come down, not free, but continue to sort of drive the curves. are the bits of DRAM, but we absolutely see it growing over time. it'll follow that curve. What other highlights that you and the team have this week? So on the show floor, people can look at HyperFlex at the edge ESG is one of the companies that has been benchmarking And then they did it again and they said, well, now that you've got all-flash, you've got the performance and the The next generation added Optane, and this is with a database workload. So But ESG is there for the companies that want to go directly to What else would you call out about? And I'm even more excited that the Cisco HyperFlex solution Congratulations on the progress that you've made so far for
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Laura Crone | PERSON | 0.99+ |
Laura | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Katie | PERSON | 0.99+ |
Miami | LOCATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Dave | PERSON | 0.99+ |
Katie Laura | PERSON | 0.99+ |
David Dante | PERSON | 0.99+ |
San Diego | LOCATION | 0.99+ |
Vienna | LOCATION | 0.99+ |
37% | QUANTITY | 0.99+ |
Kaustubh Das | PERSON | 0.99+ |
First | QUANTITY | 0.99+ |
San Diego, California | LOCATION | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Eso | ORGANIZATION | 0.99+ |
intel | ORGANIZATION | 0.99+ |
25 years | QUANTITY | 0.99+ |
first | QUANTITY | 0.99+ |
Hyper Flex | COMMERCIAL_ITEM | 0.99+ |
over 25 years | QUANTITY | 0.98+ |
one | QUANTITY | 0.98+ |
both | QUANTITY | 0.98+ |
about 25% | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Leighton | ORGANIZATION | 0.98+ |
first time | QUANTITY | 0.98+ |
Teo | PERSON | 0.97+ |
this week | DATE | 0.97+ |
about 65% | QUANTITY | 0.97+ |
Envy Emmy | PERSON | 0.97+ |
thousands | QUANTITY | 0.97+ |
1.5 terabyte | QUANTITY | 0.96+ |
three | QUANTITY | 0.96+ |
One | QUANTITY | 0.95+ |
35% | QUANTITY | 0.95+ |
Two minute | QUANTITY | 0.95+ |
Cisco Compute | ORGANIZATION | 0.94+ |
two organizations | QUANTITY | 0.94+ |
3 | QUANTITY | 0.92+ |
hyper flex | ORGANIZATION | 0.9+ |
decades | QUANTITY | 0.88+ |
90 year | QUANTITY | 0.88+ |
90 department | QUANTITY | 0.87+ |
this year | DATE | 0.87+ |
2019 | DATE | 0.87+ |
Mohr | PERSON | 0.87+ |
first thing | QUANTITY | 0.84+ |
Cisco UCS | ORGANIZATION | 0.84+ |
envy | PERSON | 0.83+ |
Cisco Live | EVENT | 0.83+ |
Nanda | ORGANIZATION | 0.81+ |
NAND | ORGANIZATION | 0.8+ |
octane | OTHER | 0.8+ |
envy | ORGANIZATION | 0.78+ |
a decade ago | DATE | 0.78+ |
hyper flex | COMMERCIAL_ITEM | 0.78+ |
NSG | ORGANIZATION | 0.74+ |
US | LOCATION | 0.72+ |
Flex | ORGANIZATION | 0.72+ |