SiliconANGLE Report: Reporter's Notebook with Adrian Cockcroft | AWS re:Invent 2022
(soft techno upbeat music) >> Hi there. Welcome back to Las Vegas. This is Dave Vellante with Paul Gillin. re:Invent day one and a half. We started last night, Monday, theCUBE after dark. Now we're going wall to wall. Today was of course the big keynote, Adam Selipsky, kind of the baton now handing, you know, last year when he did his keynote, he was very new. He was sort of still getting his feet wet and finding his groove. Settling in a little bit more this year, learning a lot more, getting deeper into the tech, but of course, sharing the love with other leaders like Peter DeSantis. Tomorrow's going to be Swami in the keynote. Adrian Cockcroft is here. Former AWS VP, former Netflix cloud architect, currently an analyst. You've got your own firm now. You're out there. Great to see you again. Thanks for coming on theCUBE. >> Yeah, thanks. >> We heard you at Supercloud, you gave some really good insights there back in August. So now as an outsider, you come in, obviously, you've got to be impressed with the size and the ecosystem and the energy, of course. What were your thoughts on, you know, what you've seen so far, today's keynotes, last night Peter DeSantis, what stood out to you? >> Yeah, I think it's great to be back at re:Invent again. We're pretty much back to where we were before the pandemic sort of shut it down. This is almost as big as the largest one that we had before. And everyone's turned up. It just feels like we're back. So that's really good to see. And it's a slightly different style. I think there was more sort of video production happening, more storytelling in this keynote. I'm not sure it all stitched together very well. Right. Some of the stories, like, how does that follow that? So there were a few things there, and there were spelling mistakes on the slides, you know, ELT instead of ETL, and they spelled ZFS wrong, and so on.
So it just seemed like, I'm not quite sure, maybe a few things were rushed at the last minute. >> Not really AWS-like, was it? It kind of reminds me of the Patriots, Paul, you know, Bill Belichick's teams fumbling all over the place. >> That's right. That's right. >> Part of it may be, I mean, the sort of the marketing. They have a leader in marketing right now, but they're going to have a CMO. So maybe it's the lack of a single-threaded leader for this thing. Everything's being shared around a bit more. So maybe, I mean, it's all fixable and it's minor. This is minor stuff. I'm just sort of looking at it and going, there are a few things that looked like they were not quite as good as they could have been in the way it was put together. Right? >> But I mean, you're taking, you know, a year of not doing re:Invent. Yeah. Being isolated. You know, we've certainly seen it with theCUBE. It's like, okay, it's not like riding a bike. You know, you've got to kind of relearn the muscle memory. It's more like golf than bicycle riding. >> Well, I've done AWS keynotes myself. And they are pretty much a scramble. It looks nice, but there's a lot of scrambling leading up to when it actually goes. Right? And sometimes you can see a little kind of the edges of that, and sometimes it's much more polished. But you know, overall it's pretty good. I think Peter DeSantis' keynote yesterday had a lot of really good meat there. There were some nice presentations and some great announcements. And today I was a little disappointed; I thought there could have been more. I think the way Andy Jassy did it, he crammed more announcements into his keynote, and Adam seems to be taking a bit more of a measured approach. There were a few things he picked up on, and I'm expecting more to be spread throughout the rest of the day. >> This was more poetic. Right?
He took the universe as the analogy for data, the ocean for security. Right? The Antarctic was sort of... >> Yeah. It looked pretty. >> Yeah. >> But I'm not sure we're really here to watch nature videos. >> As analysts and journalists, you're like, come on. >> Yeah. >> Give us the meat. >> That was kind of the thing, yeah. >> AWS — re:Invent — has always been a shock-and-awe approach. 100, 150 announcements. And really, that kind of pressure seems to be off them now. Their position at the top of the market seems to be unshakeable. There's no clear competition creeping up behind them. So how does that affect the messaging, do you think, that AWS brings to market when it doesn't really have to prove that it's a leader anymore? It can go after maybe more of the niche markets, or fix the stuff that's a little broken — more fine tuning than grandiose statements. >> I think so. AWS for a long time was so far out in front that they basically said, "We don't think about the competition, we just listen to the customers." And that was always the statement, and it works as long as you're always in the lead, right? Because you are introducing the new idea to the customer. Nobody else got there first. So that was the case. But in a few areas they aren't leading. Right? You could argue in machine learning. They're not necessarily leading in sustainability. They're not leading, and they don't want to talk about some of these areas, and-- >> Database. I mean, arguably. >> They're pretty strong there, but in the areas where you are behind, it's like, they know how to play offense. But when you're playing defense, it's a different game, and it's hard to be good at both. And I'm not sure that they're really used to following somebody into a market and making a success of that. So it's a little harder. Do you see what I mean? >> I've got to get your opinion on this.
So when I say database, David Floyer, two years ago, predicted AWS is going to have to converge somehow. They have no choice. And they sort of touched on that today, right? Eliminating ETL, that's one thing. But Aurora to Redshift. >> Yeah. >> You know, end to end. I'm not sure they're totally, fully end to end. >> That is an excellent piece of work, because there's a lot of work that it eliminates. There are clear pain points, but then you've got the competing thing, like MongoDB — the pitch that one database keeps it simple. >> Snowflake. >> Or you've got Snowflake. Maybe you've got all these 20 different things you're trying to integrate at AWS, but it's kind of like you have a bag of Lego bricks. It's my favorite analogy, right? You want a toy for Christmas, you want a toy Formula 1 racing car, since that seems to be the theme, right? >> Okay. >> Do you want the fully built model that you can play with right now? Or do you want the Lego version that you have to spend three days building? Right? And AWS is the Lego-technique thing. You have to spend some time building it, but once you've built it, you can evolve it, and those will still be good bricks years later. Whereas that prebuilt toy is probably broken and gathering dust, right? So there's something about having an evolvable architecture which is harder to get into, but more durable in the long term. And so AWS tends to play the long game in many ways. And that's one of the elements that they do well, but it makes it hard to consume for enterprise buyers that are used to getting it with a bow on top — here's the solution. You know? >> And Paul, that was always Andy Jassy's answer when we would ask him, you know, all these primitives, are you going to make it simpler? He'd say the primitives give us the advantage to turn on a dime in the marketplace. And that's true. >> Yeah.
So you're saying, you know, you take all these things together and you wrap it up, and you put a Snowflake on top, and now you've got a simple thing, or a Mongo, or MongoDB Atlas or whatever. So you've got these layered platforms now which are making it simpler to consume, but now you're, you know, all stuck in that ecosystem, so it's like, what layer of abstraction do you want to tie yourself to, right? >> Databricks is coming at it from more of an open source approach. But it's similar. >> We're seeing Amazon direct more into vertical markets. They spotlighted what Goldman Sachs is doing on their platform. They've got a variety of platforms that are supposedly custom built for vertical markets. How successful do you see that play being? Is this something that the customers, you think, are looking for — a fully integrated Amazon solution? >> I think so. If you look at, you know, MongoDB or DataStax, or the others like Elastic, they've got the specific solution with the people that really are developing the core technology, and there's an open source equivalent version that AWS is running. And usually maybe they've got a price advantage, or there's some data integration in there, or it's somehow easier to integrate, but it's not stopping those companies from growing. And what it's doing is endorsing that platform. So if you look at the collection of databases that have been around over the last few years, now you've got basically Elastic, Mongo, and Cassandra — you know, DataStax — being endorsed by the cloud vendors. These are winners. They're going to be around for a very long time. You can build yourself on that architecture. But what happened to Couchbase and, you know, a few of the other ones? They don't really fit. Like, how are you going to make it if you're now becoming an also-ran, because you didn't get cloned by the cloud vendor?
So the customers are going, is that a safe place to be, right? >> But don't they want to encourage those partners, though, in the name of building the marketplace ecosystem? >> Yeah. >> This is huge. >> Certainly the platform encourages people to do more. And there's always room around the edge. But the mainstream customers, the ones really spending the good money, are looking for something that's got a long-term life to it. Right? They're looking for a long commitment to that technology, and that it's going to be invested in and grow. And the fact that the cloud providers — particularly AWS — are adopting some of these technologies means there is a very long-term commitment. You can bet your future architecture on that for a decade, probably. >> So they have to pick winners. >> Yeah. So it's sort of picking winners. And then if you're the open source company that's now got AWS turning up, you have to leverage it and use that as a way to grow the market. And I think Mongo have done an excellent job of that. I mean, they're top-level sponsors of re:Invent, and they're out there messaging that and doing a good job of showing people how to layer on top of AWS and make it a win-win for both sides. >> So ever since we've been in the business, you hear the narrative: hardware's going to die. It's just, you know, commodity, and there's some truth to that. But hardware's actually driving good gross margins for the Ciscos of the world. Storage companies have always made good margins. Servers maybe not so much, 'cause Intel sucked all the margin out of it. But let's face it, AWS makes most of its money, we know, on compute — it's got 25-plus percent operating margins, depending on the seasonality there. What do you think happens long term to the infrastructure layer discussion? Okay, commodity cloud, you know, we talk about supercloud.
Do you think that AWS and the other cloud vendors' infrastructure — IaaS — gets commoditized and they have to go up market, or do you see that continuing? I mean, history would say there are still good margins in hardware. What are your thoughts on that? >> It's not commoditizing, it's becoming more specific. We've got all these accelerators and custom chips now, and this almost goes back — I mean, I was with Sun Microsystems 20, 30 years ago, and we developed our own chips, and HP developed their own chips, and SGI, MIPS, right? All the architectures were squabbling over who had the best processor chips, and it took years to get chips that worked. Now if you make a chip and it doesn't work immediately, you screwed up somewhere, right? The technology of building these immensely complicated, powerful chips has become commoditized. So the cost of building a custom chip is now getting to the point where your Apple laptop has got full custom chips, your phone, your iPhone, whatever, and you're getting Google making custom chips, and we've got Nvidia now getting into CPUs as well as GPUs. So we're seeing that the ability to build a custom chip is becoming something that everyone is leveraging. And the cost of doing that is coming down to the point where startups are doing it. So we're going to see much more innovation, I think. Intel and AMD, you know, they've got the compatibility legacy, but the most powerful, most interesting new things, I think, are going to be custom. And we're seeing that with Graviton3, in particular the 3E that was announced last night with, like, 30, 40%, whatever it was, more performance for HPC workloads. And, you know, the HPC market is going to have to deal with cloud. I mean, they are starting to — I was at Supercomputing a few weeks ago, and they are tiptoeing around the edge of cloud, but those supercomputers are water cooled. They are monsters.
I mean, you go around Supercomputing, there are plumbing vendors on the booths. >> Of course. Yeah. >> Right? And they're highly concentrated systems, and that's really the only difference — is it water cooled or air cooled? The rest of the technology stack is pretty much off-the-shelf stuff with a few software tweaks. >> Your point about, you know, the chips and what AWS is doing — the Annapurna acquisition. >> Yeah. >> They're on a dramatically different curve now. I think it comes down to, again, David Floyer's premise: it really comes down to volume. The Arm wafer volumes are 10x those of x86, and volume always wins in the economics of semis. >> That kind of got us there. But now there's also RISC-V coming along. In terms of licensing, that's becoming one of the bottlenecks — if the cost of building a chip is really low, then it comes down to licensing costs, and do you want to pay the Arm license? RISC-V is an open source instruction set which some people are starting to use for things. So your disk controller may have a RISC-V core in it, for example, nowadays — those kinds of things. So I think that's the dynamic that's playing out. There's a lot of innovation in hardware to come in the next few years. There's a thing called CXL — Compute Express Link — which is going to be really interesting. I think that's probably two years out before we start seeing it for real. But it lets you glue together an entire rack in a very flexible way. And that's the entire industry coming together around a single standard — the whole industry except for Amazon, in fact, just about. >> Well, but maybe — I think eventually they'll get there. Do they use system on a chip? CXL? >> I have no idea — I have no knowledge about whether they're going to do anything with CXL. >> I'm presuming; I'm not trying to tap anything confidential. It just makes sense that they would do a system on chip. It makes sense that they would do something like CXL.
Why not adopt the standard, if it's going to be at the same cost? >> Yeah. And so that was one of the things on the chip side. The other thing is the low-latency networking with the Elastic Fabric Adapter, EFA, and the extensions to it that were announced last night. They doubled the throughput, so you get twice the capacity on the Nitro chip. And then the other thing was — this is a bit technical — the scalable datagram protocol that they've got, which basically says, if I want to send a message, a packet, from one machine to another machine, instead of sending it over one wire, I send it over 16 wires in parallel. And I just flood the network with all the packets, and they can arrive in any order. This is why it isn't done normally — TCP is in-order, the packets come in the order they're supposed to — but this fully floods them around, with its own fast retry, and then they get reassembled at the other end. So they're not just using this for HPC workloads. They've turned it on for TCP, without any change to your application. If you are trying to move a large piece of data between two machines, and you're just pushing it down a single connection, it takes it from five gigabits per second to 25 gigabits per second. A 5x speedup, with a protocol tweak that's run by the Nitro. This is super interesting. >> Probably want to get at all that AI/ML stuff that's going on. >> Well, the AI/ML stuff is leveraging it underneath, but this is for everybody. Like, you're just copying data around, right? And you're limited — "Hey, this is going to get there five times faster," pushing a big enough chunk of data around. So this is turning on gradually as the Nitro v5 comes out, and you have to enable it at the instance level. But it's a super interesting announcement from last night. >> So the bottom-line bumper sticker on commoditization is what? >> I don't think so. I mean, what are the APIs?
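The multipath spraying idea Cockcroft describes can be made concrete with a toy sketch. This is only a mental model — the function names, the 16-path spray, and the shuffle standing in for unequal path latencies are illustrative assumptions, not AWS's actual SRD/Nitro implementation:

```python
import random

def spray(payload: bytes, chunk: int = 4, paths: int = 16):
    """Split a payload into sequence-numbered packets and 'send' them across
    many parallel paths. Differing per-path delays mean the receiver sees
    them out of order, simulated here by shuffling the arrival order."""
    packets = [(seq, payload[i:i + chunk])
               for seq, i in enumerate(range(0, len(payload), chunk))]
    arrivals = packets[:]
    random.shuffle(arrivals)  # stand-in for 16 paths with unequal latency
    return arrivals

def reassemble(arrivals):
    """Buffer out-of-order packets and restore the byte stream by sequence
    number -- the reassembly job done at the receiving end."""
    return b"".join(data for _, data in sorted(arrivals))

message = b"flood the fabric with packets in any order"
assert reassemble(spray(message)) == message
```

The point of the sketch is the design trade: because delivery order no longer matters, every available path can be used at once, and a lost packet only triggers a fast retry of that one chunk rather than stalling the whole in-order stream the way TCP does.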
You're Arm compatible, Intel x86 compatible, or maybe RISC-V compatible one day in the cloud. And those are the APIs, right? That's the commodity level. And the software ecosystem is now super portable across those, as we're seeing with Apple moving off Intel — it's really not an issue, right? The software and the tooling are all there to do that. But underneath that, we're going to see an arms race between the top providers as they all try to develop faster chips for doing more specific things. We've got Trainium for training — they announced that instance last year with 800 gigabits going out of a single instance, and this year they doubled it. Yeah. So 1.6 terabits out of a single machine, right? That's insane, right? But what you're doing is putting together hundreds or thousands of those to solve the big machine learning training problems — these enormous clusters being formed for doing these massive problems. And there is a market now for these incredibly large supercomputer clusters built for doing AI. That's all bandwidth limited. >> And you think about the timeframe from design to tape-out. >> Yeah. >> It's just getting compressed. >> It is. >> It used to go the other way. >> The tooling is all there. Yeah. >> Fantastic. Adrian, always a pleasure to have you on. Thanks so much. >> Yeah. >> Really appreciate it. >> Yeah, thank you. >> Thank you, Paul. >> Cheers. All right. Keep it right there, everybody. Don't forget, go to thecube.net, you'll see all these videos. Go to siliconangle.com, we've got features with Adam Selipsky, we've got my breaking analysis, and we have another feature with MongoDB's Dev Ittycheria, Ali Ghodsi, as well as Frank Slootman tomorrow. So check that out. Keep it right there. You're watching theCUBE, the leader in enterprise and emerging tech. Right back. (soft techno upbeat music)
AMD & Oracle Partner to Power Exadata X9M
(upbeat jingle) >> The history of Exadata in the platform is really unique. And from my vantage point, it started earlier this century as a skunkworks inside of Oracle called Project Sage, back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not Eastern Pacific Yacht Club, for all you sailing buffs; rather, it stands for Extreme Performance Yield Computing, the enterprise-grade version of AMD's Zen architecture, which has been a linchpin of AMD's success in penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today Juan Loaiza, who's executive vice president of mission-critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on The Cube in your first appearance, thanks for coming on. Juan, let's start with you. You've been on The Cube a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle database. We've covered that extensively. What's different and unique from your point of view about Exadata Cloud Infrastructure X9M on OCI? >> So as you know, Exadata is designed top-down to be the best possible platform for database. It has a lot of unique capabilities — like we make extensive use of RDMA, smart storage. We take advantage of everything we can in the leading hardware platforms. X9M is our next-generation platform, and it does exactly that. We always want to get all the best that we can from the available hardware that our partners like AMD produce.
And so that's what X9M is: faster, more capacity, lower latency, more I/Os, pushing the limits of the hardware technology. We don't want the database software to be the limit; the limit should be the actual physical limits of the hardware. That's what X9M's all about. >> Why, Juan, AMD chips in X9M? >> We're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads. And it's really that simple — we just think the performance is outstanding in the product. >> Mark, your career is quite amazing. I could riff on history for hours, but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud? >> Well, thanks. It's really the basis of the great partnership that we have with Oracle on Exadata X9M, and that is that the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to x86, on a very strong roadmap that we've executed on schedule to our commitments. And this third generation does all of that. It uses a seven-nanometer CPU core that was designed to bring throughput and really high efficiency to computing, and just deliver raw capabilities. And so for Exadata X9M, it's really leveraging all of that. It's a balanced processor, and it's implemented in a way to really optimize high performance. That is the whole focus of AMD; it's where we reset the company focus years ago. And again, it's great to see the super smart database team at Oracle really partner with us and understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor. >> Yeah. It's been a pretty amazing 10 or 11 years for both companies.
But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes into play. You think about a processor — and I'll say, when Juan's team first looked at it, there are general benchmarks, and the benchmarks are impressive, but they're general benchmarks. They show the base processing capability, but the partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. And that's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, where there is tuning that we could do to boost the performance above that baseline you get in the generic benchmarks. And that's what the teams have done. So for instance, you look at optimizing latency to RDMA, you look at optimizing throughput on OLTP and database processing. When you go through the workloads, take the traces, break it down, and find the areas that are bottlenecking, then you can adjust — we have thousands of parameters that can be adjusted for a given workload. And that's the beauty of the partnership. We have the expertise on the CPU engineering, and the Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20% to 50% gains on specific workloads. It is really exciting to see. >> Mark, last question for you: how do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. First off, the deep partnership that we've had on Exadata X9M has really allowed us to inform our future design. So it's our current third generation EPYC — that's really what we call our EPYC server offerings.
It's the 7003 series, third gen, in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway, ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities, and it's going to have expanded memory capabilities, because there's CXL — Compute Express Link — that'll open up even more memory opportunities. And I could go on. So that's the beauty of a deep partnership: it enables us to really take that learning forward. It pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward. >> Yeah, you guys have been obviously very forthcoming. You have to be with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah. I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10, 15 years ago when multicore processors came out, and then we were on that for a while and things started stagnating. But in the last two or three years, AMD has been leading this — there's been a dramatic acceleration in innovation. So it's very exciting to be part of this, and customers are getting a big benefit from it. >> All right. Hey, thanks for coming back on The Cube today. Really appreciate your time. >> Thanks. Glad to be here. >> All right, and thank you for watching this exclusive Cube conversation. This is Dave Vellante from The Cube, and we'll see you next time. (upbeat jingle)
Power Panel: Does Hardware Still Matter
(upbeat music) >> The ascendancy of cloud and SaaS has shed new light on how organizations think about, pay for, and value hardware. Once sought-after skills for practitioners with expertise in hardware troubleshooting, configuring ports, tuning storage arrays, and maximizing server utilization have been superseded by demand for cloud architects, DevOps pros, and developers with expertise in microservices, containers, application development, and the like. Even a company like Dell, the largest hardware company in enterprise tech, touts that it has more software engineers than those working in hardware. It begs the question, is hardware going the way of COBOL? Well, not likely. Software has to run on something, but the labor needed to deploy, troubleshoot, and manage hardware infrastructure is shifting. At the same time, we've seen the value flow also shifting in hardware. Once a world dominated by x86 processors, value is flowing to alternatives like Nvidia and Arm-based designs. Moreover, other componentry like NICs, accelerators, and storage controllers is becoming more advanced, integrated, and increasingly important. The question is, does it matter? And if so, why does it matter and to whom? What does it mean to customers, workloads, OEMs, and the broader society? Hello and welcome to this week's Wikibon theCUBE Insights powered by ETR. In this Breaking Analysis, we've organized a special power panel of industry analysts and experts to address the question, does hardware still matter? Allow me to introduce the panel. Bob O'Donnell is president and chief analyst at TECHnalysis Research. Zeus Kerravala is the founder and principal analyst at ZK Research. David Nicholson is a CTO and tech expert. Keith Townsend is CEO and founder of CTO Advisor. And Marc Staimer is the chief dragon slayer at Dragon Slayer Consulting and oftentimes a Wikibon contributor. Guys, welcome to theCUBE. Thanks so much for spending some time here. >> Good to be here. >> Thanks.
>> Thanks for having us. >> Okay, before we get into it, I just want to bring up some data from ETR. This is a survey that ETR does every quarter. It's a survey of about 1,200 to 1,500 CIOs and IT buyers, and I'm showing a subset of the taxonomy here. This is an XY axis, and the vertical axis is something called net score. That's a measure of spending momentum. It's essentially the percentage of customers that are spending more on a particular area than those spending less. You subtract the lesses from the mores and you get a net score. The horizontal axis is pervasion in the data set. Sometimes they call it market share. It's not like IDC market share. It's just the percentage of activity in the data set as a percentage of the total. That red 40% line, anything over that is considered highly elevated. And for the past, I don't know, eight to 12 quarters, the big four have been AI and machine learning, containers, RPA, and cloud, and cloud of course is very impressive because not only is it elevated on the vertical axis, but you know it's very highly pervasive on the horizontal. So what I've done is highlighted in red that historical hardware sector. The server, the storage, the networking, and even PCs, despite the work from home, are depressed in relative terms. And of course, data center colocation services. Okay, so you're seeing obviously hardware is not... People don't have the spending momentum today that they used to. They've got other priorities, et cetera, but I want to start and go kind of around the horn with each of you. What is the number one trend that each of you sees in hardware and why does it matter? Bob O'Donnell, can you please start us off? >> Sure, Dave. So look, I mean, hardware is incredibly important, and one comment first I'll make on that slide is let's not forget that hardware, even though it may not be growing, the amount of money spent on hardware continues to be very, very high. It's just a little bit more stable.
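Dave's net-score arithmetic above is simple enough to sketch in a few lines of Python. The sector percentages below are invented for illustration; they are not actual ETR survey data.

```python
# Toy illustration of the ETR net-score arithmetic described above.
# Net score = % of respondents spending more minus % spending less,
# and anything over the 40 line is considered "highly elevated."

def net_score(pct_spending_more: float, pct_spending_less: float) -> float:
    """Subtract the lesses from the mores to get a net score."""
    return pct_spending_more - pct_spending_less

# Hypothetical sector responses (percent of survey respondents)
sectors = {
    "cloud":      {"more": 55.0, "less": 8.0},
    "containers": {"more": 48.0, "less": 6.0},
    "servers":    {"more": 25.0, "less": 20.0},
}

for name, r in sectors.items():
    score = net_score(r["more"], r["less"])
    flag = "highly elevated" if score > 40.0 else "not elevated"
    print(f"{name}: net score {score:.1f} ({flag})")
```

Note this captures only the vertical axis of the chart; pervasion, the horizontal axis, is a separate share-of-activity measure.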
It's not as subject to big jumps as we see certainly in other software areas. But look, the important thing that's happening in hardware is the diversification of the types of chip architectures we're seeing and how and where they're being deployed, right? You referred to this in your opening. We've moved from a world of x86 CPUs from Intel and AMD to things like obviously GPUs and DPUs. We've got VPUs for, you know, computer vision processing. We've got AI-dedicated accelerators, we've got all kinds of other network acceleration tools and AI-powered tools. There's an incredible diversification of these chip architectures, and that's been happening for a while, but now we're seeing them more widely deployed, and it's being done that way because workloads are evolving. The kinds of workloads that we're seeing in some of these software areas require different types of compute engines than traditionally we've had. The other thing is (coughs), excuse me, the power requirements based on where geographically that compute happens are also evolving. This whole notion of the edge, which I'm sure we'll get into in a little bit more detail later, is driven by the fact that where the compute actually sits, in theory closer to the edge and to where edge devices are, depending on your definition, changes the power requirements. It changes the kind of connectivity that connects the applications to those edge devices. So all of those things are being impacted by this growing diversity in chip architectures. And that's a very long-term trend that I think we're going to continue to see play out through this decade and well into the 2030s as well. >> Excellent, great, great points. Thank you, Bob. Zeus, up next, please.
>> Yeah, and I think the other thing to remember too when you look at this chart is, you know, through the pandemic and the work-from-home period a lot of companies did put their office modernization projects on hold, and you heard that echoed, you know, from really all the network manufacturers anyways. They always had projects underway to upgrade networks. They put 'em on hold. Now that people are starting to come back to the office, they're looking at that now. So we might see some change there, but Bob's right. The size of those markets is quite a bit different. I think the other big trend here is the hardware companies, at least in the areas that I look at, networking, are understanding now that it's a combination of hardware and software and silicon working together that creates that optimum type of performance and experience, right? So some things are best done in silicon, like data forwarding and things like that. Historically, when you look at the way network devices were built, you did everything in hardware. You configured it in hardware, it did all the data forwarding for you, and did all the management. And that's been decoupled now. So more and more of the control element has been placed in software. A lot of the high-performance things, encryption, and as I mentioned, data forwarding, packet analysis, stuff like that, is still done in hardware, but not everything is done in hardware. And so it's a combination of the two. I think, for the people that work with the equipment as well, there's been more of a shift to understanding how to work with software. And this is a mistake I think the industry made for a while: we had everybody convinced they had to become a programmer. It's really more about being a software power user. Can you pull things out of software through API calls and things like that? But I think the big frame here is, David, it's a combination of hardware and software working together that really makes a difference.
And you know, how much you invest in hardware versus software kind of depends on the performance requirements you have. And I'll talk about that later, but that's really the big shift that's happened here. It's the vendors that figured out how to optimize performance by leveraging the best of all of those. >> Excellent. You guys both brought up some really good themes that we can tap into. Dave Nicholson, please. >> Yeah, so just kind of picking up where Bob started off. Not only are we seeing the rise of a variety of CPU designs, but I think increasingly the connectivity that's involved from a hardware perspective, from a kind of server or service design perspective, has become increasingly important. I think we'll get a chance to look at this in more depth a little bit later, but when you look at what happens on the motherboard, you know, we're not in so much a CPU-centric world anymore. Various application environments have various demands, and you can meet them by using a variety of components. And it's extremely significant when you start looking down at the component level. It's really important that you optimize around those components. So I guess my summary would be, I think we are moving out of the CPU-centric hardware model into more of a connectivity-centric model. We can talk more about that later. >> Yeah, great. And thank you, David. Keith Townsend, I'm really interested in your perspectives on this. I mean, for years you worked in a data center surrounded by hardware. Now that we have the software-defined data center, please chime in here. >> Well, you know, I'm going to dig deeper into that software-defined data center nature of what's happening with hardware. Hardware is meeting software; infrastructure as code is a thing. What does that code look like?
We're still trying to figure that out, but it's serving up these capabilities that the previous analysts have brought up. How do I ensure that I can get the level of services needed for the applications that I need, whether they're legacy, traditional data center workloads, AI/ML workloads, or workloads at the edge? How do I codify that and consume that as a service? And hardware vendors are figuring this out. HPE, the big push into GreenLake as a service. Dell now with APEX, taking what we need, these bare-bones components, moving it forward with DDR5, 6, CXL, et cetera, and surfacing that as code or as services. This is a very tough problem as we transition from consuming a hardware-based configuration to this infrastructure-as-code paradigm shift. >> Yeah, programmable infrastructure, really attacking that sort of labor discussion that we were having earlier, okay. Last but not least, Marc Staimer, please. >> Thanks, Dave. My peers raised really good points. I agree with most of them, but I'm going to disagree with the title of this session, which is, does hardware matter? It absolutely matters. You can't run software on the air. You can't run it in an ephemeral cloud, although there's the technical cloud and that's a different issue. The cloud has kind of changed everything. And from a market perspective, in the 40-plus years I've been in this business, I've seen this perception that hardware has to go down in price every year, and part of that was driven by Moore's law. And we're coming to, let's say, a lag or an end to Moore's law, depending on who you talk to. So we're not doubling our transistors every 18 to 24 months in a chip, and as a result of that, there's been a higher emphasis on software. From a market perception, there's no penalty. They don't put the same pressure on software from the market to reduce the cost every year that they do on hardware, which is kind of bass-ackwards when you think about it. Hardware costs are fixed. Software costs tend to be very low.
It's kind of a weird thing that we do in the market. And what's changing is we're now starting to treat hardware like software, from an OpEx versus CapEx perspective. So yes, hardware matters, and we'll talk about that more at length. >> You know, I want to follow up on that, and I wonder if you guys have a thought on this. Bob O'Donnell, you and I have talked about this a little bit. Marc, you just pointed out that Moore's law could be waning. Pat Gelsinger recently at their investor meeting promised that Moore's law is alive and well. And the point I made in Breaking Analysis was, okay, great. You know, Pat said doubling transistors every 18 to 24 months; let's say that Intel can do that, even though we know it's waning somewhat. Look at the M1 Ultra from Apple (chuckles). In about 15 months they increased transistor density on their package by 6x. So to your earlier point, Bob, we have these alternative processors that are really changing things. And to Dave Nicholson's point, there's a whole lot of supporting components as well. Do you have a comment on that, Bob?
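As a back-of-the-envelope check on Dave's M1 Ultra point, the doubling period implied by a growth factor over a time window follows from simple logarithms. The 6x-in-15-months figure is taken from the conversation; this sketch just does the arithmetic.

```python
import math

def implied_doubling_period(growth_factor: float, months: float) -> float:
    """Months per doubling implied by a given density growth over a window.

    Solves 2 ** (months / period) == growth_factor for the period.
    """
    return months * math.log(2) / math.log(growth_factor)

# Moore's law as cited in the discussion: a doubling every 18-24 months.
# Apple M1 Ultra as cited: ~6x package transistor density in ~15 months.
m1_ultra = implied_doubling_period(6.0, 15.0)
print(f"Implied doubling period: {m1_ultra:.1f} months")  # well under 18
```

The result lands well inside the classic 18-24 month window, which is the arithmetic behind the "alternative processors are changing things" point.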
And what that's also going to allow... not only is that going to give interesting performance possibilities because of the faster interconnect, so you can have shared memory between things, which for big workloads like AI with huge data sets can make a huge difference compared to how you talk to memory over a network connection, for example, but you're also going to see more diversity in the types of solutions that can be built. So we're going to see even more choices in hardware from a silicon perspective, because you'll be able to piece together different elements. And oh, by the way, the other benefit of that is we've reached a point in chip architectures where not everything benefits from being smaller. We've been so focused and so obsessed, when it comes to Moore's law, with the size of each individual transistor, and yes, for certain architecture types, CPUs and GPUs in particular, that's absolutely true, but we've already hit the point where things like RF for 5G and Wi-Fi and other wireless technologies and a whole bunch of other things actually don't get any better with a smaller transistor size. They actually get worse. So the beauty of these chiplet architectures is you can actually combine different chip manufacturing sizes. You know, you hear about four nanometer and five nanometer along with 14 nanometer on a single chip, each one optimized for its specific application, yet together they can give you the best of all worlds. And so we're just at the very beginning of that era, which I think is going to drive a ton of innovation. Again, it gets back to my comment about different types of devices located geographically in different places: at the edge, in the data center, you know, in a private cloud versus a public cloud. All of those things are going to be impacted, and there'll be a lot more options because of this silicon diversity and this interconnect diversity that we're just starting to see. >> Yeah, David. David Nicholson's got a graphic on that.
We're going to show it later. Before we do that, I want to introduce some data. I actually want to ask Keith to comment on this before we, you know, go on. This next slide is some data from ETR that shows the percent of customers that cited difficulty procuring hardware. And you can see the red is they had significant issues, and it's most pronounced in laptops and networking hardware on the far right-hand side, but virtually all categories, firewalls, peripherals, servers, storage, are having moderately difficult procurement issues. That's the sort of pinkish, the moderately significant challenges. So Keith, I mean, what are you seeing with your customers in the hardware supply chains and bottlenecks? And you know, we're seeing it with automobiles and appliances, so it goes beyond IT. The semiconductor, you know, challenges. What's been the impact on the buyer community and society, and do you have any sense as to when it will subside? >> You know, I was just asked this question yesterday and I'm feeling the pain. As kind of a side project within the CTO Advisor, we built a hybrid infrastructure, a traditional IT data center, where we're walking with the traditional customer and modernizing that data center. So it was, you know, kind of a snapshot in time of 2016, 2017: 10-gigabit Arista switches, some older Dell 730XDs, you know, speeds and feeds. And we said we would modernize that with the latest Intel stack and connect it to the public cloud, and then the pandemic hit, and we are experiencing a lot of the same challenges. I thought we'd easily migrate from 10 gig networking to 25 gig networking, the path that customers are going on. The 10 gig network switches that I bought used are now double the price, because you can't get legacy 10 gig network switches, because all of the manufacturers are focusing on the more profitable 25 gig capacity. Even the 25 gig switches, and we're focused on networking right now, are hard to procure.
We're talking about nine to 12 months or more lead time. So we're seeing customers adjust by adopting cloud. But if you remember, early on in the pandemic, Microsoft Azure kind of gated customers that didn't have a capacity agreement. So customers are keeping an eye on that. There's a desire to abstract away from the underlying vendor, to be able to control or provision your IT services in a way that we do with VMware or some other virtualization technology, where it doesn't matter who can get me the hardware; they can just get me the hardware, because it's critically impacting projects and timelines. >> So that's a great setup for you, Zeus. Keith mentioned earlier the software-defined data center, with software-defined networking and cloud. Do you see a day where networking hardware is commoditized and it's all about the software, or are we there already? >> No, we're not there already. And I don't see that really happening any time in the near future. I do think it's changed though. And just to be clear, I mean, when you look at that data, this is saying customers have had problems procuring the equipment, right? And there's not a network vendor out there that hasn't. I've talked to Norman Rice at Extreme, and I've talked to the folks at Cisco and Arista about this. They all said they could have had blowout quarters had they had the inventory to ship. So it's not like customers aren't buying this anymore, right? I do think though, when it comes to networking, the network has certainly changed some, because there are a lot more controls, as I mentioned before, that you can do in software. And I think the customers need to start thinking about the types of hardware they buy and, you know, where they're going to use it and, you know, what its purpose is. Because I've talked to customers that have tried to run software on commodity hardware where the performance requirements are very high, and it's bogged down, right? It just doesn't have the horsepower to run it.
And, you know, even when you do that, you have to start thinking of the components you use. The NICs you buy. And I've talked to customers that have simply just gone through the process of replacing a NIC card in a commodity box and had some performance problems and, you know, things like that. So if agility is more important than performance, then by all means try running software on commodity hardware. I think that works in some cases. If performance though is more important, that's when you need that kind of turnkey hardware system. And I've actually seen more and more customers reverting back to that model. In fact, when you talk to even some startups today about when they come to market, they're delivering things more on appliances because that's what customers want. And so there's this kind of pendulum of agility and performance. And if performance absolutely matters, that's when you do need to buy these kinds of turnkey, prebuilt hardware systems. If agility matters more, that's when you can go more to software, but the underlying hardware still does matter. So I think, you know, will we ever have a day where you can just run it on whatever hardware? Maybe, but I'll long be retired by that point. So I don't care. >> Well, you bring up a good point, Zeus. And I remember the early days of cloud, the narrative was, oh, the cloud vendors, they don't use EMC storage, they just run on commodity storage. And then of course, lo and behold, you know, they trotted out James Hamilton to talk about all the custom hardware that they were building. And you saw Google and Microsoft follow suit. >> Well, (indistinct) been falling for this forever, right? And I mean, all the way back to the turn of the century, we were calling for the commoditization of hardware. And it's never really happened.
As long as you can drive innovation into it, customers will always lean towards the innovation cycles, 'cause they get more features faster and things. And so the vendors have done a good job of keeping that cycle up, but it'll be a long time before it stops. >> Yeah, and that's why you see companies like Pure Storage, a storage company with 69% gross margins. All right, I want to jump ahead. We're going to bring up slide four. I want to go back to something that Bob O'Donnell was talking about, the sort of supporting act: the diversity of silicon. And we've marched to the cadence of Moore's law for decades. You know, we asked, is Moore's law dead? We say it's moderating. Dave Nicholson, you want to talk about those supporting components, and you shared with us a slide on that shift. You call it a shift from a processor-centric world to a connectivity-centric world. What do you mean by that? And let's bring up slide four and you can talk to that. >> Yeah, yeah. So first, I want to echo this sentiment that, to the question does hardware matter, the answer is of course it matters. Maybe the real question should be, should you care about it? And the answer to that is it depends who you are. If you're an end user using an application on your mobile device, maybe you don't care how the architecture is put together. You just care that the service is delivered, but as you back away from that and you get closer and closer to the source, someone needs to care about the hardware, and it should matter. Why? Because essentially what hardware is doing is consuming electricity and dollars, and the more efficiently you can configure hardware, the more bang you're going to get for your buck. So it's not only a quantitative question in terms of how much you can deliver, but it also ends up being a qualitative change, as capabilities allow for things we couldn't do before, because we just didn't have the aggregate horsepower to do it.
So this chart actually comes out of some performance tests that were done. It happens to be Dell servers with Broadcom components. And the point here was to peel back, you know, peel off the top of the server and look at what's in that server, starting with, you know, the PCIe interconnect. So PCIe Gen 3, Gen 4, moving forward. What are the effects of the interconnect on application performance, translating into new orders per minute processed per dollar, et cetera, et cetera? If you look at the advances in CPU architecture mapped against the advances in interconnect and storage subsystem performance, you can see that CPU architecture is sort of lagging behind in a way. And Bob mentioned this idea of tiling and all of the different ways to get around that. When we do performance testing, we can actually peg CPUs just running the performance tests, without any actual database environments working. So right now we're at this sort of imbalance point, where you have to make sure you design things properly to get the most bang per kilowatt hour of power per dollar input. So the key thing this is highlighting, just as a very specific example: you take a card that's designed as a Gen 3 PCIe device and you plug it into a Gen 4 slot. Now the card is the bottleneck. You plug a Gen 4 card into a Gen 4 slot. Now the Gen 4 slot is the bottleneck. So we're constantly chasing these bottlenecks. Someone has to be focused on that from an architectural perspective; it's critically important. So there's no question that it matters. But of course, various people in this food chain won't care where it comes from. I guess a good analogy might be, where does our food come from? If I get a steak, it's a pink thing wrapped in plastic, right? Well, there are a lot of inputs that a lot of people have to care about to get that to me. Do I care about all of those things? No. Are they important? They're critically important.
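David's Gen 3 card in a Gen 4 slot example boils down to the end-to-end path running at the speed of its slowest element. A minimal sketch, using approximate per-direction bandwidth figures for x16 PCIe links (the figures are round numbers, not benchmark results):

```python
# Sketch of "chasing the bottleneck": a chain of components runs at
# the throughput of its slowest stage. Approximate x16 per-direction
# bandwidth in GB/s by PCIe generation.

PCIE_X16_GBPS = {"gen3": 15.8, "gen4": 31.5, "gen5": 63.0}

def path_throughput(stages: dict[str, float]) -> tuple[str, float]:
    """Return (bottleneck stage name, achievable throughput) for a chain."""
    name = min(stages, key=stages.get)
    return name, stages[name]

# A Gen 3 card in a Gen 4 slot: the card is the bottleneck.
print(path_throughput({"slot": PCIE_X16_GBPS["gen4"],
                       "card": PCIE_X16_GBPS["gen3"]}))
# Upgrade the card to Gen 4: now the slot's link rate is the limit,
# and the chase moves to the next component.
print(path_throughput({"slot": PCIE_X16_GBPS["gen4"],
                       "card": PCIE_X16_GBPS["gen4"]}))
```

The same `min()` logic applies whether the stages are slots, cards, storage controllers, or network links, which is why someone always has to own the system-level view.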
>> So, okay. What I want to get to is, what does this all mean to customers? What I'm hearing from you is that balancing a system is becoming, you know, more complicated. And I've kind of been waiting for this day for a long time, because as we all know, the bottleneck was always the spinning disk, the last mechanical device. So people who wrote software knew that, when they were doing it right, the disk had to go and do stuff, and so they were doing other things in the software. And now with all these new interconnects and flash, you can do things like atomic writes. And so that opens up new software possibilities, and you combine that with alternative processors. But what's the "so what" on this to the customer and the application impact? Can anybody address that? >> Yeah, let me address that for a moment. I want to leverage some of the things that Bob said, Keith said, Zeus said, and David said. So I'm a bit of a contrarian on some of this. For example, on the chip side. As the chips get smaller, 14 nanometer, 10 nanometer, five nanometer, soon three nanometer, we talk about more cores, but the biggest problem on the chip is the interconnect on the chip, 'cause the wires get smaller. People don't realize, in 2004 the latency on those wires in the chips was 80 picoseconds. Today it's 1,300 picoseconds. That's on the chip. This is why they're not getting faster. So we may be getting a little bit of a slowdown in Moore's law. But even as we kind of conquer that, you still have the interconnect problem, and the interconnect problem goes beyond the chip. It goes within the system, composable architectures. It goes to the point that Keith made: ultimately you need a hybrid, because what we're seeing, what I'm seeing in talking to customers, the biggest issue they have is moving data. Whether it be in a chip, in a system, in a data center, between data centers, moving data is now the biggest gating item in performance.
So if you want to move it from, let's say, your transactional database to your machine learning, that's the bottleneck: moving the data. And so when you look at it in a distributed environment, now you've got to move the compute to the data. The only way to get around these bottlenecks today is to spend less time trying to move the data and more time taking the compute, the software running on hardware, closer to the data. Go ahead. >> So is this what you mean, what Nicholson was talking about, a shift from a processor-centric world to a connectivity-centric world? You're talking about moving the bits across all the different components; the processor, you're saying, is essentially no longer the bottleneck, or the memory, I guess. >> Well, that's one of them, and there are a lot of different bottlenecks, but it's the data movement itself. It's moving away from, wait, why do we need to move the data? Can we move the compute, the processing, closer to the data? Because if we keep them separate... and this has been a trend now, where people are moving processing out. It's like the edge. I think it was Zeus or David, you were talking about the edge earlier. As you look at the edge, who defines the edge, right? Is the edge a closet or is it a sensor? If it's a sensor, how do you do AI at the edge, when you don't have enough power, you don't have enough compute? People are inventing chips to do that, to do all that at the edge, to do AI within the sensor, instead of moving the data to a data center or a cloud to do the processing. Because the lag in latency is always limited by the speed of light. How fast can you move the electrons? And all this interconnecting, all the processing, and all the improvement we're seeing in the PCIe bus from three, to four, to five, to CXL, to higher bandwidth on the network, that's all great, but none of that deals with the speed-of-light latency. And that's an-- Go ahead.
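Marc's speed-of-light point can be made concrete: no amount of bus or network bandwidth removes the propagation-delay floor. A small sketch, assuming light in fiber travels at roughly c divided by a refractive index of about 1.47, with the distances chosen purely for illustration:

```python
# Sketch of the speed-of-light floor on latency: the best possible
# round trip over fiber, ignoring every switch, queue, and protocol.

C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.47            # light in fiber travels at roughly c / 1.47

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds over fiber."""
    one_way_s = distance_km / (C_VACUUM_KM_S / FIBER_INDEX)
    return 2 * one_way_s * 1000

# Sensor to a regional cloud 1,500 km away vs. an edge site 15 km away.
print(f"cloud RTT floor: {min_round_trip_ms(1500):.2f} ms")
print(f"edge RTT floor:  {min_round_trip_ms(15):.3f} ms")
```

Real networks add switching and queuing delay on top, so actual latencies are higher; the point is that the distance term alone already dwarfs on-chip and bus latencies, which is the case for doing the inferencing at the edge.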
>> You know, Marc, I just want to jump in, because what you're referring to could be looked at at a macro level, which I think is what you're describing. You can also look at it at a more micro level, from a systems design perspective, right? I'm going to be the resident knuckle-dragging hardware guy on the panel today, but it's exactly right. Moving compute closer to data includes concepts like peripheral cards that have built-in intelligence, right? So again, in some of this testing that I'm referring to, we saw dramatic improvements when you basically took the horsepower off the CPU, instead of using the CPU horsepower for things like IO. Now you have essentially offload engines in the form of storage controllers, RAID controllers, and of course Ethernet NICs, smart NICs. And so you can have these sorts of offload engines, and we've gone through these waves over time. People think, well, wait a minute, a RAID controller and NVMe flash storage devices? Does that make sense? It turns out it does. Why? Because you're actually, at a micro level, doing exactly what you're referring to: you're bringing compute closer to the data. Now, closer to the data meaning closer to the data storage subsystem. It doesn't solve the macro issue that you're referring to, but it is important. Again, going back to this idea of system design optimization: always chasing the bottleneck, plugging the holes. Someone needs to do that in this value chain in order to get the best value for every kilowatt hour of power and every dollar. >> Yeah. >> Well, this whole drive for performance has created some really interesting architectural designs, right? Like Nicholson said, the rise of the DPU, right? It brings more processing power into systems that already had a lot of processing power. There's also been some really interesting, you know, kind of innovation in the area of systems architecture too.
If you look at the way Nvidia goes to market, their DRIVE kit is a prebuilt piece of hardware, you know, optimized for self-driving cars, right? They partnered with Pure Storage and Arista to build that AI-ready infrastructure. I remember when I talked to Charlie Giancarlo, the CEO of Pure, about when the three companies rolled that out. He said, "Look, if you're going to do AI, you need good storage. You need fast storage, fast processors, and fast networks." And so for customers to be able to put that together themselves was very, very difficult. There's a lot of software that needs tuning as well. So the three companies partnered together to create a fully integrated turnkey hardware system with a bunch of optimized software that runs on it. And so in that case, in some ways the hardware was leading the software innovation. And so the variety of different architectures we have today around hardware has really exploded. And I think it's part of what Bob brought up at the beginning about the different chip designs. >> Yeah, Bob talked about that earlier. Bob, I mean, most AI today is modeling, you know, and a lot of that's done in the cloud, and it looks, from my standpoint anyway, like the future is going to be a lot of AI inferencing at the edge. And that's a radically different architecture, Bob, isn't it? >> It is, it's a completely different architecture. And just to follow up on a couple points, excellent conversation, guys. Dave talked about system architecture, and really that's what this boils down to, right? But it's looking at architecture at every level. I was talking about the individual different components, the new interconnect methods. There's this new thing called UCIe, universal chiplet interconnect. I forget exactly what it stands for, but it's a mechanism for doing chiplet architectures. But then again, you have to take it up to the system level, 'cause it's all fine and good.
If you have this SoC that's tuned and optimized, it still has to talk to the rest of the system. And that's where you see other issues. And you've seen things like CXL and other interconnect standards, you know, and nobody likes to talk about interconnect 'cause it's really wonky and really technical and not that sexy, but at the end of the day it's incredibly important, exactly to the other points that were being raised, like Marc raised, for example, about getting that compute closer to where the data is. And that's where again, a diversity of chip architectures helps. And exactly to your last comment there, Dave, putting that ability in an edge device is really at the cutting edge of what we're seeing in semiconductor design, and the ability to, for example, maybe it's an FPGA, maybe it's a dedicated AI chip. It's another kind of chip architecture that's being created to do that inferencing on the edge. Because again, the cost and the challenges of moving lots of data, whether it be from say a smartphone to a cloud-based application, or whether it be from a private network to a cloud, or any other kinds of permutations we can think of, really matter. And the other thing is we're tackling bigger problems. So architecturally, not even just architecturally within a system, but when we think about DPUs and the sort of east-west data center movement conversation that we hear Nvidia and others talk about, it's about combining multiple sets of these systems to function together more efficiently, again with even bigger sets of data. So it really is about tackling where the processing is needed, having the interconnect and the ability to get the data you need to the right place at the right time. And because those needs are diversifying, we're just going to continue to see an explosion of different choices and options, which is going to make hardware even more essential, I would argue, than it is today.
And so I think what we're going to see is not only does hardware matter, it's going to matter even more in the future than it does now. >> Great, yeah. Great discussion, guys. I want to bring Keith back into the conversation here. Keith, if your main expertise in tech is provisioning LUNs, you probably want to look for another job. So clearly hardware matters, but with software-defined everything, do people with hardware expertise matter outside of, for instance, component manufacturers or cloud companies? I mean, VMware certainly changed the dynamic in servers. Dell just spun off its most profitable asset in VMware. So it obviously thinks hardware can stand alone. How does an enterprise architect view the shift to software-defined hyperscale cloud, and how do you see the shifting demand for skills in enterprise IT? >> So I love the question and I'll take a different view of it. If you're a data analyst and your primary value add is that you do ETL transformation. I talked to a CDO, a chief data officer of a midsize bank, a little bit ago. He said 80% of his data scientists' time is spent on ETL. Super not value add. He wants his data scientists to do data science work. Chances are if your only value is that you do LUN provisioning, then you probably don't have a job now. The technologies have gotten much more intelligent. As infrastructure pros, we want to give infrastructure pros the opportunities to shine, and I think the software-defined nature and the automation that we're seeing vendors undertake, whether it's Dell, HP, Lenovo, take your pick, or Pure Storage, NetApp, that are doing the automation and the ML needed so that these practitioners don't spend 80% of their time doing LUN provisioning and can focus on their true expertise, which is ensuring that data is stored, data is retrievable, data's protected, et cetera.
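The 80%-on-ETL complaint above is about plumbing like the following: a minimal extract/transform/load pass. The field names, bad-row rule, and cleaning steps here are hypothetical, invented purely to show the shape of the work.

```python
# Minimal ETL sketch: extract raw rows, transform (clean/normalize),
# and load into a list standing in for a warehouse table.
# Schema and cleaning rules are hypothetical.

raw_rows = [
    {"acct": " 0017 ", "balance": "1,200.50"},
    {"acct": "0042",   "balance": "15.00"},
    {"acct": "",       "balance": "99.99"},   # bad row: no account id
]

def transform(row):
    acct = row["acct"].strip()
    if not acct:
        return None                            # drop unusable rows
    return {"acct": acct,
            "balance": float(row["balance"].replace(",", ""))}

# "Load" step: keep only rows that survived transformation.
warehouse = [t for t in (transform(r) for r in raw_rows) if t is not None]
print(warehouse)
```

Tedious, mechanical, and exactly the kind of work Keith argues should be automated away so practitioners can focus on higher-value problems.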
I think the shift is to focus on that part of the job where you're ensuring, no matter where the data's at, because as my data is spread across the enterprise, hybrid, different types, you know, Dave, you talk about the super cloud a lot. If my data is in the super cloud, protecting that data and securing that data becomes much more complicated than when it was me just procuring or provisioning LUNs. So when you say, where should the shift, or the focus, be, you know, it's on the real value, which is making sure that customers can access data, can recover data, can get data at performance levels that they need, within the price point they need, to get at those datasets where they need it. We talked a lot about where they need it. One last point about this interconnecting. I have this vision, and I think we all do, of composable infrastructure. This idea that scale-out does not solve every problem. The cloud can give me infinite scale-out. Sometimes I just need a single OS with 64 terabytes of RAM and 204 GPUs or GPU instances; that single OS does not exist today. And the opportunity is to create composable infrastructure so that we solve a lot of these problems that just simply don't scale out. >> You know, wow. So many interesting points there. I had just interviewed Zhamak Dehghani, who's the founder of data mesh, last week. And she made a really interesting point. She said, "Think about, we have separate stacks. We have an application stack and we have a data pipeline stack, and the transaction systems, the transaction database, we extract data from that," to your point, "we ETL it in, you know, it takes forever. And then we have this separate sort of data stack." If we're going to inject more intelligence and data and AI into applications, those two stacks, her contention is, have to come together. And when you think about, you know, super cloud bringing compute to data, that was what Hadoop was supposed to be.
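Keith's composable-infrastructure example above, a workload that wants 64 TB of RAM and 204 GPUs under a single OS, is at bottom a placement problem. A toy scheduler check makes the scale-up versus scale-out tension concrete; the node sizes are invented assumptions, not any real server spec.

```python
# Toy placement check: does a workload fit a single (hypothetical)
# node, or must it be sharded across a scale-out cluster?

def placement(req_ram_tb, req_gpus, node_ram_tb=12, node_gpus=8):
    if req_ram_tb <= node_ram_tb and req_gpus <= node_gpus:
        return "scale-up: single node"
    # Otherwise count how many nodes sharding would require
    # (ceiling division via negation, since // floors).
    nodes = max(-(-req_ram_tb // node_ram_tb),
                -(-req_gpus // node_gpus))
    return f"scale-out: {nodes} nodes (if the app can shard)"

print(placement(1, 4))      # small job fits one node
print(placement(64, 204))   # Keith's example: forced to shard
```

The point of composability is that the second case has no good answer today: either the application can shard across many nodes, or it simply cannot run.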
It ended up all sort of going into a central location, but it's almost a rhetorical question. I mean, it seems that that necessitates new thinking around hardware architectures as, you know, everything's the edge. And the other point is, to your point, Keith, it's really hard to secure that. So when you think about offloads, right, you've heard the stats, you know, Nvidia talks about it, Broadcom talks about it, that, you know, 25 to 30% of the CPU cycles are wasted on doing things like storage offloads, or networking or security. It seems like, maybe, Zeus, you have a comment on this, it seems like new architectures need to come about to support, you know, all of that stuff that Keith and I just discussed. >> Yeah, and by the way, I do want to get to, Keith, the question you just asked. Keith, it's the point I made at the beginning too, about engineers needing to be more software-centric, right? They do need to have better software skills. In fact, I remember talking to Cisco about this last year. When they surveyed their engineer base, only about a third of 'em had ever made an API call, which, you know, kind of shows this big skillset change, you know, that has to come. But on the point of architectures, I think the big change here is edge, because it brings in distributed compute models. Historically, when you think about compute, even with multi-cloud, we never really had multi-cloud. We'd use multiple centralized clouds, but compute was always centralized, right? It was in a branch office, in a data center, in a cloud. With edge, what it creates is the rise of distributed computing, where we'll have an application that actually accesses different resources at different edge locations. And I think, Marc, you were talking about this. Like the edge could be in your IoT device. It could be your campus edge. It could be cellular edge, it could be your car, right?
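The 25 to 30% figure Dave cites above has a simple arithmetic consequence worth spelling out: if roughly 30% of host CPU cycles go to storage, network, and security plumbing, offloading that work to a DPU gives back about 43% more application capacity per server. This is a back-of-envelope model, not a benchmark.

```python
# Back-of-envelope: application capacity reclaimed by offloading
# infrastructure work (storage/network/security) from host CPUs
# to a DPU. The 30% input is the high end of the cited range.

infra_fraction = 0.30

app_share_before = 1.0 - infra_fraction   # 70% of cycles run apps
app_share_after = 1.0                     # DPU absorbs the infra work

gain = app_share_after / app_share_before - 1.0
print(f"application capacity gain: {gain:.0%}")   # ~43%
```

That multiplier, applied across a large fleet, is the economic argument behind the DPU architectures the panel keeps returning to.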
And so we need to start thinkin' about how our applications interact with all those different parts of that edge ecosystem, you know, to create a single experience. A lot of consumer apps largely work that way. If you think of an app like Uber, right? It pulls in information from all kinds of different edge applications, edge services. And, you know, it creates a pretty cool experience. We're just starting to get to that point in the business world now. There's a lot of security implications and things like that, but I do think it drives more architectural decisions to be made about how I deploy what data where, and where I do my processing, where I do my AI and things like that. It actually makes the world more complicated. In some ways we can do so much more with it, but I think it does drive us more towards turnkey systems, at least initially, in order to, you know, ensure performance and security. >> Right. Marc, I wanted to go to you. You had indicated to me that you wanted to chat about this a little bit. You've written quite a bit about the integration of hardware and software. You know, we've watched Oracle's move from, you know, buying Sun and then basically using that in a highly differentiated approach. Engineered systems. What's your take on all that? I know you also have some thoughts on the shift from CapEx to OpEx, chime in on that. >> Sure. When you look at it, there are advantages to having one vendor who has the software and hardware. They can synergistically make them work together in ways that you can't do on a commodity basis, where you own the software and somebody else has the hardware. I'll give you an example: Oracle. As you talked about with their Exadata platform, they literally are leveraging microcode in the Intel chips, and now in AMD chips, and all the way down to Optane. They make basically AMD database servers work with Optane memory, PMM, in their storage systems, not NVMe SSDs. PMM, I'm talking about the cards themselves.
So there are advantages you can take advantage of if you own the stack, as you were putting out earlier, Dave, of both the software and the hardware. Okay, that's great. But on the other side of that, that tends to give you better performance, but it tends to cost a little more. On the commodity side it costs less, but you get less performance. What Zeus had said earlier, it depends where you're running your application. How much performance do you need? What kind of performance do you need? One of the things about moving to the edge, and I'll get to the OpEx CapEx in a second. One of the issues about moving to the edge is what kind of processing do you need? If you're running in a CCTV camera on top of a traffic light, how much power do you have? How much cooling do you have that you can run this? And more importantly, do you have to take the data you're getting and move it somewhere else to get processed, and the information sent back? I mean, there are companies out there like BrainChip that have developed AI chips that can run on the sensor without a CPU, without any additional memory. So, I mean, there's innovation going on to deal with this question of data movement. There's companies out there like Tachyum that are combining GPUs, CPUs, and DPUs in a single chip. Think of it as super composable architecture. They're looking at being able to do more with less. On the OpEx and CapEx issue. >> Hold that thought, hold that thought on the OpEx CapEx, 'cause we're running out of time and maybe you can wrap on that. I just wanted to pick up on something you said about the integrated hardware software. I mean, other than the fact that, you know, Michael Dell unlocked whatever $40 billion for himself and Silver Lake, I was always a fan of a spin in with VMware, basically become the Oracle of hardware.
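Marc's traffic-light CCTV example above is ultimately a bandwidth-and-power budget. A rough comparison of streaming raw video to the cloud versus running inference at the sensor and sending only detection events shows why on-sensor AI chips are attractive. Every rate and size in this sketch is an illustrative assumption.

```python
# Rough comparison: ship raw video for remote processing vs. run
# inference at the sensor and ship only detection events.
# Bitrates, event sizes, and event rates are assumed, not measured.

SECONDS_PER_DAY = 86_400
MINUTES_PER_DAY = 1_440

raw_bitrate_mbps = 4.0       # compressed 1080p stream (assumed)
event_size_bytes = 200       # one JSON detection event (assumed)
events_per_minute = 10       # detections at a quiet intersection

# Mbps -> MB/s -> MB/day -> GB/day
raw_gb_per_day = raw_bitrate_mbps / 8 * SECONDS_PER_DAY / 1000
event_gb_per_day = event_size_bytes * events_per_minute * MINUTES_PER_DAY / 1e9

print(f"raw stream  : {raw_gb_per_day:.1f} GB/day uplink")          # 43.2 GB
print(f"edge events : {event_gb_per_day * 1000:.1f} MB/day uplink") # 2.9 MB
```

Four orders of magnitude less uplink traffic per camera is the kind of gap that justifies putting the inference silicon at the edge, power and cooling constraints permitting.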
Now I know it would've been a nightmare for the ecosystem, and culturally, they probably would've had a VMware brain drain, but does anybody have any thoughts on that as a sort of thought exercise? I was always a fan of that on paper. >> I got to eat a little crow. I did not like the Dell VMware acquisition for the industry in general. And I think it hurt the industry in general; HPE, Cisco walked away a little bit from that VMware relationship. But when I talked to customers, they loved it. You know, I got to be honest. They absolutely loved the integration. The VxRail, VxRack solution exploded. Nutanix became kind of an afterthought when it came to competing. So that spin in, when we talk about the ability to innovate and the ability to create solutions that you just simply can't create because you don't have the full stack, Dell was well positioned to do that with a potential spin in of VMware. >> Yeah, we're going to be-- Go ahead please. >> Yeah, in fact, I think you're right, Keith, it was terrible for the industry. Great for Dell. And I remember talking to Chad Sakac when he was running, you know, VCE, which became VxRack and VxRail. Their ability to stay in lockstep with what VMware was doing. What was the number one workload running on hyperconverged forever? It was VMware. So their ability to remain in lockstep with VMware gave them a huge competitive advantage. And Dell came out of nowhere in, you know, the hyperconverged market and just started taking share because of that relationship. So, you know, from a Dell perspective, I thought it gave them a pretty big advantage that they didn't really exploit across their other properties, right? Networking and servers and things like that, they could have, given the dominance that VMware had. From an industry perspective though, I do think it's better to have them be decoupled. So. >> I agree. I mean, they could.
I think they could have dominated in super cloud, and maybe they would become the next Oracle, where everybody hates 'em, but they kick ass. But guys, we got to wrap up here. And so what I'm going to ask you is, I'm going to go in reverse order this time, you know, big takeaways from this conversation today, which, guys, by the way, I can't thank you enough for, phenomenal insights, but big takeaways, any final thoughts, any research that you're working on that you want to highlight, or, you know, what you look for in the future? Try to keep it brief. We'll go in reverse order. Maybe Marc, you could start us off please. >> Sure, on the research front, I'm working on a total cost of ownership of an integrated database, analytics, and machine learning machine versus separate services. The other aspect that I wanted to chat about real quickly is OpEx versus CapEx. The cloud changed the market perception of hardware in the sense that you can use hardware, or buy hardware, like you do software: as you use it, pay for what you use, in arrears. The good thing about that is you're only paying for what you use, period. You're not paying for what you don't use. I mean, it's compute time, everything else. The bad side about that is you have no predictability in your bill. It's elastic, but every user I've talked to says every month it's different. And from a budgeting perspective, it's very hard to set up your budget year to year, and it's causing a lot of nightmares. So it's just something to be aware of. From a CapEx perspective, you have no more CapEx if you're using that kind of usage-based system, but you lose a certain amount of control as well. So ultimately those are some of the issues. But my biggest point, my biggest takeaway from this, is that the biggest issue right now, for everybody I talk to, in some shape or form comes down to data movement, whether it be the ETL that you talked about, Keith, or other aspects: moving it between hybrid locations, moving it within a system, moving it within a chip.
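Marc's budgeting complaint, "every month it's different," comes down to variance. A small sketch contrasts a fixed amortized CapEx line item with a usage-metered OpEx bill for the same capacity; the purchase price, hourly rate, and usage figures are all invented for illustration.

```python
# Contrast: fixed monthly CapEx amortization vs. usage-metered OpEx.
# Prices, term, and usage hours are hypothetical.

capex_total = 120_000              # purchase price (assumed)
capex_monthly = capex_total / 36   # flat line over a 36-month term

rate_per_hour = 2.50               # metered cloud rate (assumed)
usage_hours = [1100, 1450, 980, 1700, 1250, 1600]  # six sample months

opex_bills = [h * rate_per_hour for h in usage_hours]
print(f"CapEx line  : ${capex_monthly:,.2f} every month")
print(f"OpEx bills  : {['$%.0f' % b for b in opex_bills]}")
print(f"OpEx spread : ${max(opex_bills) - min(opex_bills):,.2f} month-to-month")
```

The CapEx line never moves; the metered bill in this toy run swings by $1,800 month to month, which is exactly the forecasting headache Marc describes, traded against only paying for what you use.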
All those are key issues. >> Great, thank you. Okay, CTO Advisor, give us your final thoughts. >> All right. Really, really great commentary. Again, I'm going to point back to us taking the walk that our customers are taking, which is trying to do this conversion of a primarily on-prem data center to a hybrid, of which I have this hard-earned philosophy that enterprise IT is additive. When we add a service, we rarely subtract a service. So the landscape and surface area of what we support has to grow. So our research focuses on taking that walk. We are taking a monolithic application, decomposing that to containers, putting that in a public cloud, connecting that back to the private data center, and telling that story and walking that walk with our customers. This has been a super enlightening panel. >> Yeah, thank you. Real, real different world coming. David Nicholson, please. >> You know, it really hearkens back to the beginning of the conversation. You talked about momentum in the direction of cloud. I'm sort of spending my time under the hood, getting grease under my fingernails, focusing on where the lion's share of spend will still be in coming years, which is on-prem, and then of course, obviously, data center infrastructure for cloud. But really diving under the covers and helping folks understand the ramifications of movement between generations of CPU architecture. I know we all know Sapphire Rapids got pushed into the future. When's the next Intel release coming? Who knows? We think, you know, in 2023. There have been a lot of people standing by from a practitioner's standpoint asking, well, what do I do between now and then? Does it make sense to upgrade bits and pieces of hardware, or go from a last generation to a current generation, when we know the next generation is coming? And so I've been very, very focused on looking at how these connectivity components like RAID controllers and NICs.
I know it's not as sexy as talking about cloud, but just how these components completely change the game, and actually can justify movement from, say, a 14th-generation architecture to a 15th-generation architecture today, even though gen 16 is coming, let's say, 12 months from now. So that's where I am. Keep my phone number in the Rolodex. I literally reference Rolodex intentionally, because like I said, I'm in there under the hood and it's not as sexy. But yeah, so that's what I'm focused on, Dave. >> Well, you know, to paraphrase, maybe a derivative paraphrase of, you know, Larry Ellison's rant on what is cloud, it's operating systems and databases, et cetera: RAID controllers and NICs live inside of clouds. All right. You know, one of the reasons I love working with you guys is 'cause you have such a wide observation space, and Zeus Kerravala, you of all people, you know you have your fingers in a lot of pies. So give us your final thoughts. >> Yeah, I'm not as propeller-heady as my chip counterparts here. (all laugh) So, you know, I look at the world a little differently, and a lot of the research I'm doing now is the impact that distributed computing has on customer and employee experiences, right? You talk to every business, and how the experiences they deliver to their customers is really differentiating how they go to market. And so they're looking at these different ways of feeding up data and analytics and things like that in different places. And I think this is going to have a really profound impact on enterprise IT architecture. We're putting more data, more compute, in more places, all the way down to little micro edges and retailers and things like that. And so we need the variety. Historically, if you think back to when I was in IT, you know, pre-Y2K, we didn't have a lot of choice in things, right? We had a server that was rack-mount or standup, right? And there wasn't a whole lot of, you know, difference in choice.
But today we can deploy, you know, these really high-performance compute systems on little blades inside servers, or inside, you know, autonomous vehicles and things. I think the world from here gets... You know, just the choice of what we have and the way hardware and software work together is really going to, I think, change the world and the way we do things. We're already seeing that, like I said, in the consumer world, right? There's so many things you can do from, you know, a smart home perspective, you know, natural language processing, stuff like that. And it's starting to hit businesses now. So just wait and watch the next five years. >> Yeah, totally. The computing power at the edge is just going to be mind-blowing. >> It's unbelievable what you can do at the edge. >> Yeah, yeah. Hey Z, I just want to say that we know you're not a propeller head, and I for one would like to thank you for having your master's thesis hanging on the wall behind you, 'cause we know that you studied basket weaving. >> I was actually a physics and math major, so. >> Good man. Another math major. All right, Bob O'Donnell, you're going to bring us home. I mean, we've seen the importance of semiconductors and silicon in our everyday lives, but your last thoughts please. >> Sure, and just to clarify, by the way, I was a great books major and this was actually for my final paper. And so I was like philosophy and all that kind of stuff and literature, but I still somehow got into tech. Look, it's been a great conversation, and I want to pick up a little bit on a comment Zeus made, which is this: it's the combination of the hardware and the software coming together, and the manner with which that needs to happen, I think, is critically important. And the other thing is, because of the diversity of the chip architectures and all those different pieces and elements, it's going to be how software tools evolve to adapt to that new world. So I look at things like what Intel's trying to do with oneAPI.
You know, what Nvidia has done with CUDA. What other platform companies are trying to create: tools that allow them to leverage the hardware, but also embrace the variety of hardware that is there. And so as those software development environments and software development tools evolve to take advantage of these new capabilities, that's going to open up a lot of interesting opportunities that can leverage all these new chip architectures. That can leverage all these new interconnects. That can leverage all these new system architectures. And figuring out ways to make that all happen, I think, is going to be critically important. And then finally, I'll mention the research I'm actually currently working on is on private 5G and how companies are thinking about deploying private 5G, and the potential for edge applications for that. So I'm doing a survey of several hundred US companies as we speak, and really looking forward to getting that done in the next couple of weeks. >> Yeah, look forward to that. Guys, again, thank you so much. Outstanding conversation. Anybody going to be at Dell Tech World in a couple of weeks? Bob's going to be there. Dave Nicholson. Well, drinks on me, and guys, I really can't thank you enough for the insights and your participation today. Really appreciate it. Okay, and thank you for watching this special power panel episode of theCUBE Insights powered by ETR. Remember we publish each week on SiliconANGLE.com and Wikibon.com. All these episodes are available as podcasts. DM me or any of these guys. I'm @DVellante. You can email me at David.Vellante@siliconangle.com. Check out etr.ai for all the data. This is Dave Vellante. We'll see you next time. (upbeat music)
Vikas Ratna and James Leach, Cisco | Simplifying Hybrid Cloud
(upbeat music) >> Welcome back to theCUBE special presentation, Simplifying Hybrid Cloud, brought to you by Cisco. We're here with Vikas Ratna, who's the director of product management for UCS at Cisco, and James Leach, who is director of business development at Cisco. Gents, welcome back to theCUBE, good to see you again. >> Hey, thanks for having us. >> Okay Jim, let's start. We know that when it comes to navigating a transition to hybrid cloud, it's a complicated situation for a lot of customers. And as organizations hit the pavement for their hybrid cloud journeys, what are the most common challenges that they face? What are they telling you? How is Cisco, specifically UCS, helping them deal with these problems? >> Well, you know, first I think that's a, you know, that's a great question, and, you know, a customer-centric view is kind of the approach we've taken from day one, right? So I think that if you look at the challenges that we're solving for, that our customers are facing, you could break them into just a few kind of broader buckets. The first would definitely be applications, right? That's where the rubber meets the proverbial road with the customer, and I would say that, you know, what we're seeing is the challenges customers are facing within applications come from the way that applications have evolved. So what we're seeing now is more data-centric applications, for example. Those require that we, you know, are able to move and process large datasets really in real time. And the other aspect of applications, I think, that poses some challenges for our customers, would be around the fact that they're changing so quickly. So the application that exists today, or the day that they, you know, make a purchase of infrastructure to be able to support that application, that application is most likely changing so much more rapidly than the infrastructure can keep up with today.
So, that creates some challenges around, you know, how do I build the infrastructure? How do I rightsize it without over-provisioning, for example? But also there's a need for some flexibility around life cycle, and planning those purchase cycles based on the life cycle of the different hardware elements. And within the infrastructure, which I think is the second bucket of challenges, we see customers who are being forced to move away from, like, a modular or blade approach, which offers a lot of operational and consolidation benefits, and they have to move to something like a rack server model for some applications because of these needs that these data-centric applications have, and that creates a lot of, you know, opportunity for siloing infrastructure. And those silos in turn create multiple operating models within the, you know, data center environment that, you know, again drive a lot of complexity. So that complexity is definitely the enemy here. And then finally, I think life cycles. We're seeing this democratization of processing, if you will, right? So it's no longer just CPU-focused; we have GPU, we have FPGA, we have, you know, things that are being done in storage and the fabrics that stitch them together, that are all changing rapidly and have very different life cycles. So, when those life cycles don't align, for a lot of our customers they see a challenge in how they can manage these different life cycles and still make a purchase, without having to make too big of a compromise in one area or another because of the misalignment of life cycles. So that is, you know, kind of the other bucket. And then finally, I think management is huge, right? So management, you know, at its core, is really right-sized for our customers and gives them the most value when it meets the mark around scale and scope. You know, back in 2009 we weren't meeting that mark in the industry, and UCS came about and took the management outside the chassis, right?
We put it at the top of the rack, and that worked great for the scale and scope we needed at that time. However, as things have changed, we're seeing a very new scale and scope needed, right? So we're talking about a hybrid cloud world that has to manage across data centers, across clouds, and, you know, having to stitch things together, for some of our customers, poses a huge challenge. So there are tools for all of those operational pieces that touch the application, that touch the infrastructure, but they're not the same tool. They tend to be disparate tools that have to be put together. >> Dave: All right. >> So our customers, you know, don't really enjoy being in the business of, you know, building their own tools, so that creates a huge challenge. And one where I think that they really crave that full hybrid cloud stack that has that application visibility, but also can reach down into the infrastructure. >> Right, you know, Jim, I said in my open that you guys, Cisco, had sort of changed the server game with the original UCS, but the X-Series is the next generation, the generation for the next decade, which is really important 'cause you touched on a lot of things. These data-intensive workloads, alternative processors to sort of meet those needs, the whole cloud operating model and hybrid cloud have really changed, so how is it going with the X-Series? You made a big splash last year, what's the reception been in the field? >> Actually it's been great. You know, we're finding that customers can absolutely relate to our, you know, UCS X-Series story. I think that, you know, the main reason they relate to it is they helped create it, right? It was their feedback and their partnership that gave us really those problem areas, those areas that we could solve for the customer, that actually add, you know, significant value.
So, you know, since we brought UCS to market back in 2009, you know, we had this unique architectural paradigm that we created, and I think that created a product which was the fastest in Cisco history in terms of growth. What we're seeing now is X-Series is actually on a faster trajectory. So we're seeing a tremendous amount of uptake, we're seeing, you know, both in terms of, you know, the number of customers, but also more importantly, the number of workloads that our customers are using, and the types of workloads are growing, right? So we're growing this modular segment that exists, not just, you know, bringing customers onto a new product but we're actually bringing them into the product in the way that we had envisioned, which is one infrastructure that can run any application on it seamlessly. So we're really excited to be growing this modular segment. I think the other piece, you know, that, you know, we judge ourselves on is, you know, sort of not just within Cisco but also within the industry. And I think right now as a, you know, a great example, you know, our competitors have taken kind of swings and misses over the past five years at this, at a, you know, kind of the new next architecture, and we're seeing a tremendous amount of growth even faster than any of our competitors have seen when they announced something that was new to this space. So, I think that the ground-up work that we did is really paying off, and I think that what we're also seeing is it's not really a leapfrog game as it may have been in the past. X-Series is out in front today and, you know, we're extending that lead with some of the new features and capabilities we have. So we're delivering on the story that's already been resonating with customers, and, you know, we're pretty excited that we're seeing the results as well. So as our competitors hit walls, I think we're, you know, we're executing on the plan that we laid out back in June, when we launched X-Series to the world.
And, you know, as we continue to do that, we're seeing, you know, again, tremendous uptake from our customers. >> So thank you for that Jim. So, Vikas I was just on Twitter just today actually talking about the gravitational pull, you've got the public clouds pulling CXOs one way, and you know, on-prem folks pulling the other way, and hybrid cloud so, organizations are struggling with a lot of different systems and architectures, and ways to do things. And I said that what they're trying to do is abstract all that complexity away and they need infrastructure to support that and I think your stated aim is really to try to help with that confusion with the X-Series right? I mean, so how so? Can you explain that? >> Sure, and that's right, the context that you built up right there Dave. If you walk into an enterprise data center you'll see a plethora of compute systems spread all across, because every application has its unique needs, and hence you find drive-dense systems, memory-dense systems, GPU-dense systems, core-dense systems, and a variety of form factors, 1U, 2U, 4U, and every one of them typically comes with, you know, a variety of adapters and cables and so forth. This creates the siloing of resources. The fabric sprawl, the adapter sprawl, the power and cooling implications, the rack, you know, space challenges. And above all, the multiple management planes that they come with, which make it very difficult for IT to have one common center policy, and enforce it all across the firmware, and software, and so forth. And then the upgrade challenges of those silos make it even more complex, as these go through upgrade cadences of their own. As a result we observe quite a few of our customers, you know, really, seeing a slowness in their agility, and a high burden in the overall cost of ownership. This is where, with the X-Series powered by Intersight, we have one simple goal.
We want to make sure our customers get out of those complexities, become more agile, and drive down their total cost of ownership. And we are delivering it by doing three things, three aspects of simplification. First, simplify their whole infrastructure by enabling them to run their entire workload on a single infrastructure. An infrastructure which removes the siloing of form factors. An infrastructure which reduces the rack footprint that is required. An infrastructure where power and cooling budgets are lower. Second, we want to simplify by delivering a cloud operating model, where they can create the policy once across compute, network, storage, and deploy it all across. And third, we want to take away the pain they have by simplifying the process of upgrade, and any platform evolution that they're going to go through in the next two, three years. So that's where the focus is: on driving down the complexity, and lowering the total cost of ownership. >> Oh, that's key. Less friction is always a good thing. Now of course, Vikas we heard from the HyperFlex guys earlier, they had news, not to be outdone, you have hard news as well, what innovations are you announcing around X-Series today? >> Absolutely, so we are following up on the exciting X-Series announcement that we made in June last year Dave, and we are now introducing three innovations on X-Series with the goal of three things. First, expand the supported workloads on X-Series. Second, take the performance to new levels. Third, dramatically reduce the complexities in the data center by driving down the number of adapters and cables that are needed. To that end, three new innovations are coming in. First, we are introducing support for the GPU node using a cableless and very unique X Fabric architecture. This is the most elegant design to add GPUs to the compute node in the modular form factor. Thereby our customers can now power any AI/ML workload, or any workload that needs many more GPUs.
Second, we are bringing in GPUs right onto the compute node. And thereby our customers can now fire up the accelerated VDI workload for example. And third, which is what, you know, we are extremely proud about, is we are innovating again by introducing the 5th generation of our very popular Unified Fabric Technology. With the increased bandwidth that it brings in, coupled with the local drive capacity and densities that we have on the compute node, our customers can now fire up the big data workload, the HCI workload, the SDS workload, all these workloads that have historically not lived in the modular form factor, can be run over there and benefit from the architectural benefits that we have. Second, with the announcement of fifth generation fabric we've become the only vendor to now finally enable 100 Gig end-to-end single port bandwidth, and there are multiple of those that are coming in there. And we are working very closely with our CI partners to deliver the benefit of this performance through our Cisco Validated Designs to our CI franchise. And third, the innovations in the fifth gen fabric will again allow our customers to have fewer physical adapters, may it be Ethernet adapters, may it be Fibre Channel adapters, or may it be the other storage adapters; they're reduced down, coupled with the reduction in cables. So very, very excited about these three big announcements that we are making in this release. >> Great, a lot there, you guys have been busy, so thank you for that Vikas. So Jim you talked a little bit about the momentum that you have, customers are adopting, what problems are they telling you that X-Series addresses and how do they align with where they want to go in the future? >> That's a great question. I think if you go back and think about some of the things that we mentioned before in terms of the problems that we originally set out to solve, we're seeing a lot of traction.
So what Vikas mentioned I think is really important, right? Those pieces that we just announced really enhance that story and really move, again, kind of to the next level of taking advantage of some of these, you know, problem solving for our customers. You know, if you look at, you know, I think Vikas mentioned accelerated VDI, that's a great example. These are where customers, you know, they need to have this dense compute, they need video acceleration, they need tight policy management, right? And they need to be able to deploy these systems anywhere in the world. Well, that's exactly what we're hitting on here with X-Series right now. We're hitting the mark in every single way, right? We have the highest compute config density that we can offer across the, you know, the very top end configurations of CPUs, and a lot of room to grow, we have the, you know, the premier cloud-based management, you know, hybrid cloud suite in the industry right? So check there. We have the flexible GPU accelerators that Vikas just talked about that we're announcing both on the system and also adding additional ones through the use of the X Fabric, which is really, really critical to this launch as well, and, you know, I think finally the fifth generation of Fabric Interconnect, and Virtual Interface Card, and Intelligent Fabric Module go hand in hand in creating this 100 Gig end-to-end bandwidth story so that we can move a lot of data. Again, you know, having all this performance is only as good as what we can get in and out of it right? So giving customers the ability to manage it anywhere, to be able to get the bandwidth that they need, to be able to get the accelerators that are flexible, that fit exactly their needs, this is huge, right? It solves a lot of the problems we can tick off right away.
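The policy model Jim and Vikas keep returning to, create the policy once across compute, network, and storage, then deploy it all across, boils down to stamping per-node profiles out of one shared template. A minimal sketch in Python; the class and field names here are illustrative inventions, not Intersight's actual object model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProfileTemplate:
    """One policy definition spanning compute, network, and storage."""
    name: str
    bios_policy: str
    vlan_ids: tuple
    boot_order: tuple

def deploy(template: ProfileTemplate, node_ids: list) -> dict:
    """Stamp out one profile per node from the shared template.

    Because every profile is derived from the same template, a policy
    is authored once and enforced everywhere, instead of being
    hand-edited on each siloed system's own management plane.
    """
    return {
        node: {
            "template": template.name,
            "bios": template.bios_policy,
            "vlans": template.vlan_ids,
            "boot": template.boot_order,
        }
        for node in node_ids
    }

base = ProfileTemplate(
    name="x-series-base",          # hypothetical template name
    bios_policy="performance",
    vlan_ids=(10, 20),
    boot_order=("local-disk", "pxe"),
)
profiles = deploy(base, ["node-1", "node-2", "node-3"])
```

Changing `bios_policy` on the template and redeploying updates every node consistently, which is the "one common policy, enforced all across" behavior described above.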
With the infrastructure as I mentioned, X Fabric is really critical here because it opens a lot of doors here, you know, we're talking about GPUs today, but in the future there are other elements that we can disaggregate, like the GPUs, that solve these life cycle misalignment issues, they solve issues around the form factor limitations. Like it does for GPUs, we can do that with storage or memory in the future. So that's going to be huge, right? This is disaggregation that actually delivers, right? It's not just a gimmicky bar trick here that we're doing, this is something that customers can really get value out of day one. And then finally, I think the, you know, the future readiness here, you know, we avoid saying future proof because we're kind of embracing the future here. We know that not only are the GPUs going to evolve, the CPUs are going to evolve, the drives, you know, the storage modules are going to evolve. All of these things are changing very rapidly, the fabric that stitches them together is critical and we know that we're just on the edge of some of the developments that are coming with CXL, with some of the PCI Express changes that are coming in the very near future, so we're ready to go. X-Series, and the X Fabric, is exactly the vehicle that's going to be able to deliver those technologies to our customers, right? Our customers are out there saying that, you know, they want to buy into something like X-Series that has all the operational benefits, but at the same time, they have to have the comfort in knowing that they're protected against being locked out of some technology that's coming in the future right? We want our customers to take these disruptive technologies and not be disrupted but use them to disrupt their competition as well.
So we, you know, we're really excited about the pieces today, and I think it goes a long way towards continuing to tell the customer benefit story that X-Series brings, and, you know, again, you know, stay tuned because it's going to keep getting better as we go. >> Yeah, a lot of headroom for scale and the management piece is key there. We just have time for one more question Vikas, give us some nuggets on the roadmap. What's next for X-Series that we can look forward to? >> Absolutely Dave. As we talked about and James also hinted, this is a future-ready architecture. A lot of the focus and innovation that we are going through is about enabling our customers to seamlessly and painlessly adopt very disruptive hardware technologies that are coming up, with no rip and replace. And there we are looking into enabling the customer's journey as they transition from PCIe Gen 4 to Gen 5 to Gen 6, without rip and replace, as they embrace CXL without rip and replace, as they embrace the newer paradigms of computing through disaggregated memory, disaggregated PCIe or NVMe-based dense drives and so forth. We are also looking forward to the X Fabric next generation which will allow dynamic assignment of GPUs anywhere within the chassis and much more. So this is again all about focusing on the innovation that will make enterprise data center operations a lot simpler, and drive down the TCO, by keeping them covered not only for today but also for the future. So that's where some of the focus is, Dave. >> Okay, thank you guys, we'll leave it there, in a moment I'll have some closing thoughts. (bright upbeat music) We're seeing a major evolution, perhaps even a bit of a revolution, in the underlying infrastructure necessary to support hybrid work. Look, virtualizing compute and running general purpose workloads is something IT figured out a long time ago. But just when you have it nailed down in the technology business, things change don't they? You can count on that.
The cloud operating model has bled into on-premises locations, and is creating a new vision for the future, which we heard a lot about today. It's a vision that's turning into reality and it supports much more diverse and data-intensive workloads and alternative compute modes. It's one where flexibility is a watchword, enabling change, attacking complexity, and bringing a management capability that allows for granular management of resources at massive scale. I hope you've enjoyed this special presentation, remember all these videos are available on demand at thecube.net, and if you want to learn more please click on the information link. Thanks for watching Simplifying Hybrid Cloud brought to you by Cisco and theCUBE, your leader in enterprise tech coverage. This is Dave Vellante, be well, and we'll see you next time. (upbeat music)
Cisco: Simplifying Hybrid Cloud
>> The introduction of the modern public cloud in the mid-2000s permanently changed the way we think about IT. At the heart of it, the cloud operating model attacked one of the biggest problems in enterprise infrastructure, human labor costs. More than half of IT budgets were spent on people, and much of that effort added little or no differentiable value to the business. The automation of provisioning, management, recovery, optimization, and decommissioning infrastructure resources has gone mainstream as organizations demand a cloud-like model across all their application infrastructure, irrespective of its physical location. This has not only cut costs, but it's also improved quality and reduced human error. Hello everyone, my name is Dave Vellante and welcome to Simplifying Hybrid Cloud, made possible by Cisco. Today, we're going to explore Hybrid Cloud as an operating model for organizations. Now the definition of cloud is expanding. Cloud is no longer an abstract set of remote services, you know, somewhere out in the clouds. No, it's an operating model that spans public cloud, on-premises infrastructure, and it's also moving to edge locations. This trend is happening at massive scale. While at the same time, preserving granular control of resources. It's an entirely new game where IT managers must think differently to deal with this complexity. And the environment is constantly changing. The growth and diversity of applications continues. And now, we're living in a world where the workforce is remote. Hybrid work is now a permanent state and will be the dominant model. In fact, a recent survey of CIOs by Enterprise Technology Research, ETR, indicates that organizations expect 36% of their workers will be operating in a hybrid mode. Splitting time between remote work and in office environments. This puts added pressure on the application infrastructure required to support these workers.
The underlying technology must be more dynamic and adaptable to accommodate constant change. So the challenge for IT managers is ensuring that modern applications can be run with a cloud-like experience that spans on-prem, public cloud, and edge locations. This is the future of IT. Now today, we have three segments where we're going to dig into these issues and trends surrounding Hybrid Cloud. First up is DD Dasgupta, who will set the stage and share with us how Cisco is approaching this challenge. Next, we're going to hear from Manish Agarwal and Darren Williams, who will help us unpack HyperFlex which is Cisco's hyperconverged infrastructure offering. And finally, our third segment will drill into Unified Compute. More than a decade ago, Cisco pioneered the concept of bringing together compute with networking in a single offering. Cisco, frankly, changed the legacy server market with UCS, Unified Compute System. The X-Series is Cisco's next generation architecture for the coming decade and we'll explore how it fits into the world of Hybrid Cloud, and its role in simplifying the complexity that we just discussed. So, thanks for being here. Let's go. (upbeat music playing) Okay, let's start things off. DD Dasgupta is back on theCUBE to talk about how we're going to simplify Hybrid Cloud complexity. DD welcome, good to see you again. >> Hey Dave, thanks for having me. Good to see you again. >> Yeah, our pleasure. Look, let's start with the big picture. Talk about the trends you're seeing from your customers. >> Well, I think first off, every customer these days is a public cloud customer. They do have their on-premise data centers, but, every customer is looking to move workloads, new services, cloud native services from the public cloud. I think that's one of the big things that we're seeing. While that is happening, we're also seeing a pretty dramatic evolution of the application landscape itself.
You've got, you know, bare metal applications, you always have virtualized applications, and then most modern applications are containerized, and, you know, managed by Kubernetes. So I think we're seeing a big change in the application landscape as well. And, probably, you know, triggered by the first two things that I mentioned, the execution venue of the applications, and then the applications themselves, it's triggering a change in the IT organizations, in the development organizations, and sort of not only how they work within their organizations, but how they work across all of these different organizations. So I think those are some of the big things that I hear about when I talk to customers. >> Well, so it's interesting. I often say Cisco kind of changed the game in server and compute when it developed the original UCS. And you remember there were organizational considerations back then bringing together the server team and the networking team and of course the storage team as well. And now you mentioned Kubernetes, that is a total game changer with regard to the whole application development process. So you have to think about a new strategy in that regard. So how have you evolved your strategy? What is your strategy to help customers simplify, accelerate their hybrid cloud journey in that context?
But then we've evolved it over the last few years because we believe that a customer shouldn't have to manage a separate piece of software, would do manage the hardware, the underlying hardware. And then a separate tool to connect it to a public cloud. And then a third tool to do optimization, workload optimization or performance optimization, or cost optimization. A fourth tool to now manage, you know, Kubernetes and like, not just in one cluster, one cloud, but multi-cluster, multi-cloud. They should not have to have a fifth tool that does, goes into observability anyway. I can go on and on, but you get the idea. We wanted to bring everything onto that same platform that manage their infrastructure. But it's also the platform that enables the simplicity of hybrid cloud operations, automation. It's the same platform on which you can use to manage the, the Kubernetes infrastructure, Kubernetes clusters, I mean, whether it's on-prem or in a cloud. So, overall that's the strategy. Bring it to a single platform, and a platform is a loaded word we'll get into that a little bit, you know, in this conversation, but, that's the overall strategy, simplify. >> Well, you know, you brought platform. I like to say platform beats products, but you know, there was a day, and you could still point to some examples today in the IT industry where, hey, another tool we can monetize that. And another one to solve a different problem, we can monetize that. And so, tell me more about how Intersight came about. You obviously sat back, you saw what your customers were going through, you said, "We can do better." So tell us the story there. >> Yeah, absolutely. So, look, it started with, you know, three or four guys in getting in a room and saying, "Look, we've had this, you know, management software, UCS manager, UCS director." And these are just the Cisco's management, you know, for our, softwares for our own platforms. And every company has their own flavor. 
We said, we took on this bold goal of like, we're not, when we rewrite this or we improve on this, we're not going to just write another piece of software. We're going to create a cloud service. Or we're going to create a SaaS offering. Because the same, the infrastructure built by us, whether it's on networking or compute, or the hyperconverged software, how do our customers use it? Well, they use it to write and run their applications, their SaaS services; every customer, every company today is a software company. They live and die by how their applications work or don't. And so, we were like, "We want to eat our own dog food here," right? We want to deliver this as a SaaS offering. And so that's how it started, we've been on this journey for about four years, tens of thousands of customers. But it was a pretty big, bold ambition 'cause you know, the big change with SaaS, as you're familiar Dave, is the job of now managing this piece of software is not on the customer, it's on the vendor, right? This can never go down. We have a release every Thursday, new capabilities, and we've learned so much along the way, whether it's around scalability, reliability, working with our own company's security organizations on what can or cannot be in a SaaS service. So again, it's been a wonderful journey, but, I wanted to point out, we are in some ways eating our own dog food 'cause we built a SaaS application that helps other companies deliver their SaaS applications. >> So Cisco, I look at Cisco's business model and I compare it, of course, to other companies in the infrastructure business and, you're obviously a very profitable company, you're a large company, you're growing faster than most of the traditional competitors. And, so that means that you have more to invest.
You can afford things like, you know, stock buybacks, and you can invest in R&D; you don't have to make those hard trade-offs that a lot of your competitors have to make, so-- >> You got to have a talk with my boss on the whole investment. >> Yeah, right. It's never enough, right? Never enough. But speaking of R&D and innovations that you're introducing, I'm specifically interested in, how are you dealing with innovations to help simplify hybrid cloud, the operations there, improve flexibility, and things around Cloud Native initiatives as well? >> Absolutely, absolutely. Well, look, I think, one of the fundamentals where we're kind of philosophically different from a lot of options that I see in the industry is, we don't need to build everything ourselves, we don't. I just need to create a damn good platform with really good platform services, whether it's, you know, around searchability, whether it's around logging, whether it's around, you know, access control, multi-tenancy. I need to create a really good platform, and make it open. I do not need to go on a shopping spree to buy 17 and 1/2 companies and then figure out how to stitch it all together. 'Cause it's almost impossible. And if it's impossible for us as a vendor, it's three times more difficult for the customer who then has to consume it. So that was the philosophical difference in how we went about building Intersight. We've created a hardened platform that's always on, okay? And then, then the magic starts happening. Then you get partners, whether it is, you know, infrastructure partners, like, you know, some of our storage partners like NetApp or Pure, or you know, others, who want their converged infrastructures also to be managed, or their other SaaS offerings, and software vendors who have now become partners. Like we did not write Terraform, you know, but we partnered with Hashi and now, you know, the Terraform service is available on the Intersight platform.
We did not write all the algorithms for workload optimization between a public cloud and on-prem. We partner with a company called Turbonomic and so that's now an offering on the Intersight platform. So that's where we're philosophically different, in sort of, you know, how we have gone about this. And, it actually dovetails well into some of the new things that I want to talk about today that we're announcing on the Intersight platform, where we're actually announcing the ability to attach and be able to manage Kubernetes clusters which are not on-prem. They're actually on AWS, on Azure, soon coming on Google Cloud, on GKE as well. So it really doesn't matter. We're not telling a customer, if you're comfortable building your applications and running Kubernetes clusters, you know, in AWS or Azure, stay there. But in terms of monitoring, managing it, you can use Intersight, and since you're using it on-prem you can use that same piece of software to manage Kubernetes clusters in a public cloud. Or even manage VMs in an EC2 instance. So. >> Yeah so, the fact that you could, you mentioned Pure Storage, NetApp, so Intersight can manage that infrastructure. I remember the Hashi deal and it caught my attention. I mean, of course a lot of companies want to partner with Cisco 'cause you've got such a strong ecosystem, but I thought that was an interesting move, Turbonomic you mentioned. And now you're saying Kubernetes in the public cloud. So a lot different than it was 10 years ago. So my last question is, how do you see this hybrid cloud evolving? I mean, you had private cloud and you had public cloud, and it was kind of a tug of war there. We see these two worlds coming together. How will that evolve over the next few years? >> Well, I think it's the evolution of the model and I really look at Cloud, you know, 2.0 or 3.0, or depending on, you know, how you're keeping count.
But, I think one thing has become very clear again, we've been eating our own dog food, I mean, Intersight is a hybrid cloud SaaS application. So we've learned some of these lessons ourselves. One thing is for sure, that the customers are looking for a consistent model, whether it's on the edge, in the COLO, public cloud, on-prem data center, it doesn't matter. They're looking for a consistent model for operations, for governance, for upgrades, for reliability. They're looking for a consistent operating model. What (indistinct) tells me I think there's going to be a rise of more custom clouds. It's still going to be hybrid, so applications will want to reside wherever it makes the most sense for them, which is obviously the data, 'cause you know, data is the most expensive thing. So it's going to be co-located with the data; the data goes on the edge, it will be on the edge, COLO, public cloud, doesn't matter. But, you're basically going to see more custom clouds, more industry specific clouds, you know, whether it's for finance, or transportation, or retail, industry specific, I think sovereignty is going to play a huge role, you know, today, if you look at the cloud providers there's a handful of, you know, American and Chinese companies, that leave the rest of the world out when it comes to making, you know, good digital citizens of their people and you know, whether it's data latency, data gravity, data sovereignty, I think that's going to play a huge role. Sovereignty's going to play a huge role. And the distributed cloud, also called Edge, is going to be the next frontier. And so, that's where we are trying to line up our strategy. And if I had to sum it up in one sentence, it's really, your cloud, your way. Every customer is on a different journey, they will have their choice of, like, workloads, data, you know, upgrade, reliability concerns. That's really what we are trying to enable for our customers. >> You know, I think I agree with you on that, custom clouds.
And I think what you're seeing is, you said every company is a software company. Every company is also becoming a cloud company. They're building their own abstraction layers, they're connecting their on-prem to their public cloud. They're doing that across clouds, and they're looking for companies like Cisco to do the hard work, and give me an infrastructure layer that I can build value on top of. 'Cause I'm going to take my financial services business to my cloud model, or my healthcare business. I don't want to mess around with, I'm not going to develop, you know, custom infrastructure like an Amazon does. I'm going to look to Cisco and your R&D to do that. Do you buy that? >> Absolutely. I think again, it goes back to what I was talking about with platform. You got to give the world a solid, open, flexible platform. And flexible in terms of the technology, flexible in how they want to consume it. Some of our customers are fine with the SaaS, you know, software. But if I talk to, you know, my friends in the federal team, no, that does not work. And so, how they want to consume it, they want to, you know, (indistinct) you know, sovereignty we talked about. So, I think, you know, the job for an infrastructure vendor like ourselves is to give the world an open platform, give them the knobs, give them the right API tool kit. But the last thing I will mention is, you know, there's still a place for innovation in hardware. And I think some of my colleagues are going to get into some of those, you know, details, whether it's on our X-Series, you know, platform or HyperFlex, but it's really, it's going to be software defined, it's a SaaS service and then, you know, give the world an open, rock solid platform. >> Got to run on something. All right, thanks DD, always a pleasure to have you on theCUBE, great to see you. >> Thanks for having me. >> You're welcome.
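The multi-venue story DD laid out, one SaaS control plane attached to Kubernetes clusters on-prem, in AWS, and in Azure, is at its core inventory aggregation behind a single interface. A minimal sketch, where the stubbed per-venue fetchers and cluster names are hypothetical stand-ins for the real Intersight and cloud-provider APIs:

```python
from typing import Callable, Dict, List

def inventory_clusters(
    sources: Dict[str, Callable[[], List[dict]]]
) -> List[dict]:
    """Merge cluster inventories from several venues into one view.

    `sources` maps a venue name to a callable returning that venue's
    clusters. In a real system each callable would wrap the venue's
    own API (EKS, AKS, an on-prem distribution); here they are stubbed.
    """
    merged = []
    for venue, fetch in sources.items():
        for cluster in fetch():
            merged.append({"venue": venue, **cluster})
    return merged

# Stubbed fetchers with invented cluster names, for illustration only.
sources = {
    "on-prem": lambda: [{"name": "dc1-k8s", "nodes": 12}],
    "aws": lambda: [{"name": "eks-prod", "nodes": 30}],
    "azure": lambda: [{"name": "aks-dev", "nodes": 6}],
}
clusters = inventory_clusters(sources)
```

The consumer of `clusters` never needs to care which venue a cluster lives in, which is the point of monitoring and managing everything from the same piece of software.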
In a moment, I'll be back to dig into hyperconverged, and where HyperFlex fits, and how it may even help with addressing some of the supply chain challenges that we're seeing in the market today. >> It used to be all your infrastructure was managed here. But things got more complex and distributed, and now IT operations need to be managed everywhere. But what if you could manage everywhere from somewhere? One scalable place that brings together your teams, technology, and operations, both on-prem and in the cloud. One automated place that provides full stack visibility to help you optimize performance and stay ahead of problems. One secure place where everyone can work better, faster, and seamlessly together. That's the Cisco Intersight cloud operations platform. The time saving, cost reducing, risk managing solution for your whole IT environment, now and into the future of this ever-changing world of IT. (upbeat music) >> With me now are Manish Agarwal, senior director of product management for HyperFlex at Cisco, @flash4all, number four, I love that, on Twitter. And Darren Williams, the director of business development and sales for Cisco, MrHyperFlex, @MrHyperFlex on Twitter. Thanks guys. Hey, we're going to talk about some news and HyperFlex, and what role it plays in accelerating the hybrid cloud journey. Gentlemen, welcome to theCUBE, good to see you. >> Thanks a lot Dave. >> Thanks Dave. >> All right Darren, let's start with you. So, for a hybrid cloud, you've got to have an on-prem connection, right? So, you've got to have basically a private cloud. What are your thoughts on that? >> Yeah, we agree. You can't have a hybrid cloud without that on-prem element. And you've got to have a strong foundation in terms of how you set up the whole benefit of the cloud model you're building, in terms of what you want to try and get back from the cloud. You need a strong foundation. Hyperconvergence provides that.
We see more and more customers requiring a private cloud, and they're building it with hyperconvergence, in particular HyperFlex. Now, to make all that work, they need a good, strong cloud operations model to be able to connect both the private and the public. And that's where we look at Intersight. We've got a solution around that to be able to connect that through a SaaS offering. That delivers simplified operations, gives them optimization, and also automation to bring both private and public together in that hybrid world. >> Darren, let's stay with you for a minute. When you talk to your customers, what are they thinking these days when it comes to implementing hyperconverged infrastructure in both the enterprise and at the edge? What are they trying to achieve? >> So there's many things they're trying to achieve. In brutal honesty, they're trying to save money, that's probably the quickest answer. But I think they're also looking at simplicity: how can they remove layers of components they've had before in their infrastructure? We obviously see the collapsing of storage and storage networking into hyperconvergence. And we've got customers that have achieved 80% savings by doing that collapse into a hyperconverged infrastructure, away from their three-tier infrastructure. It's also about scalability. They don't know the end game, so they're looking at how they can size for what they know now, and how they can grow that with hyperconvergence very easily. It's one of the major factors and benefits of hyperconvergence. They also obviously need performance, and consistent performance. They don't want to compromise performance around their virtual machines when they want to run multiple workloads. They need that consistency all the way through. And then probably one of the biggest ones in that simplicity model is the management layer, ease of management.
To make it easier for their operations, yeah, we've got customers that have told us they've saved 50% of costs in their operations model by deploying HyperFlex. Also around the time savings: they make massive time savings, which they can reinvest in their infrastructure and their operations teams, in being able to innovate and go forward. And then I think probably one of the biggest pieces we've seen as people move away from three-tier architecture is the deployment element. The ease of deployment gets easier with hyperconverged, especially with edge. Edge is a major key use case for us. And what our customers want to do is get the benefit of a data center at the edge without, A, the big investment. They don't want to compromise on performance, and they want that simplicity in both management and deployment. And we've seen the analysts' recommendations around what their readers are telling them, in terms of how management and deployment are key for IT operations teams, and how much they're actually saving by deploying at the edge, and taking the burden away when they deploy hyperconvergence. And as I said, the savings element is the key bit. And again, not always, but there are obviously case studies around about public cloud being quite expensive at times, over time, for the wrong workloads. So by bringing them back, people can make savings. And we again have customers that have made 50% savings over three years compared to their public cloud usage. So, I'd say those are the key things that customers are looking for. Yeah. >> Great, thank you for that Darren. Manish, we have some hard news. You've been working a lot on evolving the HyperFlex line. What's the big news that you've just announced? >> Yeah, thanks Dave. So there are several things that we are announcing today. The first one is a new offer called HyperFlex Express. This is, you know, eight Cisco Intersight-led and Cisco Intersight-managed HyperFlex configurations.
These are what we feel is the fastest path to hybrid cloud. The second is that we are expanding our server portfolio by adding support for HX on AMD rack servers, UCS AMD rack servers. And the third is a new capability that we are introducing that we are calling the local containerized witness. And let me take a minute to explain what this is. This is a pretty nifty capability to optimize for edge environments. So, you know, this leverages Cisco's ubiquitous presence with the networking, you know, products that we have in environments worldwide. The smallest HyperFlex configuration that we have is a 2-node configuration, which is primarily used in edge environments. Think of, you know, a backroom in a department store, or an oil rig, or it might even be a smaller data center somewhere around the globe. For these 2-node configurations, there is always a need for a third entity; the industry term for that is either a witness or an arbitrator. We had that for HyperFlex as well. And the problem that customers face is where you host this witness. It cannot be on the cluster, because the job of the witness is, when the infrastructure is going down, to basically break the tie, to arbitrate which node gets to survive. So it needs to be outside of the cluster. But finding infrastructure to actually host this is a problem, especially in edge environments, where these are resource-constrained environments. So what we've done is we've taken that witness and converted it into a container form factor, and then qualified a very large slew of the Cisco networking products that we have, right from ISR, ASR, Nexus, Catalyst, industrial routers, even a Raspberry Pi, that can host this witness. That eliminates the need for you to find yet another piece of infrastructure, or to do any, you know, care and feeding of that infrastructure. You can host it on something that already exists in the environment. So those are the three things that we are announcing today.
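To make the witness idea concrete for readers: the pattern Manish describes is a general split-brain tie-breaker for two-node clusters. The sketch below is not Cisco's implementation and all names in it are hypothetical; it only illustrates the logic of why the arbitrator must live outside the cluster, assuming a simple first-to-reach-the-witness-wins rule.

```python
# Hedged sketch of the generic witness/arbitrator pattern for a
# 2-node cluster. Not HyperFlex code; class and method names are
# invented for illustration.

class Witness:
    """Third entity hosted OUTSIDE the cluster (e.g. containerized on
    a router or a Raspberry Pi). It breaks the tie when the two nodes
    can no longer see each other."""
    def __init__(self):
        self.survivor = None  # node granted the right to keep serving

    def request_survival(self, node_id):
        # First node to reach the witness after a partition wins;
        # any later requester is told to stop serving I/O.
        if self.survivor is None:
            self.survivor = node_id
        return self.survivor == node_id


class Node:
    def __init__(self, node_id, witness):
        self.node_id = node_id
        self.witness = witness
        self.serving = True

    def on_peer_unreachable(self):
        # A node alone cannot tell whether its peer died or only the
        # link did, which is why the verdict must come from a third
        # party that both nodes can reach independently.
        self.serving = self.witness.request_survival(self.node_id)
        return self.serving


witness = Witness()
a = Node("node-a", witness)
b = Node("node-b", witness)

# Network partition: both nodes notice the peer is gone.
a.on_peer_unreachable()   # node-a reaches the witness first, keeps serving
b.on_peer_unreachable()   # node-b loses the race and fences itself
```

If the witness ran on one of the two nodes, that node would always win the race and a failure of that node would take the arbiter down with it, which is exactly why the capability targets infrastructure that already exists outside the cluster.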
>> So I want to ask you about HyperFlex Express. You know, obviously the whole demand and supply chain is out of whack. Global supply chain issues are in the news, everybody's dealing with it. Can you expand on that a little bit more? Can HyperFlex Express help customers respond to some of these issues? >> Yeah, indeed Dave. You know, the primary motivation for HyperFlex Express was indeed an idea that, you know, one of the folks on my team had, which was to build a set of HyperFlex configurations that, you know, would have a shorter lead time. But as we were brainstorming, we were actually able to tag on multiple other things, and make sure that, you know, there is something in it for our customers, for sales, as well as for our partners. So for example, you know, for our customers, we've been able to dramatically simplify the configuration and the install for HyperFlex Express. These are still HyperFlex configurations, and you would, at the end of it, get a HyperFlex cluster, but the path to that cluster is much, much simplified. Second is that we've added in flexibility: these are data center configurations, but you can now deploy them with or without fabric interconnects, meaning you can deploy with your existing top of rack. We've also, you know, added an attractive price point for these, and of course, you know, these will have better lead times, because we've made sure that, you know, we are using components that we have a clear line of sight on from a supply perspective. For partners and sales, this represents a high-velocity sales motion, a faster turnaround time, and a frictionless sales motion for our distributors. This is actually a set of disti-friendly configurations, which they would find very easy to stock, and with a quick turnaround time, this would be very attractive for the distis as well.
>> It's interesting Manish, I'm looking at some fresh survey data: more than 70% of the customers that were surveyed, this is the ETR survey again, we mentioned them at the top, more than 70% said they had difficulty procuring server hardware, and networking was also a huge problem. So that's encouraging. What about AMD, Manish? That's new for HyperFlex. What's that going to give customers that they couldn't get before? >> Yeah Dave, so, you know, in the short time that we've had UCS AMD rack support, we've had several record-setting benchmark results that we've published. So it's a powerful platform with a lot of performance in it. And HyperFlex, you know, the differentiator that we've had from day one is that it has the industry-leading storage performance. So with this, we are going to get the fastest compute together with the fastest storage. And we are hoping that it'll basically unlock, you know, an unprecedented level of performance and efficiency, but also unlock several new workloads that were previously locked out from the hyperconverged experience. >> Yeah, cool. So Darren, can you give us an idea as to how HyperFlex is doing in the field? >> Sure, absolutely. So, both me and Manish have been involved right from the start, even before it was called HyperFlex, and we've had a great journey. And it's very exciting to see where we are taking, where we've been with the technology. So we have over 5,000 customers worldwide, and we're currently growing faster year over year than the market. The majority of our customers are repeat buyers, which is always a good sign, in terms of coming back when they've proved the technology and are comfortable with it. They're repeat buyers for expanded capacity, putting more workloads on, using different use cases, and, from an edge perspective, growing numbers of sites. So it's a really good endorsement of the technology.
We get used across all verticals, all segments, to house mission-critical applications as well as traditional virtual server infrastructures, and we are the lifeblood of our customers around those mission-critical applications. One big example, and I apologize to the worldwide audience, but this resonates with the American audience: the Super Bowl. SoFi Stadium, which hosted the Super Bowl, actually has Cisco HyperFlex running all the management services throughout the entire stadium, for digital signage, 4K video distribution, and it's completely cashless. So, if that were to break during the Super Bowl, that would've been a big news article. But it ran perfectly. In the design of the solution, we were able to collapse down nearly 200 servers into a few nodes across a few racks, and have 120 virtual machines running the whole stadium without missing a heartbeat. And that is mission-critical: for you to run the Super Bowl and not be on the front of the press afterwards for the wrong reasons, that's a win for us. So we really are happy with HyperFlex, where it's going, what it's doing, and some of the use cases we're getting involved in. Very, very exciting. >> Hey, come on Darren, it's the Super Bowl, the NFL, that's international now. And-- >> The thing is, I follow the NFL. >> The NFL's invading London, of course. I see the picture, the real football, over your shoulder. But, last question for Manish. Give us a little roadmap. What does the future hold for HyperFlex? >> Yeah. So, you know, as Darren said, both Darren and I have been involved with HyperFlex since the beginning. But I think the best is yet to come. There are three main pillars for HyperFlex. One is Intersight, which is central to our strategy. It provides, you know, a lot of customer benefit from a single pane of glass management. But we are going to take this beyond the lifecycle management, which is integrated into Intersight today for HyperFlex, and element management.
We are going to take it beyond that and start delivering customer value on the dimension of AIOps, because Intersight really provides us an ideal platform to gather stats from all the clusters across the globe, do AI/ML, do some predictive analysis with that, and return it back as, you know, customer-valued, actionable insights. So that is one. The second is to expand the HyperFlex portfolio, going beyond UCS to third-party server platforms, and to newer UCS server platforms as well. But the highlight there, and the one that I'm really, really excited about, and where I think there is a lot of potential in terms of the number of customers we can help, is HX on X-Series. X-Series is another thing that we are, you know, announcing a bunch of capabilities on in this particular launch. But HX on X-Series, we'll have that by the end of this calendar year. And that should unlock, with the flexibility of X-Series in hosting a multitude of workloads and the simplicity of HyperFlex, we're hoping that would bring a lot of benefits to new workloads that were locked out previously. And then the last thing is the HyperFlex data platform. This is the heart of the offering today. The HyperFlex data platform is a distributed architecture, a unique distributed architecture, primarily where we get our, you know, record-breaking performance from. You'll see it become more scalable and more resilient, and we'll optimize it for, you know, containerized workloads, meaning it'll get container-granular management capabilities, and be optimized for public cloud. So those are some things that the team is busy working on, and we should see that come to fruition. I'm hoping that we'll be back at this forum maybe before the end of the year, talking about some of these newer capabilities. >> That's great. Thank you very much for that. Okay guys, we've got to leave it there.
And you know, Manish was talking about the HX on X-Series. That's huge, customers are going to love that, and it's a great transition, 'cause in a moment, I'll be back with Vikas Ratna and Jim Leach, and we're going to dig into X-Series. Some real serious engineering went into this platform, and we're going to explore what it all means. You're watching Simplifying Hybrid Cloud on theCUBE, your leader in enterprise tech coverage. >> The power is here, and here, but also here. And definitely here. Anywhere you need the full force and power of your infrastructure hyperconverged. It's like having thousands of data centers wherever you need them, powering applications anywhere they live, but managed from the cloud. So you can automate everything from here. (upbeat music) Cisco HyperFlex goes anywhere. Cisco, the bridge to possible. (upbeat music) >> Welcome back to theCUBE's special presentation, Simplifying Hybrid Cloud, brought to you by Cisco. We're here with Vikas Ratna, who's the director of product management for UCS at Cisco, and James Leach, who is director of business development at Cisco. Gents, welcome back to theCUBE, good to see you again. >> Hey, thanks for having us. >> Okay, Jim, let's start. We know that when it comes to navigating a transition to hybrid cloud, it's a complicated situation for a lot of customers. As organizations hit the pavement for their hybrid cloud journeys, what are the most common challenges that they face? What are they telling you? How is Cisco, specifically UCS, helping them deal with these problems? >> Well, you know, first I think that's a great question. And, you know, a customer-centric view is the approach we've taken from day one, right? So I think that if you look at the challenges that we're solving for, that our customers are facing, you could break them into just a few broader buckets. The first would definitely be applications, right?
That's where the rubber meets your proverbial road with the customer. And I would say that, you know, what we're seeing is, the challenges customers are facing within applications come from the way that applications have evolved. So what we're seeing now is more data-centric applications, for example. Those require that we, you know, are able to move and process large data sets, really in real time. And the other aspect of applications that I think gives our customers, you know, some pause, some challenges, would be around the fact that they're changing so quickly. So the application that exists today, or the day that they, you know, make a purchase of infrastructure to be able to support that application, that application is most likely changing so much more rapidly than the infrastructure can keep up with today. So that creates some challenges around, you know, how do I build the infrastructure? How do I right-size it without over-provisioning, for example? But also, there's a need for some flexibility around life cycle, and planning those purchase cycles based on the life cycle of the different hardware elements. And within the infrastructure, which I think is the second bucket of challenges, we see customers who are being forced to move away from a modular or blade approach, which offers a lot of operational and consolidation benefits, and they have to move to something like a rack server model for some applications, because of the needs that these data-centric applications have. And that creates a lot of, you know, opportunity for siloing the infrastructure. And those silos in turn create multiple operating models within a, you know, a data center environment that, you know, again, drive a lot of complexity. So that complexity is definitely the enemy here. And then finally, I think life cycles. We're seeing this democratization of processing, if you will, right?
So it's no longer just CPU-focused. We have GPU, we have FPGA, we have, you know, things that are being done in storage and the fabrics that stitch them together, that are all changing rapidly and have very different life cycles. So when those life cycles don't align, a lot of our customers see a challenge in how they can manage these, you know, different life cycles, and still make a purchase without having to make too big of a compromise in one area or another, because of the misalignment of life cycles. So that is, you know, kind of the other bucket. And then finally, I think management is huge, right? So management, you know, at its core, is really right-sized for our customers and gives them the most value when it meets the mark around scale and scope. You know, back in 2009, we weren't meeting that mark in the industry, and UCS came about and took management outside the chassis, right? We put it at the top of the rack, and that worked great for the scale and scope we needed at that time. However, as things have changed, we're seeing a very new scale and scope needed, right? So we're talking about a hybrid cloud world that has to manage across data centers, across clouds, and, you know, having to stitch things together, for some of our customers, poses a huge challenge. So there are tools for all of those operational pieces that touch the application, that touch the infrastructure, but they're not the same tool. They tend to be disparate tools that have to be put together. >> Right. >> So our customers, you know, don't really enjoy being in the business of, you know, building their own tools, so that creates a huge challenge. And one where I think that they really crave that full hybrid cloud stack that has that application visibility, but also can reach down into the infrastructure. >> Right.
You know Jim, I said in my open that you guys, Cisco, sort of changed the server game with the original UCS, but the X-Series is the next generation, the generation for the next decade, which is really important, 'cause you touched on a lot of things: these data-intensive workloads, alternative processors to sort of meet those needs. The whole cloud operating model and hybrid cloud have really changed. So, how's it going with the X-Series? You made a big splash last year. What's the reception been in the field? >> Actually, it's been great. You know, we're finding that customers can absolutely relate to our, you know, UCS X-Series story. I think that, you know, the main reason they relate to it is they helped create it, right? It was their feedback and their partnership that gave us really those problem areas, those areas that we could solve for the customer that actually add, you know, significant value. So, you know, since we brought UCS to market back in 2009, you know, we had this unique architectural paradigm that we created, and I think that created a product which was the fastest in Cisco history in terms of growth. What we're seeing now is X-Series is actually on a faster trajectory. So we're seeing a tremendous amount of uptake. We're seeing it, you know, both in terms of, you know, the number of customers, but also, more importantly, the number of workloads that our customers are using, and the types of workloads, are growing, right? So we're growing this modular segment that exists, not just, you know, bringing customers onto a new product, but actually bringing them into the product in the way that we had envisioned, which is one infrastructure that can run any application, and do it seamlessly. So we're really excited to be growing this modular segment. I think the other piece is how, you know, we judge ourselves, sort of not just within Cisco, but also within the industry.
And I think right now is, you know, a great example. You know, our competitors have taken kind of swings and misses over the past five years at this, at, you know, kind of the new, next architecture. And we're seeing a tremendous amount of growth, even faster than any of our competitors have seen when they announced something that was new to this space. So I think that the ground-up work that we did is really paying off. And I think that what we're also seeing is it's not really a leapfrog game, as it may have been in the past. X-Series is out in front today, and, you know, we're extending that lead with some of the new features and capabilities we have. So we're delivering on the story that's already been resonating with customers, and, you know, we're pretty excited that we're seeing the results as well. So, as our competitors hit walls, I think we're, you know, executing on the plan that we laid out back in June when we launched X-Series to the world. And, you know, as we continue to do that, we're seeing, you know, again, tremendous uptake from our customers. >> So thank you for that, Jim. So Vikas, I was just on Twitter today, actually, talking about the gravitational pull. You've got the public clouds pulling CXOs one way, and, you know, on-prem folks pulling the other way, and hybrid cloud. So, organizations are struggling with a lot of different systems and architectures and ways to do things. And I said that what they're trying to do is abstract all that complexity away, and they need infrastructure to support that. And I think your stated aim is really to try to help with that confusion with the X-Series, right? I mean, so can you explain that? >> Sure. And that's right, the context that you built up right there, Dave. If you walk into an enterprise data center, you'll see a plethora of compute systems spread all across.
Because every application has its unique needs, you find drive-dense systems, memory-dense systems, GPU-dense systems, core-dense systems, and a variety of form factors, 1U, 2U, 4U, and every one of them typically comes with, you know, a variety of adapters and cables and so forth. This creates the siloing of resources. The fabric is (indistinct), the adapter is (indistinct). The power and cooling implications. The rack, you know, space challenges. And above all, the multiple management planes that they come with, which make it very difficult for IT to have one common, central policy and enforce it all across, across the firmware and software and so forth. And then think about upgrade challenges: the siloing makes it even more complex as these go through upgrade processes of their own. As a result, we observe quite a few of our customers, you know, really seeing a slowness in their agility, and a high burden in the cost of overall ownership. This is where, with the X-Series powered by Intersight, we have one simple goal. We want to make sure our customers get out of those complexities. They become more agile and drive lower TCO. And we are delivering it by doing three things, three aspects of simplification. First, simplify their whole infrastructure by enabling them to run their entire workload on a single infrastructure. An infrastructure which removes the siloing of form factors. An infrastructure which reduces the rack footprint that is required. An infrastructure where the power and cooling budgets are lower. Second, we want to simplify by delivering a cloud operating model, where they can create the policy once across compute, network, and storage, and deploy it all across. And third, we want to take away the pain they have, by simplifying the process of upgrades and any platform evolution that they're going to go through in the next two, three years.
So that's where the focus is: just driving down the simplicity, lowering their TCO. >> Oh, that's key. Less friction is always a good thing. Now, of course, Vikas, we heard from the HyperFlex guys earlier; not to be outdone, you have hard news as well. What innovations are you announcing around X-Series today? >> Absolutely. So we are following up on the exciting X-Series announcement that we made in June last year, Dave. We are now introducing three innovations on X-Series, with the goal of three things. First, expand the supported workloads on X-Series. Second, take the performance to new levels. Third, dramatically reduce the complexities in the data center by driving down the number of adapters and cables that are needed. To that end, three new innovations are coming in. First, we are introducing support for the GPU node, using a cableless and very unique X-Fabric architecture. This is the most elegant design to add GPUs to the compute node in the modular form factor. Thereby, our customers can now power AI/ML workloads, or any workload that needs a larger number of GPUs. Second, we are bringing GPUs right onto the compute node, and thereby our customers can now fire up accelerated VDI workloads, for example. And third, which is what, you know, we are extremely proud about, is we are innovating again by introducing the fifth generation of our very popular unified fabric technology. With the increased bandwidth that it brings in, coupled with the local drive capacity and densities that we have on the compute node, our customers can now fire up the big data workloads, the FCI workloads, the SDS workloads. All these workloads that have historically not lived in the modular form factor can be run over there, and benefit from the architectural benefits that we have.
Second, with the announcement of the fifth-generation fabric, we become the only vendor to now finally enable 100-gig end-to-end single-port bandwidth, and there are multiple of those coming in there. And we are working very closely with our CI partners to deliver the benefits of this performance through our Cisco Validated Designs to our CI franchise. And third, the innovations in the fifth-gen fabric will again allow our customers to have fewer physical adapters, be it Ethernet adapters, Fibre Channel adapters, or other storage adapters. Those are reduced down, coupled with the reduction in cables. So we're very, very excited about these three big announcements that we are making in this month's release. >> Great, a lot there. You guys have been busy, so thank you for that, Vikas. So, Jim, you talked a little bit about the momentum that you have, customers are adopting. What problems are they telling you that X-Series addresses, and how do they align with where they want to go in the future? >> That's a great question. I think if you go back and think about some of the things that we mentioned before, in terms of the problems that we originally set out to solve, we're seeing a lot of traction. So what Vikas mentioned, I think, is really important, right? Those pieces that we just announced really enhance that story, and really move, again, to kind of the next level of taking advantage of some of this, you know, problem-solving for our customers. You know, I think Vikas mentioned accelerated VDI. That's a great example. These are where customers, you know, they need to have this dense compute, they need video acceleration, they need tight policy management, right? And they need to be able to deploy these systems anywhere in the world. Well, that's exactly what we're hitting on here with X-Series right now. We're hitting the mark in every single way, right?
We have the highest compute config density that we can offer across, you know, the very top-end configurations of CPUs, and a lot of room to grow. We have, you know, the premier cloud-based management, you know, hybrid cloud suite in the industry, right? So check there. We have the flexible GPU accelerators that Vikas just talked about, that we're announcing, both on the system and also adding additional ones through the use of the X-Fabric, which is really, really critical to this launch as well. And, you know, I think finally, the fifth generation of fabric interconnect, virtual interface card, and intelligent fabric module go hand in hand in creating this 100-gig end-to-end bandwidth story, so that we can move a lot of data. Again, you know, having all this performance is only as good as what we can get in and out of it, right? So giving customers the ability to manage it anywhere, to be able to get the bandwidth that they need, to be able to get the accelerators that are flexible and fit exactly their needs, this is huge, right? This solves a lot of the problems we can tick off right away. With the infrastructure, as I mentioned, X-Fabric is really critical here, because it opens a lot of doors. You know, we're talking about GPUs today, but in the future, there are other elements that we can disaggregate, like the GPUs, that solve these life cycle misalignment issues. It solves issues around the form factor limitations. What it does for GPUs, we can do with storage or memory in the future. So that's going to be huge, right? This is disaggregation that actually delivers, right? It's not just a gimmicky bar trick here that we're doing. This is something that customers can really get value out of, day one. And then finally, I think there's the, you know, the future-readiness here. You know, we avoid saying future-proof, because we're kind of embracing the future here.
We know that not only are the GPUs going to evolve, the CPUs are going to evolve, the drives, you know, the storage modules are going to evolve. All of these things are changing very rapidly. The fabric that stitches them together is critical, and we know that we're just on the edge of some of the developments that are coming with CXL, with some of the PCI Express changes that are coming in the very near future, so we're ready to go. And the X-Fabric is exactly the vehicle that's going to be able to deliver those technologies to our customers, right? Our customers are out there saying that, you know, they want to buy into something like X-Series that has all the operational benefits, but at the same time, they have to have the comfort in knowing that they're protected against being locked out of some technology that's coming in the future, right? We want our customers to take these disruptive technologies and not be disrupted, but use them to disrupt their competition as well. So, you know, we're really excited about the pieces today, and I think it goes a long way towards continuing to tell the customer benefit story that X-Series brings. And, you know, stay tuned, because it's going to keep getting better as we go. >> Yeah, a lot of headroom for scale and the management piece is key there. We just have time for one more question, Vikas. Give us some nuggets on the roadmap. What's next for X-Series that we can look forward to? >> Absolutely Dave. As we talked about, and as Jim also hinted, this is a future ready architecture. A lot of the focus and innovation that we are going through is about enabling our customers to seamlessly and painlessly adopt very disruptive hardware technologies that are coming up, with no rip and replace. And there we are looking into enabling the customer's journey as they transition from PCIe generation four to five to six without rip and replace, as they embrace CXL without rip and replace.
As they embrace the newer paradigm of computing through disaggregated memory, disaggregated PCIe or NVMe based dense drives, and so forth. We are also looking forward to X-Fabric next generation, which will allow dynamic assignment of GPUs anywhere within the chassis and much more. So this is, again, all about focusing on the innovation that will make enterprise data center operations a lot simpler, and drive down the TCO by keeping them covered not only for today, but also for the future. So that's where some of the focus is, Dave. >> Okay. Thank you guys, we'll leave it there. In a moment, I'll have some closing thoughts. (upbeat music) We're seeing a major evolution, perhaps even a bit of a revolution, in the underlying infrastructure necessary to support hybrid work. Look, virtualizing compute and running general purpose workloads is something IT figured out a long time ago. But just when you have it nailed down in the technology business, things change, don't they? You can count on that. The cloud operating model has bled into on-premises locations, and is creating a new vision for the future, which we heard a lot about today. It's a vision that's turning into reality. And it supports much more diverse and data intensive workloads and alternative compute modes. It's one where flexibility is a watchword, enabling change, attacking complexity, and bringing a management capability that allows for granular management of resources at massive scale. I hope you've enjoyed this special presentation. Remember, all these videos are available on demand at thecube.net. And if you want to learn more, please click on the information link. Thanks for watching Simplifying Hybrid Cloud, brought to you by Cisco and theCUBE, your leader in enterprise tech coverage. This is Dave Vellante, be well and we'll see you next time. (upbeat music)
Kimberly Leyenaar, Broadcom
(upbeat music) >> Hello everyone, and welcome to this CUBE conversation where we're going to go deep into system performance. We're here with an expert. Kim Leyenaar is the Principal Performance Architect at Broadcom. Kim, great to see you. Thanks so much for coming on. >> Thanks so much. >> So you have a deep background in performance, performance assessment, benchmarking, modeling. Tell us a little bit about your background, your role. >> Thanks. So I've been a storage performance engineer and architect for about 22 years, and I've been with Broadcom specifically for, I think next month is going to be my 14 year mark. So what I do there is, initially I built and managed their international performance team, but about six years ago I moved back into architecture, and what my roles right now are is I generate performance projections for all of our next generation products. And then I also work on marketing material, and I interface with a lot of the customers, debugging customer issues and looking at how our customers are actually using our storage. >> Great. Now we have a graphic that we want to share. It talks to how storage has evolved over the past decade. So my question is, what changes have you seen in storage, and how has that impacted the way you approach benchmarking? In this graphic we've got sort of the big four items that impact performance, memory, processor, IO pathways, and the storage media itself, but walk us through this data if you would. >> Sure. So what I put together is a little bit of what we've seen over the past 15 to 20 years. So I've been doing this for about 22 years, and kind of going back and focusing a little bit on the storage, we looked back at hard disks, and they had almost 50 years of ruling. Our first hard drive that came out back in the 1950s was only capable of five megabytes in capacity and one and a half I/Os per second. It had almost a full second in terms of seek time.
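A quick aside on the arithmetic behind those figures: for a mechanical drive, random IOPS are bounded by average seek time plus rotational latency. The sketch below is a rough back-of-the-envelope model with assumed seek times and spindle speeds, not measurements from the interview.

```python
# Back-of-the-envelope model of random-read IOPS for a spinning disk.
# The seek times and RPM values below are illustrative assumptions, not
# figures for any specific drive discussed here.

def hdd_random_iops(avg_seek_ms: float, rpm: int) -> float:
    """Approximate random IOPS as 1 / (avg seek + avg rotational latency).

    Average rotational latency is half a revolution: 0.5 / (rpm / 60) seconds.
    Transfer time for a small block is negligible and ignored.
    """
    seek_s = avg_seek_ms / 1000.0
    rotational_s = 0.5 / (rpm / 60.0)
    return 1.0 / (seek_s + rotational_s)

# A commodity 7,200 RPM drive with a ~4 ms average seek:
print(round(hdd_random_iops(4.0, 7200)))    # ~122 IOPS
# A 15,000 RPM enterprise drive with a ~2 ms average seek:
print(round(hdd_random_iops(2.0, 15000)))   # 250 IOPS
```

With an almost full-second access time, the same formula puts that 1950s drive at roughly one and a half I/Os per second, which is why mechanical latency, not capacity, was the defining constraint.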
So we've come a long way since then. But when I first came on, we were looking at Ultra 320 SCSI. And one of the biggest memories that I have of that was my office was located close to our tech support, and I could hear the first question was always, what's your termination like? So we had some challenges with SCSI, and then we moved on into SAS and SATA protocols. And we continued to move on. But back in the early 2000s when I came on board, the best drives really could do maybe 400 I/Os per second, maybe 250 megabytes per second, with millisecond response times. And so when I was benchmarking way back when, it was always like, well, IOPS are IOPS. We were always faster than what the drives could do. And that was just how it was. The drives were always the bottleneck in the system. And so things started changing though by the early 2000s, mid 2000s. We started seeing different technologies come out. We started seeing virtualization and multi-tenant infrastructures becoming really popular. And then we had cloud computing that was well on the horizon. And so at this point, we're like, well, wait a minute, we really can't make processors that much faster. And so everybody got excited when (indistinct) came out, but they had two cores per processor and four cores per processor. And so we saw a little time period where actually the processing capability kind of pulled ahead of everybody else. And memory was falling behind. We had good old DDR2-667. It was new at the time, but we only had maybe one or two memory channels per processor. And then in 2007 we saw disk capacity hit one terabyte. And we started seeing a little bit of an imbalance, because we were seeing these drives getting massive, but their performance per drive was not really kind of keeping up. So now we see a revolution around 2010. And my co-worker and I at the time, we had these little USB disks, if you recall, we would put them in. They were so fast.
We were joking at the time, "Hey, you know what, wonder if we could make a RAID array out of these little USB disks?" They were just so fast. The idea was actually kind of crazy, until we started seeing it actually happen. So in 2010, SSDs started revolutionizing storage. And the first SSDs that we really worked with were these Pliant LS-300s, and they were amazing, because they were so over-provisioned that they had almost the same read and write performance. But to go from a drive that could do maybe 400 I/Os per second to a drive doing 40,000-plus I/Os per second really changed our thought process about how our storage controller could actually try and keep up with the rest of the system. So we started falling behind. That was a big challenge for us. And then in 2014, NVMe came around as well. So now we've got these drives, they're 30 terabytes. They can do one and a half million I/Os per second, and over 6,000 megabytes per second. But they were expensive. So people started relegating SSDs more towards tiered storage or cache. And as the prices of these drives kind of came down, they became a lot more mainstream. And then the memory channels started picking up, and they started doubling every few years. And we're looking now at DDR5-4800. And now we're looking at cores that used to go from two to four cores per processor, up to 48 with some of the latest processors that are out there. So our ability to consume the computing and the storage resources, it's astounding, you know, it's like that whole saying, 'build it and they will come.' Because I'm always amazed, I'm like, how are we going to possibly utilize all this memory bandwidth? How are we going to utilize all these cores? But we do. And the trick to this is having a balanced infrastructure. It's really critical. Because if you have a performance mismatch between your server and your storage, you really lose a lot of productivity, and it does impact your revenue. >> So that's such a key point.
Pardon, bring that slide up again with the four points. And that last point that you made, Kim, about balance. And so here you have these electronic speeds with memory and IO, and then you've got the spinning disk, this mechanical disk. You mentioned that SSD kind of changed the game, but it used to be, when I looked at benchmarks, it was always the destage bandwidth of the cache out to the spinning disk that was the bottleneck. And you go back to the days of, you know, Symmetrix, right? The huge backend disk bandwidth was how they dealt with that. And then you had the oxymoron of the day, high spin speed disks, or high performance disks, compared to memory. And so the next chart that we have shows some really amazing performance increases over the years. And so you see these bars on the left-hand side, it looks at historical performance for 4k random IOPS. And on the right-hand side, it's the storage controller performance for sequential bandwidth from 2008 to 2022. That '22 is that yellow line. It's astounding, the increases. I wonder if you could tell us what we're looking at here, when did SSD come in and how did that affect your thinking? (laughs) >> So I remember back in 2007, we were kind of on the precipice of SSDs. We saw it, the writing was on the wall. We had our first three gig SAS and SATA capable HBAs that had come out. And it was a shock, because we were like, wow, we're going to really quickly become the bottleneck once this becomes more mainstream. And you're so right, though, about people building these massive hard drive based backends in order to handle kind of that tiered architecture that we were seeing back in the early 2010s, kind of when the pricing was just so sky high. And I remember looking at our SAS controllers, our very first one, and that was when I first came in, in 2007. We had just launched our first SAS controller. We were so proud of ourselves.
And I started going, how many IOPS can this thing even handle? We couldn't even attach enough drives to figure it out. So what we would do is we'd do these little tricks, where we would do a 512-byte read, and we would do it on a 4k boundary, so that it was actually reading sequentially from the disk, but we were handling these discrete IOPS. So we were like, oh, we can do around 35,000. Well, that's just not going to hit it anymore. Bandwidth wise we were doing great. Really, our limitation and our bottleneck on bandwidth was always either the host or the backend. So for our storage controllers, there are basically three bottlenecks. The first one is the bottleneck from the host to the controller. So that is typically a PCIe connection. And then there's another bottleneck on the controller to the disk, and that's really the number of ports that we have. And then the third one is the disks themselves. So in typical storage, that's what we look at. And we say, well, how do we improve this? So some of these are just kind of evolutionary, such as PCIe generations, and we're going to talk a little bit about that, but some of them are really revolutionary, and those are some of the things that we've been doing over the last five or six years to try and make sure that we are no longer the bottleneck, and we can enable these really, really fast drives. >> So can I ask a question? I'm sorry to interrupt, but on these blue bars here. So these are all spinning disks, I presume; in the out years they're not. Like, when did flash come into these blue bars? Is that... you said '07 you started looking at it, but on these benchmarks, is it all spinning disk? Is it all flash? How should we interpret that? >> No, no. Initially they were actually all hard drives. And the way that we would identify the max I/Os would be by doing very small sequential reads to these hard drives. We just didn't have SSDs at that point. And then somewhere around 2010 is where we...
It was very early in that chart, we were able to start incorporating SSD technology into our benchmarking. And so what you're looking at here is really the max that our controller is capable of. So we would throw as many drives as we could at it, and do what we needed to do in order to just make sure our controller was the bottleneck, and see what we could expose. >> So the drive then, when SSD came in, was no longer the bottleneck. So you guys had to sort of invent and rethink your innovation and your technology, because, I mean, these are astounding increases in performance. I mean, on the left-hand side, you've got a 170 X increase for the 4k random IOPS, and you've got a 20 X increase for the sequential bandwidth. How were you able to achieve that level of performance over time? >> Well, in terms of the sequential bandwidth, really those come naturally by increases in the PCIe or the SAS generation. So we just make sure we stay out of the way, and we enable that bandwidth. But the IOPS, that's where it got really, really tricky. So we had to start thinking about different things. So, first of all, we started optimizing all of our pathways, all of our IO management. We increased the processing capabilities on our IO controllers. We added more on-chip memory. We started putting in IO accelerators, these hardware accelerators. We put in SAS port kind of enhancements. We even went and improved our driver to make sure that our driver was as thin as possible, so we can make sure that we can enable all the IOPS on systems. But a big thing happening a couple of generations ago was we started introducing something called tri-capable controllers, which means that you could attach NVMe, you could attach SAS, or you could attach SATA. So you could have this really amazing deployment of storage infrastructure, based around your customized needs and your cost requirements, by using one controller. >> Yeah.
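The small-read trick Kim describes above, issuing tiny reads stepped along 4k boundaries so the disk services a near-sequential stream while the controller still counts discrete I/Os, can be sketched as an access pattern generator. This is an illustrative reconstruction under assumed names, not Broadcom's actual test harness.

```python
# Sketch of the benchmark access pattern described above: 512-byte reads
# placed on successive 4 KiB boundaries. The disk sees a near-sequential
# stream, but each read still counts as a discrete I/O at the controller.
# Illustrative only; not an actual Broadcom tool.

READ_SIZE = 512       # bytes per I/O
STRIDE = 4096         # 4 KiB boundary between consecutive reads

def discrete_io_pattern(num_ios: int):
    """Yield (offset, length) pairs for the 512-byte-on-4k-boundary pattern."""
    for i in range(num_ios):
        yield (i * STRIDE, READ_SIZE)

pattern = list(discrete_io_pattern(4))
print(pattern)  # [(0, 512), (4096, 512), (8192, 512), (12288, 512)]
```

Every offset is 4k-aligned and strictly increasing, so the head never seeks; that is how a controller limited only by its I/O processing rate could be measured with drives that individually topped out at a few hundred IOPS.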
So anybody who's ever been to a trade show where they were displaying a glass case with a Winchester disk drive, for example, you see it spinning and its actuator is moving, wow, that's so fast. Well, no. That's like a tortoise, it's slow. It's like a snail compared to the system's speed. So in a way, life was easy back in those days, because when you did a write to a disk, you had plenty of time to do stuff, right? And now it's changed. And so I want to talk about Gen3 versus Gen4, and how all this relates to what's new in Gen4 and the impacts of PCIe. Here, you have a chart that you've shared with us that talks to that. And I wonder if you could elaborate on that, Kim. >> Sure. But first, you said something that kind of hit my funny bone there. I remember I made a visit once, about 15 or 20 years ago, to IBM. And this gentleman actually had one of those old ones in his office, and he referred to them as disk files. And until the day he retired, he never stopped calling them disk files. And it's kind of funny to be a part of that history. >> Yeah. DASD, they used to call it. (both laughing)
So this is really, really a big deal, really critical for us. But if you take a look here, you can see that in terms of the capabilities of it, it's really is buying us a lot. So most of our drives right now NVMe drives tend to be by four. And a lot of people will connect them. And what that means is four lanes of NVMe and a lot of people that will connect them either at by one or by two kind of depending on what their storage infrastructure will allow. But the majority of them you could buy, or there are so, as you can see right now, we've gone from eight gig transfers per second to 16 gig of transfers per second. What that means is for a by four, we're going from one drive being able to do 4,000 to do an almost 8,000 megabytes per second. And in terms of those 4k IOPS that really evade us, they were really really tough sometimes to squeeze out of these drives, but now we're got 1 million, all we have to 2 million, it's just, it's insane. You know, just the increase in performance. And there's a lot of other standards that are going to be sitting on top of PCIe. So it's not going away anytime soon. We've got to open standards like CXL and things like that, but we also have graphics cards. You've got all of your hosts connections, they're also sitting on PCIe. So it's fantastic. It's backwards, it's orbits compatible, and it really is going to be our future. >> So this is all well and good. And I think I really believe that a lot of times in our industry, the challenges in the plumbing are underappreciated. But let's make it real for the audience because we have all these new workloads coming out, AI, heavily data oriented. So I want to get your thoughts on what types of workloads are going to benefit from Gen4 performance increases. In other words, what does it mean for application performance? You shared a chart that lists some of the key workloads, and I wonder if we could go through those. >> Yeah, yeah. 
I could give you a large list of different workloads that are able to consume large amounts of data, whether it's in small or large chunks of data. But as you know right now, and I said earlier, our ability to consume these compute and storage resources is amazing. So you build it, and we'll use it. And the world's data is expected to grow 61% to 175 zettabytes by the year 2025, according to IDC. So that's just a lot of data to manage. It's a lot of data to have, and it's something that's sitting around, but to be useful, you have to actually be able to access it. And that's kind of where we come in. So who is accessing it? What kind of applications? I spend a lot of time trying to understand that. And recently I attended a virtual conference, SDC, and what I like to do when I attend these conferences is to try to figure out what the buzzwords are. What's everybody talking about? Because every year it's a little bit different, but this year it was edge, edge everything. And so I kind of put edge on there first. And even if you ask anybody what's edge computing, it's going to mean a lot of different things, but basically it's all the computing outside of the cloud that's happening typically at the edge of the network. So it tends to encompass a lot of real time processing on instant data. And the data is usually coming from either users or different sensors. It's that last mile. It's where we kind of put a lot of our content caching. And I uncovered some interesting stuff when I was attending this virtual conference: they say only about 25% of all the usable data actually even reaches the data center. The rest is ephemeral, and it's localized, locally and in real time. So the goal of edge computing is to try and reduce the bandwidth costs for these kinds of IOT devices that go over a long distance.
But the reality is, the growth of real-time applications that require this kind of local processing is going to drive this technology forward over the coming years. So Dave, your toaster and your dishwasher, they're IOT edge devices, probably in the next year if they're not already. So edge is a really big one, and it consumes a lot of the data. >> The buzzword du jour now is the metaverse. It's almost like the movie The Matrix is going to come in real time. But the fact is, it's all this data, a lot of video. Some of the ones that I would call out here, you mentioned facial recognition, real-time analytics. A lot of the edge is going to be real-time inferencing, applying AI. And these are just massive, massive data sets that you, and of course your customers, are enabling. >> When we first came out with our very first Gen3 product, our marketing team actually asked me, "Hey, how can we show users how they can consume this?" So I actually set up a Hadoop environment. I decided I'm going to learn how to do this. I set up this massive environment with Hadoop, and at the time they called big data the 3Vs, I don't know if you remember these big 3Vs, the volume, velocity and variety. Well Dave, did you know there are now 10 Vs? So besides those three, we got veracity, we got value, we got variability, validity, vulnerability, volatility, visualization. So I'm thinking we need to just add another V to that. >> Yeah. (both laughing) Well, that's interesting. You mentioned that, and that sort of came out of the big data world, the Hadoop world, which was very centralized. You're seeing the cloud is expanding, the world's getting, you know... data is by its very nature decentralized. And so you've got to have the ability to do analysis in place. A lot of the edge analytics are going to be done in real time. Yes, sure.
Some of it's going to go back to the cloud for detailed modeling, but the next decade, Kim, ain't going to be like the last, I often say. (laughing) I'll give you the last word. I mean, how do you see this sort of evolving, who's going to be adopting this stuff? Give us a sort of a timeframe for this kind of rollout in your world. >> In terms of the timeframe, I mean, really nobody knows, but we feel like Gen5, that's coming out next year. It may not be a full rollout, but we're going to start seeing Gen5 devices and Gen5 infrastructure being built out over the next year, and then followed very, very quickly by Gen6. And what we're seeing is, we're starting to see these graphics processors, these GPUs, coming out as well, that are going to be connecting using PCIe interfaces. So being able to access lots and lots and lots of data locally is going to be a really, really big deal, because worldwide, all of our companies are using business analytics. Data is money. And the companies that can improve their operational efficiency, bolster those sales and increase their customer satisfaction, those are the companies that are going to win. And those are the companies that are going to be able to effectively store, retrieve and analyze all the data that they're collecting over the years. And that requires an abundance of data. >> Data is money, and it's interesting. It kind of all goes back to when Steve Jobs decided to put flash inside of an iPhone and the industry exploded. Consumer economics kicked in, 5G, now edge AI, a lot of the things you talked about, GPUs, the neural processing unit. It's all going to be coming together in this decade. Very exciting. Kim, thanks so much for sharing this data and your perspectives. I'd love to have you back when you've got some new perspectives, new benchmark data. Let's do that. Okay? >> I look forward to it. Thanks so much. >> You're very welcome.
And thank you for watching this CUBE conversation. This is Dave Vellante and we'll see you next time. (upbeat music)
Manoj Nair, Metallic.io & Dave Totten, Microsoft | Commvault Connections 2021
(lighthearted music) >> We're here now with Manoj Nair, who's the general manager of Metallic, and Dave Totten, CTO with Microsoft. And we're going to talk about some of the announcements that we heard earlier today, and what Metallic and Microsoft are doing to meet customer needs around cyber threats and ensuring secure cloud data management. Gentlemen, welcome to theCUBE. Good to see you. >> Thanks Dave. >> Thank you. >> Hey Manoj, let me start with you. We heard early this morning, Dave Totten was here, David Noe, talk a lot about security. Has the conversation changed, how has it changed when you talk to customers, Manoj? What's top of mind? >> Yeah, thank you, Dave. And thank you, Dave Totten. You know, great conversation earlier. Dave, you and I have talked about this in the past, right? Security, long a big passion of mine. You know, having lived through nation state attacks in the past and all that, we're seeing those kinds of techniques really just getting mainstream, right? Ransomware has become a mainstream problem and a scourge in our lives. Now, when you look at it from a lens of data and data management, data protection, backup, all of this was very much a passive, you know, compliance centric use case. It was pretty static, you know, put it on tapes, haul it all over. And what has really changed with this ransomware and cybercrime wave is that data, which is now your most precious asset, is under attack. So now you see security teams, just like you talked with Dave Martin from ADP earlier, they are looking for that bridge between SecurityOps and ITOps. That data management solution needs to do more. It needs to be part of an active conversation, you know? Not just, you know, recovery readiness. Can you ensure that, are you testing that, is it recoverable? That is your last mile of defense. So then you get questions like that from security teams. You get, you know, the need for doing more signals.
Can I get better signals from my data management stack to tell me I might be under attack? So what we're seeing in the conversation is the need to have more active conversations around data management, and the bridge between ITOps and SecurityOps is really becoming paramount for our customers. >> Yeah, Dave Totten, I mean, I often say that I think data protection used to be this bolt-on. Now it's a fundamental component of the digital business stack. Anything you would add to what Manoj just said? >> Yeah, I would just say exactly that. Data is an asset, right? We talked a lot about the competitive advantage that customers are now realizing, that no longer is IT considered sort of this cost-center element. We need to be able to leverage our interactions with customers, with partners, with supply chains, with manufacturers, we need to be able to leverage that to sort of create differentiation and competitive advantage in the marketplace. And so if you think about it that way, as the fuel for economic profitability and business growth, you would do everything in your power to secure it, to support it, to make sure you had access to it, to make sure that you didn't have, you know, bad-intent users accessing it. And I think we're seeing that shift with customers as they think more about how to be more efficient with their investments in information technology, and then how to make sure that they protect the lifeblood of their businesses. >> Yeah, and that just makes it harder, because the adversary is very capable. They're coming in through the digital supply chain. So it's complicated. And so Dave, and maybe Manoj, you can comment as well after: Microsoft and Commvault, you guys have been working together for decades, and so you've seen a lot of the changes, a lot of the waves. So I'm curious as to how the partnership has evolved. You've got a recent strategic announcement around Azure with Metallic. Dave, take us through that.
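The "signals" idea Manoj raises, a data management stack warning you that you might be under attack, can be sketched as a simple anomaly check on backup telemetry. This is a hypothetical illustration, not Metallic's actual detection logic; the field (changed-file counts per backup run) and the threshold are invented for the example.

```python
# Hypothetical sketch: flag a backup run whose change rate is an outlier
# versus recent history. A mass-encryption event touches far more files
# than a normal nightly delta, so a spike is a useful ransomware signal.
from statistics import mean, stdev

def ransomware_signal(history, current, sigma=3.0):
    """history: changed-file counts from prior backup runs.
    current: changed-file count for the latest run.
    Returns True if the latest run looks anomalous."""
    if len(history) < 5:                  # not enough baseline to judge
        return False
    mu, sd = mean(history), stdev(history)
    threshold = mu + sigma * max(sd, 1.0)  # guard against zero variance
    return current > threshold

# Typical nightly runs change a few hundred files...
baseline = [210, 190, 240, 205, 220, 198]
print(ransomware_signal(baseline, 230))     # normal drift -> False
# ...a mass-encryption event touches tens of thousands at once.
print(ransomware_signal(baseline, 50_000))  # suspicious spike -> True
```

A real product would combine many such signals (entropy of written data, deletion rates, backup job tampering), but the shape of the check, baseline plus deviation, is the same.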
>> Yeah, I mean you know, Commvault and Microsoft aren't newlyweds, we've been together now for 25-plus years. We send each other anniversary gifts, all that good stuff. And you know, listen, there's a couple things that are key to our relationship. One, we started by believing in each other's engineering organizations, right? We hire the best, we train and retain the best. And we both put a lot of investment behind our infrastructure and the ability to work together to really innovate in real time, at rapid speed. Two, we use Commvault products, so you know, there's no greater advantage, I think, than when a major supplier or platform partner like Microsoft uses your products. We've used it for years in our Xbox group to support and store the data for a hundred million Xbox Live users. And we're very avid with it in our data centers, our Azure data centers, our Microsoft Office products. And so we use Commvault services as well. And through that mutual relationship, you know, obviously Commvault has seen the ins and outs of what's great about our services and where we're continuing to build and invest. And so they've been able to really, you know, dedicate a team of engineers and architects to support all that Azure as a platform, as a service can provide. And then how to take the best of those features and build it into their own first-party products. I think when you get close enough to somebody for so many years, right, 25-plus years, you figure out what they're great at, and you learn to take those advantages, like Commvault has with Microsoft and Azure, and use it to your advantage, right? To build the best-in-class product that Metallic actually is. And you're right, the announcement this week, it feels culminating, it feels like a major milestone, first off in industry innovation but also in our relationship. But it's really not that big of a step change from what we've been doing and building and innovating on for the past, you know, 25 years.
>> Yeah so Manoj, that's got to be music to your ears. Because you come at it with this rich data protection stack, and Microsoft has so many capabilities. One of which, of course, is Azure. It's like the secret weapon, it's become the secret weapon. How do you think about that relationship, Manoj? >> Absolutely, Dave said it right. We are strong partners: 25 years of working together with Commvault, mutual customers, partnership. You know, really when you look at it from a customer lens, what our customers have appreciated over the last year of that strengthening of the partnership is basically the two pillars: Commvault, the leader in data protection, you know, for the last 25 years, 10 out of 10 in the Gartner MQ, comes together with Azure, the enterprise secure cloud leader, in creating Metallic. Metallic, now with 1,000-plus customers around the world, there's a reason they trust it. It's now become part of how they protect their Office 365. No workload left behind, which is very unique, you know? So that's what we have architected together, and now we're taking it to the next phase: our joint partners, right? Our joint customers. Those are some of the things that are really changing in terms of how we're accelerating the partnership.
What are the exit doors? Now, you back up data; they know that backup data can be used to recover. So they go and try to defeat the backup products in that environment. That's the number one game that changes with data management as a service. Your data management and data protection environment is not inside your environment. Anything is possible, but the chances of pulling off two simultaneous penetrations are much lower. So now you've got an additional layer of recovery readiness, because that control plane is secured on top of Microsoft Azure, 3,500 security professionals, and we're the only data management as a service entity to get the FedRAMP High standard. As one of our customers said, "A unicorn in the wild," that is what you have as your data management environment. So if something bad happens, worst case, this environment is ready. Our enterprise customers are starting to understand that this is becoming a big reason to shift to this model. You know, and if you're not ready to shift the entire model, you're given the easy button of just air-gapping your data. So if you're an existing Commvault customer, appliance, software, anything: secure, air-gapped Metallic cloud storage on hardened Azure Blob, protected jointly by us. Start there. And finally, things like Active Directory. Talk about shutting the exit path, right? If that's taken down, your entire environment is not accessible. We make it easy for you to recover that. And because of our partnership, we're able to get it for free to every one of our customers. Go protect your Active Directory environment using (speaks faintly). Those are kind of the three big reasons that we're seeing that entire conversation shift in the minds of our customers. >> Yeah, thank you for that. That's a no-brainer. Dave, how do Metallic and Microsoft fit together? Where's the, you know, kind of value chain, if you will, when it comes to dealing with cyber protection or ransomware recovery? How are your customers thinking about that?
>> Yeah well, first it's a shared responsibility model, right? You've got the best-in-class platform like Azure with built-in protections, scalable data centers all over the global footprint. But then also we spend 10-plus billion dollars a year on security and defense in our own data center environments, right? And so I always find it inspiring when companies believe that their investments in security and platform protection are going to do the job. That used to be true. Now with Azure, you can take advantage of this global scale and secure, you know, footprint of investment that a company like Microsoft has made, to really set your heart at ease. Now, what do you do with your actual applications, and who has access to them, and how do you actually integrate, like Manoj was talking about, down to the individual or the individual account that's trying to get access to your environment? Well, that's where Commvault comes in, at that point of attack or at that point of an actual data element. So if you've got that environment within the Commvault system, backed by the umbrella of the Azure security infrastructure, that's how the two sort of complement each other. And again, it's about shared responsibility, right? We want every customer that leverages Azure to know it's secure, it's protected; we've got a mechanism to protect their best interests. Commvault has that exact same mission statement, right? To make sure that every single element that comes into contact with their products is protected, is secure, is trustworthy. You know, I learned a lesson a long, long time ago, early in my career, that says you can goof up a product feature, you can goof up the color scheme on a website, but if you lose a customer's data or somebody's trust, you never get it back. And so we don't take our relationships with customers lightly.
And I think our committed and joint responsibility to delight and support our customers is what has led to this partnership being so successful over the past couple of decades. >> Great, thank you, Dave. And so Manoj, I was saying earlier that data protection has become a fundamental component of your digital business stack. So that sounds good, but what should customers be doing to make data protection and data management a business value driver, versus just a liability or exposure or cost factor that has to be managed? What do you think about that? >> No, and as Dave said earlier, right? It's no longer a liability. In fact it is, you know, someone said data is the new oil, right? It is your crown jewels. You got to start with thinking about an active data protection strategy, not, you know, thinking about passive tools and looking at it in terms of compliance or I-need-to-keep-the-data-around. So the number one part is: how do I have something that protects all my workloads? And everyone has a different pace of transformation. So unless, you know, you're a company that just got created, you have environments that are on-prem, on the edge, in colos, in the public cloud. You've got, you know, SaaS applications. All of those have critical data that needs to come together. Look for breadth of data protection, something that doesn't leave your workloads behind. Siloed solutions create a Swiss cheese that leaves openings for the attackers to go after those gaps. You don't want to look for that, you know? And then finally, trust. I mean, you know, what are the pillars of trust that the solution is built on? You got to figure out how your teams can get to doing more productive things rather than patching systems, you know, making sure that the infrastructure is up. As Dave said, you know, we invest a ton jointly in securing this infrastructure. Trust that and leverage that as a differentiator, rather than trying to duplicate all of that.
So those are some of the, you know, key things. And you know, look for players who understand that hybrid is here, who give you different entry points. Don't force, you know, a single mode of operation. Those are the things we have built to make it easier for our customers to have a more active data management strategy. >> Dave Totten, I'll give you the last word, we've got to go, but I want to hit on this notion of zero trust. It used to be a buzzword, now it's mainstream. There's so much to this discussion. Is it least-privileged access? Every access is treated maybe as privileged. But what does zero trust mean to you, in less than a minute? >> Yeah, you know, trust but verify, right? Every interaction you have with your infrastructure, with your data, with your applications, and you do it at the identity level. We care about identity, and we know that that's the core of how people are going to try and access infrastructure. It used to be: protect the perimeter. The analogy I always use is, we have locks on our houses. Now the bad guys are everywhere. They're getting inside our houses, and they're not immediately taking things; they're hiding in the closet and popping out three weeks later, before anybody knows it. And so being able to actually manage, measure, protect every interaction you have with your infrastructure, and do it at the individual or application level, that's what zero trust is all about. So don't trust any interaction, make sure that you pass that authorization through with every ask. And then make sure you protect it from the inside out. >> Great stuff. Okay guys, we've got to leave it there. Thanks so much for the time today. All right, next, right after a short break, we're headed into the CXO Power Panel to hear what's on the minds of the executives as it relates to data management in the digital era. Keep it right there, you're watching theCUBE. (lighthearted music)
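The "trust but verify every interaction at the identity level" model Dave Totten closes with can be sketched as a per-request gate: nothing is trusted for being inside the network, and every ask is checked against both the credential and a default-deny policy. This is a toy illustration; the policy shape, identities, and resource names are all invented, not any vendor's implementation.

```python
# Toy zero-trust gate: every request is verified against identity and
# per-resource policy. No request is trusted just because it comes
# from "inside" the perimeter. Names and policy shape are hypothetical.
POLICY = {
    # resource -> set of identities allowed to touch it
    "backup-store": {"svc-backup"},
    "hr-records": {"alice"},
}

def authorize(identity, token_valid, resource):
    """Trust but verify: check the credential AND the policy on every ask."""
    if not token_valid:              # verify the credential itself
        return False
    allowed = POLICY.get(resource, set())
    return identity in allowed       # least privilege: default deny

print(authorize("alice", True, "hr-records"))          # True
print(authorize("alice", True, "backup-store"))        # False: not in policy
print(authorize("svc-backup", False, "backup-store"))  # False: bad token
```

The key design point is the default: an unknown resource or identity yields deny, so new assets start locked down rather than open.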
James Leach & Todd Brannon, Cisco | CUBEconversation
(upbeat music) >> In 2009, Cisco made a major announcement in the form of UCS. It was designed to attack the IT labor problem. Cisco recognized that data center professionals were struggling to be agile and provide the types of infrastructure services that lines of business were demanding for the modern applications of that day. The value proposition was all about simplifying infrastructure deployment and management, and by combining networking, compute and storage with virtualization and a management layer, Cisco changed the game for running applications on premises, and the era of converged infrastructure was born. Now fast forward a dozen years, and a lot has changed. The cloud has gone mainstream, forcing new requirements on organizations to bridge their on-prem environments to public clouds and manage workloads across clouds. Now to address this challenge, Cisco earlier this month announced a series of offerings that meaningfully expands its original vision to support the more demanding requirements of today's DevSecOps teams. In particular, Cisco with this announcement is enabling customers to deploy a full-stack, cloud-like operating model that leverages modern platforms such as Kubernetes, new integrations and advanced tooling to bring automation, visibility and better security for both hybrid and multi-cloud environments. Now the underpinning of this solution is a new UCS architecture called the X series. Cisco claims this new system gives customers a trusted platform for the next decade to support their hybrid and multi-cloud workloads. Gents, great to see you, welcome. >> Hey, thank you. Good to be here. >> Thanks for having us, Dave. I appreciate it. >> My pleasure. Looking forward to this. So look, we've seen the X series announcement, and it looks to be quite a new approach. What are the critical aspects of the X series that you want people to understand? Maybe James, you can take that.
>> Sure, I think that, you know, overall there is a lot of change coming in the marketplace, right? From a technology standpoint, we're seeing a significant amount of change. Look at CPUs and GPUs: the power draw alone, you know, at the trajectory it's on, may be untenable for some of the current configurations that people are consuming, right? So some of these current architectures just can't deal with that, right? Or at least they can't deal with what's coming in the future. We're also seeing the relevance of other types of architectures, like maybe Arm, start to become something that our customers want to take advantage of, right? Or maybe they want to see how that scale fits into their environment on a totally different level. At the same time, the fabrics are really evolving at lightning speed here, right? So with PCI Express, we've gone from Gen 3 to Gen 4, and Gen 5 is coming in the very near future. We're layering on top of that things like CXL, to take that fabric to the next level for capabilities and be able to do things that we couldn't do before, to connect things together that we couldn't before. Beyond that, we're probably just a few years away from even more exciting developments in the fabric space, around some of the high-performance, low-latency fabrics that are, again, on the drawing board today, just around the corner. Take that, and you look at the evolution of the admin, right? So we're seeing the admin-developer emerge. No longer is this just a guy who's sitting in front of a dashboard, managing systems and keeping them up and running; we're now seeing a whole class of developers that are also administrators, right? So all of this together is starting to push us well beyond what human scale really can manage, what human scale can consume.
So there's a lot of change coming, and I think we're taking a look at that and realizing that something like X series has to be able to deal with that change and the challenges it brings, but also do so in a simple manner, so that we can allow automation, orchestration and some of these new capabilities to enhance what our customers can do, not drown them in technology. >> You know, Todd, that's kind of interesting, what James was saying about beyond human scale. I mean, in my little narrative upfront it was sort of, hey, we recognized an IT labor problem. We're going to address that. And it really wasn't about massive scale back then; it is now. It's really what we've learned from the cloud guys, right? >> Definitely. I mean, people are moving from pets to cattle, to now, with containers, they're saying it's mosquitoes, right? 'Cause they're so ephemeral, they come and go, and on a single host you could have, you know, hundreds if not thousands of containers. And so the application environment has influenced the infrastructure design and really changed the role of the infrastructure operator to one that necessitates automation, necessitates operations at scale. Even on-prem, everyone's trying to operate in that cloud-like model, and they're trying to bridge. The big challenge I see is, they're trying to bridge their existing environment, the big monolithic applications they've got on-prem with those data lakes they've built around them over the past decade, but they're also trying to follow their developers as they go out into the public cloud and innovate there. That's really where the nexus of all the application innovation is. So the IT teams, who are already strapped for resources, it's not like their budgets are going up every year, are now taking on a new front out in the cloud while they're still trying to maintain the systems that they've built on-prem. That's the challenge.
>> Yeah, that's really the hard part, and where some of the innovation here is. Anybody that lives in an old house knows that connecting old to new is very challenging, much more challenging than building from scratch. But James, I wonder if we could come back to the architecture of the X series: what's really unique about it, and what's in it for your customers? >> Yes, absolutely. So when we were looking at kind of redesigning this thing from the ground up, we recognized that, you know, from a timing standpoint, we're sitting at a place, with the development of future fabrics and some of these other technologies, where we finally have the opportunity to hit the timing perfectly to start to do composability right. So we've heard a lot of noise, you know, in the market for the last several years about composability and how that's going to be the salvation or change the game here. But at the end of the day, the technology hasn't been there in those offerings, right? So we're sitting at the edge of the development of those technologies that are going to allow us to do that. And what we've done with X series is we've taken a construct that we call the UCS X fabric, which is the ability to consume these technologies today as, effectively, a chassis fabric that can allow us to connect resources together within the chassis and, in the future, external to the chassis. But it also allows us to take advantage of the change in fabrics that's coming. So as fabrics evolve, as we see new technologies like CXL and PCI Express Gen 5 and beyond come into play here, and eventually physical technologies like silicon photonics, those are constructs that are going to allow our customers to do some amazing things, and we have the construct to be able to consume those.
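The composability James describes, stitching disaggregated resources into a logical server over a chassis fabric and releasing them again, can be sketched as a simple allocator. This is a toy model to make the concept concrete, not the UCS X fabric implementation; the resource types and counts are invented.

```python
# Toy composability model: build a "logical server" from pools of
# disaggregated parts, the way a composable fabric lets you assemble
# and release resources on demand. Purely illustrative.
class ResourcePool:
    def __init__(self, cpus, gpus, drives):
        self.free = {"cpu": cpus, "gpu": gpus, "drive": drives}

    def compose(self, cpu=0, gpu=0, drive=0):
        want = {"cpu": cpu, "gpu": gpu, "drive": drive}
        if any(self.free[k] < v for k, v in want.items()):
            return None                      # not enough parts in the pool
        for k, v in want.items():
            self.free[k] -= v
        return want                          # the composed "server"

    def release(self, server):
        for k, v in server.items():
            self.free[k] += v                # parts return to the pool

pool = ResourcePool(cpus=8, gpus=4, drives=16)
node = pool.compose(cpu=2, gpu=1, drive=4)   # carve out a logical server
print(node)
print(pool.free["gpu"])                      # one GPU now allocated
pool.release(node)                           # decompose: resources return
print(pool.free["gpu"])
```

The point of the sketch is the lifecycle: resources are bound to a workload's shape on demand and returned when the workload goes away, which is exactly what a fixed blade configuration cannot do.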
Our goal here is, effectively, to look out at these disruptive technologies on the horizon and make sure that they're not disrupting our customers, that we give our customers the ability to disrupt their competitors and to disrupt their markets by consuming those technologies in an easy way. >> You know, you didn't use the term future-proof. And I usually don't like that phrase, because a lot of times people go, that's future-proof, and I'm like, well, what's future-proof? Well, it's really fast. Well, okay. And in two years it's going to be, you know, really slow compared to everything else. But what you just laid out is an architecture that's really taking advantage of some of these new capabilities that are driving latency down. So thank you for that. Now, Todd, I get how the X series is going to enable customers today, and I just mentioned the future, but how does it play into Cisco's hybrid cloud vision? >> Well, I mean, our customers aren't looking for, you know, point solutions or bolt-on layers of software to manage across the hybrid cloud landscape. That's the fundamental challenge, and so what we're doing with Intersight, if you really think about all the systems that we have in our portfolio, like X series, really they're just extensions of our Intersight platform. And there we're bridging the gaps between fundamental infrastructure on-prem, with all of those services that you need to optimize workloads and infrastructure, both in that on-prem environment but also out in the public cloud, and even moving up the stack now into serverless. So we know that customers again are trying to bolt together a cohesive environment that allows them to manage those existing workloads on-prem but also support the innovation going on out in the cloud, and to do that, you have to have services to manage Kubernetes.
You need hooks into modern toolchains like HashiCorp's Terraform; we did that a few months back, and we recently brought in something we call our Service Mesh Manager, which came out of an acquisition of Banzai Cloud. So what we're doing is, we're kind of spanning that entire spectrum from physical infrastructure to the workload, and that could be abstracted in any number of ways: in containers, or containers around VMs, or applications running on bare metal, or just virtual machine encapsulation. So you've got all these different modalities that customers are going to run applications in, and it's our intent to create a platform here that supports all of them, both in their on-prem environment and also all the resources they're managing out in the cloud. So that's a big deal for us. You know, one thing I want to go back to on the X series for a second, something James mentioned, right? Is, you know, as we see subsystems in computing start to decompose and break apart, we have Intersight as the mechanism to put Humpty Dumpty back together again, and that's really, I think, where composability and disaggregation tie together. And like James said, you know, being able to take on whatever fabrics, low-latency fabrics, ultra-low-latency fabrics we need in coming years to sew these systems together, we're kind of breaking a barrier that, you know, people have had trouble breaking through in the past, right? And that's this idea of true infrastructure as code, or true software-defined infrastructure. 'Cause now we're talking about being able to apply policy and automation to the actual construct of a server. How do you build that thing to the needs of the workload?
And so if you talk to an SRE or a developer today and you say infrastructure, they're thinking of a Kubernetes cluster, but ultimately we want to push that boundary, that frontier of the software-defined, as far down into the infrastructure as we can. And with Intersight and X fabric and X series, we're taking it all the way down to the individual drive or CPU, or ultimately breaking memory apart and sewing that back together. So it's kind of an exciting time for us, 'cause really, we're pushing that frontier of what is software-defined further and further down into the infrastructure, and that just gives people a lot more flexibility in what they build. >> So I want to play something back to you and see if it resonates. Essentially, the way I look at what you just said is, you're building a layer across my on-prem, whatever public cloud, across clouds, and eventually, you know, out to the edge, but let's hold off on that, let's park that for now. But that layer abstracts the underlying technical complexity and allows that infrastructure to be, you said, programmable: infrastructure as code, essentially. So that's one of my other questions: how programmable is this infrastructure, you know, today and in the future? But is that idea of an abstraction layer kind of how you're thinking about hybrid and multi-cloud? >> It is, in terms of the infrastructure that customers are going to run on-prem; in the public cloud, the cloud providers are already abstracting that for them. And so what we want to do is bring that same type of public cloud experience to managing infrastructure on-prem. So being able to have pools of resources that you allocate out to workloads, and shift as things change. So it's absolutely a cloud-like approach to on-prem infrastructure, and you know, one of the things I like to say is, you know, friends don't let friends build their own private cloud platforms from scratch, right?
We're productizing this, we're bringing it as a cohesive system that customers don't need to engineer on their own. They can focus on their operations. And James actually, he's a pilot, and one of the things he observed about Intersight a couple of years ago was this idea of Intersight as a co-pilot, kind of, you know, adding a person to your team almost, when you have Intersight in your data center, because some very, what feel like rudimentary things are incredibly impactful day-to-day for our customers. So we have recommendation engines. If, like, you know, maybe there's some interplay between BIOS and firmware and operating system and we know that there's an issue there, rather than letting customers stumble upon that on their own, we're going to flag it, show them the correction, go implement it for them. So it starts to feel a lot more like what they're accustomed to in a public cloud setting, where the system has some intelligence baked in; the system is kind of covering them, watching their back and acting like a co-pilot in day-to-day operations. >> Okay, so I get that, you know, the cloud guys will abstract the complexity, you guys are focused on-prem. But my question then is multi-cloud, across clouds, because we have some cloud providers, you know, you're partners with Google, they do some things with Anthos, and I know Microsoft with Arc, but even near-term, should we think about Cisco as playing that role of my across-cloud, you know, partner, if you will? >> Absolutely. You know, cloud agnosticism is core to our approach, because we know that, you know, if you dial the clock way back to the early aughts, right, when cloud first started emerging, it was kind of an efficiency play. And you had folks like Nicholas Carr, right? The author that put out The Big Switch, kind of envisioning a world where there'd be this ultimate consolidation to maybe one or two or three cloud platforms worldwide.
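The co-pilot recommendation engine Todd describes a moment earlier, flagging a known-bad interplay of BIOS, firmware and operating system before the customer stumbles on it, can be sketched as a lookup against a knowledge base of bad combinations. The version strings and advice below are invented for illustration; they are not Intersight's actual rules.

```python
# Toy recommendation engine: compare a server's inventory against a
# knowledge base of known-bad BIOS/firmware/OS combinations and surface
# a fix. All versions and advice strings here are hypothetical examples.
KNOWN_ISSUES = {
    ("bios-4.1", "fw-2.0", "esxi-7.0"): "Upgrade firmware to fw-2.1",
}

def recommend(inventory):
    """inventory: dict with 'bios', 'firmware', 'os' keys.
    Returns advice if the combination is known to be problematic, else None."""
    key = (inventory["bios"], inventory["firmware"], inventory["os"])
    return KNOWN_ISSUES.get(key)  # None means no known issue

print(recommend({"bios": "bios-4.1", "firmware": "fw-2.0", "os": "esxi-7.0"}))
print(recommend({"bios": "bios-4.1", "firmware": "fw-2.1", "os": "esxi-7.0"}))
```

A production system would match version ranges rather than exact tuples and could apply the fix automatically, but the co-pilot value is the same: the knowledge base is maintained centrally, so every customer benefits from every issue found.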
But what we're seeing, you know, we had data sovereignty kind of emerge over the past decade, but even in the past year or two, it's now becoming issues of actual cloud sovereignty. So you have governments in Australia and in India and in Europe actually asserting control over the cloud providers and services that can be used by their public sector organizations, and so that's just leading to actual cloud fragmentation. It's not nearly as monolithic a future as we thought it would be. It's a lot of clouds, and so as customers want to move around geographically, or if they want to go harvest innovation, maybe Google is really good at something like machine vision, or they want to use AWS or Azure for different applications they're going to go build, we're seeing customers really being put in a place where they're going to deal with multiple cloud providers, and the data supports that. So it's definitely our approach, especially on the networking technology side, to make it very easy for our customers to go out and connect these different clouds, and not have to repeat the integration process every time they want to go, you know, start using another public cloud provider. So that's absolutely our strategy: to be very agnostic and build everything with customers in mind who are going to be using multiple providers. >> Thank you for that, Todd. So James, I want to come back and talk a little bit about sort of your competitive posture here. I mean, you guys, when you made the announcement, I inferred that you were feeling like you were in a pretty good position relative to the competition. You were putting forth not just, you know, core infrastructure in hardware and software, but also all these other components around it that we talked about: observability extending out, you know, beyond the four walls of my data center, et cetera. But talk a little bit about why you think this gives you such competitive advantage in the marketplace.
>> Well, I mean, I think first of all, back to where Todd was going as well, if you think about trying to work in this hybrid cloud world that we're clearly living in, the idea of burrowing features and functions as far down the stack as possible doesn't make a lot of sense, right? So Intersight is a great example. We want to manage and we want to orchestrate across clouds, right? So how are we going to have our management and infrastructure services buried into the chassis, down at the very lowest level? That doesn't make sense. So we elevated our, you know, our operating model to the cloud, right? And that's how we manage across clouds, from the cloud. So, building a system, and really we've done this from the ground up with X series, building a system that is able to take advantage of all these new technologies. And you mentioned, you know, how "future proof" was probably, you know, a derogatory term almost, and I agree with you completely. I think we're future ready. Like, we're ready to embrace it, because we're not trying to say that nothing is going to change beyond what we've already thought of, we're saying, bring it on. We're saying, bring on that change, because we're ready for it. We can accommodate change. We're not saying that the technology we have today is going to ride us for 10 years, we're saying, we're ready for the next 10 years of change. Bring it. We can do that in a simple way. That is, you know, I think, you know, going to give us the versatility and the simplicity to allow the technology to go beyond human scale without having to, you know, drown our customers in administrative duties, right? So that co-pilot that Todd mentioned is going to be able to take on a lot more of the work, just like an airplane, where, you know, the pilot has functionality that he absolutely has to be part of, and those are our developers, right?
We want those admin developers to develop, to build things and to do things, and not get bogged down in the minutiae that exists. So I think competitively, you know, our architecture, top to bottom, all the way up the stack, all the way to the bottom, is unique, and it is focused not just on the rear view mirror but on what's coming in the future. >> So my takeaway there is that, okay, I get it. The new technologies will come along, but this architecture is the architecture for the decade. You're not going to have to redo the architecture in a few years. That's really the key point here. Todd, I was just taking some notes here on the takeaways that I heard. Upfront, chip diversity, really take advantage of all the innovations that are coming out. You're ready for that. You're kind of blurring the lines between blade and rack, giving some optionality there. Scale is a big theme. I mean, the cloud has brought that in, and, you know, people want to scale, they don't want to be, you know, provisioning LUNs all day, and they won't be able to scale if that's what their job is. Developer friendly, particularly as it relates to infrastructure as code. And you've got a roadmap. So Todd, that's my summary. I'll give you the last word. >> No, it's really good. I mean, you hit it, right. We're thinking about this holistic operating environment that our customers are building for hybrid cloud, and we're pre-engineering that environment for them. So our Intersight platform, and all of our systems that connect to it, are really built to tackle that hybrid environment from end to end, and with systems like X series, we're giving them a more simple, efficient landing spot for their workloads on prem, but crucially, it's fully integrated with this hybrid cloud platform, so as they have workloads on prem and workloads in the cloud, it's kind of a transparent environment between those two worlds.
So bringing it together so that our customers don't have to build it themselves. >> Excellent. Well, gents, thanks so much for coming on theCUBE and sharing the details of this announcement. Congratulations, I know how much work and thought goes into these things, really looking forward to its progress and adoption in the marketplace. Appreciate your time. >> Thank you. >> Thanks for your time. >> And thank you for watching this Cube Conversation. This is Dave Vellante. We'll see you next time. (upbeat music)
Breaking Analysis: Why Apple Could be the Key to Intel's Future
>> From theCUBE studios in Palo Alto and in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> The latest Arm Neoverse announcement further cements our opinion that its architecture, business model and ecosystem execution are defining a new era of computing and leaving Intel in its dust. We believe the company and its partners have at least a two year lead on Intel and are currently in a far better position to capitalize on the major waves that are driving the technology industry and its innovation. To compete, our view is that Intel needs a new strategy. Now, Pat Gelsinger is bringing that, but they also need financial support from the U.S. and the EU governments. Pat Gelsinger was just noted as asking, or requesting, from the EU government $9 billion, sorry, 8 billion euros in financial support. And very importantly, Intel needs volume for its new Foundry business. And that is where Apple could be key. Hello, everyone, and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis we'll explain why Apple could be the key to saving Intel and America's semiconductor industry leadership. We'll also further explore our scenario of the evolution of computing and what will happen to Intel if it can't catch up. Here's a hint: it's not pretty. Let's start by looking at some of the key assumptions that we've made that are informing our scenarios. We've pointed out many times that we believe Arm wafer volumes are approaching 10 times those of x86 wafers. This means that manufacturers of Arm chips have a significant cost advantage over Intel. We've covered that extensively, but we repeat it because when we see news reports and analysis in print, it's not a factor that anybody's highlighting. And this is probably the most important issue that Intel faces. And it's why we feel that Apple could be Intel's savior. We'll come back to that.
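The volume-driven cost advantage described here comes down to amortization, which a quick back-of-the-envelope calculation can illustrate. All of the dollar figures below are invented for illustration; only the roughly 10x wafer-volume ratio comes from the analysis above.

```python
# Back-of-the-envelope sketch of why wafer volume drives cost: fixed fab
# costs get amortized over every wafer produced. The dollar figures are
# hypothetical; only the ~10x volume ratio comes from the analysis.
fab_fixed_cost = 10_000_000_000      # hypothetical annual fab cost, dollars
x86_wafers = 1_000_000               # hypothetical x86 wafer volume
arm_wafers = 10 * x86_wafers         # ~10x the x86 volume, per the analysis

x86_cost = fab_fixed_cost / x86_wafers   # fixed cost carried per x86 wafer
arm_cost = fab_fixed_cost / arm_wafers   # fixed cost carried per Arm wafer
print(x86_cost / arm_cost)               # 10.0 -- the per-unit advantage
```

Whatever the real dollar amounts are, the ratio is what matters: ten times the volume means one tenth the fixed cost carried by each wafer.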
We've projected that the chip shortage will last no less than three years, perhaps even longer, as we reported in a recent Breaking Analysis. While Moore's law is waning, the net result, i.e. the doubling of processor performance every 18 to 24 months, is actually accelerating. We've observed and continue to project a quadrupling of performance every two years, breaking historical norms. Arm is attacking the enterprise and the data center. We see hyperscalers as the tip of their entry spear. AWS's Graviton chip is the best example. Amazon and other cloud vendors that have engineering and software capabilities are making Arm-based chips capable of running general purpose applications. This is a huge threat to x86, and if Intel doesn't respond quickly, we believe Arm will gain a 50% share of enterprise semiconductor spend by 2030. We see the definition of cloud expanding. Cloud is no longer a remote set of services, in the cloud; rather, it's expanding to the edge, where the edge could be a data center, a data closet, or a true edge device or system. And Arm is by far, in our view, in the best position to support the new workloads and computing models that are emerging as a result. Finally, geopolitical forces are at play here. We believe the U.S. government will do, or at least should do, everything possible to ensure that Intel and the U.S. chip industry regain their leadership position in the semiconductor business. If they don't, the U.S. and Intel could fade to irrelevance. Let's look at this last point and make some comments on that. Here's a map of the South China Sea, and way off in the Pacific we've superimposed a little pie chart. And we asked ourselves: if you had a hundred points of strategic value to allocate, how much would you put in the semiconductor manufacturing bucket and how much would go to design? And our conclusion was 50/50.
Now, it used to be, because of Intel's dominance with x86 and its volume, that the United States was number one in both strategic areas. But today that orange slice of the pie is dominated by TSMC, thanks to Arm volumes. Now, we've reported extensively on this and we don't want to dwell on it for too long, but on all accounts, cost, technology, volume, TSMC is the clear leader here. China's President Xi has a stated goal of unifying Taiwan by China's centennial in 2049. Will this tiny island nation, which dominates a critical part of the strategic semiconductor pie, go the way of Hong Kong and be subsumed into China? Well, military experts say it would be very hard for China to take Taiwan by force without heavy losses and some serious international repercussions. The U.S. military presence in the Philippines and Okinawa and Guam, combined with support from Japan and South Korea, would make it even more difficult. And certainly the Taiwanese people, you would think, would prefer their independence. But Taiwanese leadership ebbs and flows between those hardliners who really want to separate and want independence and those that are more sympathetic to China. Could China, for example, use cyber warfare to, over time, control the narrative in Taiwan? Remember, if you control the narrative, you can control the meme. If you control the meme, you control the idea. If you control the idea, you control the belief system. And if you control the belief system, you control the population without firing a shot. So is it possible that over the next 25 years China could weaponize propaganda and social media to reach its objectives with Taiwan? Maybe it's a long shot, but if you're a senior strategist in the U.S. government, would you want to leave that to chance? We don't think so. Let's park that for now and double click on one of our key findings, and that is the pace of semiconductor performance gains, as we first reported a few weeks ago.
While Moore's law is moderating, the outlook for cheap, dense and efficient processing power has never been better. This slide shows two simple log lines. One is the traditional Moore's law curve; that's the one at the bottom. And the other is the current pace of system performance improvement that we're seeing, measured in trillions of operations per second. Now, if you calculate the historical annual rate of processor performance improvement that we saw with x86, the math comes out to around 40% improvement per year. Now that rate is slowing. It's now down to around 30% annually. So we're not quite doubling every 24 months anymore with x86, and that's why people say Moore's law is dead. But if you look at the (indistinct) effects of packaging CPUs, GPUs, NPUs, accelerators, DSPs and all the alternative processing power you can find in SoC, system on chip, and eventually system on package, it's growing at more than a hundred percent per annum. And this means that the processing power is now quadrupling every 24 months. That's impressive. And the reason we're here is Arm. Arm has redefined the core processor model for a new era of computing. Arm made an announcement last week which really recycled some old content from last September, but it also put forth new proof points on adoption and performance. Arm laid out three components in its announcement. The first was Neoverse V1, which is all about extending vector performance. This is critical for high performance computing, HPC, which at one point you might have thought was a niche, but it is the AI platform, and AI workloads are not a niche. Second, Arm announced the Neoverse N2 platform, based on the recently introduced Armv9. We talked about that a lot in one of our earlier Breaking Analysis episodes. This is going to deliver a performance boost of around 40%. Now the third was called CMN-700. Arm maybe needs to work on some of its names, but Arm said this is the industry's most advanced mesh interconnect.
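The growth-rate arithmetic quoted here can be checked with a quick calculation (an illustrative sketch, not part of the original analysis): at roughly 40% annual improvement, performance doubles in about two years; at 30% the doubling period stretches well past 24 months; and at 100% per annum it doubles every year, i.e. quadruples every 24 months.

```python
# Quick check of the doubling arithmetic quoted in the analysis: how long
# does it take performance to double at a given compound annual growth rate?
import math

def doubling_period_months(annual_growth: float) -> float:
    """Months needed to double performance at a compound annual growth rate."""
    return 12 * math.log(2) / math.log(1 + annual_growth)

# Historical x86 pace: ~40% per year -> roughly the 24-month Moore's law cadence
print(round(doubling_period_months(0.40)))   # 25 months

# Today's x86 pace: ~30% per year -> noticeably slower than 24 months
print(round(doubling_period_months(0.30)))   # 32 months

# Aggregate system pace: ~100% per year -> doubling every 12 months,
# i.e. quadrupling every 24 months
print(round(doubling_period_months(1.00)))   # 12 months
```

So the claim hangs together: a move from ~40% to ~30% annual improvement is what "Moore's law is dead" refers to, while the packaged-system rate of ~100% per annum yields the 4x-every-two-years figure.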
This is the glue for the V1 and the N2 platforms. The importance is that it allows for more efficient use and sharing of memory resources across components of the system package. We talked extensively in previous episodes about the importance of that capability. Now let's share with you this wheel diagram that underscores the completeness of the Arm platform. Arm's approach is to enable flexibility across an open ecosystem, allowing for value add at many levels. Arm has built the architecture and design, and allows an open ecosystem to provide the value-added software. Now, very importantly, Arm has created the standards and specifications by which they can, with certainty, certify that the foundry can make the chips to a high quality standard, and importantly, that all the applications are going to run properly. In other words, if you design an application, it will work across the ecosystem and maintain backwards compatibility with previous generations, like Intel has done for years. But Arm, as we'll see next, is positioning not only for existing workloads but also for the emerging high growth applications. To (indistinct), here's the Arm total available market as we see it. We think the end market spending value of just the chips going into these areas is $600 billion today, and it's going to grow to 1 trillion by 2030. In other words, we're allocating the value of the end market spend in these sectors to the marked up value of the silicon as a percentage of the total spend. It's enormous. So the big areas are hyperscale clouds, which we think are around 20% of this TAM, the HPC and AI workloads, which account for about 35%, and the edge, which will ultimately be the largest of all, probably capturing 45%. And these are rough estimates, and they'll ebb and flow, and there's obviously some overlap, but the bottom line is the market is huge and growing very rapidly. And you see that little red highlighted area, that's enterprise IT, traditional IT, and that's the x86 market in context.
So it's relatively small. What's happening is we're seeing a number of traditional IT vendors packaging x86 boxes, throwing them over the fence and saying, we're going after the edge. And what they're doing is saying, okay, the edge is this aggregation point for all these endpoint devices. We think the real opportunity at the edge is for AI inferencing. That is where most of the activity and most of the spending is going to be, and we think Arm is going to dominate that market. And this brings up another challenge for Intel. So we've made the point a zillion times that PC volumes peaked in 2011, and we saw that as problematic for Intel for the cost reasons that we've beat into your head. And lo and behold, PC volumes actually grew last year thanks to COVID, and will continue to grow, it seems, for a year or so. Here's some ETR data that underscores that fact. This chart shows the net score, remember, that's spending momentum, breakdown for Dell's laptop business. The green means spending is accelerating, the red means decelerating, and the blue line is net score, that spending momentum. And the trend is up and to the right. Now, as we've said, this is great news for Dell and HP and Lenovo and Apple for its laptops, all the laptop sellers, but it's not necessarily great news for Intel. Why? I mean, it's okay, but what it does is it shifts Intel's product mix toward lower margin PC chips, and it squeezes Intel's gross margins. So the CFO has to explain that margin contraction to Wall Street. Imagine that: the business that got Intel to its monopoly status is growing faster than the high margin server business, and that's pulling margins down. So as we said, Intel is fighting a war on multiple fronts. It's battling AMD in the core x86 business, both PCs and servers. It's watching Arm mop up in mobile. It's trying to figure out how to reinvent itself and change its culture to allow more flexibility into its designs.
And it's spinning up a Foundry business to compete with TSMC. So it's got to fund all this while at the same time propping up its stock with buybacks. Intel last summer announced that it was accelerating its $10 billion stock buyback program. $10 billion. Buy stock back or build a foundry: which do you think is more important for the future of Intel and the U.S. semiconductor industry? So Intel has got to protect its past while building its future and placating Wall Street, all at the same time. And here's where it gets even more dicey. Intel's got to protect its high-end x86 business. It is the cash cow and funds their operation. Who's Intel's biggest customer? Dell, HP, Facebook, Google, Amazon? Well, let's just say Amazon is a big customer. Can we agree on that? And we know AWS's biggest revenue generator is EC2, and EC2 is powered by microprocessors made by Intel and others. We found this slide in the Arm Neoverse deck and it caught our attention. The data comes from a data platform called Liftr Insights. The charts show the rapid growth of AWS's Graviton chips, which are their custom-designed chips based on Arm, of course. The blue is Graviton, the black, vendor A, is presumably Intel, and the gray is assumed to be AMD. The eye popper is the 2020 pie chart: of the instance deployments, nearly 50% are Graviton. So if you're Pat Gelsinger, you better be all over AWS. You don't want to lose this customer, and you're going to do everything in your power to keep them. But the trend is not your friend in this account. Now the story gets even gnarlier, and here's the killer chart. It shows the ISV ecosystem platforms that run on Graviton2, because AWS has such good engineering and controls its own stack, it can build Arm-based chips that run software designed to run on general purpose x86 systems. Yes, it's true. The ISVs, they've got to do some work, but large ISVs have a huge incentive because they want to ride the AWS wave.
Certainly the user doesn't know or care, but AWS cares, because it's driving costs and energy consumption down and performance up. Lower cost, higher performance: sounds like something Amazon wants to consistently deliver, right? And the ISV portfolio that runs on Arm-based Graviton is just going to continue to grow. And by the way, it's not just Amazon. It's Alibaba, it's Oracle, it's Marvell, it's Tencent. The list keeps growing. Arm trotted out a number of names, and I would expect over time it's going to be Facebook and Google and Microsoft, if they're not already there. Now, the last piece of the Arm architecture story that we want to share is the progress that they're making, and compare that to x86. This chart shows how Arm is innovating. Let's start with the first line under platform capabilities: number of cores supported per die, or system. Now, a die is what ends up as a chip on a small piece of silicon. Think of the die as the circuit diagram of the chip, if you will, and these circuits are fabricated on wafers using photolithography. The wafers are then cut up into many pieces, each one having a chip. Each of these pieces is the chip, and two chips make up a system. The key here is that Arm is quadrupling the number of cores instead of increasing thread counts. It's giving you cores. Cores are better than threads, because threads are shared and cores are independent and much easier to virtualize. This is particularly important in situations where you want to be as efficient as possible sharing massive resources, like the cloud. Now, as you can see on the right hand side of the chart under the orange, Arm is dramatically increasing the amount of capabilities compared to previous generations. And one of the other highlights to us is that last line, the CCIX and CXL support. Again, Arm maybe needs to name these better. These refer to Arm's memory sharing capabilities within and between processors.
This allows CPUs, GPUs, NPUs, et cetera, to share resources very efficiently, especially compared to the way x86 works, where everything is currently controlled by the x86 processor. CCIX and CXL support, on the other hand, will allow designers to program the system and share memory wherever they want within the system directly, and not have to go through the overhead of a central processor which owns the memory. So for example, if there's a CPU, GPU and NPU, the CPU can say to the GPU, give me your results at a specified location and signal me when you're done. So when the GPU is finished calculating and sending the results, the GPU just signals that the operation is complete, versus having to ping the CPU constantly, which is overhead intensive. Now, composability in that chart means the system isn't fixed. Rather, you can programmatically change the characteristics of the system on the fly. For example, if the NPU is idle, you can allocate more resources to other parts of the system. Now, Intel is doing this too in the future, but we think Arm is way ahead, at least by two years. This is also huge for Nvidia, which today relies on x86. A major problem for Nvidia has been coherent memory management, because the utilization of its GPUs is appallingly low and it can't be easily optimized. Last week, Nvidia announced its intent to provide an AI capability for the data center without x86, i.e. using Arm-based processors. So Nvidia, another big Intel customer, is also moving to Arm. And if its acquisition of Arm is successful, which is still a long shot, this trend is only going to accelerate. But the bottom line is, if Intel can't move fast enough to stem the momentum of Arm, we believe Arm will capture 50% of enterprise semiconductor spending by 2030. So how does Intel continue to lead? Well, it's not going to be easy. Remember, we said Intel can't go it alone, and we posited that the company would have to initiate a joint venture structure.
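The signal-when-done pattern described here can be illustrated with a simple software analogy. This is only an analogy, with Python threads standing in for the processors; real CCIX/CXL coherency is implemented in hardware, not in code like this.

```python
# Analogy for the signal-when-done pattern described above: the "GPU" writes
# its result to a shared location and raises one completion signal, instead
# of the "CPU" polling it constantly. Illustrative only -- real CCIX/CXL
# coherent memory sharing happens in silicon, not in Python threads.
import threading

result = {}                      # stands in for the agreed memory location
done = threading.Event()         # stands in for the completion signal

def gpu_task():
    result["value"] = sum(x * x for x in range(1000))  # the "GPU" work
    done.set()                   # one signal: "operation is complete"

worker = threading.Thread(target=gpu_task)
worker.start()
done.wait()                      # the "CPU" blocks here; no busy polling
worker.join()
print(result["value"])           # 332833500
```

The point of the pattern, in hardware as in this sketch, is that the consumer pays no polling overhead: it does nothing until the single completion signal arrives.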
We propose a triumvirate of Intel, IBM with its Power10 and memory aggregation architecture, and Samsung with its volume manufacturing expertise, on the premise that we coveted an on-U.S.-soil presence. Now, upon further review, we're not sure Samsung is willing to give up and contribute its IP to this venture. It's put a lot of money and a lot of emphasis on infrastructure in South Korea. And furthermore, we're not convinced that Arvind Krishna, who we believe ultimately made the call to jettison IBM's microelectronics business, wants to put his efforts back into manufacturing semiconductors. So we have this conundrum. Intel is fighting AMD, which is already at seven nanometer. Intel has fallen behind in process manufacturing, which is strategically important to the United States, its military and the nation's competitiveness. Intel's behind the curve on cost and architecture and is losing key customers in the most important market segments. And it's way behind on volume, the critical piece of the pie that nobody ever talks about. Intel must become more price and performance competitive, bring in new composable designs that maintain x86 compatibility, and give customers and designers the ability to add and customize GPUs, NPUs, accelerators, et cetera, all while launching a successful Foundry business. So we think there's another possibility in this thought exercise. Apple is currently reliant on TSMC and is pushing them hard toward five nanometer, in fact sucking up a lot of that volume, and TSMC is maybe not servicing some other customers as well as it's servicing Apple, because it's a bit distracted, and you have this chip shortage. So Apple, because of its size, gets the lion's share of the attention, but Apple needs a trusted onshore supplier. Sure, TSMC is adding manufacturing capacity in the U.S. in Arizona, but back to our precarious scenario in the South China Sea.
Will the U.S. government and Apple sit back and hope for the best, or will they hope for the best and plan for the worst? Let's face it: if China gains control of TSMC, it could block access to the latest and greatest process technology. Apple just announced that it's investing billions of dollars in semiconductor technology across the U.S. The U.S. government is pressuring big tech. What about an Apple-Intel joint venture? Apple brings the volume, its cloud, sorry, its money, its design leadership, all that to the table. And they could partner with Intel. It gives Intel the Foundry business and a guaranteed volume stream. And maybe the U.S. government gives Apple a little bit of breathing room in the whole break-up-big-tech narrative. And even though that narrative is not necessarily specifically targeting Apple, maybe the U.S. government needs to think twice before it attacks big tech and thinks about the long-term strategic ramifications. Wouldn't that be ironic? Apple dumps Intel in favor of Arm for the M1, and then incubates, and essentially saves, Intel with a pipeline of Foundry business. Now, back to IBM. In this scenario, we've put a question mark on the slide, because maybe IBM just gets in the way, and why not a nice clean partnership between Intel and Apple? Who knows? Maybe Gelsinger can even negotiate this without giving up any equity to Apple, but Apple could be a key ingredient in a cocktail of a new strategy under Pat Gelsinger's leadership: gobs of cash from the U.S. and EU governments and volume from Apple. Wow, still a long shot, but one worth pursuing, because as we've written, Intel is too strategic to fail. Okay, well, what do you think? You can DM me @dvellante or email me at david.vellante@siliconangle.com or comment on my LinkedIn post. Remember, these episodes are all available as podcasts, so please subscribe wherever you listen. I publish weekly on wikibon.com and siliconangle.com.
And don't forget to check out etr.plus for all the survey analysis. And I want to thank my colleague, David Floyer for his collaboration on this and other related episodes. This is Dave Vellante for theCUBE insights powered by ETR. Thanks for watching, be well, and we'll see you next time. (upbeat music)
CoC Virtual Events Announcement
>> Hello everyone, welcome to this special Cube Conversation. I'm John Furrier with Dave Vellante. We're known as theCube guys. We've been doing a lot of Cube events over the past year with COVID in a virtual format, and we really miss being onsite, being at the events, extracting the signal from the noise. Dave, we've got some big news: we're announcing our Cube On Cloud series of virtual events. We're going to do it in combination with the hybrid format of theCube when it comes back, when theCube is coming back, which looks like (indistinct), we'll be implementing theCube virtual format. And so Dave, Cube On Cloud Startups is our inaugural event, coming up this month. >> Well, I'm really excited, John, because of course, as you well know, in the early days of Cloud we really doubled down on our content focus. And I think if you're a customer, and I firmly believe this, CIOs, CTOs, you have to have a portion of your portfolio that is really driven toward innovation, and that really comes from startups. And that's really what we're going to feature today. We're talking about startups from tens of millions to hundreds of millions of ARR. I think if you're an investor, there's some great opportunities here. If you're a technologist, you might be trying to figure out, okay, "where's the next great place that I want to work?" And I think really it's all enabled by the Cloud, and the Cloud is changing, John, right? It's evolving from what was just core infrastructure, storage, servers, networking, to really now driving transformative business value. And that's what this event is going to be all about. >> And what's exciting, Dave, I want to share with the folks out there: you've seen theCube, you've seen us on all the channels, Twitter, Facebook, LinkedIn, all over the internet. Now with the virtual, we're going to bring that together. And every quarter we're going to do a Cube On Cloud Startups event, four times a year.
So, join us, we want you to be part of our community. Be part of the conversation. TheCube 365 virtual format is interactive, it's engaging. It's our own clubhouse, it's our own place to engage with you. If you want to engage with us, this is the time to do it. Or if you want to sit back and consume some of the great content, do that. Our first event on the 24th is with AWS and their sponsored showcase startups. We're going to be featuring 10 of the hottest Cloud startups, all around data and machine learning. We're going to introduce the world to them, unpack them, talk to the founders and the top management teams, and understand their secret sauce, their competitive advantage, and how they're going to be successful in the enterprise and in Cloud. But we've also got a great keynote program to kick it off. We're going to have Jeff Barr, who's legendary in the developer and Cloud community. He's with AWS, he does a lot of their developer content; he writes all the blog posts announcing all the great products at AWS. If you're in the Cloud, you know who Jeff Barr is. He's a legend. We've got Jerry Chen, Cube alumni. He's a partner at Greylock, a tier one venture capital firm, and Michael Liebow, who's a partner at McKinsey. And McKinsey is talking to all the C-suites, Dave, they're the ones setting the table. They just recently came out with a Cloud report called "Trillion Dollar Market Opportunity." Of course, we wrote "Trillion Dollar Baby Cloud Ambition" for Andy Jassy in 2015. We're going to tie that together. And of course, when you come to the event and join us, you get a free copy of that report. So, Dave-- >> And don't forget Ben Haynes. He's going to bring the practitioner perspective, which we're really excited about. And I'm glad you made that shout out to the Cube community, because as you know, it's not just coming to the event and doing some chat.
Do that, lay down your knowledge, because at the next show we're going to have you on in a live interview. That's what we're all about: bringing our community together, bringing you in and interacting with you, not just on chat or email or whatever, but actually making you part of the program. >> Yeah, it's not a webinar, Dave. These aren't webinars; webinars are old, they're dying. Webinars are great as sales tools. You do those every day if you're a salesperson or a company. This is different. We're talking about making it immersive, interactive, and engaging, virtually. This is going to be a great complement. Certainly when the events come back, we're looking forward to it. I can't tell you, Dave, how many times people want to chat with me on Twitter and I'm not available, time zones around the world. Now, you can come to our events and engage directly with us and consume, but also we'll call you up. We're going to have sessions, maybe have some ad hoc, ephemeral conversations, set up your own little clubhouse with us and share your knowledge on the Cube. The Cube going virtual. Virtualization, Dave, as we were joking during the pandemic, is one of the upsides of what happened this year. And I've got to say, I'm really, really excited because this brings a new format for us to bring to people. So, I'm really looking forward to it. >> Yeah, me too, John. So give us the details. Date, time. I think we've got a screenshot, but we'll pull that up and show people. So, there's a site. What's the date again, John? >> This is going to happen on March 24th, 9:00 AM Pacific to 1:00 PM. It's a morning program. Again, it's a featured conference with the hot startups. We're going to do a keynote session, and then we're going to have the breakouts with all the startups. So, you can jump into the rooms, find the startups you like, and talk to them.
And then a closing fireside chat with Ben Haynes, who's a practitioner, bringing CIO and CXO perspectives, as well as executives in the industry. So, we're going to wrap that up at the end of the day. Great program: a good keynote on what's happening and the top trends, and then ending with fireside chats. Should be a great day, very cool. And of course it's virtual, so you can do a fly-by, you can come hang out with us and also come back. It's always going to be on, 24/7, 365. So, that is Cube On Cloud Startups, March 24th. Join us and join our community, thank you.
ENTITIES
Entity | Category | Confidence |
---|---|---|
AWS | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
Jeff Barr | PERSON | 0.99+ |
Ben Haynes | PERSON | 0.99+ |
McKinsey | ORGANIZATION | 0.99+ |
John | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
2015 | DATE | 0.99+ |
March 24th | DATE | 0.99+ |
Michael Liebow | PERSON | 0.99+ |
Trillion Dollar Market Opportunity | TITLE | 0.99+ |
10 | QUANTITY | 0.99+ |
Trillion Dollar Baby Cloud Ambition | TITLE | 0.99+ |
hundreds of millions | QUANTITY | 0.99+ |
1:00 PM | DATE | 0.99+ |
Andy Jassy | PERSON | 0.99+ |
tens of millions | QUANTITY | 0.99+ |
Greylock | ORGANIZATION | 0.99+ |
9: 00 AM Pacific | DATE | 0.98+ |
Cube | ORGANIZATION | 0.98+ |
first event | QUANTITY | 0.97+ |
this year | DATE | 0.96+ |
tier one | QUANTITY | 0.95+ |
this month | DATE | 0.94+ |
past year | DATE | 0.93+ |
365 | QUANTITY | 0.93+ |
today | DATE | 0.93+ |
four times a year | QUANTITY | 0.92+ |
Cloud | TITLE | 0.91+ |
first inaugural | QUANTITY | 0.85+ |
CXL Perspective | ORGANIZATION | 0.78+ |
CoC | EVENT | 0.78+ |
John Furrier | PERSON | 0.77+ |
Jerry Chen | PERSON | 0.73+ |
Cube | COMMERCIAL_ITEM | 0.73+ |
every quarter | QUANTITY | 0.73+ |
theCube | COMMERCIAL_ITEM | 0.71+ |
one | QUANTITY | 0.69+ |
pandemic | EVENT | 0.58+ |
theCube 365 | COMMERCIAL_ITEM | 0.56+ |
On Cloud | TITLE | 0.56+ |
ARR | QUANTITY | 0.5+ |
Startups | EVENT | 0.5+ |
Cuban | EVENT | 0.46+ |
24th | QUANTITY | 0.44+ |
theCube | ORGANIZATION | 0.42+ |
COVID | ORGANIZATION | 0.4+ |