Jay Marshall, Neural Magic | AWS Startup Showcase S3E1
(upbeat music) >> Hello, everyone, and welcome to theCUBE's presentation of the "AWS Startup Showcase." This is season three, episode one. The focus of this episode is AI/ML: Top Startups Building Foundational Models, Infrastructure, and AI. They're great topics, super-relevant, and it's part of our ongoing coverage of startups in the AWS ecosystem. I'm your host, John Furrier, with theCUBE. Today, we're excited to be joined by Jay Marshall, VP of Business Development at Neural Magic. Jay, thanks for coming on theCUBE. >> Hey, John, thanks so much. Thanks for having us. >> We had a great CUBE conversation with you guys. This is very much about the company's focus. It's a feature presentation for the "Startup Showcase," and machine learning at scale is the topic, but in general, it's more, (laughs) and we should call it "Machine Learning and AI: How to Get Started," because everybody is retooling their business. Companies that aren't retooling their business right now with AI first will be out of business, in my opinion. You're seeing a massive shift. This is really truly the beginning of the next-gen machine learning AI trend. You're really seeing it with ChatGPT. Everyone sees that. That went mainstream. But this is just the beginning. This is scratching the surface of this next-generation AI with machine learning powering it, and with all the goodness of cloud, cloud scale, and how horizontally scalable it is. The resources are there. You got the Edge. Everything's perfect for AI 'cause data infrastructure's exploding in value. AI is just the applications. This is a super topic, so what do you guys see in this general area of opportunities right now in the headlines? And I'm sure you guys' phones must be ringing off the hook, metaphorically speaking, or emails and meetings and Zooms. What's going on over there at Neural Magic? >> No, absolutely, and you pretty much nailed most of it. I think that, you know, from my background, we've seen it for the last 20-plus years.
Even just getting enterprise applications kind of built and delivered at scale, obviously, amazing things with AWS and the cloud to help accelerate that. And we just kind of figured out in the last five or so years how to do that productively and efficiently, kind of from an operations perspective. We've got development and operations teams. We even came up with DevOps, right? But now, we kind of have this new persona and new workload that developers have to talk to, and then it has to be deployed on those ITOps solutions. And so you pretty much nailed it. Folks are saying, "Well, how do I do this?" These big, generative models, or foundational models as we're calling them, they're great, but enterprises want to do that with their data, on their infrastructure, at scale, at the edge. So for us, yeah, we're helping enterprises accelerate that through optimizing models and then delivering them at scale in a more cost-effective fashion. >> Yeah, and I think one of the benefits of OpenAI we saw, not only with what's open source but also with other models that are more proprietary, is that it shows the world that this is really happening, right? It's a whole nother level, and there are also new kinds of landscape maps coming out. You got the generative AI, and you got the foundational models, the large LLMs. Where do you guys fit into the landscape? Because you guys are in the middle of this. How do you talk to customers when they say, "I'm going down this road. I need help. I'm going to stand this up." This new AI infrastructure and applications, where do you guys fit in the landscape? >> Right, and really, the answer is both. I think today, when it comes to a lot of what for some folks would still be considered kind of cutting edge around computer vision and natural language processing, a lot of our optimization tools and our runtime are based around most of the common computer vision and natural language processing models.
So your YOLOs, your BERTs, you know, your DistilBERTs and what have you, so we work to help optimize those, again, which have gotten great performance and great value for customers trying to get those into production. But when you get into the LLMs, and you mentioned some of the open source components there, our research teams have kind of been right in the trenches with those. So kind of the GPT open source equivalent being OPT, being able to actually take, you know, a multi-hundred-billion-parameter model and sparsify that or optimize that down, shaving away a ton of parameters, and being able to run it on smaller infrastructure. So I think the evolution here, you know, all this stuff came out in the last six months in terms of being turned loose into the wild, but we're staying in the trenches with folks so that we can help optimize those as well and not require, again, the heavy compute, the heavy cost, the heavy power consumption as those models evolve as well. So we're staying right in with everybody while they're being built, but trying to get folks into production today with things that help with business value today. >> Jay, I really appreciate you coming on theCUBE, and before we came on camera, you said you just were on a customer call. I know you got a lot of activity. What specific things are you helping enterprises solve? What kind of problems? Take us through the spectrum from the beginning, people jumping in the deep end of the pool, some people kind of coming in, starting out slow. What's the scale? Can you scope the kind of use cases and problems that are emerging that people are calling you for? >> Absolutely, so I think if I break it down to kind of, like, your startup, or as I maybe call 'em, AI native, to kind of steal from cloud native years ago, that group, it's pretty much, you know, part and parcel for how that group already runs.
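The sparsification Jay mentions (shaving away low-impact parameters so a model can run on smaller infrastructure) can be illustrated with a toy magnitude-pruning pass. This is a simplified sketch in plain Python; Neural Magic's actual techniques are far more sophisticated than a single global threshold:

```python
def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights until roughly the
    target fraction of entries is zero (a toy version of pruning)."""
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)          # how many weights to drop
    threshold = flat[k - 1] if k > 0 else 0.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.01, -0.5, 0.003, 2.1, -0.02, 0.9, 0.0004, -1.3, 0.05, 0.2]
pruned = magnitude_prune(weights, sparsity=0.7)
zeros = sum(1 for w in pruned if w == 0.0)  # 7 of 10 weights removed
```

In a sparsity-aware runtime, the zeroed weights can be skipped entirely at inference time, which is where the CPU speedups come from.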
So if you have a data science team and an ML engineering team, you're building models, you're training models, you're deploying models. You're seeing firsthand the expense of starting to try to do that at scale. So it's really just a pure operational efficiency play. They kind of speak natively to our tools, which we're doing in the open source. So it's really helping, again, with the optimization of the models they've built, and then, again, giving them an alternative to the expensive proprietary hardware accelerators they'd otherwise have to run them on. Now, on the enterprise side, it varies, right? You have some kind of AI native folks there that already have these teams, but you also have kind of, like, the AI curious, right? Like, they want to do it, but they don't really know where to start, and so for them, we actually have an open source toolkit that can help you get into this optimization, and then again, that runtime, that inferencing runtime, purpose-built for CPUs. It allows you to not have to worry, again, about questions like: do I have a hardware accelerator available? How do I integrate that into my application stack? If I don't already know how to build this into my infrastructure, do my ITOps teams know how to do this, and what does that runway look like? How do I cost for this? How do I plan for this? When it's just x86 compute, we've been doing that for a while, right? So it obviously still requires more, but at least it's a little bit more predictable. >> It's funny you mentioned AI native. You know, born in the cloud was a phrase that was out there. Now, you have startups that are born-in-AI companies. So I think you have this kind of cloud vibe going on. You had lift and shift as a big discussion. Then you had cloud native, kind of in the cloud, kind of making it all work. Is there an existing set of things people throw under this hat, and then what's the difference between AI native and kind of providing it to existing stuff?
'Cause a lot of people take some of these tools and apply them to existing stuff, almost, and it's not really a lift and shift, but it's kind of like bolting AI onto something else, versus starting with AI first or native AI. >> Absolutely. It's a- >> How would you- >> It's a great question. I think where I'd probably pull back to is kind of retail-type scenarios where, you know, for five, seven, nine years or more even, a lot of these folks already have data science teams, you know? I mean, they've been doing this for quite some time. The difference is the introduction of these neural networks and deep learning, right? Those kinds of models are just a little bit of a paradigm shift. So, you know, I obviously was trying to be fun with the term AI native, but I think it's more folks that kind of came up in that neural network world, so it's a little bit more second nature, whereas I think for maybe some traditional data scientists starting to get into neural networks, you have the complexity there and the training overhead, and a lot of the aspects of getting a model finely tuned, and hyperparameter tuning, and all of these aspects of it. It just adds a layer of complexity that they're just not as used to dealing with. And so our goal is to help make that easy, and then of course, make it easier to run anywhere that you have just kind of standard infrastructure. >> Well, the other point I'd bring out, and I'd love to get your reaction to, is that it's not only the neural network teams, people who have been focused on that, but also, if you look at some of the DataOps lately, the AIOps markets, a lot of data engineering, a lot of scale, folks who have been kind of, like, in that data tsunami cloud world are seeing this. They've kind of been in this, right? They've, like, been experiencing that. >> No doubt. I think it's funny, the data lake concept, right? And you got data oceans now.
Like, the metaphors just keep growing on us, but where it is valuable in terms of trying to shift the mindset, I've always kind of been a fan of some of the naming shifts. I know with AWS, they always talk about purpose-built databases. And I always liked that because, you know, you don't have one database that can do everything. Even the ones that say they can, like, you still have implementation detail differences. So sitting back and saying, "What is my use case, and then which database will I use for it?" I think it's kind of similar here. And when you're building those data teams, if you don't have folks that are doing data engineering, kind of that data harvesting, pre-processing, you got to do all that before a model's even going to care about it. So yeah, it's definitely a central piece of this as well, and again, whether or not you're going to be AI native as you're making your way, kind of, you know, on that journey, you know, data's definitely a huge component of it. >> Yeah, you would have loved our Supercloud event we had. Talk about naming, and, you know, data meshes were talked about a lot. You're starting to see the control plane layers of data. I think that was the beginning of what I saw as that data infrastructure shift, to be horizontally scalable. So I have to ask you, with Neural Magic, when your customers and the people that are prospects for you guys, they're probably asking a lot of questions, because I think the general thing that we see is, "How do I get started? Which GPU do I use?" I mean, there's a lot of things that are kind of, I won't say technical or targeted towards people who are living in that world, but, like, as the mainstream enterprises come in, they're going to need a playbook. What do you guys see, what do you guys offer your clients when they come in, and what do you recommend? >> Absolutely, and I think where we hook in specifically tends to be on the training side. So again, I've built a model.
Now, I want to really optimize that model. And then on the runtime side, when you want to deploy it, you know, we run that optimized model. And so that's where we're able to provide value. We even have a labs offering in terms of being able to pair up our engineering teams with a customer's engineering teams, and we can actually help with most of that pipeline. So even if it is something where you have a dataset and you want some help in picking a model, you want some help training it, you want some help deploying it, we can actually help there as well. You know, there's also a great partner ecosystem out there, like a lot of folks even in the "Startup Showcase" here, that extend beyond into kind of your earlier comment around data engineering or downstream ITOps or the all-up MLOps umbrella. So we can absolutely engage with our labs, and then, of course, you know, again, partners, which are always kind of key to this. So you are spot on. I think what's happened with this, they talk about a hockey stick. This is almost like a flat wall now with the rate of innovation right now in this space. And so we do have a lot of folks wanting to go straight from curious to native. And so that's definitely where the partner ecosystem comes in so hard, 'cause there just isn't anybody or any team out there that can literally go from "Here's my blank database" to "I want an API that does all the stuff," right? Like, that's a big chunk, but we can definitely help with the model-to-delivery piece. >> Well, you guys are obviously a featured company in this space. Talk about the expertise. A lot of companies are, like, I won't say faking it till they make it. You can't really fake security. You can't really fake AI, right? So there's going to be a learning curve. There'll be a few startups who'll come out of the gate early. You guys are one of 'em. Talk about what you guys have as expertise as a company, why you're successful, and what problems you solve for customers?
>> No, appreciate that. Yeah, we love to tell the story of our founder, Nir Shavit. He's a 20-year professor at MIT. He was doing a lot of work on kind of multicore processing before there were even physical multicores, and actually even did a stint in computational neurobiology in the 2010s. And the impetus for this whole technology (he has a great talk on YouTube about it) is that through his work there, he kind of realized that the way neural networks are encoded, and how they're executed by kind of ramming data layer by layer through these kind of HPC-style platforms, actually was not analogous to how the human brain actually works. So on one side, we're building neural networks and we're trying to emulate neurons, but we're not really executing them that way. So with our team, and one of the co-founders is also ex-MIT, that was kind of the birth of: why can't we leverage this super-performant CPU platform, which has those really fat, fast caches attached to each core, and actually start to find a way to break that model down in a way that I can execute things in parallel, not having to do them sequentially? There are a lot of amazing talks and stuff that show kind of the magic, if you will, part of the pun of Neural Magic, but that's kind of the foundational layer of all the engineering that we do here. And in terms of how we're able to bring it to reality for customers, I'll give one customer example: a large retailer with a people-counting application. So a very common application. And that customer's actually been able to show literally double the amount of cameras being run with the same amount of compute. So going from a one-to-one perspective to two-to-one, business leaders usually like that math, right?
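That two-to-one math can be made concrete with a rough cost sketch. The figures below are hypothetical, purely for illustration, not the customer's actual numbers:

```python
# Hypothetical figures for illustration only, not customer data.
server_cost_per_hour = 0.40   # assumed hourly price of one CPU instance
cameras_before = 8            # streams one server handled previously
cameras_after = 16            # streams after optimization (2x, per the example)

cost_per_stream_before = server_cost_per_hour / cameras_before
cost_per_stream_after = server_cost_per_hour / cameras_after
savings = 1 - cost_per_stream_after / cost_per_stream_before  # 0.5, i.e. 50%
```

Doubling the streams each server handles halves the cost per stream, which is the kind of math business leaders tend to like.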
So we're able to show pure cost savings, but even performance-wise, you know, we have some of the common models, like your ResNets and your YOLOs, where we can actually even perform better than hardware-accelerated solutions. So what we're trying to do, if I dumb it down, is better, faster, cheaper, but on commodity hardware. That's where we're accelerating. >> That's not a bad business model. Make things easier to use, faster, and reduce the steps it takes to do stuff. So, you know, that's always going to be a good market. Now, you guys have DeepSparse, which we've talked about on our CUBE conversation prior to this interview. It delivers ML models through software, so there's a decoupling from the hardware, right? >> Yep. >> Which is going to drive probably a cost advantage. Also, from a deployment standpoint it must be easier. Can you share the benefits? Is it on the cost side? Is it more on deployment? What are the benefits of DeepSparse when you guys decouple the software from the hardware on the ML models? >> No, you actually hit 'em both, 'cause that really is primarily the value. Because ultimately, again, we're so early. And I came from this world in a prior life where I was doing Java development, WebSphere, WebLogic, Tomcat, open source, right? When we were trying to do innovation, we had innovation buckets, 'cause everybody wanted to be on the web and have their app in a browser, right? We got all the money we needed to build something and show, hey, look at the thing on the web, right? But when you had to get it in production, that was the challenge. So to what you're speaking to here, in this situation, we're able to show we're just a Python package. So whether you just install it on the operating system itself, or we also have a containerized version you can drop on any container orchestration platform, so ECS or EKS on AWS. And so you get all the auto-scaling features.
So when you think about that kind of a world where you have everything from real-time inferencing to kind of after-hours batch-processing inferencing, the fact that you can auto-scale that hardware up and down, and it's CPU-based, so you're paying by the minute instead of maybe paying by the hour at a lower cost shelf, it does everything from pure cost to, again, I can have my standard IT team say, "Hey, here's the Kubernetes in the container," and it just runs on the infrastructure we're already managing. So yeah, operational, cost, and again, many times even performance. (audio warbles) CPUs if I want to. >> Yeah, so that's easier on the deployment too. And you don't have this kind of, you know, blank check kind of situation where you don't know what's on the backend on the cost side. >> Exactly. >> And you control the actual hardware and you can manage that supply chain. >> And keep in mind, exactly. Because the other thing that sometimes gets lost in the conversation, depending on where a customer is, some of these workloads, like, you know, you and I remember a world where even the roundtrip to the cloud and back was a problem for folks, right? We're used to extremely low latency. And some of these workloads absolutely also adhere to that. But there are some workloads where the latency isn't as important. And we actually even provide the tuning. Now, if we're giving you five milliseconds of latency and you don't need that, you can tune that back. So less CPU, lower cost. Now, throughput and other things come into play. But that's the kind of configurability and flexibility we give for operations. >> All right, so why should I call you if I'm a customer or prospect of Neural Magic? What problem do I have, or when do I know I need you guys? When do I call you in, and what does my environment look like? When do I know? What are some of the signals that would tell me that I need Neural Magic? >> No, absolutely.
So I think in general, any neural network, you know, through the process I mentioned before called sparsification, it's, you know, an optimization process that we specialize in. Any neural network, you know, can be sparsified. So I think if it's a deep-learning, neural-network-type model, and you're trying to get AI into production, and you have cost concerns, even performance concerns, we can help. I certainly hate to be too generic and say, "Hey, we'll talk to everybody." But really, in this world right now, if it's a neural network that you're trying to get into production, you know, we are definitely offering, you know, kind of an at-scale, performant, deployable solution for deep learning models. >> So a neural network you would define as what? Just devices that are connected that need to know about each other? What's the state-of-the-art current definition of neural network for customers that may think they have a neural network or might not know they have a neural network architecture? What is that definition for neural network? >> That's a great question. So basically, machine learning models that fall under this kind of category: you hear about transformers a lot, or I mentioned YOLO, the YOLO family of computer vision models, or natural language processing models like BERT. If you have a data science team or even developers, some even regular, I used to call myself a nine-to-five developer 'cause I worked in the enterprise, right? So, like, hey, we found a new open source framework; you know, I used to use Spring back in the day, and I had to go figure it out. There are developers that are pulling these models down, and they're figuring out how to get 'em into production, okay? So I think all of those kinds of situations, you know, if it's a machine learning model of the deep learning variety, that's, you know, really specifically where we shine. >> Okay, so let me pretend I'm a customer for a minute.
I have all these videos, like, all these transcripts, I have all these people that we've interviewed, CUBE alumni, and I say to my team, "Let's AI-ify, sparsify theCUBE." >> Yep. >> What do I do? I mean, do I just, like, my developers have got to get involved, and they're going to be like, "Well, how do I upload it to the cloud? Do I use a GPU?" So there's a thought process. And I think a lot of companies are going through that example of, let's get on this AI, how can it help our business? >> Absolutely. >> What does that progression look like? Take me through that example. I mean, I made theCUBE example up, but we do have a lot of data. We have large data models, and we have people, and we're connected to the internet, and so we kind of seem like there's a neural network. I think every company might have a neural network in place. >> Well, and I was going to say, I think in general, you all probably do represent even the standard enterprise more than most. 'Cause even the enterprise is going to have a ton of video content, a ton of text content. So I think it's a great example. So I think that that kind of sea, or I'll even go ahead and use that term data lake again, of data that you have, you're probably going to want to be setting up kind of machine learning pipelines that are going to be doing all of the pre-processing, from kind of the raw data, to prepare it into the format that, say, a YOLO would actually use, or let's say BERT for natural language processing. So you have all these transcripts, right? So we would do a pre-processing pass where we would create that into the file format that BERT, the machine learning model, would know how to train off of. So that's kind of all the pre-processing steps. And then for training itself, we actually enable what's called sparse transfer learning. Transfer learning is a very popular method of doing training with existing models.
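The flow Jay describes (pre-process raw transcripts into the format a BERT-style model trains on, then apply sparse transfer learning to an already-sparsified base model) might be sketched like this. The function bodies here are stand-in stubs for illustration, not Neural Magic's actual SparseML API:

```python
def preprocess(transcripts):
    """Stand-in for the pre-processing pass: turn raw text into
    training records in the format a BERT-style model expects."""
    return [{"text": t.strip().lower(), "label": None} for t in transcripts]

def sparse_transfer_learn(base_model, records):
    """Stand-in for sparse transfer learning: fine-tune an already
    sparsified base model while preserving its zeroed-out weights."""
    return {"base": base_model, "examples_seen": len(records), "sparse": True}

transcripts = ["Welcome to theCUBE. ", "Machine learning at scale is the topic."]
records = preprocess(transcripts)
model = sparse_transfer_learn("sparsified-bert-base", records)
```

In a real pipeline, the pre-processing step would tokenize the text for BERT and the training step would call actual sparse transfer learning tooling; the stubs just make the shape of the flow concrete.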
So we would be able to retrain that BERT model with your transcript data that we have now done the pre-processing on, to get it into the proper format. And now we have a BERT natural language processing model that's been trained on your data. And now we can deploy that onto the DeepSparse runtime, so that now you can ask that model whatever questions, or I should say pass it text; you're not going to ask it those kinds of ChatGPT questions, although we can do that too. But you're going to pass text through the BERT model, and it's going to give you answers back. It could be things like sentiment analysis or text classification. You just call the model, and now when you pass text through it, you get the answers better, faster, or cheaper. I'll use that reference again. >> Okay, we can create a CUBE bot to give us questions on the fly from the AI bot, you know, from our previous guests. >> Well, and I will tell you, using that as an example. So I had mentioned OPT before, kind of the open source version of ChatGPT. So, you know, typically that requires multiple GPUs to run. So our research team, as I may have mentioned earlier, has been able to sparsify that by over 50% already and run it on only a single GPU. And so in that situation, you could train OPT with that corpus of data and do exactly what you say. Actually, we could use Alexa, we could use Alexa to actually respond back with voice. How about that? We'll do an API call, and we'll actually have an interactive, Alexa-enabled bot. >> Okay, we're going to be a customer, let's put it on the list. But this is a great example of what you guys call software-delivered AI, a topic we chatted about in our CUBE conversation. This really means this is a developer opportunity. This really is the convergence of the data growth, the restructuring, how data is going to be horizontally scalable, meets developers. So this is an AI developer model going on right now, which is kind of unique. >> It is, John. I will tell you what's interesting.
And again, folks don't always think of it this way. You know, the AI magical goodness is now getting pushed into the middle, where the developers and IT are operating. And again, that paradigm, although for some folks it seems obvious, again, if you've been around for 20 years, all that plumbing is a thing, right? And so what we basically help with is, when you deploy the DeepSparse runtime, we have a very rich API footprint. And so the developers can call the API, ITOps can run it, or to your point, it's developer-friendly enough that you could actually deploy our off-the-shelf models. We have something called the SparseZoo, where we actually publish pre-optimized, or pre-sparsified, models. And so developers could literally grab those right off the shelf with the training they've already had and just put 'em right into their applications and deploy them as containers. So yeah, we enable that for sure as well. >> It's interesting. DevOps was infrastructure as code, and we had, last season, a series on data as code, which we kind of coined. This is data as code. This is a whole nother level of opportunity, where developers just want to have programmable data and apps with AI. This is a whole new- >> Absolutely. >> Well, absolutely great, great stuff. Our news team at SiliconANGLE and theCUBE said you guys had a little bit of a launch announcement you wanted to make here on the "AWS Startup Showcase." So Jay, you have something that you want to launch here? >> Yes, and thank you, John, for teeing me up. So I'm going to try to put this in, like, you know, the vein of, like, an AWS main stage keynote launch, okay? So we're going to try this out. So, you know, a lot of our product has obviously been built on top of x86. I've been sharing that the past 15 minutes or so. And with that, you know, we're seeing a lot of acceleration for folks wanting to run on commodity infrastructure.
But we've had customers and prospects and partners tell us that, you know, ARM and all of its kind of variants are very compelling, both cost- and performance-wise, and also obviously with the Edge. And they wanted to know if there was anything we could do from a runtime perspective with ARM. And so we got to work, and, you know, it's a hard problem to solve, 'cause the instruction set for ARM is very different from the instruction set for x86, and our deep tensor column technology has to be able to work with that lower-level instruction spec. But the engineering team's been working really hard at it, and we are happy to announce here at the "AWS Startup Showcase" that the DeepSparse inference runtime now has support for AWS Graviton instances. So it's no longer just x86; it is also ARM, and that obviously also opens up the door to the Edge and further out the stack, so that optimize-once, run-anywhere story we're now opening up. It is an early access. So if you go to neuralmagic.com/graviton, you can sign up for early access, but we're excited to now get into the ARM side of the fence as well, on top of Graviton. >> That's awesome. Our news team is going to jump on that news. We'll get it right up. We got a little scoop here on the "Startup Showcase." Jay Marshall, great job. That really highlights the flexibility that you guys have when you decouple the software from the hardware. And again, we're seeing open source driving a lot more in AIOps now, with machine learning and AI. So to me, that makes a lot of sense. And congratulations on that announcement. Final minute or so we have left, give a summary of what you guys are all about. Put a plug in for the company, what you guys are looking to do. I'm sure you're probably hiring like crazy. Take the last few minutes to give a plug for the company and give a summary. >> No, I appreciate that so much.
So yeah, join us at neuralmagic.com. You know, part of what we didn't spend a lot of time on here: our optimization tools, we are doing all of that in the open source. It's called SparseML, and I mentioned SparseZoo briefly. So we really want the data science community and ML engineering community to join us out there. And again, the DeepSparse runtime, it's actually free to use for trial purposes and for personal use. So you can actually run all this on your own laptop or on an AWS instance of your choice. We are now live in the AWS Marketplace. So push-button deploy, come try us out, and reach out to us at neuralmagic.com. And again, sign up for the Graviton early access. >> All right, Jay Marshall, Vice President of Business Development at Neural Magic here, talking about performant, cost-effective machine learning at scale. This is season three, episode one, focusing on foundational models as far as building data infrastructure and AI, AI native. I'm John Furrier with theCUBE. Thanks for watching. (bright upbeat music)
Pierluca Chiodelli, Dell Technologies & Dan Cummins, Dell Technologies | MWC Barcelona 2023
(intro music) >> "theCUBE's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> We're not going to- >> Hey everybody, welcome back to the Fira in Barcelona. My name is Dave Vellante, I'm here with Dave Nicholson, day four of MWC23. I mean, Dave, it's, it's still really busy. And you're walking the floors, you got to stop and start. >> It's surprising. >> People are cheering. They must be winding down, giving out the awards. Really excited. Pierluca Chiodelli is here. He's the vice president of Engineering Technology for Edge Computing Offers Strategy and Execution at Dell Technologies, and he's joined by Dan Cummins, who's a fellow and vice president in the Edge Business Unit at Dell Technologies. Guys, welcome. >> Thank you. >> Thank you. >> I love when I see the term fellow. You know, they don't just give those away. What do you got to do to be a fellow at Dell? >> Well, you know, fellows are senior technical leaders within Dell. And they're usually tasked to help Dell solve, you know, a very large business challenge. To get to a fellow, there's only, I think, 17 of them inside of Dell. So it is a small crowd. You know, previously, really what got me to fellow is my continued contribution to transform Dell's mid-range business, you know, VNX2, and then Unity, and then PowerStore, you know, and then before, and then after that, you know, they asked me to come and, and help, you know, drive the technology vision for how Dell wins at the Edge. >> Nice. Congratulations. Now, Pierluca, I'm looking at this kind of cool chart here, which is the Edge platform by Dell Technologies, kind of this cube, like theCUBE, of course, you know. >> Aka Project Frontier. >> Yeah. So, so tell us about the Edge platform. What, what's your point of view on all that at Dell? >> Yeah, absolutely.
So basically, when we created the Edge, and before even then, was bringing aboard, to create this vision of the platform, and now building the platform when we announced Project Frontier, was to create solution for the Edge. Dell has been at the Edge for 30 years. We sold a lot of compute. But the reality was people want outcome. And so, and the Edge is a new market, very exciting, but very siloed. And so people at the Edge have different personas. So we quickly realized that we need to bring in Dell people with expertise, quickly realized as well that doing all these solution was not enough. There was a lot of problem to solve, because the Edge is outside of the data center. So you are outside of the wall of the data center. And what is going to happen is obviously you are in the land of no one. And so you have million of device, thousand of million of device. All of us at home, we have all connected thing. And so we understand that the, the capability of Dell was to bring in technology to secure, manage, deploy, with zero touch, zero trust, the Edge. And all the Edge that we're speaking about right now, we are focused on everything that is outside of a normal data center. So, how we married the compute that we have had for many years with the new gateways that we create, so having the best portfolio, number one, having the best solution, but now, transforming the way that people deploy the Edge, and secure the Edge, through a software platform that we create. >> You mentioned Project Frontier. I like that Dell started to do these sort of projects; Project Alpine was sort of the multi-cloud storage. I call it "The Super Cloud." The Project Frontier. It's almost like you develop, it's like mission-based. Like, "Okay, that's our North Star." People hear Project Frontier, they know, you know, internally what you're talking about. Maybe use it for external communications too, but what have you learned since launching Project Frontier? What's different about the Edge?
I mean you're talking about harsh environments, you're talking about new models of connectivity. So, what have you learned from Project Frontier? What, I'd love to hear the fellow perspective as well, and what you guys are are learning so far. >> Yeah, I mean start and then I left to them, but we learn a lot. The first thing we learn that we are on the right path. So that's good, because every conversation we have, there is nobody say to us, you know, "You are crazy. "This is not needed." Any conversation we have this week, start with the telco thing. But after five minutes it goes to, okay, how I can solve the Edge, how I can bring the compute near where the data are created, and how I can do that secure at scale, and with the right price. And then can speak about how we're doing that. >> Yeah, yeah. But before that, we have to really back up and understand what Dell is doing with Project Frontier, which is an Edge operations platform, to simplify your Edge use cases. Now, Pierluca and his team have a number of verticalized applications. You want to be able to securely deploy those, you know, at the Edge. But you need a software platform that's going to simplify both the life cycle management, and the security at the Edge, with the ability to be able to construct and deploy distributed applications. Customers are looking to derive value near the point of generation of data. We see a massive explosion of data. But in particular, what's different about the Edge, is the different computing locations, and the constraints that are on those locations. You know, for example, you know, in a far Edge environment, the people that service that equipment are not trained in the IT, or train, trained in it. And they're also trained in the safety and security protocols of that environment. So you necessarily can't apply the same IT techniques when you're managing infrastructure and deploying applications, or servicing in those locations. 
So Frontier was designed to solve for those constraints. You know, often we see competitors that are doing similar things, that are starting from an IT mindset, and trying to shift down to cover Edge use cases. What we've done with Frontier, is actually first understood the constraints that they have at the Edge. Both the operational constraints and technology constraints, the service constraints, and then came up with a, an architecture and technology platform that allows them to start from the Edge, and bleed into the- >> So I'm laughing because you guys made the same mistake. And you, I think you learned from that mistake, right? You used to take X86 boxes and throw 'em over the fence. Now, you're building purpose-built systems, right? Project Frontier I think is an example of the learnings. You know, you guys an IT company, right? Come on. But you're learning fast, and that's what I'm impressed about. >> Well Glenn, of course we're here at MWC, so it's all telecom, telecom, telecom, but really, that's a subset of Edge. >> Yes. >> Fair to say? >> Yes. >> Can you give us an example of something that is, that is, orthogonal to, to telecom, you know, maybe off to the side, that maybe overlaps a little bit, but give us an, give us an example of Edge, that isn't specifically telecom focused. >> Well, you got the, the Edge verticals. and Pierluca could probably speak very well to this. You know, you got manufacturing, you got retail, you got automotive, you got oil and gas. Every single one of them are going to make different choices in the software that they're going to use, the hyperscaler investments that they're going to use, and then write some sort of automation, you know, to deploy that, right? And the Edge is highly fragmented across all of these. So we certainly could deploy a private wireless 5G solution, orchestrate that deployment through Frontier. 
We can also orchestrate other use cases like connected worker, or overall equipment effectiveness in manufacturing. But Pierluca you have a, you have a number. >> Well, but from your, so, but just to be clear, from your perspective, the whole idea of, for example, private 5g, it's a feature- >> Yes. >> That might be included. It happened, it's a network topology, a network function that might be a feature of an Edge environment. >> Yes. But it's not the center of the discussion. >> So, it enables the outcome. >> Yeah. >> Okay. >> So this, this week is a clear example where we confirm and establish this. The use case, as I said, right? They, you say correctly, we learned very fast, right? We brought people in that they came from industry that was not IT industry. We brought people in with the things, and we, we are Dell. So we have the luxury to be able to interview hundreds of customers, that just now they try to connect the OT with the IT together. And so what we learn, is really, at the Edge is different personas. They person that decide what to do at the Edge, is not the normal IT administrator, is not the normal telco. >> Who is it? Is it an engineer, or is it... >> It's, for example, the store manager. >> Yeah. >> It's, for example, the, the person that is responsible for the manufacturing process. Those people are not technology people by any means. But they have a business goal in mind. Their goal is, "I want to raise my productivity by 30%," hence, I need to have a preventive maintenance solution. How we prescribe this preventive maintenance solution? He doesn't prescribe the preventive maintenance solution. He goes out, he has to, a consult or himself, to deploy that solution, and he choose different fee. Now, the example that I was doing from the houses, all of us, we have connected device. 
The fact that in my house, I have a solar system that produce energy, the only things I care is that I can read, how much energy I produce, on my phone, and how much energy I send to get paid back. That's the only thing. The fact that inside there is a compute that is called Dell or other things is not important to me. Same persona. Now, if I can solve the security challenge that the SI, or the user, need to implement this technology, because it goes everywhere. And I can manage this extensively, and I can put the supply chain of Dell on top of that. And I can go every part in the world, no matter if I am in Papua New Guinea, or I have an oil rig in Texas, that's the winning strategy. That's why people, they are very interested to the, including Telco, the B2B business in telco is looking very, very hard at how they recoup the investment in 5G. One of the ways is to reach out with solution. And if I can control and deploy things, more than just SD-WAN or other things, or private mobility, that's the key. >> So, so you have, so you said manufacturing, retail, automotive, oil and gas, you have solutions for each of those, or you're building those, or... >> Right now we have solution for manufacturing, with, for example, PTC. That is the biggest company. It's actually based in Boston. >> Yeah. Yeah, it is. There's a company that the market's just coming right to them. >> We have a, very interesting, another solution with Litmus, that is a startup that, that also does manufacturing aggregation. We have retail with Deep North. So we can do detecting in the store, how many people they pass, how many people they doing, all of that. And all these solutions, when we will have Frontier in the market, will be also in Frontier. We are also expanding to energy, and we going vertical by vertical. But what did we really learn, right? You said, you know, you are an IT company. To me, the Edge is a pre-virtualization era.
It's like when we had, you know, I'm, I've been in the company for 24 years coming from EMC. The reality was before there was virtualization, everybody was starting his silo. Nobody thought about, "Okay, I can run this thing together "with security and everything, "but I need to do it." Because otherwise in a manufacturing, or in a shop, I can end up with thousand of devices, just because someone tell to me, I'm a, I'm a store manager, I don't know better. I take this video surveillance application, I take these things, I take a, you know, smart building solution, suddenly I have five, six, seven different infrastructure to run this thing because someone say so. So we are here to democratize the Edge, to secure the Edge, and to expand. That's the idea. >> So, the Frontier platform is really the horizontal platform. And you'll build specific solutions for verticals. On top of that, you'll, then I, then the beauty is ISV's come in. >> Yes. >> 'Cause it's open, and the developers. >> We have a self certification program already for our solution, as well, for the current solution, but also for Frontier. >> What does that involve? Self-certification. You go through you, you go through some- >> It's basically a, a ISV can come. We have a access to a lab, they can test the thing. If they pass the first screen, then they can become part of our ecosystem very easily. >> Ah. >> So they don't need to spend days or months with us to try to architect the thing. >> So they get the premature of being certified. >> They get the Dell brand associated with it. Maybe there's some go-to-market benefits- >> Yes. >> As well. Cool. What else do we need to know? >> So, one thing I, well one thing I just want to stress, you know, when we say horizontal platform, really, the Edge is really a, a distributed edge computing problem, right? And you need to almost create a mesh of different computing locations. 
So for example, even though Dell has Edge optimized infrastructure, that we're going to deploy and lifecycle manage, customers may also have compute solutions, existing compute solutions in their data center, or at a co-location facility that are compute destinations. Project Frontier will connect to those private cloud stacks. They'll also collect to, connect to multiple public cloud stacks. And then, what they can do, is the solutions that we talked about, they construct that using an open based, you know, protocol, template, that describes that distributed application that produces that outcome. And then through orchestration, we can then orchestrate across all of these locations to produce that outcome. That's what the platform's doing. >> So it's a compute mesh, is what you just described? >> Yeah, it's, it's a, it's a software orchestration mesh. >> Okay. >> Right. And allows customers to take advantage of their existing investments. Also allows them to, to construct solutions based on the ISV of their choice. We're offering solutions like Pierluca had talked about, you know, in manufacturing with Litmus and PTC, but they could put another use case that's together based on another ISV. >> Is there a data mesh analog here? >> The data mesh analog would run on top of that. We don't offer that as part of Frontier today, but we do have teams working inside of Dell that are working on this technology. But again, if there's other data mesh technology or packages, that they want to deploy as a solution, if you will, on top of Frontier, Frontier's extensible in that way as well. >> The open nature of Frontier is there's a, doesn't, doesn't care. It's just a note on the mesh. >> Yeah. >> Right. Now, of course you'd rather, you'd ideally want it to be Dell technology, and you'll make the business case as to why it should be. >> They get additional benefits if it's Dell. 
Pierluca talked a lot about, you know, deploying infrastructure outside the walls of an IT data center. You know, this stuff can be tampered with. Somebody can move it to another room, somebody can open up. In the supply chain with, you know, resellers that are adding additional people, can open these devices up. We're actually deploying using an Edge technology called Secure Device Onboarding. And it solves a number of things for us. We, as a manufacturer can initialize the roots of trust in the Dell hardware, such that we can validate, you know, tamper detection throughout the supply chain, and securely transfer ownership. And that's different. That is not an IT technique. That's an edge technique. And that's just one example. >> That's interesting. I've talked to other people in IT about how they're using that technique. So it's, it's trickling over to that side of the business. >> I'm almost curious about the friction that you, that you encounter because the, you know, you paint a picture of a, of a brave new world, a brave new future. Ideally, in a healthy organization, they have, there's a CTO, or at least maybe a CIO, with a CTO mindset. They're seeking to leverage technology in the service of whatever the mission of the organization is. But they've got responsibilities to keep the lights on, as well as innovate. In that mix, what are you seeing as the inhibitors? What's, what's the push back against Frontier that you're seeing in most cases? Is it, what, what is it? >> Inside of Dell? >> No, not, I'm saying out, I'm saying with- >> Market friction. >> Market, market, market friction. What is the push back? >> I think, you know, as I explained, do yourself is one of the things that probably is the most inhibitor, because some people, they think that they are better already. They invest a lot in this, and they have the content. But those are again, silo solutions. 
So, if you go into some of the huge things that they already established, thousand of store and stuff like that, there is an opportunity there, because also they want to have a refresh cycle. So when we speak about softer, softer, softer, when you are at the Edge, the software needs to run on something that is there. So the combination that we offer about controlling the security of the hardware, plus the operating system, and provide an end-to-end platform, allow them to solve a lot of problems that today they doing by themselves. Now, I met a lot of customers, some of them, one actually here in Spain, I will not make the name, but it's a large automotive. They have the same challenge. They try to build, but the problem is this is just for them. And they want to use something that is a backup and provide with the Dell service, Dell capability of supply chain in all the world, and the diversity of the portfolio we have. These guys right now, they need to go out and find different types of compute, or try to adjust thing, or they need to have 20 people there to just prepare the device. We will take out all of this. So I think the, the majority of the pushback is about people that they already established infrastructure, and they want to use that. But really, there is an opportunity here. Because the, as I said, the IT/OT came together now, it's a reality. Three years ago when we had our initiative, they've pointed out, sarcastically. We, we- >> Just trying to be honest. (laughing) >> I can't let you get away with that. >> And we, we failed because it was too early. And we were too focused on, on the fact to going. Push ourself to the boundary of the IOT. This platform is open. You want to run EdgeX, you run EdgeX, you want OpenVINO, you want Microsoft IOT, you run Microsoft IOT. We not prescribe the top. We are locking down the bottom. >> What you described is the inertia of, of sunk dollars, or sunk euro into an infrastructure, and now they're hanging onto that. 
>> Yeah. >> But, I mean, you know, I, when we say horizontal, we think scale, we think low cost, at volume. That will, that will win every time. >> There is a simplicity at scale, right? There is a, all the thing. >> And the, and the economics just overwhelm that siloed solution. >> And >> That's inevitable. >> You know, if you want to apply security across the entire thing, if you don't have a best practice, and a click that you can do that, or bring down an application that you need, you need to touch each one of these silos. So, they don't know yet, but we going to be there helping them. So there is no pushback. Actually, this particular example I did, this guy said you know, there are a lot of people that come here. Nobody really described the things we went through. So we are on the right track. >> Guys, great conversation. We really appreciate you coming on "theCUBE." >> Thank you. >> Pleasure to have you both. >> Okay. >> Thank you. >> All right. And thank you for watching Dave Vellante for Dave Nicholson. We're live at the Fira. We're winding up day four. Keep it right there. Go to siliconangle.com. John Furrier's got all the news on "theCUBE.net." We'll be right back right after this break. "theCUBE," at MWC 23. (outro music)
Ken Byrnes, Dell Technologies & David Trigg, Dell Technologies | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. >> All right, welcome back to the Fira in Barcelona. This is Dave Vellante with Dave Nicholson. Day 4 of coverage, MWC 23. We've been talking all week about the disaggregation of the telco networks, how telcos need to increase revenue, how they're not going to let the over-the-top providers do it again. They want to charge Netflix, right? And Netflix is punching back. There may be better ways to do revenue acceleration. We're going to talk to that topic with Dave Trigg, who's the Global Vice President of the Telecom Systems Business at Dell Technologies. And Ken Byrnes, who's the global telecom partner sales lead. Guys, good to see you. >> Good to see you. Great to be here. >> Dave, you're welcome. You heard my intro. There's got to be better ways for the telcos to make money. How can they accelerate revenue beyond taxing Netflix? >> Yeah, well, well, first of all, sort of the promise of 5G, a lot of people talk about 5G as the enterprise G, right? So the promise of 5G is to really help drive revenue in enterprise use cases. And so, it's sort of the promise of the next generation of technology, but it's not easy to figure out how we monetize that. And so we think Dell has a pretty significant role to play. It's a CEO conversation for every telco, and how they accelerate. And so it's an area we're investing heavily into, three different areas for telcos. One is the IT space. Dell's done that forever. 90% of the companies leaning in on that. The other place is network; network's more about cost takeout. And the third area where we're investing is working with what we call their line of businesses, but it's really their business units, right? How can we sit down with them and really understand what services do they take to market? Where do they go? So, we're making significant investments.
So one way they can do it is working with Dell, and we're making big investments, 'cause in most geos we have a fairly significant sales force. We've brought in an industry leader to help us put it together. And we're getting very focused on this space and, you know, looking forward to talking more about it. >> So Ken, you know the space inside and out. We just had AT&T on... >> Dave Trigg: Yep. >> And they were saying we have to be hypersensitive, because of our platinum brand, to the use of personal information. >> Ken: Yeah.
We work with thousands of integrators, and we have a really good insight in terms of how to solve those business outcomes, right? And so in my conversations with the telecom companies when you talk about, you know combining the best assets of Dell with their capabilities and we're all talking to the same customers, right? And if we're giving them the same story on these solutions solving business outcomes it's a beautiful thing. It's a time to market. >> What's an example of a, of a, of a situation where you'll partner with telcos that's going to drive revenue for, for both of you and value for the customer? >> Yeah, great question. So we've been laser focused on four key areas, cyber, well, let me start off with connected laptops, cyber, private mobility, and edge. Right? Now, the last two are a little bit squishy, but I'll I'll get to that in a bit, right? Because ultimately I feel like with this 5G market, we could actually make the market. And the way that we've been positioning this is almost, almost on a journey for IOT. When we talk about laptops, right? Dell is the, is the number one company in the world to sell business laptops. Well, if we start selling connected laptops the telcos are starting to say, well, you know what? If all of those laptops get connected to my network, that's a ton of 5G activations, right? We have the used cases on why having a connected workforce makes sense, right? So we're sharing that with the telcos to not simply sell a laptop, but to sell the company on why it makes sense to have that connected workforce. >> Dave Vellante: Why does it make sense? It could change the end customer. >> Ken: Yeah. So, you know, I'm probably not the best to answer that one right? But, but ultimately, you know Dell is selling millions and millions of laptops out there. And, and again, the Verizon's, the AT&T's, the T-mobile's, they're seeing the opportunity that, you know, connecting those laptops, give those the 5G activations right? 
But Dave, you know, the way that we've been positioning this is it's not simply a laptop could be really a Trojan horse into this IOT journey. Because ultimately, if you sell a thousand laptops to an enterprise company and you're connecting a thousand of their employees, you're connecting people, right? And we can give the analytics around that, what they're using it for, you know, making sure that the security, the bios, all of that is up to date. So now that you're connecting their people you could open up the conversation to why don't we we connect your place and, you know, allowing the telecom companies to come in and educate customers and the Dell sales force on why a private 5G mobility network makes sense to connecting places. That's a great opportunity. When you connect the place, the next part of that journey is connecting things in that place. Robotics, sensors, et cetera, right? And, and so really, so we're on the journey of people, places, things. >> So they got the cyber angle angle in there, Dave. That, that's clear benefit. If you, you know, if you got all these bespoke laptops and they're all at different levels you're going to get, you know, you're going to get hacked anyway. >> Ken: That's right. >> You're going to get hacked worse. >> Yeah. I'm curious, as you go to market, do you see significant differences? You don't have to name any names, but I imagine that there are behemoths that could be laggards because essentially they feel like they're the toll booth and all they have to do is collect, keep collecting the tolls. Whereas some of the smaller, more nimble, more agile entities that you might deal with might be more receptive to this message. That seems to be the sort of way the circle of life are. Are you seeing that? Are you seeing the big ones? Are you seeing the, you know, the aircraft carriers realizing that we got to turn into the wind guys and if we don't start turning into the wind now we're going to be in trouble. 
>> So this conference has been absolutely fantastic, allowing us to speak with, you know, probably 30-plus telecom operators around this strategy, right? And all of the big guys, they've invested hundreds of billions of dollars in their 5G networks and they haven't really seen the ROI. So when we're coming to them with a story about how Dell can help monetize their 5G network, I've got to tell you, they're pretty excited. >> Dave Nicholson: So they're receptive? >> Oh my God. They are very receptive. >> So that's the big question, right? I mean, is anybody ever going to make any money off of 5G? And Ken, you were saying that private mobility and edge are a little fuzzy, but I think from a strategy standpoint, I mean, that is a potential gold mine. >> Yeah, but for a lot of telcos, most telcos, it's a pretty significant shift in mentality, right? 'Cause they are used to selling SIM cards, to some degree: how many SIM cards are they selling, and what other use cases? And really, to get into the enterprise, to really get into what they can do to help power an enterprise business more wholly, they've got to understand the use case. They've got to understand the more complete solution. You know, Dell's been doing that for years. And that's where we can bring our sales force, our capabilities, our understanding of the customer. 'Cause even your original question around AT&T and trying to understand the data, that's really just about how you get a better understanding of your customer, right? >> Right. Absolutely. >> And combined, we're better together, 'cause we bring a more complete picture of understanding our customers, and then how can we help them understand what the edge is. 'Cause nobody's ever bought an edge, right? They're buying an edge to get a business outcome. You know, back in the day, nobody ever bought a data lake, right? They're buying an outcome.
They want to use that data lake, or they want to use the edge, to deliver something. They want to use 5G. And 5G has very real capabilities. It's got intrinsic security, which, you know, a lot of WiFi doesn't. It's got guaranteed uptime, you know, for areas where you can't lose connectivity: autonomous vehicles, et cetera. So it's got very real capabilities that help deliver that outcome. But you've got to be able to translate that into the enterprise language to help them solve a problem. And that's where we think we need the help of the telcos. I think we can help the telcos as well, and really go drive that outcome. >> So Dell's bringing its go-to-market expertise and its technology. The telcos obviously have the connectivity piece and what they do. There's no overlap in terms of the... >> Yeah. >> The equipment and the software that you're selling. I mean, they're going to take your equipment and create new networks. Beautiful. And it's interesting when you think about how Dell has transformed. Prior to EMC, Dell was, you know, a PC maker with a subpar enterprise business, right? Kind of a wannabe enterprise business. Sorry, Dell, it's the truth. And then EMC was largely, you know, a company that sold storage boxes, but it owned VMware, and then you brought those two together. Now all of a sudden you had Dell, a powerhouse leader, and Michael Dell; you had VMware, incredibly strategic and important; and you got EMC with amazing go-to-market. All of a sudden this Dell, Dell Technologies, became incredibly attractive to CIOs, C-level executives, the board level. And you've come out of that transition, VMware's now a separate company, right? But now you have these relationships and you've got the chops to be able to go into these edge locations at companies and actually go partner with the telcos. And you've got a very compelling value proposition.
>> Well, it's been interesting at this show; most telcos think of Dell as a server provider, you know? Important, but not overly strategic in their journey. But as we've started to invest in this business, we've started to invest in things like automation. We've brought things together in our Infrastructure Blocks, and then we help them develop revenue. We're not only helping 'em take costs out of their network, we're helping 'em take risk out of deploying that network. We're helping them accelerate the deployment of that network. And then we're helping 'em drive revenue. They're starting to see us in a new light. Not done yet, but, you know, you can start to see, one, how they're looking at Dell, and two, how we can go to market. And, you know, a big part of that is helping 'em drive and generate revenue. >> Yeah. Well, as a former EMC person myself, >> Yeah? >> I will assert that that strategic DNA was injected into Dell by the acquisition of EMC. And I'm sticking... >> I won't say that. Okay, I'll believe you on that. >> I'm sticking with the story. And it makes sense when you think about moving up market; that's the natural thing. What's nearly impossible is to say, we sell semi-trucks, but we want to get into the personal pickup truck market. That doesn't work. Going the other way works. >> Dave Trigg: Yeah. >> Now, back to the conversation that you had with AT&T. I'm not buying this whole, no offense to AT&T, but I'm not buying this whole story that, you know, oh, we're concerned about our branded customer data. That sounds like someone who's a little bit too comfortable with their existing revenue stream. If I'm out there, I want to be out partnering with folks who are truly aggressive about coming up with the next cool thing. You guys are talking about a connected laptop. Someone would say, well, I've got WiFi. No, no, no.
I'm thinking I want a SIM in my laptop, 'cause I don't want to screw around with WiFi. Okay, fine. If I know I'm going to be somewhere with excellent WiFi connectivity, great. But most of the time it's not excellent. >> That's right. >> So the idea that I could maybe hit F2 and have it switch over to my SIM, and know that anywhere I've got coverage, I have high-speed connections. Just the convenience of that. >> Ken: Absolutely. >> I'd pay extra for that as an end-user consumer. >> Absolutely. >> And I'd pay for the service. >> I'll tell you, with AT&T, I think it's not that they're comfortable; they don't know how to monetize that data. Now, of course, AT&T has a media- >> Dave Nicholson: Business necessity is the mother of invention. If they don't see the necessity, then they're not going to think about it. >> It's a mentality shift. Yes, but when you start talking about private mobility and edge, there's no concern about personal information there. You're going in with basically a business transformation. Hey, your business is not digital. It's not automated. Now we're going to automate that and digitize that. It's like the Dell booth with the beer guys. >> Right. >> You saw that, right? >> I mean, that's a simple application. Yeah, a perfect example of how you network and use this technology. >> I mean, how many non-digital businesses are there that need to go digital? >> Dave Nicholson: Like, a hundred percent of them. >> Everyone. >> Dave Nicholson: Pretty much. >> Yeah. And there's this jewel that we have inside of Dell, our global industries group, right, where we're investing really heavily in terms of what the manufacturing industry is looking for, retail, finance, et cetera. So we have a CTO that came in, the CTO of manufacturing, and that gives us a really good opportunity to go to AT&T or to Verizon or any telco out there, right? To say, these are the outcomes.
There's Dell technology already in place. How do we connect it to your network? How do we leverage your assets, your managed and professional services, to provide a richer experience? So, you said before, Dave, there's really no overlap between Dell and our telecom partners. >> You guys are making some serious investments here. I mean, I've been critical over the years of, hey, you can't just take an x86 block, put a name on it that says edge-something, and throw it over the fence, because that's what you were doing. >> Dave Trigg: And we would agree. >> Yeah. Right. But of course, that's all you had at the time. And so you put some... >> We may not have agreed then, but we would agree. >> You brought some people in, you know, like Ken, who really know the business. You brought people into the technical side, and you can really see it happening. It's not going to happen overnight. You know, I mean, if I were an investor in Dell, I'd be like, okay, when are you going to start making money at this business? I'd be like, be patient. You know, it's going to take some time, but look at the TAM. >> Yep. >> You know, you guys do good TAM work. Tennis is a pro at this stuff. >> We've been at this two, three years, and we're just now coming out with some real material products. You've seen our server line really start to get more purpose-built, really start to get in there, as we've started to put out some software that allows for quicker automation, quicker deployments. We have some telcos that are using it to deploy at 10,000 locations. They're literally turning up thousands of locations a week. And so yeah, we're starting to put out some real capability. Got a long way to go. A lot of exciting things on the roadmap. But to your point, the ship doesn't turn overnight, you know.
I'm excited for the day that Tom Sweet starts reporting on it. Here's our telco business. Yeah. The telco business. But that's not going to happen overnight. But you know, Dell's pretty good at things like ROI. And so you guys do a lot of planning, a lot of TAM analysis, a lot of technical analysis, bringing the ecosystem together. That's what this business needs. It feels unstoppable. You know, you're at this show, everybody recognizes the need to open up. Some telcos are moving faster than others. The ones that move faster are going to disrupt. They're probably going to make some mistakes, you know, but they're going to get there first. >> Well, we've seen the disruptors are making some mistakes and are already at the phase where they're reevaluating, you know, their approach. Which is great. You know, you learn and adjust. You run into a wall, you make a turn. And the interesting thing, one of the biggest learnings I've taken out of the show, is talking to a bunch of the telcos that are a little bit more of the laggards. They're like, nope, we don't believe in open. We don't think we can do it. We don't have the skillset. They're maybe in a geo where it's hard to find the skillset. As they've been talking to us, there's almost a glimmer of hope. They're not convinced yet, but they're like, well, wait, maybe we can do this. Maybe open, you know, does give us choice. Maybe it can help us accelerate revenue. So it's been interesting to see a little bit, just a little bit, of that shift. >> We all remember in 2010, 2011, you'd talk to banks and financial services companies about, heck, the Cloud is happening, the Cloud's going to take over the world. We're never going to go into the Cloud. Now they're the biggest; you know, Capital One's launching Cloud businesses, Western Union, I mean, they're all in the Cloud, right?
I mean, the same thing's going to happen here. It might take a different pattern, maybe it takes a little longer, but it's a fait accompli. >> I was in high school then, so I don't remember all that. >> Sorry, Dave. >> Wow, that was a low blow, you know? >> But the one thing that is for sure: there's money to be made convincing people to get off of the backs of the dinosaurs they're riding. >> Dave Vellante: That's right. >> And also, the other thing that's a certainty is that it's not easy. And because it's not easy, there's opportunity there. So I know it all sounds great to talk about the wonderful vision of the future, but I know how hard the road is that you have to go down to get people there, especially if they're comfortable with the revenue stream, if they're comfortable running the plumbing. If you're so comfortable that you can get up on stage and say, I want more money from you to pump your content across my network. I love the Netflix retort, right, Dave? >> Yeah, totally, Dave. But the other thing is, telco's a great business. They've got monopolies that print money. So... >> Dave Nicholson: It's rational. It's rational. I understand. >> There's less of an incentive to move, but what's going to be the incentive is guys like Dish Network coming in saying, we're going to disrupt, we're going to build new apps. >> That's right. >> Yeah. >> Well, and it's, you know, revenue acceleration; the board level, the CEO level know that they have to, you know, do things differently. But to your point, it's just hard, and there's so much gravity there. There's literally hundreds of years of gravity in how they've operated their business. To your point, most of 'em were regulated in most geos around the world at one point, right? They were government-owned or government-regulated entities.
It's a big ship to turn, and it's really hard. We're not claiming we can help them turn the ship overnight, but we think we can help evolve them. We think we can go along on the journey, and we do think we are better together. >> IT, the network, and the line of business. Love the strategy. Guys, thanks so much for coming on theCUBE. >> Thank you so much. >> Thank you. >> All right, for Dave Nicholson, Dave Vellante here. John Furrier is in our Palo Alto studio banging out all the news. Keep it right there. TheCUBE's coverage of MWC 23. We'll be right back.
Peter Fetterolf, ACG Business Analytics & Charles Tsai, Dell Technologies | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (light airy music) >> Hi, everybody, welcome back to the Fira in Barcelona. My name is Dave Vellante. I'm here with my co-host Dave Nicholson. Lisa Martin is in the house. John Furrier is pounding the news from our Palo Alto studio. We are super excited to be talking about cloud at the edge and what that means. Charles Tsai is here. He's the Senior Director of Product Management at Dell Technologies, and Peter Fetterolf is the Chief Technology Officer at ACG Business Analytics, a firm that goes deep into TCO in the telco space, among other things. Gents, welcome to theCUBE. Thanks for coming on. >> Thank you. >> Good to be here. >> Yeah, good to be here. >> So I've been in search all week of the elusive next wave of monetization for the telcos. We know they make great money on connectivity; they're really good at that. But they're all talking about how they can't let this happen again. Meaning, we can't let the over-the-top vendors yet again basically steal our cookies. So we're not going to mess it up this time. We're going to win in the monetization. Charles, where are those monetization opportunities? Obviously at the edge, the telco cloud at the edge. What is that all about, and where's the money? >> Well, Dave, I think from Dell's perspective, what we want to enable for operators is a solution that enables them to roll out services much quicker, right? We know there's a lot of innovation around IoT, MEC and so on and so forth, but if they continue to rely on traditional technology and ways of operating, it's going to take them years to enable new services. So what Dell is doing now is creating the entire vertical stack, from the hardware through CaaS and automation, that enables them not only to push out services very quickly, but to operate them using cloud principles.
So when you say the entire vertical stack, it's the integrated hardware components with, like, for example, Red Hat on top- >> Right. >> Or a Wind River? >> That's correct. >> Okay, and then open APIs, so the developers can create workloads, I presume data companies. We just had a data conversation, 'cause that was part of the original stack- >> That's correct. >> So through an open ecosystem, you can actually sort of recreate that value, correct? >> That's correct. >> Okay. >> So one thing Dell is doing is we are offering an infrastructure block where we are taking over the overhead of certifying every release coming from the Red Hats or the Wind Rivers of the world, right? We want telcos to spend their resources on what is going to generate them revenue. Not the overhead of creating this cloud stack. >> Dave, I remember when we went through this in the enterprise, and you had companies like, you know, IBM with the AS/400 and the mainframe saying it's easier to manage, which it was, but still, you know, it was subsumed by the open systems trend. >> Yeah, yeah. And I think that's an important thing to probe on: what exactly does it mean to be cloud at the edge in the telecom space? Because it's a much-used term. >> Yeah. >> We talk about cloud and edge in sort of generalized IT, but what specifically does it mean here? >> Yeah, so when we talk about telco cloud, first of all, it's kind of different from what you think of as public cloud today. And there's a couple of differences. One, if you look at the big hyperscaler public clouds today, they tend to be centralized in huge data centers. Okay, in telco cloud, there are big data centers, but then there are also regional data centers. There are edge data centers, which are your typical access central offices that have turned into data centers, and now even cell sites are becoming mini data centers. So it's distributed.
I mean, even in a country like, say, Germany, you'd have 30,000 cell sites, each one of them being a data center. So it's a very different model. Now, the other thing I want to go back to is the question of monetization, okay? So how do you do monetization? The only way to do that is to be able to offer new services, like Charles said. How do you offer new services? You have to have an open ecosystem that's going to be very, very flexible. And if we look at where telcos are coming from today, they tend to be very inflexible, 'cause they're all kind of single-vendor solutions. And even as we've moved to virtualization, you know, if you look at packet core for instance, a lot of them are these vertical stacks of, say, a Nokia or Ericsson or Huawei, where you can't really put any other vendors or any other solutions into that. So basically the idea is this kind of horizontal architecture, right? Where now across, not just my central data centers, but across my edge data centers, which would traditionally be my access COs, as well as my cell sites, I have an open environment. And we're kind of starting with, you know, packet core obviously, with UPFs being distributed, but now open RAN or virtual RAN, where I can have CUs and DUs, and I can split CUs; they could be at the cell site, they could be in edge data centers. But then moving forward, we're going to have things like MEC, which are new kinds of services: it could be remote cars, it could be gaming, it could be the Metaverse. And these are going to be multi-vendor environments. So one of the things you need to do is have, you know, this cloud layer, and that's what Charles was talking about with the infrastructure blocks: helping the service providers do that, but they still own their infrastructure.
>> Yeah, so it's still not clear to me how the service providers win that game, but we can maybe come back to that, because I want to dig into TCO a little bit. >> Sure. >> Because I have a lot of friends at Dell. I don't have a lot of friends at HPE. I've always been critical when they take an x86 server, put a name on it that implies edge, and throw it over the fence to the edge; that's not going to work, okay? We're now seeing, you know, we were just at the Dell booth yesterday, you did the booth crawl, which was awesome, purpose-built servers for this environment. >> Charles: That's right. >> So there's two factors here that I want to explore in TCO. One is how those next-gen servers compare to the previous gen, especially in terms of power consumption but other factors too, and then how these sort of open RAN, open-ecosystem stacks compare to proprietary stacks. Peter, can you help us understand those? >> Yeah, sure. And Charles can comment on this as well. But I mean, there's a couple areas. One is just moving to the next generation. So especially on the Intel side, moving from Ice Lake to Sapphire Rapids is a big deal, especially when it comes to the DU. And you know, with the radios, right? There's the radio unit, the RU, and then there's the DU, the distributed unit, and the CU. The DU is really like part of the radio, but it's virtualized. When we move from Ice Lake to Sapphire Rapids, which is third-generation Intel to fourth-generation Intel, we're literally almost doubling the performance in the DU. And that's really important, 'cause it means almost half the number of servers, and we're talking like 30, 40, 50,000 servers in some cases. So, you know, being able to divide that by two, that's really big, right? In terms of not only the cost but all the TCO and the OpEx.
Now, another area that's really important, when I was talking about moving from these vertical silos to the horizontal: the issue with the vertical silos is that you can't place any other workloads into those silos. So it's kind of inefficient, right? Whereas when we have the horizontal architecture, now you can place workloads wherever you want, which basically also means fewer servers, but also more flexibility, more service agility. And then, you know, I think Charles can comment more specifically on the XR8000, some things Dell's doing, 'cause it's really exciting relative to- >> Sure. >> What's happening in there. >> So, you know, when we start looking at putting compute at the edge, right, we recognize the first thing we have to do is understand the environment we are going into. So we spent a lot of time with telcos, going to the cell site, going to the edge data center, looking at operations: how do the engineers today deal with maintenance and replacement at those locations? Then, based on understanding the operational constraints at those sites, we create innovation and take a traditional server and remodel it, to make sure that we minimize the disruption to operations, right? Just because we are helping them go from appliances to open compute, we do not want to disrupt what has been a very efficient operation on the remote sites. So we created a lot of new ideas and developed them on general compute, where we believe we can save a lot of headaches and disruptions and still provide the same level of availability, resiliency, and redundancy on an open compute platform. >> So when we talk about open, we don't mean generic? Fair? See what I mean? >> Open is more from the software workload perspective, right? A Dell server can run any type of workload that a customer intends. >> But it's engineered for this? >> Environment. >> Environment. >> That's correct.
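The server-count arithmetic Peter walks through (double the per-server DU performance, roughly halve the fleet) can be sketched in a few lines. This is a minimal illustration only; the fleet size, wattage, and electricity price below are made-up assumptions, not Dell or Intel figures.

```python
import math

def du_fleet_savings(baseline_servers, perf_gain, watts_per_server, usd_per_kwh):
    """Estimate server count and annual power cost before/after a
    per-server performance gain (perf_gain = 2.0 means each new
    server does the work of two old ones)."""
    new_servers = math.ceil(baseline_servers / perf_gain)

    def annual_power_cost(n):
        # n servers * kW per server * 8760 hours per year * price per kWh
        return n * (watts_per_server / 1000) * 8760 * usd_per_kwh

    return {
        "servers_before": baseline_servers,
        "servers_after": new_servers,
        "servers_saved": baseline_servers - new_servers,
        "annual_power_saved_usd": (annual_power_cost(baseline_servers)
                                   - annual_power_cost(new_servers)),
    }

# Illustrative only: a 40,000-server DU fleet, performance doubled,
# 400 W per server, $0.15/kWh.
result = du_fleet_savings(40_000, 2.0, 400, 0.15)
print(result["servers_saved"])                  # 20000
print(round(result["annual_power_saved_usd"]))  # 10512000
```

Only the consolidation-driven power line item is modeled here; the white papers Peter mentions also fold in labor, CapEx, and automation savings.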
>> And so what are some of the environmental issues that are dealt with in the telecom space that are different from the average data center? >> The most basic one is that in most traditional cell towers, equipment is deployed within cabinets instead of racks. So there are depth constraints, and you just have no access to the rear of the chassis. That means, on a server, everything you need to access needs to be in the front; nothing should be in the back. Then you need to consider how labor unions come into play, right? There's a lot of constraint on who can go to a cell tower and touch power, who can go there and touch compute, right? So we minimize all that disruption through a modular design and make it very efficient. >> So when we took a look at the XR8000, literally right here, sitting on the desk. >> Uh-huh. >> Took it apart, don't panic, just pulled out some sleds and things. >> Right, right. >> One of the interesting demonstrations was how it compared to the size of a shoe. Now apparently you hired someone at Dell specifically because they wear a size 14 shoe, (Charles laughs) so it was even more dramatic. >> That's right. >> But when you see it, and I would suggest that viewers go back and take a look at that segment, specifically on the hardware, you can see exactly what you just referenced: this idea that everything is accessible from the front. Yeah. >> So I want to dig into a couple things. I want to push back a little bit on what you were saying about the horizontal, 'cause there's the benefit: if you've got the horizontal infrastructure, you can run a lot more workloads. But I compare it to the enterprise, 'cause I've made that argument with converged infrastructure versus, say, an Oracle vertical stack, but it turned out that actually Oracle ran Oracle better, okay?
Is there an analog in telco, or is this new open architecture going to be able to not only service the wide range of emerging apps but also be as resilient as the proprietary infrastructure? >> Yeah, and you know, before I answer that, I also want to say that we've been writing a number of white papers. We actually have three white papers we've just done with Dell, looking at infrastructure blocks, looking at vertical versus horizontal, and also looking at moving from the previous generation of hardware to the next generation. So all those details you can find in the white papers, and you can find them either on the Dell website or at the ACG Research website. >> ACGresearch.com? >> ACG Research. Yeah, if you just search ACG Research, you'll find- >> Yeah. >> Lots of white papers on TCO. So, you know, what I want to say relative to vertical versus horizontal: yeah, obviously on the vertical side, some of those things will run well; I mean, it won't have issues. However, that being said, as we move to cloud native, you know, it's very high performance, okay? In terms of the stack, whether it be a Red Hat or a VMware or other cloud layers, that's really become much more mature. Now it's all CNF-based, which is really containerized, very high performance. And so I don't really think performance is an issue. However, my feeling is that if you want to offer new services and generate new revenue, you're not going to do it in vertical stacks, period. You're going to be able to do a packet core; you'll be able to do a RAN over here. But now, what if I want to offer a gaming service? What if I want to do the Metaverse? You have to have an environment that's a multi-vendor environment, that supports an ecosystem.
Even in the RAN, when we look at the RIC, and the xApps and the rApps, these are multi-vendor environments that are going to create a lot of flexibility, and you can't do that if you're restricted to, I can only have one vendor running on this hardware. >> Yeah, we're seeing these vendors work together and create RICs. That's obviously a key point, but what I'm hearing is that there may be trade-offs, but the incremental value is going to overwhelm that. Second question I have, Peter, is TCO. I've been hearing a lot about 30%; you know, where does that 30% come from? Is it from an OpEx standpoint? Is it labor, is it power? Is it, you mentioned, you know, cutting the number of servers in half? If I can unpack the granularity of that TCO, where's the benefit coming from? >> Yeah, the answer is yes. (Peter and Charles laugh) >> Okay, we'll do. >> Yeah, so- >> On the side, in terms of, where is the big bang for the buck? >> So I mean, you really need to look at the white paper to see the details, but definitely power, definitely labor, definitely reducing the number of servers, you know, reducing the CapEx. The other thing is, as you move to this really next-generation horizontal telco cloud, there's the whole automation and orchestration; that is a key component as well. And it's enabled by what Dell is doing. Because the thing is, you're not going to have end-to-end automation if you have all this legacy stuff there, or if you have these vertical stacks where you can't integrate. You can automate that part, and then you have separate automation here and there. You need to have integrated automation and orchestration across the whole thing. >> One other point I would add, right, from the hardware perspective: with the customized hardware, what we allow operators to do is take out the existing appliance and push an edge-optimized server without reworking the entire infrastructure.
There is a significant saving where you don't have to rethink what your power infrastructure is, right? What your security infrastructure is. The server is designed to leverage what is already there. >> How should telcos, Charles, plan for this transformation? Are there specific best practices that you would recommend in terms of the operational model? >> Great question. I think the first thing is to do an inventory of what you have. Understand what your constraints are, and then come to Dell; we would love to consult with you, based on our experience, on the best practices. We know how to minimize additional changes. We know how to help your support engineers understand how to shift from appliance-based operation to cloud-based operation. >> Is that a service you offer? Is that a pre-sales freebie? Or maybe both? >> It's both. >> Yeah. >> It's both. >> Yeah. >> Guys- >> Just really quickly. >> We're going to wrap. >> The, yeah. Dave loves the TCO discussion. I'm always thinking in terms of, well, how do you measure TCO when you're comparing an environment where you can't do something to an environment where you're going to be able to do something new? And I know that that's always the challenge in any kind of emerging market where things are changing, any? >> Well, I mean, we also look at not only TCO, but we look at the overall business case. So there's basically service at GLD and revenue, and then there's faster time to revenues. Well, and actually at ACG, we actually have a platform called the BAE, or Business Analytics Engine, that's a very sophisticated simulation cloud-based platform, where we can actually look at revenue month by month. And we look at what's the impact of accelerating revenue by three months, by four months. >> So you're looking into- >> By six months- >> So you're forward looking. You're just not consistently- >> So we're not just looking at TCO, we're looking at the overall business case benefit. >> Yeah, exactly right.
There's the TCO, which is the hard dollars. >> Right. >> The CFO wants to see that; he or she needs to see that. But you've got to convince that individual that there's a business case around it. >> Peter: Yeah. >> And then you're going to sign up for that number. >> Peter: Yeah. >> And they're going to be held to it. That's the story the world wants. >> At the end of the day, telcos have to offer new services, 'cause look at all the money that's been spent. >> Dave: Yeah, that's right. >> On investment in 5G and everything else. >> 0.5 trillion over the next seven years. All right, guys, we got to go. Sorry to cut you off. >> Okay, thank you very much. >> But we're wall-to-wall here. All right, thanks so much for coming on. >> Dave: Fantastic. >> All right, Dave Vellante, for Dave Nicholson. Lisa Martin's in the house. John Furrier in Palo Alto Studios. Keep it right there. MWC 23, live from the Fira in Barcelona. (light airy music)
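The TCO-versus-business-case framing in this segment lends itself to simple arithmetic: hard-dollar savings (power, labor, fewer servers, CapEx) plus revenue pulled forward by faster time to market. Here is a minimal sketch of that kind of comparison; every figure is hypothetical and illustrative, not taken from the ACG or Dell white papers:

```python
# Hypothetical TCO + business-case sketch. All numbers are made up
# for illustration; the real analysis lives in the ACG/Dell white papers.

def annual_tco(servers, power_per_server, labor, capex, amort_years):
    """Hard-dollar annual TCO: power + labor + amortized CapEx."""
    return servers * power_per_server + labor + capex / amort_years

def business_case(tco_old, tco_new, monthly_revenue, months_accelerated):
    """TCO savings plus revenue pulled forward by faster time-to-market."""
    tco_savings = tco_old - tco_new
    accelerated_revenue = monthly_revenue * months_accelerated
    return tco_savings + accelerated_revenue

# Legacy vertical stack: more servers, more manual labor.
old = annual_tco(servers=100, power_per_server=1_500, labor=400_000,
                 capex=1_000_000, amort_years=5)
# Horizontal telco cloud: half the servers, automated operations.
new = annual_tco(servers=50, power_per_server=1_500, labor=250_000,
                 capex=800_000, amort_years=5)

print(f"TCO savings: ${old - new:,.0f}/yr")  # the hard dollars the CFO sees
print(f"Total benefit: ${business_case(old, new, 500_000, 3):,.0f}")
```

The point Peter makes is visible even in this toy model: the revenue-acceleration term can dwarf the hard-dollar TCO savings, which is why ACG simulates the whole business case rather than TCO alone.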
CUBE Analysis of Day 1 of MWC Barcelona 2023 | MWC Barcelona 2023
>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Hey everyone, welcome back to theCUBE's first day of coverage of MWC 23 from Barcelona, Spain. Lisa Martin here with Dave Vellante and Dave Nicholson. I'm literally in between two Daves. We've had a great first day of coverage of the event. There's been lots of conversations, Dave, on disaggregation, on the change of mobility. I want to be able to get your perspectives from both of you on what you saw on the show floor, what you saw and heard from our guests today. So we'll start with you, Dave V. What were some of the things that were our takeaways from day one for you? >> Well, the big takeaway is the event itself. On day one, you get a feel for what this show is like. Now that we're back face-to-face, kind of pretty much full face-to-face. A lot of excitement here. 2,000-plus exhibitors, I mean, planes, trains, automobiles, VR, AI, servers, software, I mean everything. I mean, everybody is here. So it's a really comprehensive show. It's not just about mobile. That's why they changed the name from Mobile World Congress. I think the other thing is, from the keynotes this morning, I mean, you heard, there's a lot of, you know, action around the telcos and the transformation, but in a lot of ways they're sort of protecting their existing past from the future. And so they have to be careful about how fast they move. But at the same time, if they don't move fast, they're going to get disrupted. We heard some complaints, essentially, you know, veiled complaints that the over-the-top guys aren't paying their fair share and telcos should be able to charge them more. We heard the chairman of Ericsson talk about how we can't let the OTTs do that again. We're going to charge directly for access through APIs to our network, to our data. We heard from Chris Lewis. Yeah.
They've only got, or maybe it was San Ji Choha, how they've only got eight APIs. So, you know, the developers are the ones who are going to actually build out the innovation at the edge. The telcos are going to provide the connectivity, and the infrastructure companies like Dell as well. But it's really, to me, all about the developers. And that's where the action's going to be. And it's going to be interesting to see how the developers respond to, you know, the gun to the head. If you want access, you're going to have to pay for it. Now, maybe there's so much money to be made that they'll go for it, but I feel like there's maybe a different model. And I think some of the emerging telcos are going to say, you know what, here developers, here's a platform, have at it. We're not going to charge you for all the data until you succeed. Then we're going to figure out a monetization model. >> Right. A lot of opportunity for the developer. That skillset is certainly one that's in demand here. And certainly the transformation of the telecom industry is, there's a lot of conundrums that I was hearing going on today, kind of chicken-and-egg scenarios. But Dave, you had a chance to walk around the show floor. We were here interviewing all day. What were some of the things that you saw that really stuck out to you? >> I think I was struck by how much attention was being paid to private 5G networks. You sort of read between the lines and it appears as though people kind of accept that the big incumbent telecom players are going to be slower to move. And this idea of things like open RAN, where you're leveraging open protocols in a stack to deliver more agility and more value. So it sort of goes back to the generalized IT discussion of moving to cloud for agility. It appears as though a lot of players realize that the wild wild west, the real opportunity, is in the private sphere.
So it's really interesting to see how that works, how 5G, implemented into an environment with wifi, actually works. It's really interesting. >> So it's, obviously when you talk to companies like Dell, I haven't hit HPE yet. I'm going to go over there and check out their booth. They got an analyst thing going on, but it's really early days for them. I mean, they started in this business by taking an x86 box, putting a name on it, you know, that sounded like it was edge, throwing it over, you know, the wall. That's sort of how they all started in this business. And now they're, you know, but they knew they had to form partnerships. They had to build purpose-built systems. Now with 16G out, you're seeing that. And so it's still really early days, talking about O-RAN, open RAN, the open RAN alliance. You know, it's just, I mean, not even, the game hasn't even barely started yet, but we heard from Dish today. They're trying to roll out a massive 5G network. Rakuten is really focused on sort of open RAN that's more reliable, you know, or as reliable as the existing networks, but not at nearly as huge a scale as Dish. So it's going to take a decade for this to evolve. >> Which is surprising to the average consumer to hear. Because as far as we know, 5G has been around for a long time. We've been talking about 5G, implementing 5G, you sort of assume it's ubiquitous, but the reality is it is just the beginning. >> Yeah. And you know, there's a fake 5G too, right? I mean, you see it on your phone and you're like, what's the difference here? And it's, you know, just, >> Dave N.: What does it really mean? >> Right. And so I think your point about private is interesting, the conversation, Dave, that we had earlier, I had throughout, hey, I don't think it's a replacement for wifi. And you said, "well, why not?" I guess it comes down to economics. I mean, if you can get the private network priced close enough, then you're right. Why wouldn't it replace wifi?
Now you got WiFi 6 coming in. So that's a, you know, and WiFi's flexible, it's cheap, it's good for homes, good for offices, but these private networks are going to be like kickass, right? They're going to be designed to run whatever, warehouses and robots, and energy drilling facilities. And so, you know, the economics I don't think are there today, but maybe they can be at volume. >> Maybe at some point you sort of think of today's science experiment becoming the enterprise-grade solution in the future. I had a chance to have some conversations with folks around the show. And I think, and what I was surprised by was, I was reminded, frankly, I wasn't surprised. I was reminded that when we start talking about 5G, we're talking about spectrum that is managed by government entities. Of course all broadcast, all spectrum, is managed in one way or another. But in particular, you can't simply put a SIM in every device now, because there are a lot of regulatory hurdles that have to take place. So typically what these things look like today is 5G backhaul to the network, communication from that box to wifi. That's a huge improvement already. So yeah, my question about whether, you know, why not put a SIM in everything? Maybe eventually, but I think, but there are other things that I was not aware of that are standing in the way. >> Your point about spectrum's an interesting one though, because private networks, you're going to be able to leverage that spectrum in different ways, and tune it essentially, use different parts of the spectrum, make it programmable so that you can apply it to that specific use case, right? So it's going to be a lot more flexible, you know, because I presume the spectrum needs of a hospital are going to be different than, you know, an agribusiness, are going to be different than a drilling, you know, unit, offshore drilling unit.
And so the ability to have the flexibility to use the spectrum in different ways and apply it to that use case, I think, is going to be powerful. But I suspect it's going to be expensive initially. I think the other thing we talked about is public policy and regulation, and it was San Ji Choha who brought up the point that telcos have been highly regulated. They don't just do something and ask for permission, you know, they have to work within the confines of that regulated environment. And there's a lot of these greenfield companies and private networks that don't necessarily have to follow those rules. So that's a potential disruptive force. So at the same time, the telcos are spending, what'd we hear, a billion, a trillion and a half over the next seven years? Building out 5G networks. So they got to figure out, you know, how to get a payback on that. They'll get it, I think, on connectivity, 'cause they have a monopoly, but they want more. They're greedy. They see the over, they see the Netflixes of the world and the Googles and the Amazons mopping up services, and they want a piece of that action, but they've never really been good at it. >> Well, I've got a question for both of you. I mean, what do you think the odds are that by the time the Shangri-La of fully deployed 5G happens, we have so much data going through it that effectively it feels exactly the same as 3G? What are the odds? >> That's a good point. Well, the thing that gets me about 5G is there's so much of it, on, if I go to the consumer side, when we're all consumers in our daily lives, so much of it's marketing hype. And, you know, all the messaging about that, when it's really early innings yet, they're talking about 6G. What does actual fully deployed 5G look like? What is that going to enable a hospital to achieve, or an oil refinery out in the middle of the ocean? That's something that interests me, is what's next for that? Are we going to hear that at this event?
>> I mean, walking around, you see a fair amount of discussion of, you know, the internet of things. Edge devices, the increase in connectivity. And again, what I was surprised by was that there's very little talk about a SIM card in every one of those devices at this point. It's like, no, no, no, we got wifi to handle all that, but aggregating it back into a central network that's leveraging 5G. That's really interesting. >> I think you, the odds of your, to go back to your question, I think the odds are even money, that by the time it's all built out, there's going to be so much data and so much new capability, it's going to work similarly at similar speeds as we see in the networks today. You're just going to be able to do so many more things. You know, and your video's going to look better, the graphics are going to look better. But I think over the course of history, this is what's happening. I mean, even when you go back to dial-up, if you were in an AOL chat room in 1996, it was, you know, yeah, it took a while. You're like, (screeches) (Lisa laughs) the modem and everything else, but once you were in there- >> Once you're there, 2400 baud. >> It was basically real time. And so you could talk to your friends and, you know, little chat room, but that's all you could do. You know, if you wanted to watch a video, forget it, right? And then, you know, early days of streaming video, stop, start, stop, start, you know, look at Amazon Prime when it first started, Prime Video was not that great. It's sort of catching up to Netflix. But, so I think your point, that question is really prescient, because more data, more capability, more apps means same speed.
Perfect example. Well, what about the over-the-top that's coming from direct satellite communications with devices? There are times when I don't have a signal on my, happens to be an Apple iPhone, when I get a little SOS satellite logo, because I can communicate under very limited circumstances now directly to the satellite for very limited text messaging purposes. Here at the show, I think it might be a Motorola device. It's a dongle that allows any mobile device to leverage direct satellite communication. Again, for texting, back to the 2,400-baud modem days, you know, 1,200 even, 300 even, go back far enough. What's that going to look like? Is that too far in the future to think that eventually it's all going to be over the top? It's all going to be handset to satellite, and we don't need these RANs anymore. It's all going to be satellite networks. >> Dave V.: I think you're going to see- >> Little too science fiction-y? (laughs) >> No, I, no, I think it's a good question, and I think you're going to see fragments. I think you're going to see fragmentation of private networks. I think you're going to see fragmentation of satellites. I think you're going to see legacy incumbents kind of hanging on, you know, the cable companies. I think that's coming. I think by 2030 it'll, the picture will be much more clear. The question is, and I think it comes down to the innovation on top, which platform is going to be the most developer-friendly? Right, and you know, I've not heard anything from the big carriers that they're going to be developer-friendly. I've heard "we have proprietary data that we're going to charge access for, and developers are going to have to pay for that." But I haven't heard them saying "Developers, developers, developers!" You know, Steve Ballmer running around, like bend over backwards for developers; they're asking the developers to bend over.
And so if a network can, let's say the satellite network is more developer-friendly, you know, you're going to see more innovation there potentially. You know, or if a Dish network says, "You know what? We're going after developers, we're going after innovation. We're not going to gouge them for all this network data. Rather, we're going to make the platform open, or maybe we're going to do an app store-like model where we take a piece of the action after they succeed." You know, take it out of the backend, like a Silicon Valley VC as opposed to an East Coast VC. They're not going to get you in the front end. (Lisa laughs) >> Well, you can see the sort of disruptive forces at play between open RAN and the legacy, call it proprietary stack, right? But what is the, you know, if that's sort of a horizontal disruptive model, what's the vertically disruptive model? Is it private networks coming in? Is it a private 5G network that comes in that says, "We're starting from the ground up, everything is containerized. We're going to go find people at KubeCon who are, who understand how to orchestrate with Kubernetes and use containers and microservices, and we're going to have this little 5G network that's going to deliver capabilities that you can't get from the big boys." Is there a way to monetize that? Is there a way for them to be disrupted, be disruptive, or are these private 5G networks that everybody's talking about just relegated to industrial use cases where you're just squeezing better economics out of wireless communication amongst all your devices in your factory? >> That's an interesting question. I mean, there are a lot of those smart factory industrial use cases. I mean, it's basically industry 4.0 use cases. But yeah, I don't count the cloud guys out. You know, everybody says, "oh, the narrative is, well, the latency of the cloud." Well, not if the cloud is at the edge.
If you take a Local Zone and put storage, compute, and data right next to each other, and the cloud model with the cloud APIs, and then you got an asynchronous, you know, connection back, I think that's a reasonable model. I think the cloud guys figured out developers, right? Pretty well. Certainly Microsoft and Amazon and Google, they know developers. I don't see any reason why they can't bring their model to the edge. So, and that's really disruptive to the legacy telco guys, you know? So they have to be careful. >> One step closer to my dream of eliminating the word "cloud" from the IT lexicon. (Lisa laughs) I contend that it has always been IT, and it will always be IT. And this whole idea of cloud, what is cloud? If AWS, for example, is delivering hardware to the edge where it needs to be, is that cloud? Do we go back to the idea that cloud is an operational model and not a question of physical location? I hope we get to that point. >> Well, what's Apex and GreenLake? Apex is, you know, Dell's as a service. GreenLake is- >> HPE. >> HPE's as a service. That's Outposts. >> Dave N.: Right. >> Yeah. >> That's their Outposts. >> Yeah. >> Well, AWS's position used to be, you know, to use them as a proxy for hyperscale cloud, we'll just, we'll grow in a very straight trajectory forever on the back of net new stuff. Forget about the old stuff. As James T. Kirk said of the Klingons, "let them die." (Lisa laughs) As far as the cloud providers were concerned, just, yeah, let, let that old stuff go away. Well, then they found out, there came a point in time where they realized there's a lot of friction and stickiness associated with that. So they had to deal with the reality of hybridity, if that's the word, the hybrid nature of things. So what are they doing? They're pushing stuff out to the edge, so... >> With the same operating model. >> With the same operating model. >> Similar. I mean, it's limited, right?
>> So you see- >> You can't run a lot of databases on Outposts, you can run RES- >> You see this clash of Titans, where some may have written off traditional IT infrastructure vendors as part of the past, whereas hyperscale cloud providers represent the future. It seems here at this show they're coming head-to-head and competing evenly. >> And this is where I think a company like Dell or HPE or Cisco has some advantages, in that they're not going to compete with the telcos, but the hyperscalers will. >> Lisa: Right. >> Right. You know, and they're already, Google's, how much undersea cable does Google own? A lot. Probably more than anybody. >> Well, we heard from Google and Microsoft this morning in the keynote. It'd be interesting to see if we hear from AWS over the next couple of days. But guys, clearly there is, this is a great wrap of day one. And the crazy thing is, this is only day one. We've got three more days of coverage, more news, more information to break down and unpack on theCUBE. Look forward to doing that with you guys over the next three days. Thank you for sharing what you saw on the show floor, what you heard from our guests today, as we had about 10 interviews. Appreciate your insights and your perspectives, and can't wait for tomorrow. >> Right on. >> All right. For Dave Vellante and Dave Nicholson, I'm Lisa Martin. You're watching theCUBE's day one wrap from MWC 23. We'll see you tomorrow. (relaxing music)
Brad Smith, AMD & Rahul Subramaniam, Aurea CloudFix | AWS re:Invent 2022
(calming music) >> Hello and welcome back to fabulous Las Vegas, Nevada. We're here at AWS re:Invent, day three of our scintillating coverage here on theCUBE. I'm Savannah Peterson, joined by John Furrier. John, day three energy's high. How you feeling? >> I dunno, it's day two, day three, day four. It feels like day four, but again, we're back. >> Who's counting? >> Pre-pandemic levels in terms of 50,000-plus people? Hallways are packed. I got pictures. People don't believe it. It's actually happening. The people are back. So, you know, and then the economy is a big question too, and it's still, people are here, they're still building on the cloud, and cost is a big thing. This next segment's going to be really important. I'm looking forward to this next segment. >> Yeah, me too. Without further ado, let's welcome our guests for this segment. We have Brad from AMD, and we have Rahul from, well, you do a variety of different things. We'll start with CloudFix for this segment, but we could talk about your multiple hats all day long. Welcome to the show, gentlemen. How you doing? Brad, how does it feel? We love seeing your logo above our stage here. >> Oh look, we love this. And talking about re:Invent last year, the energy this year compared to last year is so much bigger. We love it. We're excited to be here. >> Yeah, that's awesome. Rahul, how are you feeling? >> Excellent, I mean, I think this is my eighth or ninth re:Invent at this point, and it's been fabulous. I think the crowd, the engagement, it's awesome. >> You wouldn't know there's a looming recession if you look at the activity, but yet still, the reality is here. We had an analyst on yesterday; we were talking about spend more in the cloud, save more. So that you can still use the cloud, and there's a lot of right-sizing, I call it, you got to turn the lights off before you go to bed. Kind of be more efficient with your infrastructure, as a theme. This re:Invent is a lot more about that now.
Before, it was about the glory days. Oh yeah, keep building. Now, with a little bit of pressure. This is the conversation. >> Exactly, and I think most companies are looking to figure out how to innovate their way out of this uncertainty that's kind of hanging over everyone's head. And the only way to do it is to be able to be more efficient with whatever your existing spend is, take those savings, and then apply them to innovating on new stuff. And that's the way to go about it at this point. >> I think it's such a hot topic for everyone that we're talking about. I mean, total cost optimization, figuring out ways to be more efficient. I know that that's a big part of your mission at CloudFix. So just in case the audience isn't versed, give us the pitch. >> Okay, so a little bit of background on this. So the other hat I wear is CTO of ESW Capital. We have over 150 enterprise software companies within the portfolio. And one of my jobs is also to manage and run about 40 to 45,000 AWS accounts of our own. >> Casual number, just a few, just a couple, pocket change, no big deal. >> And like everyone else here in the audience, yeah, we had a problem with our costs just going out of control, and as we were looking at a lot of the tools to help us kind of get more efficient, one of the biggest issues was that while people give you a lot of recommendations, recommendations are way too far from realized savings. And we were running through the challenge of how do you take recommendations and turn them into real savings, and multiple different hurdles. The short story being, we had to create CloudFix to actually realize those savings. So we took AWS recommendations around cost, filtered them down to the ones that are completely non-disruptive in nature, implemented those as simple automations that everyone could just run, and realized those savings right away. We then took those savings and then started applying them to innovating and doing new interesting things with that money.
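The pipeline Rahul describes, taking cost recommendations, filtering down to the completely non-disruptive ones, and automating those, can be sketched in a few lines. The record format and the set of "safe" fix types below are assumptions for illustration, not CloudFix's actual data model or logic:

```python
# Sketch of "filter recommendations down to the non-disruptive ones."
# The record shape and the NON_DISRUPTIVE set are hypothetical --
# they illustrate the idea, not the actual CloudFix implementation.

NON_DISRUPTIVE = {
    "gp2_to_gp3",              # EBS volume type upgrade, no downtime
    "intel_to_amd_swap",       # x86-compatible instance family change
    "s3_intelligent_tiering",  # storage-class change, transparent to apps
}

def pick_automatable(recommendations):
    """Keep only fixes that are safe to apply automatically,
    sorted by estimated monthly savings, biggest first."""
    safe = [r for r in recommendations if r["type"] in NON_DISRUPTIVE]
    return sorted(safe, key=lambda r: r["monthly_savings"], reverse=True)

recs = [
    {"type": "gp2_to_gp3", "monthly_savings": 1200},
    {"type": "rearchitect_to_serverless", "monthly_savings": 9000},  # disruptive
    {"type": "intel_to_amd_swap", "monthly_savings": 3400},
]

for r in pick_automatable(recs):
    print(f'{r["type"]}: ${r["monthly_savings"]}/mo')
```

Note the design choice this encodes: the biggest recommendation on the list (a serverless rearchitecture) is excluded precisely because it is disruptive; the automation only ever touches changes that need no human judgment.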
>> Is there a best practice in your mind that you see emerging at this time? People are more focused on it. Is there a method, or a kind of best practice, of how to approach cost optimization? >> I think one of the things that most people don't realize is that cost optimization is not a one-and-done thing. It is literally nonstop. Which means that, on one hand, AWS is constantly creating new services. There are over a hundred thousand APIs at this point in time. How do you use them right, how do you use them efficiently? You also have a problem of choice. Developers are constantly discovering new services, discovering new ways to utilize them. And they are behaving in ways that you had not anticipated before. So you have to stay on top of things all the time. And really the only way to kind of stay on top is to have automation that helps you stay on top of all of these things. So yeah, finding efficiencies, standardizing your practices about how you leverage these AWS services, and then automating the governance and hygiene around how you utilize them, is really the key. >> Brad, tell me what this means for AMD and what working with CloudFix and Rahul does for your customers. >> Well, the idea of efficiency and cost optimization is near and dear to our heart. We have the leading- >> It's near and dear to everyone's heart, right now. (group laughs) >> But we are the leaders in x86 price performance and density and power efficiency. So this is something that's actually part of our core culture. We've been doing this a long time, and what's interesting is most companies don't understand how much more efficiency they can get out of their applications aside from just the choices they make in cloud. But that's the one thing, the message we're giving to everybody is choice matters very much when it comes to your cloud solutions, and just deciding what instance types you choose can have a massive impact on your bottom line.
And so we are excited to partner with CloudFix. They've got a great model for this, and they make it much easier for our customers to help identify those areas. And then AMD can come in as well and help provide additional insight into those applications, what else they can squeeze out of them. So it's a great relationship. >> If I hear you correctly, then there's more choice for the customers, faster selection, so no bad choices, meaning no bad performance if they have a workload or an app that needs to run. Is that where you kind of get into it, is that where it is, or more? >> Well, I mean, from the AMD side right now, one of the things they do very quickly is identify where the low-hanging fruit is. That's the thing about x86 compatibility: you can shift instance types instantly, in most cases without any change to your environment at all. And CloudFix has an automated tool to do that. That's one thing where you can immediately have an impact on your cost without having to do any work at all. And customers love that. >> What's the alternative if this doesn't exist? They have to go manually figure it out, or it hits them in the face, or they see the numbers don't work. If you don't have the tool to automate, what's the customer's experience? >> The alternative is that you actually have people look at every single instance of usage of resources and try and figure out how to do this. At cloud scale, that just doesn't make sense. You just can't. >> It's too many different options. >> Correct. The reality is that your resources, your human resources, are literally the most expensive part of your budget. You want to leverage all the amazing people you have to do the amazing work. This is not amazing work. This is mundane. >> So you free up all the people time. >> Correct, you free up their time instead of wasting it on something that's mundane, simple, and should be automated, because that's the only way you scale.
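The instance-type shift Brad mentions can be sketched as a simple lookup: several Intel-based EC2 families have AMD "a" variants in the same sizes (for example, m5 to m5a). The mapping below is a small illustrative subset, not an exhaustive or official list, and a real migration tool would also check region and feature availability before swapping.

```python
# Illustrative subset of Intel-to-AMD EC2 family equivalents.
# Not exhaustive; real automation would validate availability first.
AMD_EQUIVALENT = {"m5": "m5a", "r5": "r5a", "c5": "c5a", "t3": "t3a"}

def amd_swap(instance_type):
    """Return the AMD-based equivalent instance type, or None if the
    family has no drop-in AMD variant in our mapping."""
    family, _, size = instance_type.partition(".")
    if family in AMD_EQUIVALENT and size:
        return f"{AMD_EQUIVALENT[family]}.{size}"
    return None

print(amd_swap("m5.2xlarge"))  # m5a.2xlarge
print(amd_swap("p3.8xlarge"))  # None (no AMD variant in this mapping)
```

Because the swap keeps the same size within an x86-compatible family, it is the kind of change that can often be applied without touching the workload itself, which is what makes it "low-hanging fruit."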
>> I think of you as like a little helper in the background, helping me save money while I'm not thinking about it. It's like a good financial planner making you money, since we're talking about the economy. >> Pretty much. The other analogy that I give to all the technologists is that this is like garbage collection. For most languages, when you are coding, you have these newer languages that do garbage collection for you. You don't do the memory management and stuff that developers back in the day used to do. Why do that when you can have technology do it for you in an automated, optimal way? So it's about freeing up your developers' time from doing this stuff that's mundane and a standard best practice. One of the things that we leverage AMD for is that they've helped us define the process of seamlessly migrating folks over to AMD-based instances without any major disruptions, or trying to minimize every aspect of disruption. So all the best practices are kind of borrowed from them, borrowed from AWS in most other cases. And we basically put them in the automation so that you don't ever have to worry about that stuff. >> Well, you're getting so much data, you have the opportunity to really streamline. I mean, I love this, because you can look across industry, across verticals, at the behavior of what other folks are doing, learn from that, and apply that in the background to all your different customers. >> So how big is the company? How big is the team? >> So we have people in about 130 different countries. We've been completely remote and global, and actually the cloud has been one of the big enablers of that. >> That's awesome, 130 countries. >> And that's the best part of it. I was just telling Brad a short while ago that that's allowed us to hire the best talent from across the world, and they spend their time building new, amazing products and new solutions instead of doing all this other mundane stuff.
So we are big believers in automation, not only for our world. And once our customers started asking us about, or telling us about, the same problem that they were having, that's when we took what we had internally for our own purposes, packaged it up as CloudFix, and launched it last year at re:Invent. >> If the customers aren't thinking about automation, then they're probably going to struggle. I mean, with more data coming in, you see the data story here: more data's coming in, more automation. And this year, Brad, price performance. I've heard the word price performance more this year at re:Invent than any other year I've heard it before. But this year, price performance, not performance, price performance. So you're starting to hear that dialogue of squeeze, understand the use cases, use the right specialized processor instance. You're starting to see that evolve. >> Yeah, and there's so much to it. I mean, with AMD, right out of the box, any instance is 10% less expensive than the equivalent in the market right now on AWS. They do a great job of maximizing those products. We've got our Zen 4 core general processor family just released in November, and it's going to be a beast. Yeah, we're very excited about it, and AWS announced support for it, so we're excited to see what they deliver there too. But price performance is so critical, and again, it's going back to the complexity of these environments. Giving some of these enterprises some help, helping them understand where they can get additional value. It goes well beyond the retail price. There's a lot more money to be shaved off the top just by spending time thinking about those applications. >> Yeah, absolutely. I love that you talked about collaboration; we've been talking about community. I want to acknowledge the AWS super fans here, standing behind the stage. Rahul, I know that you are an AWS super fan. Can you tell us about that community and the program?
>> Yeah, so I have been involved with AWS and building products with AWS since 2007. So it's kind of 15 years back, when literally there were just a handful of APIs for launching EC2 instances and S3. >> Not the hundred thousand that you mentioned earlier, my goodness, the scale. >> So I feel very privileged and honored that I have been part of that journey and have had the opportunity to learn both from successes and failures. And it's just my way of contributing back to that community. So we are part of the FinOps Foundation as well, contributing through that. I run a podcast called AWS Insiders and a livestream called AWS Made Easy. So we are trying to make sure that people out there are able to understand how to leverage AWS in the best possible way. And yeah, we are there to help and hold their hand through it. >> Talk about the community. Take a minute to explain to the audience watching the community around this cost optimization area. It's evolving, you mentioned FinOps. There's a whole large community developing, of practitioners and technologists coming together to look at this. What does this all mean? Talk about this community. >> So cost management within organizations has evolved so drastically that organizations haven't really coped with it. Historically, you've had finance teams basically buy a lot of infrastructure, which is CapEx, and the engineering teams had kind of an upper bound on what they would spend and where they would spend it. Then suddenly, cloud enabled so much innovation all of a sudden, and everyone's realized it. Five years were spent figuring out whether people should be on the cloud or not. That's no longer a question, right? Everyone needs to be in the cloud, and I think that's a no-brainer. The problem there is that suddenly your operating model has moved from CapEx to OpEx, and organizations haven't really figured out how to deal with it.
Finance now no longer has the controls to control and manage and forecast costs. Engineering has never had to deal with it in the past, and suddenly now they have to figure out how to do all this finance stuff. And procurement finds itself in a very awkward position, because they are no longer doing these negotiations like they were doing in the past, where, right up front before you engage, you do these negotiations. Now it's an ongoing thing, and it's constantly changing. Like, every day is different. >> And you got marketplace. >> And you got marketplace. So it's a very complex situation, and I think what we are trying to do with the FinOps Foundation is try and take a lot of the best practices across organizations that have been doing this at least for the last 10, 15 years, take all the learnings and failures, and turn them into hopefully opinionated approaches that organizations can take to navigate through this faster, rather than falter and then decide that, oh, this is not for us. >> Yeah. It's a great model, it's a great model. >> I know it's time, John, go ahead. >> All right so, we've got a little bumper sticker exercise. We used to say, what's the bumper sticker for the show? We used to say that; now we're modernizing. We're saying, if you had to do an Instagram reel right now, a short hot take of what's going on at re:Invent this year with AMD or CloudFix or just in general, what would be the sizzle reel that would be on Instagram or TikTok, go. >> Look, I think when you're at re:Invent right now, number one, the energy is fantastic. '23 is going to be a building year. We've got a lot of difficult times ahead financially, but it's the time; the ones that come out of '23 stronger, more efficient, and cost optimized are going to survive the long run. So now's the time to build. >> Well done. Rahul, let's go for it. >> Yeah, so like Brad said, cost and efficiencies are at the top of everyone's mind.
Start with the stuff that's the low-hanging fruit, the easy wins, and use automation. Apply your resources to do most of the innovation. Take the easiest path to realizing savings and operate as efficiently as you possibly can. I think that's got to be key. >> I think they nailed it. They both nailed it. Wow, well, that was really good. >> I'll put you on our talent list. >> Alright, so we'll repeat them. Are you part of our host team? I love this, I absolutely love this. Rahul, we wish you the best at CloudFix and your 17 other jobs. And I am genuinely impressed. Do you actually sleep? Last question. >> I do, I do. I have an amazing team that really helps me with all of this. So yeah, thanks to them, and thank you for having us here. >> It's been fantastic. >> It's our pleasure. And Brad, I'm delighted we get you both now, and again on our next segment. Thank you for being here with us. >> Thank you very much. >> And thank you all for tuning in to our live coverage here at AWS re:Invent, in fabulous Sin City, with John Furrier. My name's Savannah Peterson. You're watching theCUBE, the leader in high tech coverage. (calm music)
Next Gen Servers Ready to Hit the Market
(upbeat music) >> The market for enterprise servers is large, and it generates well north of $100 billion in annual revenue, and it's growing consistently in the mid to high single digit range. Right now, like many segments, the market for servers is, it's like slingshotting, right? Organizations, they've been replenishing their install bases and upgrading, especially at HQs coming out of the isolation economy. But the macro headwinds, as we've reported, are impacting all segments of the market. CIOs, you know, they're tapping the brakes a little bit, sometimes quite a bit, and being cautious with both capital expenditures and discretionary opex, particularly in the cloud. They're dialing it down and just being a little bit more, you know, cautious. The market for enterprise servers, it's dominated, as you know, by x86-based systems, with an increasingly large contribution coming from alternatives like ARM and NVIDIA. Intel, of course, is the largest supplier, but AMD has been incredibly successful competing with Intel because of its focus, its outsourced manufacturing model, and its innovation and very solid execution. Intel's frequent delays with its next generation Sapphire Rapids CPUs, now slated for January 2023, have created an opportunity for AMD. Specifically, AMD's next generation EPYC CPUs, codenamed Genoa, will offer as many as 96 Zen 4 cores per CPU when they launch later on this month. Observers can expect really three classes of Genoa. There's a standard Zen 4 compute platform for general purpose workloads, there's a compute-density-optimized Zen 4 package, and then a cache-optimized version for data intensive workloads. Indeed, the makers of enterprise servers are responding to customer requirements for more diversity in server platforms to handle different workloads, especially those high performance, data-oriented workloads that are being driven by AI and machine learning and high performance computing, HPC, needs.
OEMs like Dell, they're going to be tapping these innovations and trying to get to the market early. Dell, in particular, will be using these systems as the basis for its next generation Gen 16 servers, which are going to bring new capabilities to the market. Now, of course, Dell is not alone. There are other OEMs, you've got HPE, Lenovo, you've got ODMs, you've got the cloud players; they're all going to be looking to keep pace with the market. Now, the other big trend that we've seen in the market is the way customers are thinking about, or should be thinking about, performance. No longer is the clock speed of the CPU the sole and most indicative performance metric. There's much more emphasis and innovation around all those supporting components in a system, specifically the parts of the system that take advantage, for example, of faster bus speeds. We're talking about things like network interface cards and RAID controllers and memories and other peripheral devices that, in combination with microprocessors, determine how well systems can perform around compute operations, IO, and other critical tasks. Now, these combinatorial factors ultimately determine the overall performance of the system and how well suited a particular server is to handling different workloads. So we're seeing OEMs like Dell building flexibility into their offerings and putting out products in their portfolios that can meet the changing needs of their customers. Welcome to our ongoing series where we investigate the critical question: does hardware matter? My name is Dave Vellante, and with me today to discuss these trends and the things that you should know about for the next generation of server architectures is former CTO from Oracle and EMC and adjunct faculty at Wharton CTO Academy, David Nicholson. Dave, always great to have you on "theCUBE." Thanks for making some time with me. >> Yeah, of course, Dave, great to be here.
>> All right, so you heard my little spiel in the intro, that summary, >> Yeah. >> Was it accurate? What would you add? What do people need to know? >> Yeah, no, no, no, 100% accurate, but you know, I'm a resident nerd, so just, you know, some kind of clarification. If we think of things like microprocessor release cycles, it's always going to be characterized as rolling thunder. I think 2023 in particular is going to be this constant release cycle that we're going to see. You mentioned the, (clears throat) excuse me, general processors with 96 cores, shortly after the 96 core release, we'll see that 128 core release that you referenced in terms of compute density. And then, we can talk about what it means in terms of, you know, nanometers and performance per core and everything else. But yeah, no, that's the main thing I would say, is just people shouldn't look at this like a new car's being released on Saturday. This is going to happen over the next 18 months, really. >> All right, so to that point, you think about Dell's next generation systems, they're going to be featuring these new AMD processes, but to your point, when you think about performance claims, in this industry, it's a moving target. It's that, you call it a rolling thunder. So what does that game of hopscotch, if you will, look like? How do you see it unfolding over the next 12 to 18 months? >> So out of the gate, you know, slated as of right now for a November 10th release, AMD's going to be first to market with, you know, everyone will argue, but first to market with five nanometer technology in production systems, 96 cores. What's important though is, those microprocessors are going to be resident on motherboards from Dell that feature things like PCIe 5.0 technology. So everything surrounding the microprocessor complex is faster. 
Again, going back to this idea of rolling thunder, we expect the Gen 16 PowerEdge servers from Dell to similarly be rolled out in stages with initial releases that will address certain specific kinds of workloads and follow on releases with a variety of systems configured in a variety of ways. >> So I appreciate you painting a picture. Let's kind of stay inside under the hood, if we can, >> Sure. >> And share with us what we should know about these kind of next generation CPUs. How are companies like Dell going to be configuring them? How important are clock speeds and core counts in these new systems? And what about, you mentioned motherboards, what about next gen motherboards? You mentioned PCIe Gen 5, where does that fit in? So take us inside deeper into the system, please. >> Yeah, so if you will, you know, if you will join me for a moment, let's crack open the box and look inside. It's not just microprocessors. Like I said, they're plugged into a bus architecture that interconnect. How quickly that interconnect performs is critical. Now, I'm going to give you a statistic that doesn't require a PhD to understand. When we go from PCIe Gen 4 to Gen 5, which is going to be featured in all of these systems, we double the performance. So just, you can write that down, two, 2X. The performance is doubled, but the numbers are pretty staggering in terms of giga transactions per second, 128 gigabytes per second of aggregate bandwidth on the motherboard. Again, doubling when going from 4th Gen to 5th Gen. But the reality is, most users of these systems are still on PCIe Gen 3 based systems. So for them, just from a bus architecture perspective, you're doing a 4X or 8X leap in performance, and then all of the peripherals that plug into that faster bus are faster, whether it's RAID control cards from RAID controllers or storage controllers or network interface cards. Companies like Broadcom come to mind. 
All of their components are leapfrogging their prior generation to fit into this ecosystem. >> So I wonder if we could stay with PCIe for a moment and, you know, just understand what Gen 5 brings. You said, you know, 2X, I think we're talking bandwidth here. Is there a latency impact? You know, why does this matter? And just, you know, this premise that these other components increasingly matter more: which components of the system are we talking about that can actually take advantage of PCIe Gen 5? >> Pretty much all of them, Dave. So whether it's memory plugged in or network interface cards, so communication to the outside world, which computer servers tend to want to do in 2022, or controllers that are attached to internal and external storage devices, all of them benefit from this enhancement in performance. And, you know, PCI Express performance is measured essentially in bandwidth and throughput, in the sense of the number of transactions per second that you can do. It's mind numbing; I want to say it's 32 giga transfers per second. And then in terms of bandwidth, again, across the lanes that are available, 128 gigabytes per second. I'm going to have to check if it's gigabits or gigabytes. It's a massive number. And again, it's double what PCIe 4 was before. So what does that mean? Just like the advances in microprocessor technology, you can consolidate massive amounts of work into a much smaller footprint. That's critical because everything in that server is consuming power. So when you look at next generation hardware that's driven by things like AMD Genoa, you know, the EPYC processors with the Zen 4 microprocessors, for every dollar that you're spending on power and equipment and everything else, you're getting far greater return on your investment.
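The rough math behind those numbers can be checked directly. PCIe 5.0 doubles the per-lane transfer rate of 4.0 (32 GT/s versus 16 GT/s), and with 128b/130b encoding a x16 link works out to roughly 63 GB/s per direction, about 126 GB/s aggregate bidirectional, which is where the commonly quoted ~128 GB/s figure comes from (128 GB/s exactly if you ignore encoding overhead). A quick sanity-check sketch:

```python
# Back-of-the-envelope PCIe bandwidth: each transfer carries one bit per
# lane, and 128b/130b encoding means 128 of every 130 bits are payload.

def x16_bandwidth_gbs(gt_per_s):
    """Approximate one-direction bandwidth of a x16 link in GB/s."""
    payload_bits_per_s = gt_per_s * 1e9 * (128 / 130)  # per lane
    return payload_bits_per_s / 8 * 16 / 1e9           # bytes, 16 lanes

gen4 = x16_bandwidth_gbs(16)  # PCIe 4.0: 16 GT/s per lane
gen5 = x16_bandwidth_gbs(32)  # PCIe 5.0: 32 GT/s per lane
print(round(gen4, 1), round(gen5, 1))  # 31.5 63.0 (GB/s, one direction)
assert abs(gen5 / gen4 - 2.0) < 1e-9   # exactly the 2X Dave mentions
```

So the "2X" claim holds at the link level, and for shops still on PCIe Gen 3 systems (8 GT/s per lane), the same arithmetic gives the roughly 4X jump mentioned earlier.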
Now, I need to say that we anticipate that these individual servers, if you're out shopping for a server, and that's a very nebulous term because they come in all sorts of shapes and sizes, I think there's going to be a little bit of sticker shock at first until you run the numbers. People will look at an individual server and they'll say, wow, this is expensive and the peripherals, the things that are going into those slots are more expensive, but you're getting more bang for your buck. You're getting much more consolidation, lower power usage and for every dollar, you're getting a greater amount of performance and transactions, which translates up the stack through the application layer and, you know, out to the end user's desire to get work done. >> So I want to come back to that, but let me stay on performance for a minute. You know, we all used to be, when you'd go buy a new PC, you'd be like, what's the clock speed of that? And so, when you think about performance of a system today and how measurements are changing, how should customers think about performance in these next gen systems? And where does that, again, where does that supporting ecosystem play? >> So if you are really into the speeds and feeds and what's under the covers, from an academic perspective, you can go in and you can look at the die size that was used to create the microprocessors, the clock speeds, how many cores there are, but really, the answer is look at the benchmarks that are created through testing, especially from third party organizations that test these things for workloads that you intend to use these servers for. So if you are looking to support something like a high performance environment for artificial intelligence or machine learning, look at the benchmarks as they're recorded, as they're delivered by the entire system. So it's not just about the core. So yeah, it's interesting to look at clock speeds to kind of compare where we are with regards to Moore's Law. 
Have we been able to continue to track along that path? We know there are physical limitations to Moore's Law from an individual microprocessor perspective, but none of that really matters. What really matters is what can this system that I'm buying deliver in terms of application performance and user requirement performance? So that's what I'd say you want to look for. >> So I presume we're going to see these benchmarks at some point, I'm hoping we can, I'm hoping we can have you back on to talk about them. Is that something that we can expect in the future? >> Yeah, 100%, 100%. Dell, and I'm sure other companies, are furiously working away to demonstrate the advantages of this next gen architecture. If I had to guess, I would say that we are going to see quite a few world records set because of the combination of things, like faster network interface cards, faster storage cards, faster memory, more memory, faster cache, more cache, along with the enhanced microprocessors that are going to be delivered. And you mentioned this is, you know, AMD is sort of starting off this season of rolling thunder and in a few months, we'll start getting the initial entries from Intel also, and we'll be able to compare where they fit in with what AMD is offering. I'd expect OEMs like Dell to have, you know, a portfolio of products that highlight the advantages of each processor's set. >> Yeah, I talked in my open Dave about the diversity of workloads. What are some of those emerging workloads and how will companies like Dell address them in your view? >> So a lot of the applications that are going to be supported are what we think of as legacy application environments. A lot of Oracle databases, workloads associated with ERP, all of those things are just going to get better bang for their buck from a compute perspective. 
But what we're going to be hearing a lot about and what the future really holds for us that's exciting is this arena of artificial intelligence and machine learning. These next gen platforms offer performance that allows us to do things in areas like natural language processing that we just couldn't do before cost effectively. So I think the next few years are going to see a lot of advances in AI and ML that will be debated in the larger culture and that will excite a lot of computer scientists. So that's it, AI/ML are going to be the big buzzwords moving forward. >> So Dave, you talked earlier about this, some people might have sticker shocks. So some of the infrastructure pros that are watching this might be, oh, okay, I'm going to have to pitch this, especially in this, you know, tough macro environment. I'm going to have to sell this to my CIO, my CFO. So what does this all mean? You know, if they're going to have to pay more, how is it going to affect TCO? How would you pitch that to your management? >> As long as you stay away from per unit cost, you're fine. And again, we don't have necessarily, or I don't have necessarily insider access to street pricing on next gen servers yet, but what I do know from examining what the component suppliers tell us is that, these systems are going to be significantly more expensive on a per unit basis. But what does that mean? If the server that you're used to buying for five bucks is now 10 bucks, but it's doing five times as much work, it's a great deal, and anyone who looks at it and says, 10 bucks? It used to only be five bucks, well, the ROI and the TCO, that's where all of this really needs to be measured and a huge part of that is going to be power consumption. And along with the performance tests that we expect to see coming out imminently, we should also be expecting to see some of those ROI metrics, especially around power consumption. 
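Dave's five-bucks-versus-ten-bucks argument reduces to cost per unit of work, and it is easy to make concrete. The prices and performance ratios below are the hypothetical figures from the conversation, not real server pricing:

```python
# The sticker-shock argument in numbers: never compare servers on unit
# price alone; divide price by the work delivered.

def cost_per_unit_work(price, relative_performance):
    """Price divided by work delivered (baseline performance = 1.0)."""
    return price / relative_performance

old = cost_per_unit_work(5.0, 1.0)   # the familiar $5 server, baseline work
new = cost_per_unit_work(10.0, 5.0)  # $10 server doing 5x the work
print(old, new)  # 5.0 2.0
```

On a per-unit basis the new server looks twice as expensive, but per unit of work it is 2.5x cheaper, and that ratio improves further once power consumption per unit of work is folded into the TCO model.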
So I don't think it's going to be a problem moving forward, but there will be some sticker shock. I imagine you're going to be able to go in and configure a very, very expensive, fully loaded system on some of these configurators online over the next year. >> So it's consolidation, which means you could do more with less. It's going to be, or more with the same, it's going to be lower power, less cooling, less floor space and lower management overhead, which is kind of now you get into staff, so you're going to have to sort of identify how the staff can be productive in other areas. You're probably not going to fire people hopefully. But yeah, it sounds like it's going to be a really consolidation play. I talked at the open about Intel and AMD and Intel coming out with Sapphire Rapids, you know, of course it's been well documented, it's late but they're now scheduled for January. Pat Gelsinger's talked about this, and of course they're going to try to leapfrog AMD and then AMD is going to respond, you talked about this earlier, so that game is going to continue. How long do you think this cycle will last? >> Forever. (laughs) It's just that, there will be periods of excitement like we're going to experience over at least the next year and then there will be a lull and then there will be a period of excitement. But along the way, we've got lurkers who are trying to disrupt this market completely. You know, specifically you think about ARM where the original design point was, okay, you're powered by a battery, you have to fit in someone's pocket. You can't catch on fire and burn their leg. That's sort of the requirement, as opposed to the, you know, the x86 model, which is okay, you have a data center with a raised floor and you have a nuclear power plant down the street. So don't worry about it. As long as an 18-wheeler can get it to where it needs to be, we'll be okay. 
And so, you would think that over time, ARM is going to creep up as all destructive technologies do, and we've seen that, we've definitely seen that. But I would argue that we haven't seen it happen as quickly as maybe some of us expected. And then you've got NVIDIA kind of off to the side starting out, you know, heavy in the GPU space saying, hey, you know what, you can use the stuff we build for a whole lot of really cool new stuff. So they're running in a different direction, sort of gnawing at the traditional x86 vendors certainly. >> Yes, so I'm glad- >> That's going to be forever. >> I'm glad you brought up ARM and NVIDIA, I think, but you know, maybe it hasn't happened as quickly as many thought, although there's clearly pockets and examples where it is taking shape. But this to me, Dave, talks to the supporting cast. It's not just about the microprocessor unit anymore, specifically, you know, generally, but specifically the x86. It's the supporting, it's the CPU, the NPU, the XPU, if you will, but also all those surrounding components that, to your earlier point, are taking advantage of the faster bus speeds. >> Yeah, no, 100%. You know, look at it this way. A server used to be measured, well, they still are, you know, how many U of rack space does it take up? You had pizza box servers with a physical enclosure. Increasingly, you have the concept of a server in quotes being the aggregation of components that are all plugged together that share maybe a bus architecture. But those things are all connected internally and externally, especially externally, whether it's external storage, certainly networks. You talk about HPC, it's just not one server. It's hundreds or thousands of servers. So you could argue that we are in the era of connectivity and the real critical changes that we're going to see with these next generation server platforms are really centered on the bus architecture, PCIe 5, and the things that get plugged into those slots. 
So if you're looking at 25 gig or 100 gig NICs and what that means from a performance and/or consolidation perspective, or things like RDMA over Converged Ethernet and what that means for connecting systems, those factors will be at least as important as the microprocessor complexes. I imagine IT professionals going out and making the decision, okay, we're going to buy these systems with these microprocessors, with this number of cores and memory. Okay, great. But the real work starts when you start talking about connecting all of them together. What does that look like? So yeah, the definition of what constitutes a server, and what's critically important, I think has definitely changed. >> Dave, let's wrap. What can our audience expect in the future? You talked earlier about how you're going to be able to get benchmarks, so that we can quantify these innovations that we've been talking about. Bring us home. >> Yeah, I'm looking forward to taking a solid look at some of the performance benchmarking that's going to come out, these legitimate attempts to set world records, and those questions about ROI and TCO. I want solid information about what my dollar is getting me. I think it helps the server vendors to be able to express that in a concrete way, because our understanding is these things on a per unit basis are going to be more expensive, and you're going to have to justify them. So really, it's the details that are going to come the day of the launch and in subsequent weeks. So I think we're going to be busy for the next year focusing on a lot of hardware that, yes, does matter. So, you know, hang on, it's going to be a fun ride. >> All right, Dave, we're going to leave it there. Thank you so much, my friend. Appreciate you coming on. >> Thanks, Dave. >> Okay, and don't forget to check out the special website that we've set up for this ongoing series.
Go to doeshardwarematter.com and you'll see commentary from industry leaders, we got analysts on there, technical experts from all over the world. Thanks for watching, and we'll see you next time. (upbeat music)
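As a footnote to the bus-architecture discussion above: the rough arithmetic behind pairing PCIe 5 slots with 25 and 100 gig NICs can be sketched in a few lines. These are line-rate figures only, assuming 128b/130b link coding; real-world throughput is lower once packet and protocol overhead are counted, so treat the numbers as illustrative.

```python
# Back-of-envelope PCIe bandwidth math (line-rate only; real throughput
# is lower once protocol overhead beyond link coding is included).

def pcie_lane_bytes_per_s(gt_per_s: float) -> float:
    """Usable bytes/s per lane for PCIe gen4/gen5 (128b/130b line coding)."""
    return gt_per_s * 1e9 * (128 / 130) / 8

GEN4_GT, GEN5_GT = 16, 32                    # giga-transfers/s per lane
gen4_x16 = pcie_lane_bytes_per_s(GEN4_GT) * 16
gen5_x16 = pcie_lane_bytes_per_s(GEN5_GT) * 16

nic_100g_bytes = 100e9 / 8                   # one 100 Gb Ethernet port, bytes/s

print(f"PCIe 4.0 x16: {gen4_x16 / 1e9:.1f} GB/s per direction")   # ~31.5
print(f"PCIe 5.0 x16: {gen5_x16 / 1e9:.1f} GB/s per direction")   # ~63.0
print(f"100G ports one gen5 x16 slot can feed: {gen5_x16 / nic_100g_bytes:.1f}")
```

The doubling from gen4 to gen5 is the "2X" consolidation point made earlier: one gen5 x16 slot carries roughly what two gen4 slots did, enough headroom for several 100G ports.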
theCUBE Previews Supercomputing 22
(inspirational music) >> The history of high performance computing is unique and storied. You know, it's generally accepted that the first true supercomputer was shipped in the mid 1960s by Control Data Corporation, CDC, designed by an engineering team led by Seymour Cray, the father of supercomputing. He left CDC in the 70's to start his own company, of course, carrying his own name. Now that company, Cray, became the market leader in the 70's and the 80's, and then the decade of the 80's saw attempts to bring new designs, such as massively parallel systems, to reach new heights of performance and efficiency. Supercomputing design was one of the most challenging fields, and a number of really brilliant engineers became kind of quasi-famous in their little industry. In addition to Cray himself, Steve Chen, who worked for Cray, then went out to start his own companies. Danny Hillis, of Thinking Machines. Steve Frank of Kendall Square Research. Steve Wallach tried to build a mini supercomputer at Convex. These new entrants all failed, for the most part because the market at the time just wasn't really large enough and the economics of these systems really weren't that attractive. Now, the late 80's and the 90's saw big Japanese companies like NEC and Fujitsu entering the fray, and governments around the world began to invest heavily in these systems to solve societal problems and make their nations more competitive. And as we entered the 21st century, we saw the coming of petascale computing, with China actually cracking the top 100 list of high performance computing. And today, we're now entering the exascale era, with systems that can complete a billion, billion calculations per second, or 10 to the 18th power. Astounding. And today, the high performance computing market generates north of $30 billion annually and is growing in the high single digits.
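To make the exascale figure above concrete, here is a quick back-of-envelope comparison. The 100-GFLOPS laptop number is an assumption for illustration, not a benchmark.

```python
# One exascale machine does 10**18 calculations every second.
EXA_OPS = 10**18

# Assume a fast laptop sustains ~100 GFLOPS (an illustrative figure).
laptop_flops = 100e9

# How long the laptop needs for ONE SECOND of exascale work:
seconds = EXA_OPS / laptop_flops
days = seconds / 86_400

print(f"{seconds:.0f} seconds, about {days:.0f} days")
```

In other words, what an exascale system finishes in a second would keep a fast laptop busy for roughly four months.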
Supercomputers solve the world's hardest problems in things like simulation, life sciences, weather, energy exploration, aerospace, astronomy, automotive industries, and many other high value examples. And supercomputers are expensive. You know, the highest performing supercomputers used to cost tens of millions of dollars, maybe $30 million. And we've seen that steadily rise to over $200 million. And today we're even seeing systems that cost more than half a billion dollars, even into the low billions when you include all the surrounding data center infrastructure and cooling required. The US, China, Japan, and EU countries, as well as the UK, are all investing heavily to keep their countries competitive, and no price seems to be too high. Now, there are five mega trends going on in HPC today, in addition to this massive rising cost that we just talked about. One, systems are becoming more distributed and less monolithic. The second is the power of these systems is increasing dramatically, both in terms of processor performance and energy consumption. The x86 today dominates processor shipments, and it's probably going to continue to do so. Power has some presence, but ARM is growing very rapidly. Nvidia with GPUs is becoming a major player, with AI coming in; we'll talk about that in a minute. And both the EU and China are developing their own processors. We're seeing massive densities, with hundreds of thousands of cores that are being liquid-cooled with novel phase change technology. The third big trend is AI, which of course is still in the early stages, but it's being combined with ever larger and massive, massive data sets to attack new problems and accelerate research in dozens of industries. Now, the fourth big trend: HPC in the cloud reached critical mass at the end of the last decade, and all of the major hyperscalers are providing an HPC-as-a-service capability.
Now finally, quantum computing is often talked about and predicted to become more stable by the end of the decade and crack new dimensions in computing. The EU has even announced a hybrid QC, with the goal of having a stable system in the second half of this decade, most likely around 2027, 2028. Welcome to theCUBE's preview of SC22, the big supercomputing show which takes place the week of November 13th in Dallas. theCUBE is going to be there. Dave Nicholson will be one of the co-hosts and joins me now to talk about trends in HPC and what to look for at the show. Dave, welcome, good to see you. >> Hey, good to see you too, Dave. >> Oh, you heard my narrative up front Dave. You got a technical background, CTO chops, what did I miss? What are the major trends that you're seeing? >> I don't think you really- You didn't miss anything, I think it's just a question of double-clicking on some of the things that you brought up. You know, if you look back historically, supercomputing was sort of relegated to things like weather prediction and nuclear weapons modeling. And these systems would live in places like Lawrence Livermore Labs or Los Alamos. Today, that requirement for cutting edge, leading edge, highest performing supercompute technology is bleeding into the enterprise, driven by AI and ML, artificial intelligence and machine learning. So when we think about the conversations we're going to have and the coverage we're going to do of the SC22 event, a lot of it is going to be looking under the covers and seeing what kind of architectural things contribute to these capabilities moving forward, and asking a whole bunch of questions. >> Yeah, so there's this sort of theory that the world is moving toward this connectivity beyond compute-centricity to connectivity-centric. We've talked about that, you and I, in the past. Is that a factor in the HPC world? How is it impacting, you know, supercomputing design? 
>> Well, so if you're designing an island that is, you know, the tip of the spear, that doesn't have to offer any level of interoperability or compatibility with anything else in the compute world, then connectivity is important simply from a speeds and feeds perspective. You know, lowest latency connectivity between nodes and things like that. But as we sort of democratize supercomputing, to a degree, as it moves from solely the purview of academia into truly ubiquitous architecture leveraged by enterprises, you start asking the question, "Hey, wouldn't it be kind of cool if we could have this hooked up into our ethernet networks?" And so, that's a whole interesting subject to explore, because with things like RDMA over converged ethernet, you now have the ability to have these supercomputing capabilities directly accessible by enterprise computing. So that level of detail, opening up the box and looking at the NICs, or the storage cards that are in the box, is actually critically important. And as an old-school hardware knuckle-dragger myself, I am super excited to see what the cutting edge holds right now.
And by the way, just like, little historical context, I can't help it. I just went through the upgrade from iPhone 12 to iPhone 14. This has got one terabyte of storage in it. One terabyte of storage. In 1997, I helped build a one terabyte NAS system that a government defense contractor purchased for almost $2 million. $2 million! This was, I don't even know, it was $9.99 a month extra on my cell phone bill. We had a team of seven people who were going to manage that one terabyte of storage. So, similarly, when we talk about just where are we from a supercompute resource perspective, if you consider it historically, it's absolutely insane. I'm going to be asking people about, of course, what's going on today, but also the near future. You know, what can we expect? What is the sort of singularity that needs to occur where natural language processing across all of the world's languages exists in a perfect way? You know, do we have the compute power now? What's the interface between software and hardware? But really, this is going to be an opportunity that is a little bit unique in terms of the things that we typically cover, because this is a lot about cracking open the box, the server box, and looking at what's inside and carefully considering all of the components. >> You know, Dave, I'm looking at the exhibitor floor. It's like, everybody is here. NASA, Microsoft, IBM, Dell, Intel, HPE, AWS, all the hyperscale guys, Weka IO, Pure Storage, companies I've never heard of. It's just, hundreds and hundreds of exhibitors, Nvidia, Oracle, Penguin Solutions, I mean, just on and on and on. Google, of course, has a presence there, theCUBE has a major presence. We got a 20 x 20 booth. So, it's really, as I say, to your point, HPC is going mainstream. 
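The storage-economics aside in Dave's one-terabyte anecdote above works out roughly like this. The comparison is loose, a one-time 1997 purchase versus a monthly phone add-on, but the order-of-magnitude gap is the point.

```python
# 1997: ~$2M for a one-terabyte NAS, per the anecdote above.
nas_cost_usd, nas_gb = 2_000_000, 1_000
per_gb_1997 = nas_cost_usd / nas_gb            # $2,000 per gigabyte

# Today: $9.99/month for a terabyte of phone cloud storage.
addon_usd, addon_gb = 9.99, 1_000
per_gb_month = addon_usd / addon_gb            # about a penny per GB per month

print(f"${per_gb_1997:,.0f}/GB then vs ${per_gb_month:.5f}/GB/month now")
```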
You know, I think a lot of times, we think of HPC supercomputing as this just sort of, off in the eclectic, far off corner, but it really, when you think about big data, when you think about AI, a lot of the advancements that occur in HPC will trickle through and go mainstream in commercial environments. And I suspect that's why there are so many companies here that are really relevant to the commercial market as well. >> Yeah, this is like the Formula 1 of computing. So if you're a Motorsports nerd, you know that F1 is the pinnacle of the sport. SC22, this is where everybody wants to be. Another little historical reference that comes to mind, there was a time in, I think, the early 2000's when Unisys partnered with Intel and Microsoft to come up with, I think it was the ES7000, which was supposed to be the mainframe, the sort of Intel mainframe. It was an early attempt to use... And I don't say this in a derogatory way, commodity resources to create something really, really powerful. Here we are 20 years later, and we are absolutely smack in the middle of that. You mentioned the focus on x86 architecture, but all of the other components that the silicon manufacturers bring to bear, companies like Broadcom, Nvidia, et al, they're all contributing components to this mix in addition to, of course, the microprocessor folks like AMD and Intel and others. So yeah, this is big-time nerd fest. Lots of academics will still be there. The supercomputing.org, this loose affiliation that's been running these SC events for years. They have a major focus, major hooks into academia. They're bringing in legit computer scientists to this event. This is all cutting edge stuff. >> Yeah. So like you said, it's going to be kind of, a lot of techies there, very technical computing, of course, audience. At the same time, we expect that there's going to be a fair amount, as they say, of crossover. And so, I'm excited to see what the coverage looks like. 
Yourself, John Furrier, Savannah, I think even Paul Gillin is going to attend the show, because I believe we're going to be there three days. So, you know, we're doing a lot of editorial. Dell is an anchor sponsor, so we really appreciate them providing funding so we can have this community event and bring people on. So, if you are interested- >> Dave, Dave, I just have- Just something on that point. I think that's indicative of where this world is moving when you have Dell so directly involved in something like this, it's an indication that this is moving out of just the realm of academia and moving in the direction of enterprise. Because as we know, they tend to ruthlessly drive down the cost of things. And so I think that's an interesting indication right there. >> Yeah, as do the cloud guys. So again, this is mainstream. So if you're interested, if you got something interesting to talk about, if you have market research, you're an analyst, you're an influencer in this community, you've got technical chops, maybe you've got an interesting startup, you can contact David, david.nicholson@siliconangle.com. John Furrier is john@siliconangle.com. david.vellante@siliconangle.com. I'd be happy to listen to your pitch and see if we can fit you onto the program. So, really excited. It's the week of November 13th. I think November 13th is a Sunday, so I believe David will be broadcasting Tuesday, Wednesday, Thursday. Really excited. Give you the last word here, Dave. >> No, I just, I'm not embarrassed to admit that I'm really, really excited about this. It's cutting edge stuff and I'm really going to be exploring this question of where does it fit in the world of AI and ML? I think that's really going to be the center of what I'm really seeking to understand when I'm there. >> All right, Dave Nicholson. Thanks for your time. theCUBE at SC22. Don't miss it. Go to thecube.net, go to siliconangle.com for all the news. This is Dave Vellante for theCUBE and for Dave Nicholson. 
Thanks for watching. And we'll see you in Dallas. (inquisitive music)
Bich Le, Platform9 | Cloud Native at Scale
(upbeat music) >> Hello, and welcome to this special presentation of "Cloud Native at Scale," a theCUBE and Platform9 special presentation, digging into next-generation supercloud, infrastructure as code, and the future of application development. We're here with Bich Le, who's the Chief Architect and co-founder of Platform9. Bich, great to see you, CUBE alumni. We met at an OpenStack event about eight years ago, when OpenStack was going strong. Great to see you, and congratulations on the success of Platform9. >> Thank you very much. >> Yeah, you guys have been at this for a while, and this is really the year we're seeing the crossover of Kubernetes, because of what's happened with containers. Everyone now has realized it, and you've seen what Docker's doing with the new Docker, the open source Docker. Now, the success of containerization, and the Kubernetes layer that we've been working on for years, is bearing fruit. This is huge. >> Exactly, yes. >> And so, as infrastructure as code comes in, we talked to Bhaskar about supercloud; he told me about, you know, the new Arlon. You guys just launched it, and infrastructure as code is going to another level. It's always been DevOps, infrastructure as code; that's been the ethos from day one: developers just code. I think you saw the rise of serverless, and you see now multi-cloud on the horizon. Connect the dots for us. What is the state of infrastructure as code today? >> So, I'm glad you mentioned it. Everybody, or most people, know about infrastructure as code, but with Kubernetes, I think that project has evolved the concept even further. These days it's infrastructure as configuration, which is an evolution of infrastructure as code. So instead of telling the system, here's how I want my infrastructure, by telling it, you know, do step A, B, C, and D, with Kubernetes you can describe your desired state declaratively, using things called manifests and resources, and
then the system kind of magically figures it out and tries to converge the state towards the one that you specify. So I think it's an even better version of infrastructure as code. >> Yeah, and that really means the developer is just accessing resources that they declare: okay, give me some compute, stand me up some, turn the lights on, turn them off, turn them on. That's kind of where we see this going, and I like the configuration piece. Some people say composability. I mean, now, with open source so popular, you don't have to write a lot of code; this code is being developed. And so it's integration, it's configuration. These are areas where we're starting to see computer science principles around automation and machine learning assisting open source, because you've got a lot of code. That's why you're hearing about software supply chain issues. So infrastructure as code has to factor in these new dynamics. Can you share your opinion on these new dynamics? As open source grows, the glue layers, the configurations, the integration, what are the core issues? >> I think one of the major core issues is that with all that power comes complexity, right? So, you know, despite its expressive power, systems like Kubernetes and declarative APIs let you express a lot of complicated and complex stacks, but you're dealing with hundreds, if not thousands, of these YAML files or resources. And so I think, you know, the emergence of systems and layers to help you manage that complexity is becoming a key challenge and opportunity in this space. >> I wrote a LinkedIn post today with comments about, you know, hey, enterprise is the new breed. The trend of SaaS companies moving consumer-like thinking into the enterprise has been happening for a long time, but now more than ever you're seeing it. The old way used to be to solve complexity with more complexity and then lock the customer in. Now, with open source, it's speed, simplification, and integration. These are the new power dynamics for developers. So as
companies are starting to now deploy and look at Kubernetes, what are the things that need to be in place? Because you have some, I won't say technical debt, but maybe some shortcuts, some scripts here and there, that make it look like infrastructure as code. People have done some things to simulate, or make, infrastructure as code happen. >> Yes. >> But to do it at scale. >> Yes. >> Is harder. What's your take on this? What's your view? >> It's hard because there's a proliferation of methods, tools, technologies. So, for example, today it's very common for DevOps and platform engineering teams to have to deploy a large number of Kubernetes clusters, but then apply the applications and configurations on top of those clusters. And they're using a wide range of tools to do this, right? For example, maybe Ansible, or Terraform, or bash scripts to bring up the infrastructure and then the clusters, and then they may use a different set of tools, such as Argo CD or other tools, to apply configurations and applications on top of the clusters. So you have this sprawl of tools. You also have this sprawl of configurations and files, because the more objects you're dealing with, the more resources you have to manage. And there's a risk of what people call drift, where, you know, you think you have things under control, but some people from various teams will make changes here and there, and then before the end of the day, systems break and you have no way of tracking them. So I think there's a real need to kind of unify, simplify, and try to solve these problems using a smaller, more unified set of tools and methodologies, and that's something that we try to do with this new project, Arlon. >> Yeah, so we're going to get to Arlon in a second. You guys announced it at ArgoCon, which was put on here in Silicon Valley at the community meeting by Intuit; they had their own little day over at their headquarters. But before we get there, Bhaskar, your CEO, came on and he talked about
supercloud at our inaugural event. What's your definition of supercloud? If you had to explain it to someone at a cocktail party, or someone in the industry who's technical, how would you look at the supercloud trend that's emerging? It's become a thing. What would be your contribution to that definition or the narrative? >> Well, it's funny, because I actually heard the term for the first time today, speaking to you earlier. But I think based on what you said, I already get some of the gist and the main concepts. It seems like supercloud, the way I interpret it, is that clouds and infrastructure, programmable infrastructure, all of those things are becoming commodity, in a way, and everyone's got their own flavor. But there's a real opportunity for people to solve real business problems by perhaps abstracting away all of those various implementations, and then building better abstractions that are perhaps business- or application-specific, to help companies and businesses solve real business problems. >> Yeah, it's a great definition. I remember, not to date myself, but back in the old days, you know, IBM had its proprietary network operating system, and so did DEC for the minicomputer vintage: SNA and DECnet, respectively. But TCP/IP came out of the OSI, the Open Systems Interconnection, era, and remember, Ethernet beat Token Ring out. Not to get all nerdy for all the young kids out there: just look up Token Ring; you'll say you've never heard of it. It was IBM's connection for networking at layer two. Is Amazon the Ethernet, right? Could TCP/IP be the Kubernetes-and-containers abstraction that made the industry completely change at that point in history? So at every major inflection point where there's been serious industry change and wealth creation and business value, there's been an abstraction. >> Yes. >> Somewhere. >> Yes. >> What's your reaction to that? >> I think this is, I think, a saying
that's been heard many times in this industry, and I forget who originated it, but I think the saying goes, "There's no problem that can't be solved with another layer of indirection." And we've seen this over and over and over again, where Amazon and its peers have inserted this layer that has simplified, you know, computing and infrastructure management, and I believe this trend is going to continue. The next set of problems is going to be solved with these insertions of additional abstraction layers. >> I think that's really, yeah, it's going to continue. It's interesting. I wrote another post today on LinkedIn called "The Silicon Wars." AMD stock is down; Arm has been on the rise. We've been reporting for many years now that Arm is going to be huge, and it has become true. If you look at the success of the infrastructure-as-a-service layer across the clouds, Azure, AWS, Amazon's clearly way ahead of everybody. The stuff that they're doing with the silicon and the physics and the atoms, this is where the innovation is; they're going so deep and so strong at it. The more of that they get done, the more performance they have. So if you're an app developer, wouldn't you want the best performance, and wouldn't you want the best abstraction layer that gives you the most ability to do infrastructure as code, or infrastructure as configuration, for provisioning, for managing services? And you're seeing that today with service meshes; a lot of action going on in the service mesh area in this community of KubeCon, which we'll be covering. So that brings up the whole what's next. You guys just announced Arlon at ArgoCon, which came out of Intuit. We've had Mariana Tessel at our Supercloud event; she's their CTO. You know, they're all in on the cloud, so they contributed to that project. Where did Arlon come from? What was the origination? What's the purpose? Why Arlon? Why this announcement? >> Yeah, so the inception of the project: this was the result of us realizing that
problem that we spoke about earlier, which is complexity, right? With all of these clouds, this infrastructure, all the variations around, you know, compute, storage, networks, and the proliferation of tools we talked about, the Ansibles and Terraforms, and Kubernetes itself, which you can think of as another tool, we saw a need to solve that complexity problem, especially for people and users who use Kubernetes at scale. So when you have, you know, hundreds of clusters, thousands of applications, thousands of users spread out over many, many locations, there needs to be a system that helps simplify that management. So that means fewer tools, more expressive ways of describing the state that you want, and more consistency. And that's why, you know, we built Arlon. And we built it recognizing that many of these problems, or sub-problems, have already been solved, so Arlon doesn't try to reinvent the wheel. It instead rests on the shoulders of several giants. So, for example, Kubernetes is one building block; GitOps and Argo CD is another one, which provides a very structured way of applying configuration; and then we have projects like Cluster API and Crossplane, which provide APIs for describing infrastructure. So Arlon takes all of those building blocks and builds a thin layer which gives users a very expressive way of defining configuration and desired state. So that's kind of the inception. >> And what's the benefit of that? What does that give the developer, the user, in this case? >> The developers, the platform engineering team members, the DevOps engineers, they get a way to provision not just infrastructure and clusters, but also applications and configurations. They get a system for provisioning, configuring, deploying, and doing lifecycle management in a much simpler way, especially, as I said, if you're dealing with a large number of applications. >> So it's like an operating fabric, if you will. >> Yes, for them. >> Okay, so
>> Okay, so let's get into what that means for up above and below the abstraction, or thin layer. Below is the infrastructure, and we talked a lot about what's going on below that. Above are workloads, at the end of the day. I talk to CXOs and IT folks that are now DevOps engineers. They care about the workloads, and they want the infrastructure as code to work. They don't want to spend their time getting in the weeds figuring out what happened when someone made a push and something happened. They need observability, and they need to know that it's working. >> That's right. >> And is my workload running effectively? So how do you guys look at the workload side? Because now you have multiple workloads on these fabrics. >> Right, so workloads. Kubernetes has defined kind of a standard way to describe workloads, and you can tell Kubernetes, "I want to run this container this particular way," or you can use other projects that are in the Kubernetes cloud-native ecosystem, like Knative, where you can express your application at a higher level. But what's also happening is, in addition to the workloads, DevOps and platform engineering teams very often need to deploy the applications with the clusters themselves. Clusters are becoming a commodity. It's becoming this host for the application, and it kind of comes bundled with it. In many cases it's like an appliance, right? So DevOps teams have to provision clusters at a really incredible rate, and they need to tear them down. Clusters are becoming more like an EC2 instance. >> Spin up a cluster. We've heard people use words like that. >> That's right. And before Arlon, you kind of had to do all of that using a different set of tools, as I explained. With Arlon, you can express everything together. You can say, "I want a cluster with a health monitoring stack, and a logging stack, and this ingress controller, and I want these applications and these security policies." You can describe all of that using
something we call a profile. And then you can stamp out your applications and your clusters and manage them in an essentially standard way. >> That creates a mechanism. It's standardized, declarative kind of configurations, and it's like a playbook. You just deploy it. Now, what's the difference between that and, say, a script? Like, I have scripts, I can just automate scripts. >> Yes, this is where that declarative API and infrastructure-as-configuration comes in, right? Because scripts, yes, you can automate scripts, but the order in which they run matters, right? They can break, things can break in the middle, and sometimes you need to debug them. Whereas the declarative way is much more expressive and powerful. You just tell the system what you want, and then the system figures it out. And there are these things called controllers, which will, in the background, reconcile all the state to converge towards your desired state. It's a much more powerful, expressive, and reliable way of getting things done. >> So infrastructure as configuration is built kind of on, it's a superset of infrastructure as code, because it's a different evolution. You need infrastructure as code, but then you can configure the code by just saying, "Do it." You're basically declaring and saying, "Go do that." >> That's right. >> Okay, so, all right, so cloud native at scale. Take me through your vision of what that means. Someone says, "Hey, what does cloud native at scale mean? What does success look like? How does it roll out over the next couple of years?" I mean, people are now starting to figure out, okay, it's not as easy as it sounds. Kubernetes has value. We're going to hear a lot of this at KubeCon this year. What does cloud native at scale mean? >> Yeah, there are different interpretations, but if you ask me, when people think of scale, they think of a large number of deployments, right? Geographies, many of them, supporting thousands or tens of thousands or millions of users. There's that aspect of scale. There's also an equally important aspect of scale.
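The controller behavior described above, reconciling observed state toward desired state in the background, can be shown with a toy loop. This is purely illustrative: a real controller (in Kubernetes, for instance) watches an API server, handles failures and retries, and acts on actual infrastructure rather than in-memory dicts.

```python
# Toy reconciliation: diff the desired state against the observed state
# and converge, regardless of ordering or partial failures. This is the
# core idea behind declarative controllers; real ones talk to an API
# server and run this loop continuously.

def reconcile(desired: dict, observed: dict) -> dict:
    """One reconciliation pass: plan creates/updates/deletes, then apply."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name))
        elif observed[name] != spec:
            actions.append(("update", name))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name))
    # "Apply" the plan; in a real system each action calls out to
    # infrastructure and may fail, which is why the loop repeats.
    return {"actions": actions, "observed": dict(desired)}

desired = {"logging": {"replicas": 2}, "ingress": {"replicas": 1}}
observed = {"logging": {"replicas": 1}, "legacy-app": {"replicas": 3}}
result = reconcile(desired, observed)
```

Unlike a script, nothing here depends on the order in which drift happened: the same pass corrects a changed stack, installs a missing one, and removes one that should no longer exist.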
That's also something we try to address with Arlon, and it is just complexity for the people operating this or configuring this, right? In order to describe that desired state, and in order to perform things like upgrades or updates on a very large scale, you want the humans behind that to be able to express and direct the system to do that in relatively simple terms. And so we want the tools and the abstractions and the mechanisms available to the user to be as powerful but as simple as possible. I think there are going to be, and there have been, a number of CNCF and cloud-native projects that are trying to attack that complexity problem as well, and Arlon kind of falls in that category. >> Okay, so I'll put you on the spot. I've got KubeCon coming up, and obviously we'll be shipping this segment series out before it. What do you expect to see at KubeCon? What's the big story this year? What's the most important thing happening? Is it in the open source community, and also within a lot of the people jockeying for leadership? I know there's a lot of projects, and still there's some white space on the overall systems map about the different areas, Git, runtime, and observability, all these different areas. Where's the action? Where's the smoke? Where's the fire? Where's the tension? >> Yeah, so I think one thing that has been happening over the past couple of KubeCons, and that I expect to continue, is, the word on the street is, Kubernetes is getting boring, right? Which is good. >> Or, I mean, simple. >> Well, maybe, yeah. >> Invisible, no drama, right? >> So the rate of change of the Kubernetes features and all that has slowed, but in a positive way. But there's still a general sentiment and feeling that there's just too much stuff. If you look at a stack necessary for hosting applications based on Kubernetes, there are just still too many moving parts, too many components,
too much complexity. I keep going back to the complexity problem. So I expect KubeCon, and all the vendors and the players and the startups and the people there, to continue to focus on that complexity problem and introduce further simplifications to the stack. >> Yeah. Bich, you've had a storied career, VMware over decades with them, obviously 12 years, or 14 years, something like that, a big number, and co-founder here at Platform9, which has been around for a while at this game. We'll talk about OpenStack. We interviewed you at one of their events. OpenStack was the beginning of this new revolution. I remember the early days. It wasn't supposed to be an alternative to Amazon, but it was a way to do more cloud, cloud native. I think we had a Colorado team at that time. We joke, you know, about the dream. It's happening now. Now at Platform9, you guys have been doing this for a while. What are you most excited about as the Chief Architect? What did you double down on? What did you pivot from? Did you do any pivots? Did you extend out certain areas? Because you guys are in a good position right now, a lot of DNA in cloud native. What are you most excited about, and what does Platform9 bring to the table for customers and for people in the industry watching this? >> Yeah, so I think our mission really hasn't changed over the years, right? It's always been about taking complex open source software, because open source software is powerful. It solves new problems every year, and you have new things coming out all the time, right? OpenStack was an example, and then Kubernetes took the world by storm. But there's always that complexity of just configuring it, deploying it, running it, operating it. And our mission has always been that we will take all that complexity and just make it easy for users to consume, regardless of the technology. So for the successor to Kubernetes, you know, I don't
have a crystal ball, but you have some indications that people are coming up with new and simpler ways of running applications. There are many projects around. Who knows what's coming next year, or the year after that? But Platform9 will be there, and we will take the innovations from the community, we will contribute our own innovations, and make all of those things very consumable to customers. >> Simpler, faster, cheaper. Always a good business model, technically, to make that happen. Yeah, I think reining in the chaos is key. Now we have visibility into the scale. Final question before we depart this segment. What is that scale? How many clusters do you see that would be a high watermark for an at-scale conversation around an enterprise? Is it workloads we're looking at, or clusters? How would you describe that, when people try to squint through and evaluate what's at scale, what's the at-scale kind of threshold? >> Yeah, the number of clusters doesn't tell the whole story, because clusters can be small in terms of the number of nodes, or they can be large. But roughly speaking, when we say large-scale cluster deployments, we're talking about maybe hundreds to thousands. >> Yeah. And final, final question. What's the role of the hyperscalers? You've got AWS continuing to do well, but they've got their core IaaS, they've got their PaaS. They're not too much putting SaaS out there. They have some SaaS apps, but mostly it's the ecosystem. They have marketplaces doing over two billion dollars, billions of transactions a year, and it's just sitting there. They're now innovating on it, but that's going to change ecosystems. What's the role the cloud plays in cloud native at scale? The hyperscalers, AWS, Azure, Google. >> You mean from a business standpoint? They have their own interests that they will keep catering to. They will continue to find
ways to lock their users into their ecosystem of services and APIs. So I don't think that's going to change, right? They're just going to keep going. >> Well, they've got great performance. I mean, from a hardware standpoint, that's going to be key, right? >> Yes, I think the move away from x86 as the dominant platform to run workloads is happening, and I think the hyperscalers really want to be in the game in terms of the new RISC and Arm ecosystems and platforms. >> Yeah. Joking aside, Paul Maritz, when he was the CEO of VMware, once said, and I remember, it was our first year doing theCUBE, "The cloud is one big distributed computer." It's hardware, and you've got software, and you've got middleware. He was being kind of tongue-in-cheek, but really you're talking about large compute and sets of services that is essentially a distributed computer. >> Yes, exactly. >> We're back in the same game. Bich, thank you for coming on the segment. Appreciate your time. This is Cloud Native at Scale, a special presentation with Platform9, really unpacking supercloud, Arlon open source, and how to run large-scale applications on the cloud, cloud native, for developers. I'm John Furrier with theCUBE. Thanks for watching, and stay tuned for another great segment coming right up. (upbeat music)
David Flynn Supercloud Audio
>> From every ISV to solve the problems. You want there to be tools in place that you can use, either open source tools or whatever it is, that help you build it. And slowly over time, that building will become easier and easier. So my question to you was, where do you see yourself playing? Do you see yourself playing to ISVs as a set of tools, which will make their life a lot easier and provide that work? >> Absolutely. >> If they don't have, so they don't have to do it. Or are you providing this for the end users? Or both? >> So it's a progression. If you go to the ISVs first, you're doomed to starve before you have time for that other option. >> Yeah. >> Right? So it's a question of phase, the phasing of it. And also if you go directly to end users, you can demonstrate the power of it and get the attention of the ISVs. I believe that the ISVs, especially those with the biggest footprints and the most, you know, coveted estates, they have already made massive investments at trying to solve decentralization of their software stack. And I believe that they have used it as a hook to try to move to a software as a service model and rope people into leasing their infrastructure. So if you look at the clouds that have been propped up by Autodesk or by Adobe, or you name the company, they are building proprietary makeshift solutions for decentralizing or hybrid clouding. Or maybe they're not even doing that at all, and all they're doing is saying, hey, if you want to get location agnosticness, then what you should do is just move into our cloud.
But those who are more advanced have already made larger investments and will be more averse to, you know, throwing that stuff away, all of their makeshift machinery away, and using a platform that gives them high performance parallel, low level file system access, while at the same time having metadata-driven, you know, policy-based, intent-based orchestration to manage the diffusion of data across a decentralized infrastructure. They are not going to be as open because they've made such an investment, and they're going to look at how do they monetize it. So what we have found with, like, the movie studios who are using us already, many of the apps they're using, many of those software offerings, the ISVs have their own cloud that offers that software for the cloud. But what we got when I asked about this, 'cause I dug specifically into this question because I'm very interested to know how we're going to make that leap from end user upstream into the ISVs where I believe we need to, and they said, look, we cannot use these software ISV-specific SAS clouds for two reasons. Number one is we lose control of the data. We're giving it to them. That's security and other issues. And here you're talking about, we're doing work for Disney, we're doing work for Netflix, and they're not going to let us put our data on those software clouds, on those SAS clouds. Secondly, in any reasonable pipeline, the data is shared by many different applications. We need to be agnostic as to the application. 'Cause the output of one application provides the input to the next, and it's not necessarily from the same vendor. So they need to have a data platform that lets them, you know, go from one software stack to another. Because they might do the rendering with this, and yet they do the editing with that, and you know, et cetera, et cetera.
So I think the further you go up the stack, into the structured data and dedicated applications for specific functions in specific verticals, the harder it is to justify a SAS offering where you're basically telling the end users, you need to park all your data with us and then you can run your application in our cloud and get this. That ultimately is a dead end path, versus having the data be open and available to many applications across this supercloud layer. >> Okay, so-- >> Is that making any sense? >> Yes, so if I could just ask a clarifying question. So, if I had to take Snowflake as an example, I think they're doing exactly what you're saying is a dead end: put everything into our proprietary system and then we'll figure out how to distribute it. >> Yeah. >> And I think, if you're familiar with Zhamak Dehghani's data mesh concept. Are you? >> A little bit, yeah. >> But in her model, Snowflake, a Snowflake warehouse is just a node on the mesh, and that mesh is-- >> That's right. >> Ultimately the supercloud, and you're an enabler of that is what I'm hearing. >> That's right. What they're doing up at the structured level and what they're talking about at the structured level, we're doing at the underlying, unstructured level, which by the way has implications for how you implement those distributed database things. In other words, implementing a Snowflake on top of Hammerspace would have made building stuff like that easier in the first place. It would allow you to easily shift and run the database engine anywhere. You still have to solve how to shard and distribute at the transaction layer above, so I'm not saying we're a substitute for what you need to do at the app layer. By the way, there is another example of that, and that's Microsoft Office, right? It's one thing to share that, to have a file share where you can share all the docs.
It's something else to have Word and PowerPoint and Excel know how to allow people to be simultaneously editing the same doc. That's always going to happen in the app layer. But not all applications need that level of, you know, in-app decentralization. You know, many workflows are pipelined, especially the ones that are very data intensive, where you're doing drug discovery, or you're doing rendering, or you're doing machine learning training. These things are human-in-the-loop, with large stages of processing across tens of thousands of cores. And I think that kind of data processing pipeline is what we're focusing on first. Not so much the Microsoft Office or the Snowflake, you know, parking a relational database, because that takes a lot of application layer stuff, and that's what they're good at. >> Right. >> But I think... >> Go ahead, sorry. >> Later entrants in these markets will find Hammerspace as a way to accelerate their work so they can focus more narrowly on just the stuff that's app-specific, higher level sharing in the app. >> Yes, Snowflake founders-- >> I think it might be worth mentioning also, just keep this confidential guys, but one of our customers is Blue Origin. And one of the things that we have found is kind of the point of what you're talking about with our customers. They're needing to build this, and since it's not commercially available, or they don't know where to look for it to be commercially available, they're all building it themselves. So this layer is needed. And Blue is just one of the examples of quite a few we're now talking to. And like manufacturing, HPC, research, where they're out trying to solve this problem with their own scripting tools and things like that. And I just, I don't know if there's anything you want to add, David, but you know, there's definitely a demand here, and customers are trying to figure out how to solve it beyond what Hammerspace is doing.
Like the need is so great that they're just putting developers on trying to do it themselves. >> Well, and you know, Snowflake founders, they didn't have a Hammerspace to lean on. But one of the things that's interesting about supercloud is we feel as though industry clouds will emerge, that as part of companies' digital transformations, they will, you know, every company's a software company, they'll begin to build their own clouds, and they will be able to use a Hammerspace to do that. >> A super PaaS layer. >> Yes. It's really, I don't know if David's speaking, I don't want to speak over him, but we can't hear you. May be going through a bad... >> Well, regional, regional clouds that make that possible. And so they're doing these render farms and editing farms, and it's a cloud specific to the types of workflows in the media and entertainment world. Or clouds specific to workflows in the chip design world, or in the drug and bio and life sciences exploration world. There are large organizations that are kind of a blend of end users, like the Broad, which has their own kind of cloud where they're asking collaborators to come in and work with them. So it starts to even blur who's an end user versus an ISV. >> Yes. >> Right? When you start talking about the massive data, the main gravity is in having lots of people participate.
So, we have been looking for a way to talk about the concept of cloud of utility computing, run anything anywhere that isn't addressed in today's realization of cloud. 'Cause today's cloud is not run anything anywhere, it's quite the opposite. You park your data in AWS and that's where you run stuff. And you pretty much have to. Same with with Azure. They're using data gravity to keep you captive there, just like the old infrastructure guys did. But now it's even worse because it's coupled back with the software to some degree, as well. And you have to use their storage, networking, and compute. It's not, I mean it fell back to the mainframe era. Anyhow, so I love the concept of supercloud. By the way, I was going to suggest that a better term might be hyper cloud since hyper speaks to the multidimensionality of it and the ability to be in a, you know, be in a different dimension, a different plane of existence kind of thing like hyperspace. But super and hyper are somewhat synonyms. I mean, you have hyper cars and you have super cars and blah, blah, blah. I happen to like hyper maybe also because it ties into the whole Hammerspace notion of a hyper-dimensional, you know, reality, having your data centers connected by a wormhole that is Hammerspace. But regardless, what I got challenged on is calling it something different at all versus simply saying, this is what cloud has always meant to be. This is the true cloud, this is real cloud, this is cloud. And I think back to what happened, you'll remember, at Fusion IO we talked about IO memory and we did that because people had a conceptualization of what an SSD was. And an SSD back then was low capacity, low endurance, made to go military, aerospace where things needed to be rugged but was completely useless in the data center. And we needed people to imagine this thing as being able to displace entire SAND, with the kind of capacity density, performance density, endurance. 
And so we talked IO memory. We could have said enterprise SSD, and that's what the industry now refers to for that concept. What will people be saying five and 10 years from now? Will they simply say, well, this is cloud as it was always meant to be, where you are truly able to run anything anywhere and have not only the same APIs, but your same data available with high performance access, all forms of access, block, file, and object, everywhere? So yeah. And I wonder, and this is just me throwing it out there, I wonder if, well, there's trade-offs, right? Giving it a new moniker, supercloud, versus simply talking about how cloud was always intended to be and what it was meant to be, you know, the real cloud or true cloud, there are trade-offs. By putting a name on it and branding it, that lets people talk about it and understand they're talking about something different. But also, is that an affront to people who thought that that's what they already had? >> What's different, what's new? Yes, and so we've given a lot of thought to this. >> Right, it's like you. >> And it's because we've been asked why the industry needs a new term, and we've tried to address some of that. But some of the inside baseball that we haven't shared is, you remember Web 2.0, back then? >> Yep. >> Web 2.0 was the same thing. And I remember Tim Berners-Lee saying, "Why do we need Web 2.0? This is what the Web was always supposed to be." But the truth is-- >> I know, that was another perfect-- >> But the truth is it wasn't, number one. Number two, everybody hated the Web 2.0 term. John Furrier was actually in the middle of it all. And then it created this groundswell. So one of the things we wrote about is that supercloud is an evocative term that catalyzes debate and conversation, which is what we like, of course. And maybe that's self-serving. But yeah, hypercloud, metacloud, super, meaning, it's funny, because super came from Latin supra, above. It was never the superlative.
But the superlative was a convenient byproduct that caused a lot of friction and flack, which again, in the media business is like a perfect storm brewing. >> That's not a bad thing to have to do, and I think you do need to shake people out of the complacency of the limitations that they're used to. And I'll tell you what, the fact that you even have the terms hybrid cloud, multi-cloud, private cloud, edge computing, those are all just referring to the different boundaries that isolate the silo that is the current limited cloud. >> Right. >> So if I heard correctly, in terms of us defining what is and what isn't in supercloud, you would say traditional applications, which have to run in a certain place, in a certain cloud, and can't run anywhere else, would be the stuff that you would not put in as being addressed by supercloud. And over time, you would want to be able to run the data where you want to, in any of those concepts. >> Or even modern apps, right? Even modern apps that are siloed in SAS within an individual cloud, right? >> So yeah, I guess it's twofold. Number one, if you're going at the high application layers, there's lots of ways that you can give the appearance of anything running anywhere. The ISV, the SAS vendor, can engineer stuff to have the ability to serve with low enough latency to different geographies, right? So if you go too high up the stack, it kind of loses its meaning, because there's lots of different ways to make do and give the appearance of omnipresence of the service. Okay? As you come down more towards the platform layer, it gets harder and harder to mask the fact that supercloud is something entirely different than just a good regionally-distributed SAS service. So I don't think you can distinguish supercloud if you go too high up the stack, because it's just SAS, it's just a good SAS service where the SAS vendor has done the hard work to give you low latency access from different geographic regions.
>> Yeah, so this is one of the hardest things, David. >> Common among them. >> Yeah, this is really an important point. This is one of the things I've had the most trouble with: why is this not just SAS? >> So you dilute your message when you go up to the SAS layer. If you were to focus most of this around the super PaaS layer, it's how you can host applications and run them anywhere, not host a service, not have a service available everywhere. So how can you take any application, even applications that are written, you know, in a traditional legacy data center fashion, and be able to run them anywhere, and have them have their binaries and their datasets and the runtime environment and the infrastructure to start them and stop them? You know, the jobs, the, what the Kubernetes job scheduler does? What we're really talking about here, I think, is building the operating system for a decentralized cloud. What is the operating system, the operating environment, for a decentralized cloud? And the main two functions of an operating system, or an operating environment, are the process scheduler, the thing that's scheduling what is running where and when and so forth, and the file system, right? The thing that's supplying a common view of, and access to, data. So when we talk about this, I think that the strongest argument for supercloud is made when you go down to the platform layer and talk about it as an operating environment on which you can run all forms of applications. >> Would you exclude--? >> Not a specific application that's been engineered as a SAS. (audio distortion) >> He'll come back. >> Are you there? >> Yeah, yeah, you just cut out for a minute. >> I lost your last statement when you broke up. >> We heard you, you said not the specific application. So would you exclude Snowflake from supercloud? >> Frankly, I would. I would.
Because, well, and this is kind of hard to do, because Snowflake doesn't like to, Frank doesn't like to talk about Snowflake as a SAS service. It has a negative connotation. >> But it is. >> I know, we all know it is. We all know it is, and because it is, yes, I would exclude them. >> I think I actually have him on camera. >> There's nothing in common. >> I think I have him on camera, or maybe Benoit, saying, "Well, we are a SAS." I think it's Slootman. I think I said to Slootman, "I know you don't like to say you're a SAS." And I think he said, "Well, we are a SAS." >> Because again, if you go to the top of the application stack, there's any number of ways you can give it location agnostic function, or you know, regional, local stuff. It's like, let's solve the location problem by having me be your one location. How can it be decentralized if you're centralizing on (audio distortion)? >> Well, it's more decentralized than if it's all in one cloud. So let me actually, so the spectrum. So again, in the spirit of what is and what isn't, I think it's safe to say Hammerspace is supercloud. I think there's no debate there, right? Certainly among this crowd. And I think we can all agree that Dell, Dell Storage, is not supercloud. Where it gets fuzzy is this Snowflake example, or even, how about a Cohesity that instantiates its stack in different cloud regions in different clouds, and synchronizes, with whatever magic sauce it does that. Is that a supercloud? I mean, so I'm cautious about having too strict of a definition, 'cause then only-- >> Fair enough, fair enough. >> But I could use your help and thoughts on that. >> So I think we're talking about two different spectrums here. One is the spectrum of platform to application-specific, as you go up the application stack and it becomes this specific thing, or you go up to the more and more structured, where it's serving a specific application function, where it's more of a SAS thing.
I think it's harder to call a SaaS service a supercloud. And I would argue that the reason there, and what you're lacking in the definition, is to talk about it as general purpose. Okay? Now, that said, a data warehouse is general purpose at the structured data level. So you could make the argument for why Snowflake is a supercloud by saying that it is a general purpose platform for doing lots of different things. It's just one at a higher level, up at the structured data level. So one spectrum is the high level, going from platform to, you know, unstructured data to structured data to very application-specific, right? Like a specific, you know, CAD/CAM mechanical design cloud, like an Autodesk would want to give you their cloud for running, you know, and sharing CAD/CAM designs, doing your CAD/CAM-anywhere stuff. Well, the other spectrum is how well does the purported supercloud technology actually live up to allowing you to run anything anywhere, with not just the same APIs but with the local presence of data, with the exact same runtime environment everywhere, and to be able to correctly manage how to get that runtime environment anywhere. So a Cohesity has some means of running things in different places, and some means of coordinating what's where, and of serving, you know, things in different places. I would argue that it is a very poor approximation of what Hammerspace does in providing the exact same file system with local high performance access everywhere, with the metadata ability to control where the data is actually instantiated, so that you don't have to wait for it to get orchestrated. But even then, when you do have to wait for it, it happens automatically, and so it's still only a matter of, well, how quick is it? And on the other end of the spectrum, you could look at NetApp with FlexCache and say, "Is that supercloud?" And I would argue, well, kind of, because it allows you to run things in different places, because it's a cache. 
But, you know, it really isn't, because it presumes some central silo from which you're caching stuff. So, you know, is it or isn't it? Well, it's on a spectrum of exactly how fully it decouples a runtime environment from specific locality. And I think a cache doesn't; it stretches a specific silo and makes it have some semblance of similar access in other places. But there's still a very big difference to the central silo, right? You can't turn off that central silo, for example. >> So it comes down to how specific you make the definition. And this is where it gets really interesting. It's like cloud. Does IBM have a cloud? >> Exactly. >> I would say yes. Does it have the kind of quality that you would expect from a hyperscale cloud? No. Or you could say the same thing about-- >> But that's a problem with choosing a name. That's the problem with choosing the name supercloud, versus talking about the concept of cloud and how true you are to that concept. >> For sure. >> Right? Because without getting a name, you don't have to draw, yeah. >> I'd like to explore one particular point, or bring them together. You made a very interesting observation that from an enterprise point of view, they want to safeguard their store, their data, and they want to make sure that they can have that data running in their own workflows, as well as have other service providers providing services to them for that data. So in particular, if you go back to Snowflake: if Snowflake could provide the ability for you to have your data where you wanted, you were in charge of that, would that make Snowflake a supercloud? >> I'll tell you, in my mind, they would be closer to my conceptualization of supercloud if you can instantiate Snowflake as software on your own infrastructure, and pump your own data to Snowflake that's instantiated on your own infrastructure. 
The fact that it has to be on their infrastructure, that it's on their account in the cloud, that you're giving them the data, that fundamentally goes against it to me. You know, they would be a pure, a pure play if they were a software-defined thing where you could instantiate Snowflake machinery on the infrastructure of your choice, and then put your data into that machinery and get all the benefits of Snowflake. >> So did you see--? >> In other words, if they were not a SaaS service, but offered all of the similar benefits of being, you know, if it were a service that you could run on your own infrastructure. >> So did you see what they announced, that--? >> I hope that's making sense. >> It does. Did you see what they announced at Dell? They basically announced the ability to take non-native Snowflake data, read it in from an object store on-prem, like a Dell object store. They do the same thing with Pure, read it in, running it in the cloud, and then push it back out. And I was saying to Dell, look, that's fine. Okay, that's interesting. You're taking a materialized view or an external table, whatever you're doing. Wouldn't it be more interesting if you could actually run the query locally with your compute? That would be an extension that would actually get my attention and extend that. >> That is what I'm talking about. That's what I'm talking about. And that's why I'm saying I think Hammerspace is more progressive on that front, because with our technology, anybody who can instantiate a service can make a service. And so MSPs can use Hammerspace as a way to build a super PaaS layer and host their clients on their infrastructure in a cloud-like fashion. 
And their clients can have their own private data centers, and the MSP or the public clouds, and Hammerspace can be instantiated, get this, by different parties in these different pieces of infrastructure, and yet linked together to make a common file system across all of it. >> But this is data mesh. If I were HPE and Dell, it's exactly what I'd be doing. I'd be working with Hammerspace to create my own data. I'd work with Databricks, Snowflake, and any other-- >> Data mesh is a good way to put it. Data mesh is a good way to put it. And this is at the lowest level of, you know, the underlying file system that's mountable by the operating system, consumed as a real file system. You can't get lower level than that. That's why this is the foundation for all of the other apps and structured data systems, because you need to have a data mesh that can at least mesh the binary blobs. >> Okay. >> That hold the binaries, and that hold the datasets, that those applications are running. >> So David, in the third week of January, we're doing Supercloud 2, and I'm trying to convince John Furrier to make it a data slash data mesh edition. I'm slowly getting him to the knothole. I would very much, I mean, you're in the Bay Area, I'd very much like you to be one of the headliners. Zhamak Dehghani is going to speak; she's the creator of data mesh. >> Sure. >> I'd love to have you come into our studio as well, for the live session. If you can't make it, we can pre-record. But you're right there, so I'll get you the dates. >> We'd love to, yeah. No, you can count on it. No, definitely. And you know, we don't typically talk about what we do as data mesh. We've been, you know, using "global data environment." But, you know, under the covers, that's what the thing is. And so yeah, I think we can frame the discussion like that to line up with the other discussions. 
>> Yeah, and data mesh, of course, is one of those evocative names, but she has come up with some very well defined principles around decentralized data, data as products, self-serve infrastructure, automated governance, and so forth, which I think your vision plugs right into. And she's brilliant. You'll love meeting her. >> Well, you know, and I think... Oh, go ahead. Go ahead, Peter. >> I'd just like to explore one other interface which I think is important. How do you see yourself and open source? You talked about having an operating system. Obviously, Linux is the operating system at one level. How are you imagining that you would interface with the open source community as part of this development? >> Well, it's funny you ask, 'cause my CTO is the kernel maintainer of the storage networking stack. So how the Linux operating system perceives and consumes networked data at the file system level, the network file system stack, is his purview. He owns that. He wrote most of it over the last decade that he's been the maintainer, and he's the gatekeeper of what goes in. And we have leveraged his abilities to enhance Linux to be able to use this decentralized data, in particular by decoupling the control plane, driven by metadata, from the data access path and the many storage systems on which the data gets accessed. So this factoring, this splitting of control plane from data path, metadata from data, was absolutely necessary to create a data mesh like we're talking about, and to be able to build this supercloud concept. And the highways on which the data runs, and the client which knows how to talk to it, are all open source. And we've driven the NFS 4.2 spec. The newest NFS spec came from my team. And it was specifically the enhancements needed to be able to build a spanning file system, a data mesh, at a file system level. 
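(As a hedged editorial aside: the open-source "highway" described here, NFS 4.2, is consumable with the stock in-kernel Linux client. A minimal sketch of mounting a share over the 4.2 protocol follows; the server name and export path are hypothetical placeholders, not real Hammerspace endpoints.)

```shell
# Minimal sketch: mount a share using the in-kernel NFS 4.2 client,
# the open-source "highway" discussed above.
# "metadata.example.com" and "/global" are hypothetical placeholders.
sudo mkdir -p /mnt/mesh
sudo mount -t nfs -o vers=4.2 metadata.example.com:/global /mnt/mesh

# Confirm the negotiated protocol version (look for vers=4.2):
grep /mnt/mesh /proc/mounts
```

(Whether a given deployment layers pNFS flex files or other 4.2 features on top of this is deployment-specific; the point is only that the client side is plain, upstream Linux.)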
Now, that said, our file system itself and our server, our file server, our data orchestration, our data management stuff, that's all closed-source, proprietary Hammerspace tech. But the highways on which the mesh connects, and the client that knows how to consume it, are actually all open source. So honestly, I would welcome competitors using those same highways. They would be at a major disadvantage because we kind of built them, but it would still be very validating, and I think it would only increase the potential adoption rate by more than whatever they might take of the market. So it'd actually be good to split the market with somebody else, to have someone come in and share those super highways for how to mesh data at the file system level. So yeah, hopefully that answered your question. Does that answer the question about how we embrace open source? >> Right, and there was one other, just, my last one is: how do you enable something to run in every environment? If we take the edge, for example, as an environment which is very compute-constrained, having a lot less capability, how do you handle that? >> Perfect question. Perfect question. What we do today is a software appliance. We are using a Linux RHEL 8, a RHEL 8 equivalent, or a CentOS 8, you know, they're all roughly equivalent. But we have bundled it as a software appliance which can be instantiated on bare metal hardware, on any type of VM system, from VMware to all of the different hypervisors in the Linux world, to even Nutanix and such. So it can run in any virtualized environment, and it can run on any cloud instance, server instance, in the cloud. And we have it packaged and deployable from the marketplaces within the different clouds. So you can literally spin it up at the click of an API in the cloud, on instances in the cloud. So with all of these together, you can basically instantiate a Hammerspace set of machinery that can offer up this file system mesh, 
using the terminology we've been using now, anywhere. So it's like being able to take and spin up Snowflake, and then just be able to install and run some VMs anywhere you want, and boom, now you have a Snowflake service. And by the way, it is so complete that some of our customers, I would argue many, aren't even using public clouds at all; they're using this just to run their own data centers in a cloud-like fashion, you know, where they have a data service that can span it all. >> Yeah, and to Molly's first point, we would consider that, you know, cloud. Let me put you on the spot. If you had to describe conceptually, without a chalkboard, what an architectural diagram would look like for supercloud, what would you say? >> I would say it's to have the same runtime environment within every data center, and defining that runtime environment as what it takes to schedule the execution of applications, so job scheduling, runtime stuff, and here we're talking Kubernetes, Slurm, other things that do job scheduling. We're talking about having a common way to, you know, instantiate compute resources. So a global compute environment, having a common compute environment where you can instantiate things that need computing. Okay? So that's the first part. And then the second is the data platform, where you can have file, block, and object volumes, and have them available with the same APIs in each of these distributed data centers, and have the exact same data omnipresent, with the ability to control where the data is from one moment to the next, local, where all the data is instantiated. So my definition would be a common runtime environment that's bifurcate-- >> Oh. (attendees chuckling) We just lost him at the money slide. >> That's part of the magic that makes people listen. We keep someone on pins and needles waiting. (attendees chuckling) >> That's good. >> Are you back, David? >> I'm on the edge of my seat. Common runtime environment. It was like... 
>> And just wait, there's more. >> But see, I'm maybe hyper-focused on the lower level of what it takes to host and run applications. And that's the stuff to schedule what resources they need to run, and to get them going, and to get them connected through to their persistence, you know, and their data. And to have that data available in all forms, and have it be the same data everywhere. On top of that, you could then instantiate applications of different types, including relational databases and data warehouses and such. And then you could say, now I've got, you know, now I've got these more application-level or structured-data-level things. I tend to focus less on that structured data level and the application level, and am more focused on what it takes to host any of them generically on that super PaaS layer. And I'll admit, I'm maybe hyper-focused on the PaaS layer, and I think it's valid to include, you know, higher levels up the stack, like the structured data level. But as soon as you go all the way up to, like, you know, a very specific SaaS service, I don't know that you would call that supercloud. >> Well, and that's the question: is there value? And Marianna Tessel from Intuit said, you know, we looked at it, we did it, and it was actually negative value for us, because connecting to all these separate clouds was a real pain in the neck. Didn't bring us any additional-- >> Well, that's 'cause they don't have this PaaS layer underneath it, so they can't even shop around, which actually makes it hard to stand up your own SaaS service. And ultimately they end up having to build their own infrastructure. Like, you know, I think there have been examples like Netflix moving away from the cloud to their own infrastructure. Basically, if you're going to rent it for more than a few months, it makes sense to build it yourself, if it's at any kind of scale. >> Yeah, for certain components of that cloud. 
But if a Goldman Sachs came to you, David, and said, "Hey, we want to collaborate and we want to build out a cloud and essentially build our SaaS system, and we want to do that with Hammerspace, and we want to tap the physical infrastructure of not only our data centers but all the clouds," then that essentially would be a SaaS, would it not? And wouldn't that be a Super SaaS or a supercloud? >> Well, you know, what they may be using to build their service is a supercloud, but their service at the end of the day is just a SaaS service with global reach. Right? >> Yeah. >> You know, look at, oh shoot. What's the name of the company that does, it has a cloud for doing bookkeeping and accounting? I forget their name, Net-something. NetSuite. >> NetSuite. NetSuite, yeah, Oracle. >> Yeah. >> Yep. >> Oracle acquired them, right? Is NetSuite a supercloud, or is it just a SaaS service? You know? I think under the covers you might ask, are they using supercloud under the covers so that they can run their SaaS service anywhere, and be able to shop the venue, get elasticity, get all the benefits of cloud, to the benefit of the service that they're offering? But, you know, folks who consume the service, they don't care, because to them they're just connecting to some endpoint somewhere, and they don't have to care. So the further up the stack you go, the more location-agnostic it is inherently anyway. >> And I think PaaS is really the critical layer. We thought about IaaS-plus, and we thought about SaaS-minus, you know, Heroku, and hence that's why we kind of got caught up and included it. But SaaS, I admit, is the hardest one to crack. And so maybe we exclude that as a deployment model. >> That's right. And maybe come down a level to saying, but you can have a structured data supercloud, so you could still include, say, Snowflake. Because what Snowflake is doing is more general purpose. So it's about how general purpose it is. 
Is it hosting lots of other applications, or is it the end application? Right? >> Yeah. >> So I would argue the general purpose nature forces you to go further towards platform, down-stack. And you really need that general purpose, or else there is no real distinguishing. So if you want defensible turf, to say supercloud is something different, I think it's important to not try to wrap your arms around SaaS in the general sense. >> Yeah, and we've kind of not really leaned hard into SaaS; we've just included it as a deployment model, which, given the constraints that you just described, for structured data would apply if it's general purpose. So, David, super helpful. >> Define the SaaS as including the hybrid model of SaaS. >> Yep. >> Okay, so with your permission, I'm going to add you to the list of contributors to the definition. I'm going to add-- >> Absolutely. >> I'm going to add this in. I'll share it with Molly. >> Absolutely. >> We'll get on the calendar for the date. >> If Molly can share some specific language that we've been putting in that kind of goes to stuff we've been talking about, so. >> Oh, great. >> I think we can share some written, kind of concrete, recommendations around this stuff, around the general purpose nature, the common data thing, and yeah. >> Okay. >> Really look forward to it, and would be glad to be part of this thing. You said it's in February? >> It's in January. I'll let Molly know. >> Oh, January. >> What the date is. >> Excellent. >> Yeah, third week of January. Third week of January on a Tuesday, whatever that is. So yeah, we would welcome you in. But like I said, if it doesn't work for your schedule, we can pre-record something. But it would be awesome to have you in studio. >> I'm sure with this much notice we'll be able to get something. Let's make sure we have the dates communicated to Molly, and she'll get my admin to set it up outside so that we have it. >> I'll get those today to you, Molly. Thank you. 
>> By the way, I am so, so pleased with being able to work with you guys on this. I think the industry needs it very bad. They need something to break them out of the box of their own mental constraints of what the cloud is versus what it's supposed to be. And obviously, the more we get people to question their reality, and what is real, and what we are really capable of today, the more business we're going to get. So we're excited to lend a hand behind this notion of supercloud and a super PaaS layer in whatever way we can. >> Awesome. >> Can I ask you whether your platforms include ARM as well as x86? >> So we have not done an ARM port yet. It has been entertained, and it won't be much of a stretch. >> Yeah, it's just a matter of time. >> We've actually entertained doing it on behalf of NVIDIA, but it will absolutely happen, because ARM in the data center, I think, is a foregone conclusion. Well, it's already there in some cases, but not quite at volume. So it definitely will be the case. And I'll tell you where this gets really interesting, a discussion for another time: back to my old friend, the SSD, and having SSDs that have enough brains on them to be part of that fabric. Directly. >> Interesting. Interesting. >> Very interesting. >> Directly attached to Ethernet, and able to create a data mesh global file system, that's going to be really fascinating. Got to run now. >> All right, hey, thanks, you guys. Thanks, David. Thanks, Molly. Great to catch up. Bye-bye. >> Bye. >> Talk to you soon.
AMD & Oracle Partner to Power Exadata X9M
(upbeat jingle) >> The history of the Exadata platform is really unique. And from my vantage point, it started earlier this century as a skunkworks inside of Oracle called Project Sage, back when grid computing was the next big thing. Oracle saw that betting on standard hardware would put it on an industry curve that would rapidly evolve. Last April, for example, Oracle announced the availability of Exadata X9M in OCI, Oracle Cloud Infrastructure. One thing that hasn't been as well publicized is that Exadata on OCI is using AMD's EPYC processors in the database service. EPYC is not the Eastern Pacific Yacht Club, for all you sailing buffs; rather, it stands for Extreme Performance Yield Computing, the enterprise-grade version of AMD's Zen architecture, which has been a linchpin of AMD's success in terms of penetrating enterprise markets. And to focus on the innovations that AMD and Oracle are bringing to market, we have with us today Juan Loaiza, who's executive vice president of mission critical technologies at Oracle, and Mark Papermaster, who's the CTO and EVP of technology and engineering at AMD. Juan, welcome back to the show. Mark, great to have you on The Cube in your first appearance; thanks for coming on. Juan, let's start with you. You've been on The Cube a number of times, as I said, and you've talked about how Exadata is a top platform for Oracle database. We've covered that extensively. What's different and unique, from your point of view, about Exadata Cloud Infrastructure X9M on OCI? >> So as you know, Exadata is designed top-down to be the best possible platform for database. It has a lot of unique capabilities. We make extensive use of RDMA, smart storage. We take advantage of everything we can in the leading hardware platforms. 
And so that's what X9M is: it's faster, more capacity, lower latency, more IOs, pushing the limits of the hardware technology. So we don't want to be the limit; the database software should not be the limit, it should be the actual physical limits of the hardware. That's what X9M's all about. >> Why, Juan, AMD chips in X9M? >> We're introducing AMD chips. We think they provide outstanding performance, both for OLTP and for analytic workloads. And it's really that simple; we just think the performance is outstanding in the product. >> Mark, your career is quite amazing. I could riff on history for hours, but let's focus on the Oracle relationship. Mark, what are the relevant capabilities and key specs of the AMD chips that are used in Exadata X9M on Oracle's cloud? >> Well, thanks. It's really the basis of the great partnership that we have with Oracle on Exadata X9M, and that is that the AMD technology uses our third generation of Zen processors. Zen was architected to really bring high performance back to x86, a very strong roadmap that we've executed on schedule to our commitments. And this third generation does all of that. It uses a seven-nanometer CPU core that was designed to really bring throughput, bring really high efficiency to computing, and just deliver raw capabilities. And so Exadata X9M is really leveraging all of that. It's really a balanced processor, and it's implemented in a way to really optimize high performance. That is the whole focus of AMD; it's where we reset the company focus years ago. And again, great to see the super smart database team at Oracle really partner with us, understand those capabilities, and it's been just great to partner with them to enable Oracle to really leverage the capabilities of the Zen processor. >> Yeah, it's been a pretty amazing 10 or 11 years for both companies. 
But Mark, how specifically are you working with Oracle at the engineering and product level, and what does that mean for your joint customers in terms of what they can expect from the collaboration? >> Well, here's where the collaboration really comes to play. You think about a processor, and I'll say, when Juan's team first looked at it, there are general benchmarks, and the benchmarks are impressive, but they're general benchmarks. They showed the base processing capability, but the partnership comes to bear when it means optimizing for the workloads that Exadata X9M is really delivering to the end customers. And that's where we dive down, and as we learn from the Oracle team, we learn to understand where bottlenecks could be, where there is tuning that we could do to in fact really boost the performance above that baseline that you get in the generic benchmarks. And that's what the teams have done. So, for instance, you look at optimizing latency for RDMA, you look at optimizing throughput on OLTP and database processing. When you go through the workloads, and you take the traces, and you break it down, and you find the areas that are bottlenecking, then you can adjust; we have thousands of parameters that can be adjusted for a given workload. And that's the beauty of the partnership. So we have the expertise on the CPU engineering; the Oracle Exadata team knows innately what the customers need to get the most out of their platform. And when the teams came together, we actually achieved anywhere from 20% to 50% gains on specific workloads. It is really exciting to see. >> Mark, last question for you: how do you see this relationship evolving in the future? Can you share a little roadmap for the audience? >> You bet. First off, given the deep partnership that we've had on Exadata X9M, it's really allowed us to inform our future design. So our current third generation, EPYC, is really what we call our EPYC server offerings. 
And it's the 7003 series third gen in Exadata X9M. So what about fourth gen? Well, fourth gen is well underway, ready for the future, and it incorporates learning that we've done in partnership with Oracle. It's going to have even more throughput capabilities. It's going to have expanded memory capabilities, because there's CXL, Compute Express Link, that will expand even more memory opportunities. And I could go on. So that's the beauty of a deep partnership, as it enables us to really take that learning going forward. It pays forward, and we're very excited to fold all of that into our future generations and provide even better capabilities to Juan and his team moving forward. >> Yeah, you guys have been obviously very forthcoming. You have to be with Zen and EPYC. Juan, anything you'd like to add as closing comments? >> Yeah, I would say that in the processor market there's been a real acceleration in innovation in the last few years. There was a big move 10, 15 years ago when multicore processors came out. And then we were on that for a while, and then things started stagnating. But in the last two or three years, AMD has been leading this; there's been a dramatic acceleration in innovation, so it's very exciting to be part of this, and customers are getting a big benefit from this. >> All right. Hey, thanks for coming back on The Cube today. Really appreciate your time. >> Thanks. Glad to be here. >> All right, and thank you for watching this exclusive Cube conversation. This is Dave Vellante from The Cube, and we'll see you next time. (upbeat jingle)
Tom Gillis | Advanced Security Business Group
(bright music) >> Welcome back everyone. theCube's live coverage here. Day two of two sets and three days of theCube coverage here at VMware Explore. This is our 12th year covering VMware's annual conference, formerly called VMworld. I'm John Furrier, with Dave Vellante. We love seeing the progress, and we've got a great security guest: Tom Gillis, senior vice president and general manager of the Networking and Advanced Security Business Group at VMware. Great to see you. Thanks for coming on. >> Thanks for having me. >> Yeah, really happy we could have you on. >> I think this is my sixth edition on theCube. Do I get frequent flyer points or anything? >> Yeah. >> You first get the VIP badge. We'll make that happen. You can start getting credits. >> Okay, there we go. >> We won't interrupt you. Seriously, you got a great story in security here. The security story is kind of embedded everywhere, so it's not called out and blown up and talked about specifically on stage. It's kind of in all the narratives of the show this year. But you guys have an amazing security story. So let's just step back to set context. Tell us the security story for what's going on here at VMware and what that means to this supercloud, multi-cloud, and ongoing innovation with VMware. >> Yeah, sure thing. So probably the first thing I'll point out is that security's not just built in at VMware. It's built differently. So we're not just taking existing security controls and cutting and pasting them into our software. We can do things because of our platform, because of the virtualization layer, that you really can't do with other security tools. And where we're very, very focused is what we call lateral security, or East-West movement of an attacker. 'Cause frankly, that's the name of the game these days. Attackers, you've got to assume that they're already in your network. Already assume that they're there. Then how do we make it hard for them to get to the stuff that you really want?
Which is the data that they're going after. And that's where we really shine. >> All right. So we've been talking a lot, coming into VMware Explore and here at the event, about two things. Security as a state. >> Yeah. >> I'm secure right now. >> Yeah. >> Or I think I'm secure right now, even though someone might be in my network or in my environment. To the notion of being defensible. >> Yeah. >> Meaning I have to defend and be ready at a moment's notice to attack, fight, push back, red team, blue team. Whatever you're going to call it. But something's happening. I got to be able to defend. >> Yeah. So what you're talking about is the principle of Zero Trust. When I first started doing security, the model was we have a perimeter. And everything on one side of the perimeter is dirty, ugly, old internet. And everything on this side, known good, trusted. What could possibly go wrong? And I think we've seen that no matter how good you make that perimeter, bad guys find a way in. So Zero Trust says, you know what? Let's just assume they're already in. Let's assume they're there. How do we make it hard for them to move around within the infrastructure and get to the really valuable assets? 'Cause for example, if they bust into your laptop, you click on a link and they get code running on your machine. They might find some interesting things on your machine. But they're not going to find 250 million credit cards. >> Right. >> Or the script of a new movie or the super secret aircraft plans. That lives in a database somewhere. And so it's that movement from your laptop to that database. That's where the damage is done, and that's where VMware shines. >> So if they don't have the right to get to that database, they're not in. >> And it's not even just the right. They're so clever and so sneaky that they'll steal a credential off your machine, go to another machine, steal a credential off of that. So it's like they have the key to unlock each one of these doors.
And we've gotten good enough where we can look at that lateral movement, even though it has a credential and a key, and we're like, wait a minute. That's not a real sysadmin making a change. That's ransomware. And that's where you. >> You have to earn your way in. >> That's right. That's right. Yeah. >> And there are all kinds of configuration errors. But also some user problems. I've heard one story where there are so many usernames and passwords across systems that the bad guys scour the dark web for passwords that have been exposed. >> Correct. >> And go test them against different accounts. Oh, one hit over here. >> Correct. >> And people don't change their passwords all the time. >> Correct. >> That's a known vector. >> Just the idea that users are going to be perfect and never make a mistake. How long have we been doing this? Humans are the weakest link. So people are going to make mistakes. Attackers are going to be in. Here's another way of thinking about it. Remember log4j? Remember that whole fiasco? Remember that was at Christmastime. That was nine months ago. And whoever came up with that vulnerability, they basically had a skeleton key that could access every network on the planet. I don't know of a single customer that said, "Oh yeah, I wasn't impacted by log4j." So here's some organized entity that had access to every network on the planet. What was the big breach? What was that movie script that got stolen? So there wasn't one, right? We haven't heard anything. So the point is, the goal of attackers is to get in and stay in. Imagine someone breaks into your house, steals your laptop and runs. That's a breach. Imagine someone breaks into your house and stays for nine months. It's untenable, in the real world, right? >> Right. >> We don't know they're in there, hiding in the closet. >> They're still in. >> They're watching everything. >> Hiding in your closet, exactly. >> Moving around, nibbling on your cookies. >> Drinking your beer. >> Yeah.
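The pattern described here, a legitimate credential suddenly appearing on a machine it has never used, is the heart of lateral-movement detection. As a rough illustration only (a hedged sketch with invented event fields, thresholds, and data, not VMware's NSX implementation), a baseline-and-flag check might look like:

```python
from collections import defaultdict

def build_baseline(events):
    """Map each credential to the set of source hosts it has historically used."""
    baseline = defaultdict(set)
    for e in events:
        baseline[e["user"]].add(e["src_host"])
    return baseline

def flag_lateral_movement(baseline, event):
    """Flag a login when a known credential shows up from a host it has never
    used before -- the 'valid key, wrong door' pattern described above."""
    seen = baseline.get(event["user"], set())
    return bool(seen) and event["src_host"] not in seen

# Hypothetical history: an admin credential only ever used from management hosts.
history = [
    {"user": "cis-admin", "src_host": "mgmt-01"},
    {"user": "cis-admin", "src_host": "mgmt-02"},
]
baseline = build_baseline(history)
# Same credential, but from an application server it has never touched:
print(flag_lateral_movement(baseline, {"user": "cis-admin", "src_host": "app-07"}))  # True
```

The key is valid in both cases; what gives the attacker away is the door it is opening.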
>> So let's talk about how this translates into the new reality of cloud-native. Because now you hear that automated pentesting is a new hot thing right now. You've got antivirus on data, which is hot within APIs, for instance. >> Yeah. >> API security. So all kinds of new hot areas. Cloud-native is very iterative. You know, you can't do a pentest every week. >> Right. >> You got to do it every second. >> So this is where it's going. It's not so much simulation. It's actually real testing. >> Right. Right. >> How do you view that? How does that fit into this? 'Cause that seems like a good direction to me. >> Yeah, it fits right in. And you were talking to my buddy, Ahjay, earlier about what VMware can do to help our customers build cloud-native applications with Tanzu. My team is focused on how do we secure those applications? So where VMware wants to be the best in the world is securing these applications from within. Looking at the individual piece parts and how they talk to each other, and figuring out, wait a minute, that should never happen. By almost having an x-ray machine on the innards of the application. So we do it both for VMs and for container-based applications. So traditional apps are VM-based. Modern apps are container-based. And we have a slightly different insertion mechanism. It's the same idea. So for VMs, we do it with the hypervisor, with NSX. We see all the inner workings. In a container world we have this thing called a service mesh that lets us look at each little snippet of code and how they talk to each other. And once you can see that stuff, then you can actually apply it. It's almost like common sense logic of like, wait a minute. This API is giving back credit card numbers, and it gives five an hour. All of a sudden, it's now asking for 20,000 or a million credit cards. That doesn't make any sense. The anomalies stick out like a sore thumb. If you can see them.
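The credit-card example here, an API that normally returns five records an hour suddenly asking for 20,000, is rate-based anomaly detection at the application layer. A minimal sketch, with invented thresholds and no claim to be VMware's actual algorithm:

```python
def is_anomalous(history, current, factor=10.0, min_samples=3):
    """Flag the current hourly request count if it exceeds the historical
    average by more than `factor` -- e.g. ~5/hour jumping to 20,000/hour."""
    if len(history) < min_samples:
        return False  # not enough baseline to judge
    avg = sum(history) / len(history)
    return current > factor * avg

hourly_counts = [5, 4, 6, 5]                # typical hours: ~5 card lookups/hour
print(is_anomalous(hourly_counts, 6))       # False: within normal range
print(is_anomalous(hourly_counts, 20000))   # True: the sore-thumb anomaly
```

Real systems would use far more context (caller identity, payload shape, time of day), but the "stick out like a sore thumb" intuition is exactly this comparison against a learned baseline.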
At VMware, our unique focus in the infrastructure is that we can see each one of these little transactions and understand the conversation. That's what makes us so good at that East-West, or lateral, security. >> You don't belong in this room, get out, or that's some weird call from an in-memory database, something over here. >> Exactly. Where other security solutions won't even see that. It's not that their algorithms aren't as good as ours, or better or worse. It's the access to the data. We see the inner plumbing of the app, and therefore we can protect the app from within. >> And there's another dimension that I want to get on the table here. 'Cause to my knowledge only AWS, Google, I believe Microsoft and Alibaba, and VMware have this. >> Correct. >> It's Nitro. The equivalent of a Nitro. >> Yes. >> Project Monterey. >> Yeah. >> That's unique. It's the future of computing architectures. Everybody needs a Nitro. I've written about this. >> Yeah. >> Right. So explain your version. >> Yeah. >> It's now real. >> Yeah. >> It's now in the market, right? >> Yeah. >> Or soon will be. >> Here's our mission. >> Salient aspects. >> Yeah. Here's our mission at VMware: we want to make every one of our enterprise customers' private clouds as nimble, as agile, as efficient as the public cloud. >> And secure. >> And secure. In fact, I'll argue we can make it actually more secure, because we're thinking about putting security everywhere in this infrastructure. Not just on the edges of it. Okay, how do we go on that journey? As you pointed out, the public cloud providers realized five years ago that the right way to build computers was not just a CPU and a graphics processing unit, GPU. But there's this third thing that the industry's calling a DPU, data processing unit. And so there's kind of three pieces of a computer. And the DPU is sometimes called a SmartNIC. It's the network interface card.
It does all that network handling and analytics, and it takes it off the CPU. So they've been building and deploying those systems themselves. That's what Nitro is. And so we have been working with the major silicon vendors to bring that architecture to everybody. So with vSphere 8, we have the ability to take the network processing, that East-West inspection I talked about, take it off of the CPU and put it into this dedicated processing element called the DPU, and free up the CPU to run the applications that Ahjay and team are building. >> So no performance degradation at all? >> Correct, due to the CPU offload. >> So even the opposite, right? I mean you're running it basically at bare metal speeds. >> Yes, yes and yes. >> And you're also isolating the storage from the security, the management, and. >> There's an isolation angle to this, which is that firewall that we're putting everywhere. Not just at the perimeter, but in each little piece of the server, and when it runs on one of these DPUs, it's a different memory space. So even if an attacker gets to root in the OS, it's very, very, never say never, but it's very difficult. >> So who has access to that resource? >> Pretty much just the infrastructure layer, the cloud provider. So it's Amazon, Google, Microsoft, and the enterprise. >> Application can't get in. >> Can't get in there. 'Cause you would have to literally bridge from one memory space to another. Never say never, but it would be very. >> But it hasn't earned the trust to get. >> It's more than barbwire. It's multiple walls. >> Yes. And it's like an air gap. It puts an air gap in the server itself, so that if the server is compromised, it's not going to get into the network. Really powerful. >> What's the big thing that you're seeing with this supercloud transition? We're seeing multi-cloud and this new, not just SaaS hosted on the cloud. >> Yeah.
>> You're seeing a much different dynamic: a combination of large-scale CapEx, cloud-native, and then now cloud-native on premises and edge. Kind of changing what a cloud looks like, if the cloud's on a cloud. >> Yeah. >> So if we're the customer, I'm building on a cloud and I have on-premises stuff. So I'm getting scale CapEx relief from the hyperscalers. >> I think there's an important nuance on what you're talking about. Which is, in the early days of the cloud, customers, remember that first skepticism? Oh, it'll never work. Oh, that's consumer grade. Oh, that's not really going to work. Then some people realized. >> It's not secure. >> Yeah. It's not secure. >> Then it was like, no, no, no, it's secure. It works. And it's good. So then there was this sort of over-rush. Let's put everything on the cloud. And I had a lot of customers that took VM-based applications and said, I'm going to move those onto the cloud. You got to take them all apart, put them on the cloud, and put them all back together again. And little tiny details like changing an IP address. It's actually much harder than it looks. So my argument is, for existing workloads, for VM-based workloads, we are VMware. We're so good at running VM-based workloads. And now we run them on anybody's cloud. So whether it's your east coast data center, your west coast data center, Amazon, Google, Microsoft, Alibaba, IBM, keep going. We're on pretty much every one. >> And the benefit to the customer is what? >> You can literally vMotion and just pick it up and move it from private to public, public to private, private to public, back and forth. >> Remember when we called vMotion BS, years ago? >> Yeah. Yeah. >> vMotion is powerful. >> We were very skeptical. We're like, that'll never happen. I mean, we were. This is supposed to be us patting ourselves on the back. >> Well, because it seemed like alchemy. Like you can't possibly do that. And now we do it across clouds. So it's not quite vMotion, but it's the same idea.
You can just move these things over. I have one customer that had a production data center in the Ukraine. Things got super tense, super fast, and they had to go from their private cloud data center in the Ukraine to a public cloud data center out of harm's way. They did it over a weekend. 48 hours. If you've ever migrated a data center, that's usually six months. Right. And a lot of heartburn and a lot of angst. Boop. They just drag-and-dropped and moved it on over. That's the power of what we call the cloud operating model. And you can only do this when all your infrastructure's defined in software. If you're relying on hardware load balancers, hardware firewalls, you can't move those. They're like a boat anchor. You're stuck with them. And by the way, they're really, really expensive. And by the way, they eat a lot of power. So that was an architecture from the '90s. In the cloud operating model, your data center, and this comes back to what you were talking about, is just racks and racks of x86 with these magic DPUs, or SmartNICs, to make any individual node go blisteringly fast and do all the functions that you used to do in network appliances. >> We just had Ahjay taking us to school, and everyone else to school, on applications, middleware, abstraction layer. And Kit Culbert was also talking about this across cloud. We're talking supercloud, super PaaS. If this continues to happen, which we think it will. What does the security posture look like? It feels to me, and again, this is your wheelhouse. If supercloud happens with this kind of PaaS layer where there's vMotioning going on. All kinds of spanning applications and data across environments. >> Yeah. Assume there's an operating system working behind the scenes. >> Right. >> What's the security posture in all this? >> Yeah. So remember my narrative about the bad guys are getting in and they're moving around and they're so sneaky that they're using legitimate pathways.
The only way to stop that stuff is you've got to understand it at what we call Layer 7. At the application layer. Trying to do security at the infrastructure layer was interesting 20 years ago, kind of less interesting 10 years ago. And now it's becoming irrelevant because the infrastructure is oftentimes not even visible. It's buried in some cloud provider. So Layer 7 understanding, application awareness, understanding the APIs and reading the content. That's the name of the game in security. That's what we've been focused on. Nothing to do with the infrastructure. >> And where's the progress bar on that paradigm? One to ten. Ten being everyone's doing it. >> Right now. Well, okay. So we as a vendor can do this today. All the stuff I talked about, reading APIs, understanding the individual services, looking at, hey, wait a minute, these credit card anomalies, that's all shipping production code. Where is it in the customer adoption life cycle? Early days, 10%. So there's a whole lot of headroom for people to understand, hey, I can put these controls in place. They're software-based. They don't require appliances. It's Layer 7, so it has contextual awareness, and it works on every single cloud. >> We talked about the pandemic being an accelerator. It really was a catalyst to really rethink. Remember we used to talk about Pat and the security do-over. He's like, yes, if it's the last thing I do, I'm going to fix security. Well, he decided to go try to fix Intel instead. >> He's getting some help from the government. >> But it seems like CISOs have totally rethought their security strategy. And at least in part, as a function of the pandemic. >> When I started at VMware four years ago, Pat sat me down in his office and he said to me what he said to you, which is like, "Tom," he said, "I feel like we have fundamentally changed servers. We fundamentally changed storage. We fundamentally changed networking. The last piece of the puzzle is security.
I want you to go fundamentally change it." And I'll argue that the work that we're doing with this horizontal security, understanding the lateral movement, East-West inspection, fundamentally changes how security works. It's got nothing to do with firewalls. It's got nothing to do with endpoint. It's a unique capability that VMware is uniquely suited to deliver on. And so Pat, thanks for the mission. We delivered it and it's available now. >> Those WAFs, web application firewalls, for instance, are around, I mean. But to your point, the perimeter's gone. >> Exactly. >> And so you've got to get, there's no perimeter. So it's a surface area problem. >> Correct. And access. And entry. >> Correct. >> They're entering easily, from some manual error, or misconfiguration, or bad password that shouldn't be there. They're in. >> Think about it this way. You put the front door of your house, you put a big strong door and a big lock. That's a firewall. Bad guys come in the window. >> And then the window's open. With a ladder. >> Oh my God. 'Cause it's hot. Bad user behavior trumps good security every time. >> And then they move around room to room. We're the room-to-room people. We see each little piece of the thing. Wait, that shouldn't happen. Right. >> I want to get to a question that we've been seeing, and maybe we're early on this or it might be just a false data point. A lot of the CSOs we're talking to, and people in the industry, in the customer environment, are looking at CISOs and CSOs, two roles. Chief information security officer, and then chief security officer. Amazon, actually, Steven Schmidt is now CSO; at re:Inforce they actually called that out. And the interesting point that he made, and we had some other situations that verified this, is that physical security is now tied to online, to your point about the surface area. If I get a password, I've still got the keys to the physical goods too. >> Right.
So physical security, whether it's a warehouse for them, or store, or retail. Digital is coming in there. >> Yeah. So is there a CISO anymore? Is it just CSO? What's the role? Or are there two roles you see that evolving? Or is that just circumstance? >> I think it's just one. And I think that the stakes are incredibly high in security. Just look at the impact that these security attacks are having. Companies get taken down. Equifax's market cap was cut 80% with a security breach. So security's gone from being sort of a nuisance to being something that can impact your whole business operation. And then there's a whole other domain where politics gets involved. It determines the fate of nations. I know that sounds grand, but it's true. And so companies care so much about it, they're looking for one leader, one throat to choke. One person that's going to lead security in the virtual domain, in the physical domain, in the cyber domain, in the actual. >> I mean, you mentioned that, but I mean, you look at Ukraine. I mean, cyber is a component of that war. I mean, it's very clear. I mean, that's new. We've never seen this. >> And in my opinion, the stuff that we see happening in the Ukraine is small potatoes compared to what could happen. >> Yeah. >> So the US, we have a policy of strategic deterrence. Where we develop some of the most sophisticated cyber weapons in the world. We don't use them. And we hope never to use them. Because our adversaries, who could do stuff like, I don't know, wipe out every bank account in North America. Or turn off the lights in New York City. They know that if they were to do something like that, we could do something back. >> This is the red line conversation, I want to go there. So I had this discussion with Robert Gates in 2016 and he said, "We have a lot more to lose." Which is really your point. >> So this brand. >> I agree that to have freedom and liberty, you've got to strike back with force.
And that's been our way to balance things out. But with cyber, the red line, people are already in banks. So they're operating below the red line. Red line meaning before we know you're in there. So do we move the red line down? Because, hey, Sony got hacked. The movie. Because they don't have their own militia. >> Yeah. >> If there were physical troops on the shores of LA breaking into the file cabinets, the government would've intervened. >> I agree with you that it creates tension for us in the US, because our adversaries don't have the clear delineation between public and private sector. Here it's very, very clear if you're working for the government. Or you work for a private entity. There's no ambiguity on that. >> Collaboration, Tom, in the vendor community. I mean, we've seen efforts to try to. >> That's a good question. >> Monetize private data and private reports. >> So at VMware, I'm very proud of the security capabilities we've built. But we also partner with people that I think of as direct competitors. We've got firewall vendors and endpoint vendors that we work with and integrate. And so coopetition is something that exists. It's hard. Because when you have these kinds of competing interests. So, could we do more? Of course we probably could. But I do think we've done a fair amount of cooperation, data sharing, product integration, et cetera. And as the threats get worse, you'll probably see us continue to do more. >> And the government is going to try to force that too. >> And the government also drives standards. So let's talk about crypto. Okay. So there's a new form of processing coming out called quantum. >> Quantum. Quantum computers have the potential to crack any crypto cipher we have today. That's bad. Okay. That's not good at all, because our whole system is built around these private communications. So the industry is having conversations about crypto agility.
How can we put in place the ability to rapidly iterate the ciphers in encryption? So when the day quantum becomes available, we can change them and stay ahead of these quantum people. >> Well, didn't NIST just put out a quantum-proof algo that's being tested right now by the community? >> There's a lot of work around that. Correct. And NIST is taking the lead on this, but Google's working on it. VMware's working on it. We're very, very active in, how do we keep ahead of the attackers and the bad guys? Because this quantum thing is a, it's an x-ray machine. It's like a dilithium crystal that can power a whole ship. It's a really, really, really powerful tool. >> Bad things will happen. >> Bad things could happen. >> Well, Tom, great to have you on theCUBE. Thanks for coming on. Take the last minute to just give a plug for what's going on for you here at VMworld this year, I mean VMware Explore this year. >> Yeah. We announced a bunch of exciting things. We announced enhancements to our NSX family, with our advanced load balancer, with our edge firewall. And they're all in service of one thing, which is helping our customers make their private cloud like the public cloud. So I like to say 0, 0, 0. If you are in the cloud operating model, you have zero proprietary appliances. You have zero tickets to launch a workload. You have zero network taps, and Zero Trust built into everything you do. And that's what we're working on. Pushing that further and further. >> Tom Gillis, senior vice president, head of networking at VMware. Thanks for coming on. We do appreciate it. >> Thanks for having us. >> Always getting the security data. That's killer data, and security is one of the two ops that get the most conversations, around DevOps and cloud-native. This is theCUBE bringing you all the action here in San Francisco for VMware Explore 2022. I'm John Furrier with Dave Vellante. Thanks for watching. (bright music)
Jason Collier, AMD | VMware Explore 2022
(upbeat music) >> Welcome back to San Francisco, "theCUBE" is live, our day two coverage of VMware Explore 2022 continues. Lisa Martin with Dave Nicholson. Dave and I are pleased to welcome Jason Collier, principal member of technical staff at AMD to the program. Jason, it's great to have you. >> Thank you, it's great to be here. >> So what's going on at AMD? I hear you have some juicy stuff to talk about. >> Oh, we've got a ton of juicy stuff to talk about. Clearly the Project Monterey announcement was big for us, so we've got that to talk about. Another thing that I really wanted to talk about was a tool that we created and we call it, it's the VMware Architecture Migration Tool, call it VAMT for short. It's a tool that we created and we worked together with VMware and some of their professional services crew to actually develop this tool. And it is also an open source based tool. And really the primary purpose is to easily enable you to move from one CPU architecture to another CPU architecture, and do that in a cold migration fashion. >> So we're probably not talking about CPUs from Tandy, Radio Shack systems, likely this would be what we might refer to as other X86 systems. >> Other X86 systems is a good way to refer to it. >> So it's interesting timing for the development and the release of a tool like this, because in this sort of X86 universe, there are players who have been delayed in terms of delivering their next gen stuff. My understanding is AMD has been public with the idea that they're on track for by the end of the year, Genoa, next gen architecture. So can you imagine a situation where someone has an existing set of infrastructure and they're like, hey, you know what I want to get on board, the AMD train, is this something they can use from the VMware environment? >> Absolutely, and when you think about- >> Tell us exactly what that would look like, walk us through 100 servers, VMware, 1000 VMs, just to make the math easy. What do you do? 
How does it work? >> So one, there's several things that the tool can do, we actually went through, the design process was quite extensive on this. And we went through all of the planning phases that you need to go through to do these VM migrations. Now this has to be a cold migration, it's not a live migration. You can't do that between the CPU architectures. But what we do is you create a list of all of the virtual machines that you want to migrate. So we take this CSV file, we import this CSV file, and we ask for things like, okay, what's the name? Where do you want to migrate it to? So from one cluster to another, what do you want to migrate it to? What are the networks that you want to move it to? And then the storage platform. So we can move storage, it could either be shared storage, or we could move say from VSAN to VSAN, however you want to set it up. So it will do those storage migrations as well. And then what happens is it's actually going to go through, it's going to shut down the VM, it's going to take a snapshot, it is going to then basically move the compute and/or storage resources over. And once it does that, it's going to power 'em back up. And it's going to check, we've got some validation tools, where it's going to make sure VM Tools comes back up where everything is copacetic, it didn't blue screen or anything like that. And once it comes back up, then everything's good, it moves onto the next one. Now a couple of things that we've got feature wise, we built into it. You can parallelize these tasks. So you can say, how many of these machines do you want to do at any given time? So it could be, say 10 machines, 50 machines, 100 machines at a time, that you want to go through and do this move. Now, if it did blue screen, it will actually roll it back to that snapshot on the origin cluster. So that there is some protection on that. A couple other things that are actually in there are things like audit tracking. 
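The per-VM flow Jason walks through, shut down, snapshot, move compute and storage, power back up, validate, with an automatic rollback to the origin snapshot on failure and a configurable degree of parallelism, can be sketched roughly like this. This is a hedged illustration in Python with stand-in data structures and function names, not the tool's actual code; VAMT itself is PowerShell/PowerCLI.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins: each VM is a plain dict, and "healthy" simulates
# whether validation (e.g., VM Tools coming back up) succeeds after the move.

def migrate_vm(vm, cluster, network, datastore):
    """Cold-migrate one VM: shut down, snapshot, move, power on, validate."""
    vm["powered_on"] = False                      # shut the VM down
    snapshot = dict(vm)                           # snapshot on the origin cluster
    vm.update(cluster=cluster,                    # move compute...
              network=network,                    # ...re-home the network...
              datastore=datastore)                # ...and migrate the storage
    vm["powered_on"] = True                       # power it back up
    if not vm["healthy"]:                         # validation failed (blue screen)
        vm.clear()
        vm.update(snapshot, powered_on=True)      # roll back to the origin snapshot
        return (vm["name"], "rolled back")
    return (vm["name"], "migrated")

def run_batch(vms, destination, parallelism=10):
    """Run the imported VM list, `parallelism` machines at a time."""
    with ThreadPoolExecutor(max_workers=parallelism) as pool:
        futures = [pool.submit(migrate_vm, vm, *destination) for vm in vms]
        return [f.result() for f in futures]      # one status tuple per VM
```

The key design point mirrored here is that the snapshot is taken before anything moves, so a failed validation leaves the VM running where it started rather than stranded mid-migration.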
So we do full audit logging on this stuff, we take a snapshot, there's basically kind of an audit trail of what happens. There's also full logging, SYS logging, and then also we'll do email reporting. So you can say, run this and then shoot me a report when this is over. Now, one other cool thing is you can also actually define a change window. So I don't want to do this in the middle of the afternoon on a Tuesday. So I want to do this later at night, over the weekend, you can actually just queue this up, set it, schedule it, it'll run. You can also define how long you want that change window to be. And what it'll do, it'll do as many as it can, then it'll effectively stop, finish up, clean up the tasks and then send you a report on what all was successfully moved. >> Okay, I'm going to go down the rabbit hole a little bit on this, 'cause I think it's important. And if I say something incorrect, you correct me. >> No problem. >> In terms of my technical understanding. >> I got you. >> So you've got a VM, essentially a virtual machine typically will consist of an entire operating system within that virtual machine. So there's a construct that containerizes, if you will, the operating system, what is the difference, where is the difference in the instruction set? Where does it lie? Is it in the OS' interaction with the CPU or is it between the construct that is the sort of wrapper around the VM that is the difference? >> It's really primarily the OS, right? And we've not really had too many issues doing this and most of the time, what is going to happen, that OS is going to boot up, it's going to recognize the architecture that it's on, it's going to see the underlying architecture, and boot up. All the major operating systems that we test worked fine. I mean, typically they're going to work on all the X86 platforms. But there might be instruction sets that are kind of enabled in one architecture that may not be in another architecture. 
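The change-window behavior described above, migrate as many VMs as fit in a defined window, then stop, clean up, and report what was and wasn't moved, can be sketched as a simple deadline loop. Again this is an illustrative Python sketch with made-up names, not the real PowerShell implementation.

```python
import time

def run_change_window(vms, migrate, window_seconds):
    """Migrate as many VMs as fit in the change window, then stop and report.

    `migrate` is a callable that performs one cold migration; anything still
    queued when the window closes is deferred to the next scheduled run.
    """
    deadline = time.monotonic() + window_seconds
    migrated, deferred = [], []
    for vm in vms:
        if time.monotonic() >= deadline:   # window closed: stop, clean up
            deferred.append(vm)
            continue
        migrated.append((vm, migrate(vm)))
    return {"migrated": migrated, "deferred": deferred}  # basis of the emailed report
```

A real scheduler would also kick the whole run off at the configured start time (late at night, over the weekend), but the stop-at-deadline-and-report shape is the core of it.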
>> And you're looking for that during this process. >> Well, usually the OS itself is going to kind of detect that. So the one thing that is kind of a caution that you need to look for is if you've got an application that's explicitly using an instruction set that's on one CPU vendor and not the other CPU vendor. That's the one thing where you're probably going to see some application differences. That said, it'll probably be compatible, but you may not get that instruction-set advantage in it. >> But this tool remediates against that. >> Yeah, and what we do, we're actually using VM Tools itself to go through and validate a lot of those components. So we'll look and make sure VM Tools is enabled in the first place, on the source system. And then when it gets to the destination system, we also look at VM Tools to see what is and what is not enabled. >> Okay, I'm going to put you on the spot here. What's the zinger, where doesn't it work? You already said cold, we understand, you can schedule for cold migrations, that's not a zinger. What's the zinger, where doesn't it work? >> It doesn't work like, live migrations just don't work. >> No live, okay, okay, no live. What about something else? What's the, oh, you've got that version of X86 architecture, it won't work, anything? >> A majority of those cases work. Where it would fail, where it's going to kick back and say, hey, VM Tools is not installed, is if you're running a virtual appliance from some vendor, insert vendor here, that say has a firewall or something like that, and they don't have VM Tools enabled. It's going to fail it out of the gate and say, hey, VM Tools is not on this, you might want to manually do it. >> But you can figure out how to fix that? >> You can figure out how to do that. You can also, and there's a flag in there, so in kind of the options that you give it, you say, ignore VM Tools, don't care, move it anyway.
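Jason's caution about applications that explicitly use one vendor's instruction set is the one real portability trap when crossing X86 architectures. The usual defensive pattern, checking CPU feature flags at runtime instead of assuming them, looks something like this. An illustrative Python snippet, not part of VAMT; the parsing assumes Linux's /proc/cpuinfo format, and the function names are made up.

```python
def cpu_flags(cpuinfo_text):
    """Parse the 'flags' line of /proc/cpuinfo-style text into a set of features."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def pick_code_path(flags, wanted="avx2"):
    """Take an optimized path only when the instruction set is actually present."""
    return "optimized" if wanted in flags else "portable"
```

An application written this way keeps working after a cross-vendor migration; it simply falls back to the portable path if the extension it preferred isn't there on the new host.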
So if you've got, like, some VMs that are in there, but they're not a priority VM, then it's going to migrate just fine. >> Got it. >> Can you elaborate a little bit on the joint development work that AMD and VMware are doing together and the value in it for customers? >> Yeah, so it's one of those things, we worked with VMware to basically produce this open source tool. So we did a lot of the core component and design, and we actually engaged VMware Professional Services. And a big shout out to Austin Browder. He helped us a ton in this project specifically. And we basically worked, we created this, kind of co-designed what it was going to look like. And then jointly worked together on the coding, of pulling this thing together. And then after that, and this is actually posted up on VMware's public repos now in GitHub. So you can go to GitHub, you can go to the VMware samples code, and you can download this thing that we've created. And it's really built to help ease migrations from one architecture to another. So if you're looking for a big data center move and you got a bunch of VMs to move. I mean, even if it's same architecture to same architecture, it's definitely going to ease the pain of going through and doing a migration. It's one thing when you're doing 10 machines, but when you're doing 10,000 virtual machines, that's a different story. It gets to be quite operationally inefficient. >> I lose track after three. >> Yeah. >> So I'm good for three, not four. >> I was going to ask you what your target market segment is here. Expand on that a little bit and talk to me about who you're working with and those organizations. >> So really this is targeted toward organizations that have large deployments in enterprise, but also I think this is a big play with channel partners as well.
So folks out there in the channel that are doing these migrations, and they do a lot of these, when you're thinking about the small and mid-size organizations, it's a great fit for that. Especially if they're kind of doing that upgrade, the lift-and-shift upgrade, from here's where you've been five to seven years on an architecture and you want to move to a new architecture. This is really going to help. And this is not a point-and-click GUI kind of thing. It's command line driven, it's using PowerShell, we're using PowerCLI to do the majority of this work. And for channel partners, this is an excellent opportunity to put in the value add as a VAR. And there's a lot of opportunity for, I think, channel partners to really go and take this. And once again, being open source, we expect this to be extensible. We want the community to contribute and put back into this to basically help grow it and make it a more useful tool for doing these cold migrations between CPU architectures. >> Have you seen, in the last couple of years of dynamics, obviously across the world, any industries in particular that are really leading edge for what you guys are doing? >> Yeah, that's really, really interesting. I mean, we've seen it, it's honestly been a very horizontal problem, pretty much across all vertical markets. I mean, we've seen it in financial services, we've seen it in, honestly, pretty much across the board. Manufacturing, financial services, healthcare, we have seen kind of a strong interest in that. And then also we've actually taken this and presented this to some of our channel partners as well. And there's been a lot of interest in it. I think we presented it to about 30 different channel partners, a couple of weeks back, about this. And I got contact from 30 different channel partners that said they're interested in basically helping us work on it.
>> Tagging on to Lisa's question, do you have visibility into the AMD thought process around the timing of your next-gen release versus others that are competitors in the marketplace? How you might leverage that in terms of programs where partners are going out and saying, hey, perfect time, you need a refresh, perfect time to look at AMD, if you haven't looked at them recently. Do you have any insight into that in what's going on? I know you're focused on this area. But what are your thoughts on, well, what's the buzz? What's the buzz inside AMD on that? >> Well, when you look overall, if you look at the Gartner Hype Cycle, when VMware was being broadly adopted, I'm going to be blunt, and I'm going to be honest right here, AMD didn't have a horse in the race. And the majority of those VMware deployments we see are not running on AMD. Now that said, there's an extreme interest in the fact that we've got these very core-dense systems, now that you're at that five-to-seven-year refresh window of pulling in new hardware. And we have extremely attractive hardware when it comes to running virtualized workloads. The test cluster that I'm running at home, I've got that five-to-seven-year-old gear, and I've got some of the, even just the Milan systems that we've got. And I've got three nodes of another architecture going onto AMD. And when I've got these three nodes completely maxed to the number of VMs that I can run on 'em, I'm at a quarter of the capacity of what I'm putting on the new stuff. So what you get is, I mean, we worked the numbers, and it's definitely, it's like a 30% decrease in the amount of resources that you need. >> That's a compelling number. >> It's a compelling number. >> 5%, 10%, nobody's going to do anything for that. You talk 30%. >> 30%. It's meaningful, it's meaningful. Now, you're out of Austin, right? >> Yes.
>> So first thing I thought of when you talk about running clusters in your home is the cost of electricity, but you're okay. >> I'm okay. >> You don't live here, you don't live here, you don't need to worry about that. >> I'm okay. >> Do you have a favorite customer example that you think really articulates the value of AMD when you're in customer conversations and they go, why AMD and you hit back with this? >> Yeah. Actually it's funny because I had a conversation like that last night, kind of random person I met later on in the evening. We were going through this discussion and they were facing exactly this problem. They had that five to seven year infrastructure. It's funny, because the guy was a gamer too, and he's like, man, I've always been a big AMD fan, I love the CPUs all the way since back in basically the Opterons and Athlons right. He's like, I've always loved the AMD systems, loved the graphics cards. And now with what we're doing with Ryzen and all that stuff. He's always been a big AMD fan. He's like, and I'm going through doing my infrastructure refresh. And I told him, I'm just like, well, hey, talk to your VAR and have 'em plug some AMD SKUs in there from the Dells, HPs and Lenovos. And then we've got this tool to basically help make that migration easier on you. And so once we had that discussion and it was great, then he swung by the booth today and I was able to just go over, hey, this is the tool, this is how you use it, here's all the info. Call me if you need any help. >> Yeah, when we were talking earlier, we learned that you were at Scale. So what are you liking about AMD? How does that relate? >> The funny thing is this is actually the first time in my career that I've actually had a job where I didn't work for myself. I've been doing venture backed startups the last 25 years and we've raised couple hundred million dollars worth of investment over the years. And so one, I figured, here I am going to AMD, a larger corporation. 
I'm just like, am I going to be able to make it a year? And I have been here longer than a year and I absolutely love it. The culture at AMD is amazing. We still have that really, I mean, almost that underdog mentality within the organization. And the team that I'm working with is a phenomenal team. And actually, our EVP and our Corp VP were my executive sponsors at a prior company. They were my executive sponsors when I was at Scale. And so my now VP boss calls me up and says, hey, I'm putting a band together, are you interested? And I was kind of enjoying a semi-retirement lifestyle. And then I'm just like, man, because it's you, yes, I am interested. And the group that we're in, the work that we're doing, the way that we're really focusing on forward looking things that are affecting the data center, what's the data center going to look like three to five years from now. It's exciting, and I am having a blast, I'm having the time of my life. I absolutely love it. >> Well, that relationship and the trust that you have with each other, that bleeds into the customer conversations, the partner conversations, the employee conversations, it's all inextricably linked. >> Yes it is. >> And we want to know, you said three to five years out, like what? Just general futurist stuff, where do you think this is going? >> Well, it's interesting. >> So moon collides with the earth in 2025, we already know that. >> So let's dial this back to the Pensando acquisition. When you look at the Pensando acquisition and you look at basically where data centers are today, but then you look at where basically the big hyperscalers are. You look at an AWS, you look at their architecture, you specifically wrap Nitro around that, that's a very different architecture than what's being run in the data center.
And when you look at what Pensando does, that's really starting to bring what these real clouds out there, what these big hyperscalers are running, into the grasp of the data center. And so I think you're going to see a fundamental shift. The next 10 years are going to be exciting because the way you look at a data center now, when you think of what CPUs do, what shared storage, how the networking is all set up, it ain't going to look the same. >> Okay, so the competing vision with that, to play devil's advocate, would be DPUs are kind of expensive. Why don't we just use NICs, give 'em some more bandwidth, and use the cheapest stuff. That's the competing vision. >> That could be. >> Or the alternative vision, and I imagine everything else we've experienced in our careers, they will run in parallel paths, fit for function. >> Well, parallel paths always exist, right? Otherwise, 'cause you know how many times you've heard mainframe's dead, tape's dead, spinning disk is dead. None of 'em are dead, right? The reality is you get to a point within an industry where it basically goes from a growth curve like that to a growth curve like that, it's pretty flat. So from a revenue growth perspective, I don't think you're going to see the revenue growth there. I think you're going to see the revenue growth in DPUs. And when you actually take, they may be expensive now, but you look at what Monterey's doing and you look at the way that those DPUs are getting integrated in at the OEM level. It's going to be a part of it. You're going to order your VxRail and VSAN style boxes, they're going to come with them. It's going to be an integrated component. Because when you start to offload things off the CPU, you've driven your overall utilization up. When you don't have to process NSX on basically the x86, you've just freed up cores, and a considerable amount of them.
And you've also moved that to where there's a more intelligent place for that packet to be processed, right out here on the edge. 'Cause you know what, that might not need to go into the host bus at all. So you have just alleviated any transfers over a PCI bus, over the PCI lanes, into DRAM, all of these components, only to come back with, oh, that bit needs to be on this other machine. So now it's coming in and it's making that decision there. And then you take and integrate that into things like the Aruba Smart Switch, that's running the Pensando technology. So now you've got top of rack that is already making those intelligent routing decisions on where packets really need to go. >> Jason, thank you so much for joining us. I know you guys could keep talking. >> No, I was going to say, you're going to have to come back. You're going to have to come back. >> We've just started to peel the layers of the onion, but we really appreciate you coming by the show, talking about what AMD and VMware are doing, what you're enabling customers to achieve. Sounds like there's a lot of tailwind behind you. That's awesome. >> Yeah. >> Great stuff, thank you. >> It's a great time to be at AMD, I can tell you that. >> Oh, that's good to hear, we like it. Well, thank you again for joining us, we appreciate it. For our guest and Dave Nicholson, I'm Lisa Martin. You're watching "theCUBE Live" from San Francisco, VMware Explore 2022. We'll be back with our next guest in just a minute. (upbeat music)
Mark Nickerson & Paul Turner | VMware Explore 2022
(soft joyful music) >> Welcome back everyone to the live CUBE coverage here in San Francisco for VMware Explore '22. I'm John Furrier with my host Dave Vellante. Three days of wall to wall live coverage. Two sets here at the CUBE, here on the ground floor in Moscone, and we got VMware and HPE back on the CUBE. Paul Turner, VP of products at vSphere and cloud infrastructure at VMware. Great to see you. And Mark Nickerson, Director of Go to Market for Compute Solutions at Hewlett-Packard Enterprise. Great to see you guys. Thanks for coming on. >> Yeah. >> Thank you for having us. >> So we are seeing a lot of traction with GreenLake, congratulations over there at HPE. The customers are changing their business model consumption, starting to see that accelerate. You guys have the deep partnership, we had you guys on earlier yesterday. Talked about the technology partnership. Now, on the business side, where's the action at with HP and you guys with the customer? Because, now as they go cloud native, third phase of the inflection point, >> Yep. >> Multi-cloud, hybrid-cloud, steady state. Where's the action at? >> So I think the action comes in a couple of places. Um, one, we see increased scrutiny around, kind of not only the cost model and the reasons for moving to GreenLake that we've all talked about there, but it's really the operational efficiencies as well. And, this is an area where the long term partnership with VMware has really been a huge benefit. We've actually done a lot of joint engineering over the years, continuing to do that co-development as we bring products like Project Monterey, or next generations of VCF solutions, to live in a GreenLake environment. That's an area where customers not only see the benefits of GreenLake from a business standpoint, um, on a consumption model, but also around the efficiency operationally as well.
Paul, I want to, I want to bring up something that we always talk about on the CUBE, which is experience in the enterprise. Usually it's around, you know, technology strategy, making the right product market fit, but HPE and VMware, I mean, have exceptional depth and experience in the enterprise. You guys have a huge customer base, doesn't churn much, steady state there, you got vSphere, killer product, with a new release coming out, HP, unprecedented, great sales force. Everyone knows that you guys have great experience serving customers. And, it seems like now the fog is clearing, we're seeing clear line of sight into the value proposition, you know, what it's worth, how do you make money with it, how do partners make money? So, it seems like the puzzle's coming together right now with consumption, self-service, developer focus. It just seems to be clicking. What's your take on all this because... >> Oh, absolutely. >> you got that engine there at VMware. >> Yeah. I think what customers are looking for, customers want that cloud kind of experience, but they want it on their terms. So, the work that we're actually doing with the GreenLake offerings that we've done, we've released, of course, our subscription offerings that go along with that. But, so, customers can now get cloud on their terms. They can get systems services. They know that they've got the confidence that we have integrated those services really well. We look at something like vSphere 8, we just released it, right? Well, immediately, day zero, we come out, we've got trusted integrated servers from HPE, Mark and his team have done a phenomenal job. We make sure that it's not just the vSphere releases but VSAN, and we get VSAN ready nodes available. So, the customers get that trusted side of things. And, you know, just think about it. We've... 200,000 joint customers. >> Yeah, that's a lot. >> We've a hundred thousand kind of enabled partners out there.
We've an enormous kind of install base of customers. But also, those customers want us to modernize. And, you know, the fact that we can do that with GreenLake, and then of course with our new features, and our new releases. >> Yeah. And it's nice that the product market fit's going well on both sides. But can you guys share, both of you share, the cadence of the relationship? I mean, we're talking about vSphere, every two years, a major release. Now since vSphere 6, you guys are doing three months' releases, which is amazing. So you guys got your act together there, doing great. But, you guys, so many joint customers, what's the cadence? As stuff comes out, how do you guys put that together? How tightly integrated? Can you share a quick... insight into that dynamic? >> Yeah, sure. So, I mean, Mark can add to this too, but the teams actually work very closely, where every release that we do is jointly qualified. So that's a really, really important thing. But what's more interesting is this... the innovation side of things. Right? If you just think about it, 'cause it's no use to just qualify. That's not that interesting. But, like I said, we've released with vSphere 8, you know... the new enhanced storage architecture. All right? The new, next generation of vSphere. We've got that immediately qualified, ready on HPE equipment. We built out new AI servers, actually with NVIDIA and with HPE. And, we're able to actually push the extremes of... AI and intelligence... on systems. So that's the kind of work. And then, of course, our Project Monterey work. Project Monterey Distributed Services Engine. That's something we're really excited about, because we're not just building a new server anymore, we're actually going to change the way servers are built. Monterey gives us a new platform to build from that we're actually jointly working on. >> So double click on that, and then explain how HPE is taking advantage of it.
I mean, obviously you have more diversity of XPU's, you've got isolation, you've got now better security, and confidential computing, all that stuff. Explain that in some detail, and how does HPE take advantage of that? >> Yeah, definitely. So, if you think about vSphere 8, with vSphere 8 I can now virtualize anything. I can virtualize your CPU's, your GPU's, and now what we call DPU's, or data processing units. A data processing unit, it's... think of it as we're running, actually, effectively another version of ESX, sitting down on this processor. But, that gives us an ability to run applications, and some of the virtualization services, actually down on that DPU. It's separated away from where you run your application. So, all your applications get to consume all your CPU. It's all available to you. Your DPU is used for that virtualization and virtualization services. And that's what we've done. We've been working with HPE and Pensando. Maybe you can talk about some of the new systems that we've built around this too. >> Yeah. So, I mean, that's one of the... you talked about the cadence and that... back to the cadence question real briefly. Paul hit on it. Yeah, there's a certain element of, "Let's make sure that we're certified, we're qualified, we're there day zero." But, that cadence goes a lot beyond it. And, I think Project Monterey is a great example of where that cadence expands into really understanding the solutioning that goes into what the customer's expecting from us. So, to Paul's point, yeah, we could have just qualified the ESX version to go run on a DPU and put that in the market and said, "Okay, great. Customers, we know that it works."
We've actually worked very tightly with VMware to really understand the use case, what the customer needs out of that operating environment, and then provide, in the first instantiation, three very discrete product solutions aimed at different use cases, whether that's a more robust use case for customers who are looking at data intensive, analytics intensive environments, while other customers might be looking at VDI or even edge applications. And so, we've worked really closely with VMware to engineer solutions specific to those use cases, not just to a qualification of an operating environment, not just a qualification of a certain software stack, but really into an understanding of the use case, the customer solution, and how we take that to market with a very distinct point of view alongside our partners. >> And you can configure the processors based on that workload. Is that right? And match the workload characteristics with the infrastructure, is that what I'm getting? >> You do, and actually, well, you've got the same flexibility that we've actually built in, why you love virtualization, why people love it, right? You've got the ability to kind of harness hardware towards your application needs in a very dynamic way. Right? So if you even think about what we built in vSphere 8 from an AI point of view, we're able to scale. We built the ability to actually take network device cards, and GPU cards, and you're able to build those into a kind of composed device. And, you're able to provision those as you're provisioning out VM's. And, the cool thing about that is you want to be able to get extreme IO performance when you're doing deep learning applications, and you can now do that, and you can do it very dynamically, as part of the provisioning. So, that's the kind of stuff. You've got to really think, like, what's the use case? What's the applications? How do we build it?
And, for the DPU side of things, yes, we've looked at how do we take some of our security services, some of our networking services, and we push those services down onto the SmartNIC. It frees up processors. I think the most interesting thing, that you probably saw in the keynote, was we did benchmarks with Redis databases. We were seeing 20-plus, I'm not sure of the exact number, I think it was 27%, I'd have to get the exact number, but a 27% latency improvement, to me... I came from the database background, latency's everything. Latency's king. It's not just... >> Well it's... it's the number one conversation. >> I mean, we talk about multi-cloud, and as you start getting into hybrid. >> Right. >> Latency, data movement, efficiency, I mean, this is all in the workload mindset, and the workhorses that you guys have been working on at HPE with the compute, vSphere, this is the heart of the discussion. I mean, it is under the hood, and we're talking about the engine here, right? >> Sure. >> And people care about this stuff, Mark. This is like... Kubernetes only helps this better with containers. I mean, it's all kind of coming together. Where's that developer piece? 'Cause remember, infrastructure as code, that's what everybody wants. That's the reality. >> Right. Well, I think if you take a look at... at where the genesis of the desire to have this capability came from, it came directly out of the fact that you take a look at the big cloud providers, and sure, the ability to have a part of that operating environment separated out of the CPU frees up as much processing as you possibly can, but it was all in this very locked-down, proprietary, can't touch it, can't develop on it. The big cloud guys owned it. VMware has come along and said, "Okay, we're going to democratize that. We're going to make this available for the masses. We're opening this up so that developers can optimize workloads, can optimize applications to run in this kind of environment."
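The latency figure Paul cites is a simple relative reduction; as a hedged sketch (the millisecond numbers below are invented, only the 27% comes from the keynote anecdote):

```python
# Hedged sketch: how a "27% latency improvement" is typically computed.
# The before/after latencies are illustrative, not the actual benchmark data.

def latency_improvement(baseline_ms: float, offloaded_ms: float) -> float:
    """Relative latency reduction after offloading services to the DPU."""
    return (baseline_ms - offloaded_ms) / baseline_ms

# e.g., latency dropping from 1.00 ms to 0.73 ms is a 27% improvement
print(f"{latency_improvement(1.00, 0.73):.0%}")  # -> 27%
```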
And so, really it's about bringing that cloud experience, that demand that customers have for that simplicity, that flexibility, that efficiency, and then marrying it with the agility and security of having your on premises or hybrid cloud environment. And VMware is kind of helping with that... >> That's resonating with the customer, I got to imagine. >> Yeah. >> What's the feedback you're hearing? When you talk to customers about that, they're like, "Wait a minute, we'd have to like... How long is that going to take? 'Cause that sounds like a one off." >> Yeah. I'll tell you what... >> Everything is a one off now. You could do a one off. It scales. >> What I hear is give me more. We love where we're going in the first instantiation of what we can do with the Distributed Services Engine. We love what we're seeing. How do we do more? How do we drive more workloads in here? How do we get more efficiency? How can we take more of the overhead out of the CPU, free up more cores? And so, it's a tremendously positive response. And then, it's a response that's resonating with, "Love it. Give me more." >> Oh, if you're democratizing, I love that word because it means democratization, but someone's being democratized. Who's... What's... Something when... that means good things are happening, which means someone's not going to be winning out. Who's that? What... >> Well, it's not necessarily that someone's not winning out. (laughs) When you say democratizing, it comes down to making it widely available. It's available to all. And these things... >> No silos. No gatekeepers. Kind of that kind of thing. >> It's a little operationally difficult to use. You've got... Think about the DPU market. It was a divergent market with different vendors going into that market with different kinds of operating systems, and that doesn't work. Right? You've got to actually go and virtualize those DPU's.
So then, we can actually bring application innovation onto those DPU's. We can actually start using them in smart ways. We did the same thing with GPU's. We made them incredibly easy to use. We virtualized those GPU's, and you're able to, you know, provision them in a very simple way. And, we did the same thing with Kubernetes. You mentioned container based applications and modern apps in the one platform now, you can just set up a cluster and you can just say, "Hey, I want that as a modern apps enabled cluster." And boom. It's done. And, all of the configuration, setup, Kubernetes, it's done for you. >> But the same thing with GreenLake too, the democratization aspect of how that changed the business model unleashes... >> Right. >> ...efficiency and just simplicity. >> Oh yeah, absolutely. >> But the other thing was the 20% savings on the Redis benchmark, with no change required at the application level, correct? >> No change at the application level. In the vCenter, you have to set a little flag. >> Okay. You got to tick a box. >> You got to tick a little box... >> So I can live with that. But the point I'm making is that traditionally, we've had an increasing amount of waste doing offloads, and now you're doing them much more efficiently, right? >> Yes. >> Instead of using the traditional x86 way of doing stuff, you're now doing purpose built, applying that to be much more efficient. >> Totally agree. And I think it's going to become even more important. Look at, we are... our run times for our applications, we've got to move to a world where we're building completely confidential applications at all times. And that means that they are secured, encrypted, all traffic is encrypted, whether it's storage traffic, whether it's IO traffic, we've got to make sure we've got a complete root of trust of the applications. And so, to do all of that is actually... compute intensive. It just is.
And so, I think as we move forward and people build much more complete, confidential, compute secured environments, you're going to be encrypting all traffic all the time. You're going to be doing micro-zoning and firewalling down at the VM level so that you've got the protection. You can take a VM, you can move it up to the cloud, it will inherit all of its policies, they will move with it. All of that will take compute capacity. >> Yup. >> The great thing is that the DPU's give us this ability to offload and to use some of that spare compute capacity. >> And isolate, so the applications can't just tunnel in and get access to that. >> You guys got so much going on. You could have your own CUBE show, just on updating what's going on between the two companies, and then the innovation. We got one minute left. Just quickly, what's the goal in the partnership? What's next? You guys going to be in the field together, doing joint customer work? Are there bigger plans? Are there events out there? What are some of your plans together in the marketplace? >> That's you. >> Yup. So, I think, Paul kind of alluded to it. Talk about the fact that you've got a hundred thousand partners in common. The Venn diagram of looking at the HPE channel and the VMware channel, clearly there's an opportunity there to continue to drive a joint go to market message, through both of our sales organizations, and through our shared channel. We have a 25,000 strong... solution architect... force that we can leverage. So as we get these exciting things to talk about, I mean, you talk about Project Monterey, the Distributed Services Engine. That's big news. There's big news around vSphere 8. And so, having those great things to go talk about with that strong sales team, with that strong channel organization, I think you're going to see a lot stronger partnership between VMware and HPE as we continue to do this joint development and joint selling. >> Lots to get enthused about, pretty much there.
>> Oh yeah! >> Yeah, I would just add in that we're actually at a very interesting point as well, where Intel's just coming out with next rev systems, and we're building the next gen of these systems. I think this is a great time for customers to look at that aging infrastructure that they have in place. Now is a time we can look at upgrading it, but when they're moving it, they can move it also to a cloud subscription based model. You can modernize not just what you have in terms of the capabilities, and densify and get much better efficiency, but you can also modernize the way you buy from us and actually move to... >> Real positive change transformation. Checks the boxes there. And puts you in position for... >> You got it. >> ... cloud native development. >> Absolutely. >> Guys, thanks for coming on the CUBE. Really appreciate you coming out of that busy schedule and coming on and giving us the up... But again, we could do a whole show on... all the moving parts and innovation going on with you guys. So thanks for coming on. Appreciate it. Thank you. I'm John Furrier with Dave Vellante. We're back with more live coverage, day two, two sets, three days of wall to wall coverage. This is the CUBE at VMware Explore. We'll be right back.
Breaking Analysis: How the cloud is changing security defenses in the 2020s
>> Announcer: From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is "Breaking Analysis" with Dave Vellante. >> The rapid pace of cloud adoption has changed the way organizations approach cybersecurity. Specifically, the cloud is increasingly becoming the first line of cyber defense. As such, along with communicating to the board and creating a security aware culture, the chief information security officer must ensure that the shared responsibility model is being applied properly. Meanwhile, the DevSecOps team has emerged as the critical link between strategy and execution, while audit becomes the free safety, if you will, in the equation, i.e., the last line of defense. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this "Breaking Analysis", we'll share the latest data on hyperscale, IaaS, and PaaS market performance, along with some fresh ETR survey data. And we'll share some highlights and the puts and takes from the recent AWS re:Inforce event in Boston. But first, the macro. It's earnings season, and that's what many people want to talk about, including us. As we reported last week, the macro spending picture is very mixed and weird. Think back to a week ago when SNAP reported. A player like SNAP misses and the Nasdaq drops 300 points. Meanwhile, Intel, the great semiconductor hope for America, misses by a mile, cuts its revenue outlook by 15% for the year, and the Nasdaq was up nearly 250 points just ahead of the close, go figure. Earnings reports from Meta, Google, Microsoft, ServiceNow, and some others underscored cautious outlooks, especially those exposed to the advertising revenue sector. But at the same time, Apple, Microsoft, and Google were, let's say, less bad than expected. And that brought a sigh of relief. And then there's Amazon, which beat on revenue, it beat on cloud revenue, and it gave positive guidance.
The Nasdaq has seen its best month this month since the isolation economy, which "Breaking Analysis" contributor, Chip Symington, attributes to what he calls an oversold rally. But there are many unknowns that remain. How bad will inflation be? Will the Fed really stop tightening after September? The Senate just approved a big spending bill along with corporate tax hikes, which generally don't favor the economy. And on Monday, August 1st, the market will likely realize that we are in the summer quarter, and there's some work to be done. Which is why it's not surprising that investors sold the Nasdaq at the close today on Friday. Are people ready to call the bottom? Hmm, some maybe, but there's still lots of uncertainty. However, the cloud continues its march, despite some very slight deceleration in growth rates from the two leaders. Here's an update of our big four IaaS quarterly revenue data. The big four hyperscalers will account for $165 billion in revenue this year, slightly lower than what we had last quarter. We expect AWS to surpass $83 billion this year in revenue. Azure will be more than two-thirds the size of AWS, a milestone for Microsoft. Both AWS and Azure came in slightly below our expectations, but still very solid growth at 33% and 46% respectively. GCP, Google Cloud Platform, is the big concern. By our estimates, GCP's growth rate decelerated from 47% in Q1, and was 38% this past quarter. The company is struggling to keep up with the two giants. Remember, both GCP and Azure play a shell game and hide the ball on their IaaS numbers, so we have to use survey data and other means of estimating. But this is how we see the market shaping up in 2022. Now, before we leave the overall cloud discussion, here's some ETR data that shows the net score or spending momentum granularity for each of the hyperscalers. These bars show the breakdown for each company, with net score on the right and, in parentheses, net score from last quarter.
lime green is new adoptions, forest green is spending up 6% or more, the gray is flat, pink is spending at 6% down or worse, and the bright red is replacement or churn. Subtract the reds from the greens and you get net score. One note is this is for each company's overall portfolio, so it's not just cloud. So it's a bit of a mixed bag, but there are a couple points worth noting. First, anything above 40%, or 40 as shown here in the chart, is considered elevated. AWS, as you can see, is well above that 40% mark, as is Microsoft. And if you isolate Microsoft's Azure, only Azure, it jumps above AWS's momentum. Google is just barely hanging on to that 40 line, and Alibaba is well below, with both Google and Alibaba showing much higher replacements, that bright red. But here's the key point. AWS and Azure have virtually no churn, no replacements in that bright red. And all four companies are experiencing single-digit numbers in terms of decreased spending within customer accounts. People may be moving some workloads back on-prem selectively, but repatriation is definitely not a trend to bet the house on, in our view. Okay, let's get to the main subject of this "Breaking Analysis". TheCUBE was at AWS re:Inforce in Boston this week, and we have some observations to share. First, we had keynotes from Steven Schmidt, who used to be the chief information security officer at Amazon Web Services, now he's the CSO, the chief security officer, of Amazon. Essentially, he dropped the I from his title. CJ Moses is the CISO for AWS. Kurt Kufeld of AWS also spoke, as did Lena Smart, who's the MongoDB CISO, and she keynoted and also came on theCUBE. We'll go back to her in a moment. The key point Schmidt made, one of them anyway, was that Amazon sees more data points in a day than most organizations see in a lifetime. Actually, it adds up to quadrillions over a fairly short period of time, I think it was within a month. That's quadrillion with 15 zeros, by the way.
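Backing up to the net score methodology for a moment, the arithmetic behind those colored bars can be sketched in a few lines. The category percentages below are hypothetical, purely for illustration; they are not actual ETR survey figures.

```python
def net_score(new_adoption, spend_up, flat, spend_down, churn):
    """ETR-style net score: subtract the reds from the greens.

    Greens: new adoptions (lime green) and spending up 6% or more (forest green).
    Reds: spending down 6% or worse (pink) and replacement/churn (bright red).
    Inputs are percentages of survey respondents; flat (gray) washes out.
    """
    assert abs((new_adoption + spend_up + flat + spend_down + churn) - 100) < 1e-9
    return (new_adoption + spend_up) - (spend_down + churn)

# A hypothetical vendor with strong momentum and virtually no churn
score = net_score(new_adoption=12, spend_up=45, flat=35, spend_down=7, churn=1)
print(score, "-> elevated" if score > 40 else "-> not elevated")  # 49 -> elevated
```

That 40% watermark is the "elevated" line referenced throughout these charts.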
Now, there was a drill-down focus on data protection and privacy, governance, risk, and compliance, GRC, identity, big, big topic, both within AWS and the ecosystem, network security, and threat detection. Those are the five really highlighted areas. Re:Inforce is really about bringing a lot of best practice guidance to security practitioners, like how to get the most out of AWS tooling. Schmidt had a very strong statement, he said, "I can assure you with a 100% certainty that single controls and binary states will absolutely positively fail." Hence the importance, of course, of layered security. We heard a little bit of chat about getting ready for the future and skating to the security puck, where quantum computing threatens to hack all of the existing cryptographic algorithms, and how AWS is trying to get in front of all that, and a new set of algorithms came out that AWS is testing. And, you know, we'll talk about that maybe in the future, but that's a ways off. And by its prominent presence, the ecosystem was there in force to talk about its role in filling the gaps and picking up where AWS leaves off. We heard a little bit about ransomware defense, but surprisingly, at least in the keynotes, no discussion about air gaps, which we've talked about in previous "Breaking Analysis" episodes as a key factor. We heard a lot about services to help with threat detection and container security and DevOps, et cetera, but there really wasn't a lot of specific talk about how AWS is simplifying the life of the CISO. Now, maybe it's inherently assumed, as AWS did a good job stressing that security is job number one, very credible and believable on that front. But you have to wonder if the world is getting simpler or more complex with cloud. And, you know, you might say, "Well, Dave, come on, of course it's better with cloud." But look, attacks are up, the threat surface is expanding, and new exfiltration records are being set every day.
I think the hard truth is, the cloud is driving businesses forward and accelerating digital, and those businesses are now exposed more than ever. And that's why security has become such an important topic to boards and throughout the entire organization. Now, the other epiphany that we had at re:Inforce is that there are new layers and a new trust framework emerging in cyber. Roles are shifting, and as a direct result of the cloud, things are changing within organizations. And this first hit me in a conversation with long-time cyber practitioner, Wikibon colleague from our early Wikibon days, and friend, Mike Versace. And I spent two days testing the premise that Michael and I talked about. And here's an attempt to put that conversation into a graphic. The cloud is now the first line of defense. AWS specifically, but hyperscalers generally, provide the services, the talent, the best practices, and automation tools to secure infrastructure and their physical data centers. And they're really good at it. The security inside of hyperscaler clouds is best of breed, it's world class. And that first line of defense does take some of the responsibility off of CISOs, but they have to understand and apply the shared responsibility model, where the cloud provider leaves it to the customer, of course, to make sure that the infrastructure they're deploying is properly configured. So in addition to creating a cyber-aware culture and communicating up to the board, the CISO has to ensure compliance with and adherence to the model. That includes attracting and retaining the talent necessary to succeed. Now, on the subject of building a security culture, listen to this clip of Lena Smart, remember, she's the CISO of MongoDB, describing one of the techniques she uses to foster awareness and build a security culture in her organization. Play the clip. >> Having the Security Champion program, so that's just, it's like one of my babies.
That and helping underrepresented groups in MongoDB kind of get on in the tech world are both really important to me. And so the Security Champion program is purely voluntary. We have over 100 members. And these are people, there's no bar to join, you don't have to be technical. If you're an executive assistant who wants to learn more about security, like my assistant does, you're more than welcome. We actually have people grade themselves when they join us. We give them a little tick box, like five is "I walk on security water," one is "I can spell security, but I'd like to learn more." Mixing those groups together has been game-changing for us. >> Now, the next layer is really where it gets interesting. DevSecOps, you know, we hear about it all the time, shifting left. It implies designing security into the code at the dev level. Shift left and shield right is the kind of buzz phrase. But it's getting more and more complicated. So there are layers within the development cycle, i.e., securing the container so the app code can't be threatened by backdoors or weaknesses in the containers. Then, securing the runtime to make sure the code is maintained and compliant. Then, the DevOps platform, so that change management doesn't create gaps and exposures, and screw things up. And this is just for the application security side of the equation. What about the network and implementing zero trust principles, and securing endpoints, and machine-to-machine, and human-to-app communication? So there's a lot of burden being placed on the DevOps team, and they have to partner with the SecOps team to succeed. Those guys are not security experts. And finally, there's audit, which is the last line of defense, or what I called at the open, the free safety, for you football fans. They have to do more than just tick the box for the board. That doesn't cut it anymore. They really have to know their stuff and make sure that what they sign off on is real.
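To make those development-cycle layers concrete, here's a toy sketch of a layered "shift left" gate. None of this is a real scanner or a real AWS or vendor API; the checks and field names are entirely hypothetical, and the point is only that each layer must pass before the next one runs, so a weak container or a non-compliant runtime config stops the pipeline early.

```python
# Toy illustration of layered DevSecOps gates (hypothetical checks only).
def scan_container(image: dict) -> bool:
    # Container layer: e.g., no known-bad base image, no secrets baked in
    return image["base"] not in {"legacy:1.0"} and not image["embedded_secrets"]

def check_runtime(config: dict) -> bool:
    # Runtime layer: e.g., code is signed and a compliance policy is enforced
    return config["signed"] and config["policy"] == "enforced"

def check_change_mgmt(change: dict) -> bool:
    # DevOps platform layer: e.g., change management recorded a peer review
    return change["reviewed"]

def pipeline_ok(image: dict, config: dict, change: dict) -> bool:
    # Each gate must pass in order; any failure short-circuits the pipeline
    return scan_container(image) and check_runtime(config) and check_change_mgmt(change)

ok = pipeline_ok(
    {"base": "alpine:3.18", "embedded_secrets": False},
    {"signed": True, "policy": "enforced"},
    {"reviewed": True},
)
print(ok)  # True
```

Swap the base image for the deny-listed one and the whole pipeline fails at the first gate, which is the shift-left idea in miniature.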
And then you throw ESG into the mix, which is becoming more important, making sure the supply chain is green and also secure. So you can see, while much of this stuff has been around for a long, long time, the cloud is accelerating innovation and the pace of delivery. And so much is changing as a result. Now, next, I want to share a graphic that we shared last week, but with a little different twist. It's an XY graphic with net score or spending velocity on the vertical axis and overlap or presence in the dataset on the horizontal, with that magic 40% red line as shown. Okay, I won't dig into the data and draw conclusions 'cause we did that last week, but two points I want to make. First, look at Microsoft in the upper right-hand corner. They are big in security and they're attracting a lot of dollars in the space. We've reported on this for a while. They're a five-star security company. And every time I've run this chart, from a spending standpoint in the ETR data, that little methodology we use, I've wondered, where the heck is AWS? Why aren't they showing up there? If security is so important to AWS, which it is, and its customers, why aren't they spending money with Amazon on security? And I asked this very question to Merritt Baer, who resides in the office of the CISO at AWS. Listen to her answer. >> It doesn't mean don't spend on security. There is a lot of goodness that we have to offer in ESS, external security services. But I think one of the unique parts of AWS is that we don't believe that security is something you should buy, it's something that you get from us. It's something that we do for you a lot of the time. I mean, this is the definition of the shared responsibility model, right? >> Now, maybe that's good messaging to the market. Merritt, you know, didn't say it outright, but essentially, Microsoft, they charge for security. At AWS, it comes with the package. But it does answer my question.
And, of course, the fact is that AWS can subsidize all this with egress charges. Now, on the flip side of that, (chuckles) you got Microsoft, you know, they're both, they're competing now. Take CrowdStrike, for instance. Microsoft and CrowdStrike compete with each other head to head. So it's an interesting dynamic within the ecosystem. Okay, but I want to turn to a powerful example of how AWS designs in security. And that is the idea of confidential computing. Of course, AWS is not the only one, but we're coming off of re:Inforce, and I really want to dig into something that David Floyer and I have talked about in previous episodes. And we had an opportunity to sit down with Arvind Raghu and J.D. Bean, two security experts from AWS, to talk about this subject. And let's share what we learned and why we think it matters. First, what is confidential computing? That's what this slide is designed to convey. AWS would describe it this way. It's the use of special hardware and the associated firmware that protects customer code and data from any unauthorized access while the data is in use, i.e., while it's being processed. That's oftentimes a security gap. And there are two dimensions here. One is protecting the data and the code from operators at the cloud provider, i.e., in this case, AWS, and the other is protecting the data and code from the customers themselves. In other words, from admin-level users or possible malicious actors on the customer side where the code and data is being processed. And there are three capabilities that enable this. First, the AWS Nitro System, which is the foundation for virtualization. The second is Nitro Enclaves, which isolate environments, and then third, the Nitro Trusted Platform Module, TPM, which enables cryptographic assurances of the integrity of the Nitro instances. Now, we've talked about Nitro in the past, and we think it's a revolutionary innovation, so let's dig into that a bit.
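As a rough illustration of what "cryptographic assurances of integrity" means, here's a toy sketch of the TPM-style measure-and-extend operation. The real NitroTPM follows the TPM 2.0 specification and does this in hardware; the component names below are made up, and this only demonstrates the concept that a single hash register can commit to an entire boot sequence.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new_value = SHA-256(old_value || H(measurement)).

    Because order matters, the final register value commits to the whole
    measured sequence, not just the set of components.
    """
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = b"\x00" * 32  # platform configuration registers start zeroed
for component in [b"firmware-v1", b"bootloader-v1", b"kernel-v1"]:
    pcr = extend(pcr, component)
golden = pcr  # expected value for a known-good instance

# A verifier recomputes the chain; any tampered component changes the result
pcr2 = b"\x00" * 32
for component in [b"firmware-v1", b"bootloader-TAMPERED", b"kernel-v1"]:
    pcr2 = extend(pcr2, component)

print(pcr2 == golden)  # False: the integrity check fails
```

An attestation service compares the reported register against the golden value and refuses to release secrets if they differ.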
This is an AWS slide that was shared about how they protect and isolate data and code. On the left-hand side is a classical view of a virtualized architecture. You have a single host or a single server, and those white boxes represent processes on the main board, X86, which could be Intel or AMD, or alternative architectures. And you have the hypervisor at the bottom, which translates instructions to the CPU, allowing direct execution from a virtual machine into the CPU. But notice, you also have blocks for networking, and storage, and security. And the hypervisor emulates or translates I/Os between the physical resources and the virtual machines. And it creates some overhead. Now, companies like VMware, and others, have done a great job of stripping out some of that overhead, but there's still overhead there. That's why people still like to run on bare metal. And while it's not shown in the graphic, there's an operating system in there somewhere, which is privileged, so it's got access to these resources, and it provides the services to the VMs. Now, on the right-hand side, you have the Nitro system. And you can see immediately the differences between the left and right, because the networking, the storage, the security, the management, et cetera, have been separated from the hypervisor and that main board, which has the Intel, AMD, throw in Graviton and Trainium, you know, whatever XPUs are in use in the cloud. And you can see that orange Nitro hypervisor. That is a purpose-built, lightweight component for this system. And all the other functions are separated in isolated domains. So very strong isolation between the cloud software and the physical hardware running workloads, i.e., those white boxes on the main board. Now, this will run at practically bare-metal speeds, and there are other benefits as well. One of the biggest is security.
As we've previously reported, this came out of AWS's acquisition of Annapurna Labs, which we've estimated was picked up for a measly $350 million, which is a drop in the bucket for AWS to get such a strategic asset. And there are three enablers on this side. One is the Nitro cards, which are accelerators to offload that wasted work that's typically done by the X86 in traditional architectures. We've estimated 25% to 30% of core capacity and cycles is wasted on those offloads. The second is the Nitro security chip, which is embedded and extends the root of trust to the main board hardware. And finally, the Nitro hypervisor, which allocates memory and CPU resources. So the Nitro cards communicate directly with the VMs without the hypervisors getting in the way, and they're not in the path. And all that data is encrypted while it's in motion, and of course, encryption at rest has been around for a while. We presumed it was an Arm-based architecture and asked AWS to confirm that, or is it some other type, maybe a hybrid using X86 and Arm? They told us the following, and we quote: "The SoCs, systems on chips, for these hardware components are purpose-built and custom designed in-house by Amazon and Annapurna Labs, the same group responsible for other silicon innovations such as Graviton, Inferentia, Trainium, and AQUA. The Nitro cards are Arm-based and do not use any X86 or X86/64 bit CPUs." Okay, so it confirms what we thought. So you may say, "Why should we even care about all this technical mumbo jumbo, Dave?" Well, a year ago, David Floyer and I published this piece explaining why Nitro and Graviton are secret weapons of Amazon that have been a decade in the making, and why everybody needs some type of Nitro to compete in the future. These Nitro innovations and the custom silicon were enabled by the Annapurna acquisition. And AWS has the volume economics to make custom silicon. Not everybody can do it.
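To put that 25% to 30% offload estimate in perspective, here's a quick back-of-envelope sketch. The percentages are the episode's own estimates, not AWS-published figures, and the arithmetic just shows what reclaiming that overhead implies for usable capacity.

```python
# If 25% to 30% of host CPU cycles go to virtualization work (networking,
# storage, security emulation), moving that work onto dedicated Nitro cards
# hands those cycles back to guest workloads.
def uplift(overhead: float) -> float:
    """Relative usable capacity after reclaiming the offload overhead."""
    return 1.0 / (1.0 - overhead)

for overhead in (0.25, 0.30):
    print(f"{overhead:.0%} overhead -> {uplift(overhead):.2f}x usable capacity")
# 25% overhead -> 1.33x usable capacity
# 30% overhead -> 1.43x usable capacity
```

In other words, under these assumptions a Nitro-style host delivers roughly a third more usable compute from the same main-board silicon, before any performance benefit from the accelerators themselves.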
And it's leveraging the Arm ecosystem, the standard software, and the fabrication volume, the manufacturing volume, to revolutionize enterprise computing. Nitro, with alternative processor architectures like Graviton and others, enables AWS to be on a performance, cost, and power consumption curve that blows away anything we've ever seen from Intel. And Intel's disastrous earnings results that we saw this past week are a symptom of this mega trend that we've been talking about for years. In the same way that Intel and X86 destroyed the market for RISC chips, thanks to PC volumes, Arm is blowing away X86 with volume economics that cannot be matched by Intel, thanks, of course, to mobile and edge. Our prediction is that these innovations and the Arm ecosystem are migrating and will migrate further into enterprise computing, which is Intel's stronghold. Now, that stronghold is getting eaten away by the likes of AMD, Nvidia, and of course, Arm in the form of Graviton and other Arm-based alternatives. Apple, Tesla, Amazon, Google, Microsoft, Alibaba, and others are all designing custom silicon, and doing so much faster than Intel can go from design to tape out, roughly cutting that time in half. And the premise of this piece is that every company needs a Nitro to enable alternatives to the X86 in order to support emergent workloads that are data-rich and AI-based, and to compete from an economic standpoint. So while at re:Inforce, we heard that the impetus for Nitro was security. Of course, the Arm ecosystem and its ascendancy have enabled, in our view, AWS to create a platform that will set the pace for the enterprise computing market this decade and beyond. Okay, that's it for today. Thanks to Alex Morrison, who is on production. And he does the podcast. And Ken Schiffman, the newest member of our Boston studio team, is also on production. Kristen Martin and Cheryl Knight help spread the word on social media and in the community.
And Rob Hof is our editor-in-chief over at SiliconANGLE. He does some great, great work for us. Remember, all these episodes are available as podcasts. Wherever you listen, just search "Breaking Analysis" podcast. I publish each week on wikibon.com and siliconangle.com. Or you can email me directly at David.Vellante@siliconangle.com, DM me @dvellante, or comment on my LinkedIn posts. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. Be well, and we'll see you next time on "Breaking Analysis." (upbeat theme music)
Breaking Analysis: H1 of '22 was ugly… H2 could be worse. Here's why we're still optimistic
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> After a two-year epic run in tech, 2022 has been an epically bad year. Through yesterday, the NASDAQ Composite is down 30%. The S&P 500 is off 21%. And the Dow Jones Industrial Average is down 16%. And the poor holders of Bitcoin have had to endure a nearly 60% decline year to date. But judging by the attendance and enthusiasm at major in-person tech events this spring, you'd never know that tech was in the tank. Moreover, walking around the streets of Las Vegas, where most tech conferences are held these days, one can't help but notice that the good folks of Main Street don't seem the least bit concerned that the economy is headed for a recession. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we'll share our main takeaways from the first half of 2022 and talk about the outlook for tech going forward, and why, despite some pretty concerning headwinds, we remain sanguine about tech generally, but especially enterprise tech. Look, here's the bumper sticker on why many folks are really bearish at the moment. Of course, inflation is high, and other than last year, the previous inflation high this century was in July of 2008, when it was 5.6%. Inflation has proven to be very, very hard to tame. You got gas at $7 a gallon. Energy prices, they're not going to suddenly drop. Interest rates are climbing, which will eventually damage housing. Going to have that ripple effect, no doubt. We're seeing layoffs at companies like Tesla, and the crypto names are also trimming staff. Workers, however, are still in short supply, so wages are going up. Companies in retail are really struggling with the right inventory, and they can't even accurately guide on their earnings. We've seen a version of this movie before.
Now, as it pertains to tech, Crawford Del Prete, who's the CEO of IDC, explained this on theCUBE this very week. And I thought he did a really good job. He said the following, >> Matt, you have a great statistic that 80% of companies used COVID as their point to pivot into digital transformation, and to invest in a different way. And so what we saw now is that tech is now where I think companies need to focus. They need to invest in tech. They need to make people more productive with tech, and it played out in the numbers. Now, so this year, what's fascinating is we're looking at two vastly different markets. We got gasoline at $7 a gallon. We've got that affecting food prices. Interesting fun fact: it now costs over $1,000 to fill an 18-wheeler. All right, based on, I mean, this just kind of can't continue. So you think about it. >> Don't put the boat in the water. >> Yeah, yeah, yeah. Good luck if ya, yeah exactly. So a family has kind of this bag of money, and that bag of money goes up by maybe three, 4% every year, depending upon earnings. So that is sort of sloshing around. So if food and fuel and rent is taking up more, gadgets and consumer tech are not. You're going to use that iPhone a little longer. You're going to use that Android phone a little longer. You're going to use that TV a little longer. So consumer tech is getting crushed, really it's very, very, and you saw it immediately in ad spending. You've seen it in Meta, you've seen it in Facebook. Consumer tech is doing very, very, it is tough. Enterprise tech, we haven't been in the office for two and a half years. We haven't upgraded, whether that be campus wifi, whether that be servers, whether that be commercial PCs, as much as we would have. So enterprise tech, we're seeing double-digit order rates. We're seeing strong, strong demand. We have combined that with a component shortage, and you're seeing some enterprise companies with a quarter of backlog, I mean that's really unheard of.
>> And higher prices, which also profit. >> And therefore that drives up the prices. >> And this is a theme that we've heard this year at major tech events, they've really come roaring back. Last year, theCUBE had a huge presence at AWS Reinvent. The first Reinvent since 2019, it was really well attended. Now this was before the effects of the omicron variant, before they were really well understood. And in the first quarter of 2022, things were pretty quiet as far as tech events go But theCUBE'a been really busy this spring and early into the summer. We did 12 physical events as we're showing here in the slide. Coupa, did Women in Data Science at Stanford, Coupa Inspire was in Las Vegas. Now these are both smaller events, but they were well attended and beat expectations. San Francisco Summit, the AWS San Francisco Summit was a bit off, frankly 'cause of the COVID concerns. They were on the rise, then we hit Dell Tech World which was packed, it had probably around 7,000 attendees. Now Dockercon was virtual, but we decided to include it here because it was a huge global event with watch parties and many, many tens of thousands of people attending. Now the Red Hat Summit was really interesting. The choice that Red Hat made this year. It was purposefully scaled down and turned into a smaller VIP event in Boston at the Western, a couple thousand people only. It was very intimate with a much larger virtual presence. VeeamON was very well attended, not as large as previous VeeamON events, but again beat expectations. KubeCon and Cloud Native Con was really successful in Spain, Valencia, Spain. PagerDuty Summit was again a smaller intimate event in San Francisco. And then MongoDB World was at the new Javits Center and really well attended over the three day period. There were lots of developers there, lots of business people, lots of ecosystem partners. 
And then the Snowflake summit in Las Vegas, it was the most vibrant from the standpoint of the ecosystem with nearly 10,000 attendees. And I'll come back to that in a moment. Amazon re:Mars is the Amazon AI robotic event, it's smaller but very, very cool, a lot of innovation. And just last week we were at HPE Discover. They had around 8,000 people attending which was really good. Now I've been to over a dozen HPE or HPE Discover events, within Europe and the United States over the past decade. And this was by far the most vibrant, lot of action. HPE had a little spring in its step because the company's much more focused now but people was really well attended and people were excited to be there, not only to be back at physical events, but also to hear about some of the new innovations that are coming and HPE has a long way to go in terms of building out that ecosystem, but it's starting to form. So we saw that last week. So tech events are back, but they are smaller. And of course now a virtual overlay, they're hybrid. And just to give you some context, theCUBE did, as I said 12 physical events in the first half of 2022. Just to compare that in 2019, through June of that year we had done 35 physical events. Yeah, 35. And what's perhaps more interesting is we had our largest first half ever in our 12 year history because we're doing so much hybrid and virtual to compliment the physical. So that's the new format is CUBE plus digital or sometimes just digital but that's really what's happening in our business. So I think it's a reflection of what's happening in the broader tech community. So everyone's still trying to figure that out but it's clear that events are back and there's no replacing face to face. Or as I like to say, belly to belly, because deals are done at physical events. All these events we've been to, the sales people are so excited. They're saying we're closing business. 
Pipelines coming out of these events are much stronger, than they are out of the virtual events but the post virtual event continues to deliver that long tail effect. So that's not going to go away. The bottom line is hybrid is the new model. Okay let's look at some of the big themes that we've taken away from the first half of 2022. Now of course, this is all happening under the umbrella of digital transformation. I'm not going to talk about that too much, you've had plenty of DX Kool-Aid injected into your veins over the last 27 months. But one of the first observations I'll share is that the so-called big data ecosystem that was forming during the hoop and around, the hadoop infrastructure days and years. then remember it dispersed, right when the cloud came in and kind of you know, not wiped out but definitely dampened the hadoop enthusiasm for on-prem, the ecosystem dispersed, but now it's reforming. There are large pockets that are obviously seen in the various clouds. And we definitely see a ecosystem forming around MongoDB and the open source community gathering in the data bricks ecosystem. But the most notable momentum is within the Snowflake ecosystem. Snowflake is moving fast to win the day in the data ecosystem. They're providing a single platform that's bringing different data types together. Live data from systems of record, systems of engagement together with so-called systems of insight. These are converging and while others notably, Oracle are architecting for this new reality, Snowflake is leading with the ecosystem momentum and a new stack is emerging that comprises cloud infrastructure at the bottom layer. Data PaaS layer for app dev and is enabling an ecosystem of partners to build data products and data services that can be monetized. That's the key, that's the top of the stack. 
So let's dig into that further in a moment but you're seeing machine intelligence and data being driven into applications and the data and application stacks they're coming together to support the acceleration of physical into digital. It's happening right before our eyes in every industry. We're also seeing the evolution of cloud. It started with the SaaS-ification of the enterprise where organizations realized that they didn't have to run their own software on-prem and it made sense to move to SaaS for CRM or HR, certainly email and collaboration and certain parts of ERP and early IS was really about getting out of the data center infrastructure management business called that cloud 1.0, and then 2.0 was really about changing the operating model. And now we're seeing that operating model spill into on-prem workloads finally. We're talking about here about initiatives like HPE's Green Lake, which we heard a lot about last week at Discover and Dell's Apex, which we heard about in May, in Las Vegas. John Furrier had a really interesting observation that basically this is HPE's and Dell's version of outposts. And I found that interesting because outpost was kind of a wake up call in 2018 and a shot across the bow at the legacy enterprise infrastructure players. And they initially responded with these flexible financial schemes, but finally we're seeing real platforms emerge. Again, we saw this at Discover and at Dell Tech World, early implementations of the cloud operating model on-prem. I mean, honestly, you're seeing things like consoles and billing, similar to AWS circa 2014, but players like Dell and HPE they have a distinct advantage with respect to their customer bases, their service organizations, their very large portfolios, especially in the case of Dell and the fact that they have more mature stacks and knowhow to run mission critical enterprise applications on-prem. 
So John's comment was quite interesting, that these firms are basically building their own version of Outposts. Outposts obviously came into their wheelhouse, and now they've finally responded. And this is setting up cloud 3.0, or Supercloud, as we like to call it: an abstraction layer that sits above the clouds and serves as a unifying experience across a continuum of on-prem, across clouds, whether it's AWS, Azure, or Google, and out to both the near and far edge, near edge being a Lowe's or a Home Depot, but far edge could be space. And that edge, again, is fragmented. You've got examples like the retail stores at the near edge. Outer space maybe is the far edge, and IoT devices are perhaps the tiny edge. No one really knows how the tiny edge is going to play out, but it's pretty clear that it's not going to comprise traditional x86 systems with a cool name tossed out to the edge. Rather, it's likely going to require a new low-cost, low-power, high-performance architecture, most likely ARM-based, that will enable things like real-time AI inferencing at that edge. Now, we've talked about this a lot on Breaking Analysis, so I'm not going to double-click on it. But suffice to say that it's very possible that new innovations are going to emerge from the tiny edge that could really disrupt the enterprise in terms of price performance. Okay, two other quick observations. One is that data protection is becoming a much closer cohort to the security stack, where data immutability and air gaps and fast recovery are increasingly becoming a fundamental component of the security strategy to combat ransomware and recover from other potential hacks or disasters. And I've got to say, from our observation, Veeam is leading the pack here. It's now claiming the number one revenue spot, in a statistical dead heat with Dell's data protection business. That's according to Veeam, citing IDC data. And so that space continues to be of interest. And finally, Broadcom's acquisition of VMware.
It's going to have ripple effects throughout the enterprise technology business. And of course, there are a lot of questions that remain, but here's the one other thing that John Furrier and I were discussing last night. John looked at me and said, "Dave, imagine if VMware runs better on Broadcom components, and OEMs that use Broadcom run VMware better. Maybe Broadcom doesn't even have to raise prices on VMware licenses. Maybe they'll just raise prices on the OEMs and let them raise prices to the end customer." Interesting thought. I think because Broadcom is so P&L-focused, it's probably not going to be the prevailing model, but we'll see what happens to some of the strategic projects, like Monterey and Capitola and Thunder. We've talked a lot about Project Monterey; the others, we'll see if they can make the cut. That's one of the big concerns, because it's how OEMs, like the ones that are building their versions of Outposts, are going to compete with the cloud vendors, namely AWS, in the future. I want to come back to the comment on the data stack for a moment that we were talking about earlier. We talked about how the big data ecosystem that was once coalescing around Hadoop dispersed. Well, the data value chain is reforming, and we think it looks something like this picture, where cloud infrastructure lives at the bottom. We've said many times the cloud is expanding and evolving. And if companies like Dell and HPE can truly build a supercloud infrastructure experience, then they will be in a position to capture more of the data value. If not, then it's going to go to the cloud players. And there's a live data layer that is increasingly being converged into platforms that not only simplify the movement and ELTing of data but also allow organizations to compress the time to value.
Now there's a layer above that, we sometimes call it the super PaaS layer if you will, that must comprise open source tooling. Partners are going to write applications, leverage platform APIs, and build data products and services that can be monetized at the top of the stack. So when you observe the battle for the data future, it's unlikely that any one company is going to be able to do this all on their own, which is why I often joke that the 2020s version of a sweaty Steve Ballmer running around the stage screaming "developers, developers, developers" and getting the whole audience into it is now about ecosystem, ecosystem, ecosystem. Because when you need to fill gaps and accelerate features and provide optionality, the list of capabilities on the left-hand side of this chart is going to come from a variety of different companies and places. We're talking about catalogs and AI tools and data science capabilities, data quality and governance tools. And it should be of no surprise to followers of Breaking Analysis that on the right-hand side of this chart we're including the four principles of data mesh, which of course were popularized by Zhamak Dehghani: decentralized data ownership, data as products, self-serve platform, and automated or computational governance. Now, whether this vision becomes a reality via a proprietary platform like Snowflake, or is somehow replicated by open source, remains to be seen, but history generally shows that a de facto standard for more complex problems like this often emerges prior to an open source alternative. And that's where I would place my bets, although even that proprietary platform has to include open source optionality. But it's not a winner-take-all market. There's plenty of room for multiple players and ecosystem innovators, but the winner will definitely take more, in my opinion.
Okay, let's close with some ETR data that looks at some of those major platform plays who talk a lot about digital transformation and world-changing, impactful missions. And they have the resources really to compete. This is an XY graphic, a view that we often show. It's got net score on the vertical axis, that's a measure of spending momentum, and overlap, or presence in the ETR survey, on the horizontal axis. The red dotted line at 40% indicates that the platform is among the highest in terms of spending velocity, which is why I always point out how impressive that makes AWS and Azure, because not only are they large on the horizontal axis, the spending momentum on those two platforms rivals even that of Snowflake, which continues to lead all on the vertical axis. Now, while Google has momentum, given its goals and resources, it's well behind the two leaders. We've added ServiceNow and Salesforce, two platform names that have become the next great software companies, joining the likes of Oracle, which we show here, and SAP, not shown, along with IBM, which you can see on this chart. We've also plotted MongoDB, which we think has real momentum as a company generally, but also with Atlas, its managed cloud database as a service, specifically, and Red Hat, which is trying to become the standard for app dev in Kubernetes environments, the hottest trend right now in application development and application modernization. Everybody's doing something with Kubernetes, and of course Red Hat with OpenShift wants to make that a better experience than do-it-yourself. DIY brings a lot more complexity. And finally, we've got HPE and Dell, both of which we've talked about pretty extensively here, and VMware and Cisco. Now, Cisco is executing on its portfolio strategy. It's got a lot of diverse components to its company, and it's coming at the cloud, of course, from a networking and security perspective. And that's their position of strength.
And VMware is a staple of the enterprise. Yes, there's some uncertainty with regards to the Broadcom acquisition, but one thing is clear: vSphere isn't going anywhere. It's entrenched and will continue to run lots of IT for years to come, because it's the best platform on the planet. Now, of course, these are just some of the players in the mix. We expect numerous non-traditional technology companies, and this is important, to emerge as new cloud players. We've put a lot of emphasis on the data ecosystem because to us that's really going to be the mainspring of digital; i.e., a digital company is a data company, and that means an ecosystem of data partners that can advance outcomes like better healthcare, faster drug discovery, less fraud, cleaner energy, autonomous vehicles that are safer, smarter, more efficient grids and factories, better government, and a virtually endless litany of societal improvements that can be addressed. And these companies will be building innovations on top of cloud platforms, creating their own superclouds, if you will. And they'll come from non-traditional places and industries, like finance, that take their data, their software, their tooling, bring them to their customers, and run them on various clouds. Okay, that's it for today. Thanks to Alex Myerson, who is on production and does the podcast for Breaking Analysis. Kristin Martin and Cheryl Knight help get the word out. And Rob Hof is our editor in chief over at SiliconANGLE and helps edit our posts. Remember, all these episodes are available as podcasts wherever you listen. All you've got to do is search "Breaking Analysis podcast." I publish each week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com or DM me at dvellante, or comment on my LinkedIn posts. And please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE's Insights powered by ETR. Thanks for watching, be well.
And we'll see you next time on Breaking Analysis. (upbeat music)
Mike Beltrano, AMD & Phil Soper, HPE | HPE Discover 2022
(soft upbeat music) >> Narrator: theCUBE presents HPE Discover 2022 brought to you by HPE. >> Hey everyone. Welcome back to Las Vegas. theCUBE is live. We love saying that. theCUBE is live at HPE Discover '22. It's about 8,000 HP folks here, customers, partners, leadership. It's been an awesome day one. We're looking forward to a great conversation next. Lisa Martin, Dave Vellante, two guests join us. We're going to be talking about the power of the channel. Mike Beltrano joins us, Worldwide Channel Sales Leader at AMD, and Phil Soper is here, the North America Head of Channel Sales at HPE. Guys, great to have you. >> Thanks for having us. >> Great to be here. >> So we're talking a lot today about the ecosystem. It's evolved tremendously. Talk to us about the partnership. Mike, we'll start with you. Phil, we'll go to you. What's new with HPE and AMD Better Together? >> It's more than a partnership. It's actually a relationship. We are really tied at the hip, not just in X86 servers but we're really starting to get more diverse in HP's portfolio. We're in their hyper-converged solutions, we're in their storage solutions, we're in GreenLake. It's pretty hard to get away from AMD within the HP portfolio so the relationship is really good. It's gone beyond just a partnership so starting to transition now down into the channel, and we're really excited about it. >> Phil, talk about that more. Talk about the evolution of the partnership and that kind of really that pull-down. >> I think there's an impression sometimes that AMD is kind of the processor that's in our computers and it's so much more, the relationship is so much more than the inclusion of the technology. We co-develop solutions. Interesting news today at Antonio's presentation of the first Exascale supercomputer. We're solving health problems with the supercomputer that was co-developed between AMD and HPE. 
The other thing I would add is from a channel perspective, it's way more than just what's in the technology. It's how we engage and how we go to market together. And we're very active in working together to offer our solutions to customers and to be competitive and to win. >> Describe that go-to-market model that you guys have, specifically in the channel. >> So, there is a, his organization and mine, we develop joint go-to-market channel programs. We work through the same channel ecosystem of partners. We engage on specific opportunities. We work together to make sure we have the right creative solution pricing to be aggressive in the marketplace and to compete. >> It's a great question because we're in a supply chain crisis right now, right? And you look at the different ways that HP can go to market through the channel. There's probably about four or five ways that channel partners can provide solutions, but it's also route to purchase for the customers. So, we're in a supply chain crisis right now, but we have HP AMD servers in stock in distribution right now. That's a real big competitive advantage, okay? And if those aren't exactly what you need, HP can do custom solutions with AMD platforms all day, across the board. And if you want to go ahead and do it through the cloud, you've got AMD technology in GreenLake. So, it's pretty much have it your way for the customers through the channel and it's really great for the customers too because there's multiple ways for them to procure the equipment through the channel so we really love the way that HP allows us to kind of integrate into their products, but then integrate into their procurement model down through the channel for the end user to make the right choice. So, it's fantastic. >> You mentioned that AMD's in HCI, in storage, in GreenLake and in the channel. What are the different requirements within those areas? How does the channel influence those requirements and what you guys actually go to market with? 
>> Well, it comes down to awareness. Awareness is our biggest enemy, and the channel's just huge for us because AMD's competitive advantage in our technology is much different. And when you think about price and performance and security and sustainability, that's what we're delivering. And really, the channel kind of plugs that in and educates their customers through their marketing and demand gen, and kind of influences, when they hear from their customers or if they're proactively touching them, the route to purchase based on their situation: if they want to pay for it as a service, if they want to finance it, if it does happen to be in stock and speed of delivery is important to them, the channel partner influences that through the relationships and distribution, or they can go ahead and place it as a custom order. So, it's just really based on where they're at in their purchasing cycle, and also, it's not about the hardware as much as it's about the software and the applications and the high-value workloads that they're running, and that kind of just dictates the platform. >> Does hardware matter? >> Yes, it sure does. It does, man. We're just kind of, it's kind of like the vessel at this point, and our processors and our GPUs are in the HP vessel, but it is about the application. >> I love that analogy. I would say it absolutely does. Workloads matter more, and then what hardware runs those workloads is really critical. >> And to your point, though, it's not just about the CPU anymore. It's about, you guys have made some acquisitions to sort of diversify, it's about all the other supporting sort of actors, if you will, that support those new workloads. >> Let me give you an example that's being showcased at this show, okay? Our extreme search solution being driven by Splunk, okay?
And it's a cybersecurity solution for what the industry is going to have to be able to handle in regards to responding to any sort of breach, and when you think about it, they have to search through the data and get through it in a timely fashion. What we've done is developed a DL385 solution where we have an EPYC processor from AMD, we have a Xilinx, who we own now, their FPGA, and Samsung SSDs at four terabytes per drive, packed in a DL385. Now you add the Splunk solution on top of that, and if there ever is a breach, it would normally take days to go ahead and assess that breach. Now it can be done in 25 minutes, and we have that solution here right now. So it's not like we acquired Xilinx and we're waiting to integrate it. We hit the ground running, and it's fantastic 'cause the solution's being driven by one of our top partners, WWT, and it's live in their booth here today. So we're kind of showing that integration of what AMD is doing with our acquisitions in HP servers, and being able to show that today with a workload on top of it is the real deal. >> Purpose-built to scan through all those log files and actually surface the insight. >> Exactly what it is, and in public sector right now, that's a requirement, to be able to do that, and to not have it take weeks and be able to do it in 25 minutes is pretty impressive. >> Those are the outcomes customers are demanding? >> That's it. If you're purchasing an outcome, HP can deliver it with AMD, and if you're looking to build your own, we can give it to you that way too. So, it's flexibility. >> Absolutely critical. Mike, from your perspective on the partnership we've seen, and obviously a lot of transformation at HPE over the last couple of years, Antonio stood on this stage three years ago and said, "By 2022, we're going to deliver the entire portfolio as a service."
How influential has AMD been, from a relationship perspective, on what he said three years ago and where they are today? >> Oh my gosh! We've been with them all the way through. I mean, HP is just such a great partner, and right now, we're the VDI solution on GreenLake, so it's HPE GreenLake VDI solutions powered by AMD. We love that brand recognition as a service, okay? Same with high-performance computing powered by AMD, offered on HPE GreenLake. So it's really changed things a lot, because as a service is just a different way for a customer to procure it, and they don't have to worry about that hardware and the stack and anything like that. It's more about them going into that GreenLake portal and being able to understand that they're paying for it just like they pay their phone bill or anything else. So Antonio's been spot-on with that, because that's a reality today, and it's being delivered through the channel, and AMD's proud to be a part of it. And it's much different, 'cause we don't need to be as involved as we have to be from a hardware sale perspective when it's going through GreenLake, and it makes it much easier for us. >> Phil, you talked about workloads, really kind of what matters. How are they evolving? How is that affecting things? What are customers grabbing you and saying, "We need this"? What do you see from a workload standpoint, and how are you delivering that? >> Well, the edge-to-cloud platform, or GreenLake, is very much an as-a-service offering aimed at workloads. And so, HPE is building and focusing its solutions on addressing specific workload needs. It's not necessarily about the performance you mentioned, or, you're asking the question about hardware, it's not necessarily about that.
It's: what is the workload? Should the workload be, or could the workload be, in public cloud, or is it a workload that needs to be on premises? Customers are making those choices, and we're working with those customers to help them drive those strategies, and then we adapt depending on where the customer wants the workload. >> Well, it's interesting, because Antonio in his keynote today said, "That's the wrong question," and my reaction was, that's the question everybody's asking. It may be the wrong question, but that's what's out there, so your challenge is, I guess, to get them to stop asking that question and just run the right tool for the right job, kind of thing. >> That's exactly what it's about, because you take high-value workloads, okay? And that can mean a lot of different things, and if you just pick one of them, let's say like VDI or hyper-converged, HP's the only game in town where they can kind of go into a battle with four different guns. They give you a lot of choices, and they offer them on an AMD platform, and they're not locking you in. They give you a lot of flexibility and choice. So, if you were doing hyper-converged through HPE and you were looking to do it on an AMD platform, they can offer it to you with VMware vSAN ReadyNodes. They can offer it to you with SimpliVity. They can offer it to you with Nutanix. They can offer it to you with Microsoft, all on an AMD stack. And if you want to bring your own VMware and go bare metal, HP will just give you the nodes. If you want to go factory-integrated, or if you want to purchase it via OEM through HP and have them support it, they just deliver it any way you want to get it. It's just a fantastic story. >> I'll just say, look, others could do that, but they don't want to, okay? That's the fact. Sometimes it happens, sometimes the channel cobbles it together in the field, but it's like they do it grinding their teeth. So I mean, I think that is a differentiator of HPE. You're agnostic to that. In fact, by design.
>> They can bring your own, you can bring your own software. I mean, it's like, you just bring your own. I mean, if you have it, why would we make a customer buy it again? And HP gives them that flexibility, and if it's multiple hypervisors and it's brand-agnostic, it's more about: let's deliver you the nodes, purpose-built for the application that you're going to run in that workload, and then HP goes ahead and does that across their portfolio on a custom-to-order basis. It's just beautiful for us to fit the need for the customer. >> Well, you're meeting customers where they are. >> Yes. >> Which in today's world is critical. There's really no other option for companies. Customers are demanding, and the demands are not going to go away. We're not going to see a decrease of demand after the pandemic's over, right? And the expectations on businesses remain. So meeting the customers where they are, giving them that choice, that flexibility, is table stakes. >> How are those supply chain constraints you've mentioned playing out? It sounds like you guys are managing that pretty well. I think it's a lot of these hard-to-get supporting components, maybe not the most expensive component, but they just don't have it, so you can't ship the car, or you can't ship the server, whatever it is. How is that affecting the channel? How are they dealing with that? Maybe you could give us an update.
Now it's like the clips and the intangibles and things like that and when you get to that point, you got to just do the best you can and HP supply chain has just been fantastic, super informative, AMD, we're not the problem. We got HP, plenty of processors and plenty of accelerators and GPUs and we're standing with them because that back to the relationship, we're facing the customer with them and managing their expectations to the best we can and trying to give them options to keep their business floating. >> So is that going to be, is this a supply chain constraints could be an accelerant for GreenLake because that capacity is in place for you to service your customers with GreenLake presumably. You're planning for that. There's headroom there in terms of being able to deliver that. If you can't deliver GreenLake, all this promise. >> I would say I would be careful not to position GreenLake as an answer to supply chain challenges, right? I think there's a greater value proposition to a client, and keep in mind, you still have technology at the heart of it, right? And so, and to your question though about our partners, honestly in a lot of ways, it's heartbreaking given the challenges that they face, not just with HPE, but other vendors that they sell and support and without our partners and managing those, we'd be in a world of hurt, frankly and we're working on options. We work with our partners really closely. We work with AMD where we have constraints to move to other potential configurations. >> Does GreenLake make it harder or easier for you to forecast? Because on the one hand, it's as a service and on the other hand, I can dial it down as a customer or dial it up and spike it up if I need to. Do you have enough experience to know at this point, whether it's easier or harder to forecast? >> I think intuitively it's probably harder because you have that variable component that you can't forecast, right? 
It's, with GreenLake, you have your baseline, so you know what that baseline is going to be, the baseline commitment, and you build in that variable component, which is as a service: you pay for what you consume. So that variable component is the one thing that we can estimate, but we don't know exactly what the customer is going to use. >> When you do a GreenLake deal, how does it work? Let's say it's a two-year deal or a three-year deal, whatever, and you negotiate a price with a customer for a price per X. Do you know what that contract value is going to be over the life, or do you only know the baseline, and then everything else is upside for you and additional cost? So how does that work? >> It's a good question. So you know both: you know the baseline, and you know what the variable capacity is, what the limits are. So at the beginning of the contract, that's what you know. Whether or not a customer determines that they have to expand or do a change order to add another workload into the configuration is the one thing that we hope happens. You don't know. >> But you know with certainty that over the life of that contract, the amount of that contract that's booked, you're going to recognize at some point. You just don't know when. >> Yes, and that's to your question: you know that element, but the fluctuation in terms of usage depends on what's happening in the world, right? The pandemic, as an example: with GreenLake customers, probably initially at the beginning of the pandemic, their usage went down for obvious reasons, and then it fluctuates up. >> I think a lot of people don't understand that. That's an interesting nuance. Cool, thank you. >> Guys, thanks so much for joining us on the program, talking about the relationship that AMD and HPE have together and the benefits for customers in the outcomes it's achieving. We appreciate your insights and your time. >> Thanks for having us, guys. >> Appreciate it. >> Our pleasure.
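The contract mechanics Phil describes, a committed baseline plus a metered variable component up to a contracted limit, can be sketched as a simple billing function. Everything below is a hypothetical illustration: the unit structure, rates, and numbers are invented for clarity, and actual GreenLake pricing is negotiated per deal.

```python
def monthly_charge(used_units, baseline_units, limit_units, unit_rate):
    """Consumption bill: the customer always pays for the committed baseline,
    plus metered usage above it, capped at the contracted variable limit."""
    billable = min(max(used_units, baseline_units), limit_units)
    return billable * unit_rate

# Hypothetical contract: 100-unit baseline, 150-unit ceiling, $50 per unit
print(monthly_charge(80, 100, 150, 50))   # under baseline -> pay baseline: 5000
print(monthly_charge(130, 100, 150, 50))  # variable usage above baseline: 6500
print(monthly_charge(200, 100, 150, 50))  # capped at the contracted limit: 7500
```

This is also why, as Phil notes, the booked contract value is known up front (baseline plus the variable ceiling) while the timing of recognition is not: only the `used_units` input varies month to month.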
>> Phil: Thank you. >> For our guests and Dave Vellante. I'm Lisa Martin live in Las Vegas at HPE Discover '22. Stick around. Our keynote analysis is up next. (soft upbeat music)
SUMMARY :
brought to you by HPE. and Phil Soper is here, to us about the partnership. It's pretty hard to get away from AMD and that kind of really that pull-down. and to be competitive and to win. model that you guys have, to make sure we have the right that HP can go to market and what you guys actually and also, it's not about the hardware it's kind of like the vessel at this point and then what's the hardware it's not just about the CPU anymore. and being able to show and actually surface the inside. and be able to do it in 25 and if you're looking to build your own, on the partnership we've seen and they don't have to and how are you delivering that? Well, the edge to the that question and just run the right tool they can offer to you with That's the fact. and if it's multiple hypervisors customers where they are. So meeting the customers where they are, that affecting the channel? and the traditional things So is that going to be, is and keep in mind, you and on the other hand, I can the customer is going to use. and you negotiate a price with and you know what the that over the life of that contract, that's to your question, I think a lot of people on the outcomes that it's achieving. analysis is up next.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Mike | PERSON | 0.99+ |
Antonio | PERSON | 0.99+ |
Mike Beltrano | PERSON | 0.99+ |
Phil | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
two-year | QUANTITY | 0.99+ |
three-year | QUANTITY | 0.99+ |
HP | ORGANIZATION | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Phil Soper | PERSON | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
Las Vegas | LOCATION | 0.99+ |
two guests | QUANTITY | 0.99+ |
Samsung | ORGANIZATION | 0.99+ |
GreenLake | ORGANIZATION | 0.99+ |
25 minutes | QUANTITY | 0.99+ |
five ways | QUANTITY | 0.99+ |
three years ago | DATE | 0.99+ |
first | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
25 minutes | QUANTITY | 0.99+ |
three years ago | DATE | 0.99+ |
today | DATE | 0.99+ |
Xilinx | ORGANIZATION | 0.98+ |
2022 | DATE | 0.98+ |
one | QUANTITY | 0.97+ |
one thing | QUANTITY | 0.96+ |
pandemic | EVENT | 0.95+ |
theCUBE | ORGANIZATION | 0.95+ |
Breaking Analysis: Broadcom, Taming the VMware Beast
>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> In the words of my colleague, CTO David Nicholson: Broadcom buys old cars, not to restore them to their original luster and beauty. Nope. They buy classic cars to extract the platinum that's inside the catalytic converter and monetize that. Broadcom's planned $61 billion acquisition of VMware will mark yet another new era and chapter for the virtualization pioneer, a mere seven months after finally getting spun out as an independent company by Dell. For VMware, this means a dramatically different operating model, with financial performance and shareholder value creation as the dominant, and perhaps the sole, agenda item. For customers, it will mean a more focused portfolio, less aspirational vision pitches, and most certainly higher prices. Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we'll share data, opinions, and customer insights about this blockbuster deal and forecast the future of VMware, Broadcom, and the broader ecosystem. Let's first look at the key deal points. It's been well covered in the press, but just for the record: $61 billion in a 50/50 cash-and-stock deal, resulting in a blended price of $138 per share, which is a 44% premium to the unaffected price, i.e. prior to the news breaking. Broadcom will assume $8 billion of VMware debt and promises that the acquisition will be immediately accretive and will generate $8.5 billion in EBITDA by year three. That's more than $4 billion in EBITDA relative to VMware's current performance today. In a classic Broadcom M&A approach, the company promises to delever debt and maintain investment-grade ratings. They will rebrand their software business as VMware, which will now comprise about 50% of revenues.
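The deal arithmetic quoted here can be sanity-checked in a few lines. This is purely a back-of-the-envelope sketch using the approximate figures from the commentary above, not official filing data.

```python
# Back-of-the-envelope check of the deal math quoted above.
blended_price = 138.0   # $ per share, 50/50 cash and stock
premium = 0.44          # 44% premium to the unaffected price

# Implied pre-announcement ("unaffected") share price
unaffected = blended_price / (1 + premium)

# Broadcom's promised EBITDA by year three vs. the stated uplift
ebitda_year3_b = 8.5    # $B promised by year three
uplift_b = 4.0          # "more than $4 billion" relative to today
implied_current_ebitda_b = ebitda_year3_b - uplift_b

print(f"implied unaffected price: ${unaffected:.2f}")   # roughly $95.83
print(f"implied current EBITDA: under ${implied_current_ebitda_b:.1f}B")
```

In other words, the 44% premium implies VMware was trading in the mid-$90s before the news broke, and the $8.5 billion target is close to a doubling of VMware's current EBITDA.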
There's a 40-day go-shop and importantly, Broadcom promises to continue to return 60% of its free cash flow to shareholders in the form of dividends and buybacks. Okay, with that out of the way, we're going to get to the money slide literally in a moment that Broadcom shared on its investor call. Broadcom has more than 20 business units. Its CEO Hock Tan makes it really easy for his business unit managers to understand. Rule number one, you agree to an operating plan with targets for revenue, growth, EBITDA, et cetera, hit your numbers consistently and we're good. You'll be very well compensated and life will be wonderful for you and your family. Miss the number, and we're going to have a frank and uncomfortable bottom line discussion. You'll get four, perhaps five quarters to turn your business around, if you don't, we'll kill it or sell it if we can. Rule number two, refer to rule number one. Hello, VMware, here's the money slide. I'll interpret the bullet points on the left for clarity. Your fiscal year 2022 EBITDA was 4.7 billion. By year three, it will be 8.5 billion. And we Broadcom have four knobs to turn with you, VMware, to help you get there. First knob, if it ain't recurring revenue with rubber stamp renewals, we're going to convert that revenue or kill it. Knob number two, we're going to focus R&D in the most profitable areas of the business. AKA expect the R&D budget to be cut. Number three, we're going to spend less on sales and marketing by focusing on existing customers. We're not going to lose money today and try to make it up many years down the road. And number four, we run Broadcom with 1% G&A. You will too. Any questions? Good. Now, just to give you a little sense of how Broadcom runs its business and how well-run a company it is, let's do a little simple comparison with this financial snapshot. All we're doing here is taking the most recent quarterly earnings reports from Broadcom and VMware respectively.
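Before we get to the snapshot, it's worth making the money slide arithmetic explicit: taking EBITDA from 4.7 billion to 8.5 billion in three years implies compound growth of roughly 22% a year. A rough sketch using just the two figures quoted above:

```python
# Implied compound annual growth rate (CAGR) to hit Broadcom's EBITDA target.
base_ebitda = 4.7    # VMware fiscal 2022 EBITDA, $B (from the money slide)
target_ebitda = 8.5  # Broadcom's year-three target, $B
years = 3

cagr = (target_ebitda / base_ebitda) ** (1 / years) - 1
print(f"{cagr:.1%}")  # roughly 22% per year
```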
We take the quarterly revenue and multiply by 4x to get the revenue run rate and then we calculate the ratios off of the most recent quarter's revenue. It's worth spending some time on this to get a sense of how profitable the Broadcom business actually is and what the spreadsheet gurus at Broadcom are seeing with respect to the possibilities for VMware. So combined, we're talking about a 40-plus billion dollar company. Broadcom is growing at more than 20% per year, whereas VMware's latest quarter showed a very disappointing 3% growth. Broadcom is mostly a hardware company, but its gross margin is in the high seventies. As a software company, of course, VMware has higher gross margins, but FYI, Broadcom's software business, the remains of Symantec and what it purchased in CA, has 90% gross margin. But the eye-popper is operating margin. This is all non-GAAP, so it excludes things like stock-based compensation, but Broadcom had 61% operating margin last quarter. This is insanely off the charts compared to VMware's 25%. Oracle's non-GAAP operating margin is 47% and Oracle is an incredibly profitable company. Now the red box is where the cuts are going to take place. Broadcom doesn't spend much on marketing. It doesn't have to. Its SG&A is 3% of revenue versus 18% for VMware, and R&D spend is almost certainly going to get cut. The other eye-popper is free cash flow as a percentage of revenue at 51% for Broadcom and 29% for VMware. 51%. That's incredible. And that, my dear friends, is why Broadcom, a company with just under 30 billion in revenue, has a market cap of 230 billion. Let's dig into the VMware portfolio a bit more and identify the possible areas that will be placed under the microscope by Hock Tan and his managers. The data from ETR's latest survey shows the net score or spending momentum across VMware's portfolio in this chart. Net score essentially measures the net percent of customers that are spending more on a specific product or vendor.
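Stepping back to the run-rate math described above, it can be sketched in a few lines. The quarterly inputs below are illustrative placeholders chosen to reproduce the approximate ratios quoted, not the companies' actual reported figures:

```python
def run_rate_metrics(quarterly_revenue, operating_income, free_cash_flow):
    """Annualize a single quarter (x4) and compute the ratios discussed above."""
    return {
        "revenue_run_rate": quarterly_revenue * 4,
        "operating_margin": operating_income / quarterly_revenue,
        "fcf_pct_of_revenue": free_cash_flow / quarterly_revenue,
    }

# Hypothetical quarterly figures in $B, picked to land on the quoted ratios.
broadcom = run_rate_metrics(7.6, 4.636, 3.876)  # ~61% op margin, ~51% FCF
vmware = run_rate_metrics(3.1, 0.775, 0.899)    # ~25% op margin, ~29% FCF
```

Summing the two run rates gets you to the "40-plus billion dollar company" combined figure mentioned above.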
The yellow bar is the most recent survey and compares the April '22 survey data to April '21 and January of '22. Everything is down in the yellow from January, not surprising given the economic outlook and the change in spending patterns that we've reported. VMware Cloud on AWS remains the product in the ETR survey with the most momentum. It's the only offering in the portfolio with spending momentum above the 40% line, a level that we consider highly elevated. Unified Endpoint Management looks more than respectable, but that business is a rock fight with Microsoft. VMware Cloud is things like VMware Cloud Foundation, VCF, and VMware's cross-cloud offerings. NSX came from the Nicira acquisition. Tanzu is not yet pervasive and one wonders if VMware is making any money there. Server is ESX and vSphere and is the bread and butter. That is where Broadcom is going to focus. It's going to look at vSAN and NSX, which is software, probably profitable, and of course the other products, and see if the investments are paying off. If they are, Broadcom will keep them; if they're not, you can bet your socks they will be sold off or killed. Carbon Black is at the far right. VMware paid $2.1 billion for Carbon Black, and it's the lowest performer on this list in terms of net score or spending momentum. That doesn't mean it's not profitable. It just doesn't have the momentum you'd like to see, so you can bet that is going to get scrutiny. Remember, VMware's growth has been under pressure for the last several years, so it's been buying companies, dozens of them. It bought AirWatch, bought Heptio, Carbon Black, Nicira, SaltStack, Datrium, Versedo, Bitnami, and on and on and on. Many of these were to pick up engineering teams. Some of them were to drive new revenue. Now this is definitely going to be scrutinized by Broadcom. So that helps explain why Michael Dell would sell VMware. And where does VMware go from here? It's got great core product. It's an iconic name.
It's got an awesome ecosystem, fantastic distribution channel, but its growth is slowing. It's got limited developer chops in a world where developers and cloud native are all the rage. It's got a far-flung R&D agenda going to war in a lot of different places. And it's increasingly fighting this multi-front war with cloud companies, companies like Cisco, IBM/Red Hat, et cetera. VMware's kind of becoming a heavy lift. It's a perfect acquisition target for Broadcom and why the street loves this deal. And we titled this Breaking Analysis "Taming the VMware Beast" because VMware is a beast. It's ubiquitous. It's an epic software platform. EMC couldn't control it. Dell used it as a piggy bank, but really didn't change its operating model. Broadcom 100% will. Now one of the things that we get excited about is the future of systems architectures. We published a Breaking Analysis about a year ago talking about AWS's secret weapon with Nitro and its Annapurna custom silicon efforts. Remember, it acquired Annapurna for a measly $350 million. And we talked about how there's a new architecture and a new price performance curve emerging in the enterprise, driven by AWS and being followed by Microsoft, Google, Alibaba: a trend toward custom silicon with the Arm-based Nitro, which is AWS's hypervisor and NIC strategy, enabling processor diversity with things like Graviton and Trainium and other diverse processors, really diversifying away from x86, and how this leads to much faster product cycles, faster tape-outs, lower costs. And our premise was that everyone competing in the data center is going to need a Nitro to be competitive long term. And customers are going to gravitate toward the most economically favorable platform. And as we describe the landscape with this chart, we've updated this for this Breaking Analysis and we'll come back to Nitro in a moment.
This is a two-dimensional graphic with net score or spending momentum on the vertical axis and overlap, formerly known as market share, or presence within the survey, pervasiveness, on the horizontal axis. And we plot various companies and products and we've inserted VMware's net score breakdown, the granularity in those colored bars on the bottom right. Net score is essentially the green minus the red, and a couple points on that. VMware in the latest survey has 6% new adoption. That's that lime green. It's interesting. The question Broadcom is going to ask is, how much does it cost you to acquire that 6% new? 32% of VMware customers in the survey are increasing spending, meaning they're increasing spending by 6% or more. That's the forest green. And the question Broadcom will dig into is what percent of that increased spend (chuckles) you're capturing is profitable spend? Whatever isn't profitable is going to be cut. Now that 52% gray area, flat spending, that is ripe for the Broadcom picking. That is the fat middle, and those customers are locked and loaded for future rent extraction via perpetual renewals and price increases. Only 8% of customers are spending less, that's the pinkish color, and only 3% are defecting, that's the bright red. So very, very sticky profile. Perfect for Broadcom. Now the rest of the chart lays out some of the other competitor names and we've plotted many of the VMware products so you can see where they fit. They're all pretty respectable on the vertical axis, that's spending momentum. But what Broadcom wants is that core ESX vSphere base where we've superimposed the Broadcom logo. Broadcom doesn't care so much about spending momentum. It cares about profitability potential and then momentum. AWS and Azure, they're setting the pace in this business, in the upper right corner. Cisco has a very big presence in the data center, as does Intel; they're not in the ETR survey, but we've superimposed them.
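The net score arithmetic described above, green minus red, reduces to one line. A minimal sketch using VMware's breakdown from that chart (note the published percentages sum to roughly 100 because of rounding):

```python
def net_score(new, increasing, flat, decreasing, defecting):
    """ETR-style net score: percent of customers spending more (new adoption
    plus increasing spend) minus percent spending less (decreasing plus
    defecting). Flat spenders don't move the needle either way."""
    total = new + increasing + flat + decreasing + defecting
    assert abs(total - 100) <= 2, "breakdown should sum to ~100% (rounding aside)"
    return (new + increasing) - (decreasing + defecting)

# VMware's breakdown from the latest survey, in percent of customers.
vmware_net_score = net_score(new=6, increasing=32, flat=52, decreasing=8, defecting=3)
print(vmware_net_score)  # 27
```

That 27% sits below the 40% line the analysis treats as highly elevated, consistent with only VMware Cloud on AWS clearing that bar.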
Now, Intel of course, is in a dogfight with Nvidia, the Arm ecosystem, AMD, and don't forget China. You see Google Cloud Platform is in there. Oracle is also on the chart, somewhat lower on the vertical axis; it doesn't have that spending momentum, but it has a big presence. And it owns a cloud, as we've talked about many times, and it's highly differentiated. It's got a strategy that allows it to differentiate from the pack. It's very financially driven. It knows how to extract lifetime value. Safra Catz operates in many ways similar to what we're seeing from Hock Tan and company, different from a portfolio standpoint. Oracle's got the full stack, et cetera. So it's a different strategy, but very, very financially savvy. You can see IBM and IBM Red Hat in the mix, and then Dell and HP. I want to come back to that momentarily to talk about where value is flowing. And then we plotted Nutanix, which with Acropolis could suck up some vTax avoidance business. Now notice Symantec and CA: relatively speaking, in the ETR survey, they have horrible spending momentum. As we said, Broadcom doesn't care. Hock Tan is not going for growth at the expense of profitability. So we fully expect VMware to come down on the vertical axis over time and go up on the profit scale. Of course, ETR doesn't measure the profitability here. Now back to Nitro. VMware has this thing called Project Monterey. It's essentially their version of Nitro and will serve as their future architecture, diversifying off x86 and accommodating alternative processors, and a much more efficient performance, price, and energy consumption curve. Now, one of the things that we've advocated for, we said this about Dell and others, including VMware: take a page out of AWS's playbook and start developing custom silicon to better integrate hardware and software and accelerate multi-cloud, or what we call supercloud, that layer above the cloud, not just running on individual clouds.
So this is all about efficiency and simplicity to own this space. And we've challenged organizations to do that because otherwise we feel like the cloud guys are just going to have consistently better costs, not necessarily price, but better cost structures. But it begs the question, what happens to Project Monterey? Hock Tan and Broadcom, they don't invest in something that is unproven and doesn't throw off free cash flow. If it's not going to pay off for years to come, they're probably not going to invest in it. And yet Project Monterey could help secure VMware's future in not only the data center, but at the edge, and compete more effectively with cloud economics. So we think either Project Monterey is toast or the VMware team will knock on the door of one of Broadcom's 20-plus business units and say, guys, what if we work together with you to develop a version of Monterey that we can use and sell to everyone? Be the arms dealer to everyone and be competitive with the cloud and other players out there and create the de facto standard for data center performance and supercloud. I mean, it's not outrageously expensive to develop custom silicon. Tesla is doing it, for example. And Broadcom obviously is capable of doing it. It's got good relationships with semiconductor fabs. But I think this is going to be a tough sell to Broadcom, unless VMware can hide this in plain sight and make it profitable fast, like AWS most likely has with Nitro and Graviton. Then Project Monterey and our pipe dream of alternatives to Nitro in the data center could happen. But if it can't, it's going to be toast. Or maybe Intel or Nvidia will take it over, or maybe the Monterey team will spin out of VMware, do a Pensando-like deal, and demonstrate the viability of this concept, and then Broadcom will buy it back in 10 years. Here's a double click on that previous data that we put in tabular form. It's how the data on that previous slide was plotted.
I just want to give you the background data here. So net score, spending momentum, is sorted on the left. It's sorted by net score in the left-hand chart; that was the y-axis in the previous data set. And shared, or presence in the data set, is the right-hand chart. In other words, it's sorted on the right-hand table. That rightmost column is shared, and you can see it's sorted top to bottom, and that was the x-axis on the previous chart. The point is, not many on the left-hand side are above the 40% line. VMware Cloud on AWS is; it's expensive, so it's probably profitable and it's probably a keeper. We'll see about the rest of VMware's portfolio, like what happens to Tanzu, for example. On the right, we drew a red line, just arbitrarily, at those companies and products with more than a hundred mentions in the survey. Everything but Tanzu from VMware makes that cut. Again, this is no indication of profitability here, and that's what's going to matter to Broadcom. Now let's take a moment to address the question of Broadcom as a software company. What the heck do they know about software, right? Well, they're not dumb over there and they know how to run a business, but there is a strategic rationale to this move beyond just doing portfolios and extracting rents and cutting R&D, et cetera, et cetera. Why, for example, isn't Broadcom going after, coming back to, Dell or HPE? It could pick them up for a lot less than VMware, and they've got way more revenue than VMware. Well, it's obvious: software's more profitable, of course, and Broadcom wants to move up the stack, but there's a trend going on which Broadcom is very much in touch with. First, it sells to Dell and HPE and Cisco and all the OEMs, so it's not going to disrupt that. But this chart shows that the value is flowing away from traditional servers and storage and networking to two places: merchant silicon, which itself is morphing, and infrastructure software. We focus on the left-hand side of this chart.
Broadcom correctly believes that the world is shifting from a CPU-centric center of gravity to a connectivity-centric world. We've talked about this on theCUBE a lot. You should listen to Broadcom COO Charlie Kawwas speak about this. It's all that supporting infrastructure around the CPU where value is flowing, including of course alternative GPUs and XPUs, and NPUs, et cetera, that are sucking the value out of the traditional x86 architecture, offloading some of the security and networking and storage functions that traditionally have been done in x86, which are part of the waste right now in the data center. This is that shifting dynamic of Moore's law. Moore's law is not keeping pace. It's slowing down. It's slower relative to some of the combinatorial factors, when you add up all the CPU and GPU and NPU and accelerators, et cetera. We've talked about this a lot in Breaking Analysis episodes. So the value is shifting left within that middle circle, and it's shifting left within that left circle toward components other than CPU, many of which Broadcom supplies. And then you go back to the middle: value is shifting from that middle section, that traditional data center, up into hyperscale clouds, and then to the right toward infrastructure software to manage all that equipment in the data center and across clouds. And look, Broadcom is an arms dealer. They simply sell to everyone, locking up key vectors of the value chain, cutting costs and raising prices. It's a pretty straightforward strategy, but not for the faint of heart. And Broadcom has become pretty good at it. Let's close with the customer feedback. I spoke with ETR's Eric Bradley this morning. He and I both reached out to VMware customers that we know and got their input. And here's a little snapshot of what they said. I'll just read this. Broadcom will be looking to invest in the core and divest of any underperforming assets. Right on. It's just what we were saying.
This doesn't bode well for future innovation. That's from a CTO at a large travel company. Next comment: we're a Carbon Black customer. VMware didn't seem to interfere with Carbon Black, but now we're concerned about short-term disruption to their tech roadmap, and long term, are they going to be split up and sold off like Symantec was? This is a CISO at a large hospitality organization. Third comment, I got directly from a VMware practitioner, an IT director at a manufacturing firm. This individual said, moving off VMware would be very difficult for us. We have over 500 applications running on VMware, and it's really easy to manage. We're not going to move those into the cloud, and we're worried Broadcom will raise prices and just extract rents. The last comment we'll share is: Broadcom sees the cloud, data center, and IoT as their next revenue source. The VMware acquisition provides them immediate virtualization capabilities to support a lightweight IoT offering. Big concern for customers is what technology they will invest in and innovate, and which will be stripped off and sold. Interesting. I asked David Floyer to give me a back-of-napkin estimate for the following question. I said, David, if you're running mission critical applications on VMware, how much would it increase your operating cost moving those applications into the cloud? Or how much would it save? And he said, Dave, VMware's really easy to run. It can run any application pretty much anywhere, and you don't need an army of people to manage it. All your processes are tied to VMware, you're locked and loaded. Move that into the cloud and your operating cost would double, by his estimates. Well, there you have it. Broadcom will pinpoint the optimal profit maximization strategy and raise prices to the point where customers say, you know what, we're still better off staying with VMware. And sadly, for many practitioners there aren't a lot of choices.
You could move to the cloud and increase your cost for a lot of your applications. You could do it yourself with, say, Xen or OpenStack. Good luck with that. You could tap Nutanix. That will definitely work for some applications, but are you going to move your entire estate, your application portfolio, to Nutanix? It's not likely. So you're going to pay more for VMware, and that's the price you're going to pay for two decades of better IT. So our advice is get out ahead of this. Do an application portfolio assessment. If you can move apps to the cloud for less and you haven't yet, do it, start immediately. Definitely give Nutanix a call, but you're going to have to be selective as to what you actually can move. Forget porting to OpenStack or a do-it-yourself hypervisor, don't even go there. And start building new cloud native apps where it makes sense, and let the VMware stuff go into managed decline. Let certain apps just die through attrition, shift your development resources to innovation in the cloud, and build a brick wall around the stable apps with VMware. As Paul Maritz, the former CEO of VMware, said, "We are building the software mainframe." Now the marketing guys got a hold of that and said, Paul, stop saying that, but it's true. And with Broadcom's help, that day will soon be here. That's it for today. Thanks to Stephanie Chan, who helps research our topics for Breaking Analysis. Alex Myerson does the production and he also manages the Breaking Analysis podcast. Kristen Martin and Cheryl Knight help get the word out on social, and thanks to Rob Hof, who is our editor in chief at siliconangle.com. Remember, these episodes are all available as podcasts; wherever you listen, just search Breaking Analysis podcast. Check out ETR's website at etr.ai for all the survey action. We publish a full report every week on wikibon.com and siliconangle.com. You can email me directly at david.vellante@siliconangle.com. You can DM me at DVellante or comment on our LinkedIn posts.
This is Dave Vellante for theCUBE Insights powered by ETR. Have a great week, stay safe, be well. And we'll see you next time. (upbeat music)
SUMMARY :
In this Breaking Analysis, Broadcom's planned $61 billion acquisition of VMware promises a more focused portfolio and a far more profitable operating model, and most certainly higher prices for customers.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
David | PERSON | 0.99+ |
Stephanie Chan | PERSON | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Symantec | ORGANIZATION | 0.99+ |
Rob Hof | PERSON | 0.99+ |
Alex Myerson | PERSON | 0.99+ |
April 22 | DATE | 0.99+ |
HP | ORGANIZATION | 0.99+ |
David Floyer | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Paul Maritz | PERSON | 0.99+ |
Broadcom | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Nvidia | ORGANIZATION | 0.99+ |
Eric Bradley | PERSON | 0.99+ |
April 21 | DATE | 0.99+ |
NSX | ORGANIZATION | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Cheryl Knight | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
January | DATE | 0.99+ |
$61 billion | QUANTITY | 0.99+ |
8.5 billion | QUANTITY | 0.99+ |
$2.1 billion | QUANTITY | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
EMC | ORGANIZATION | 0.99+ |
Acropolis | ORGANIZATION | 0.99+ |
Kristen Martin | PERSON | 0.99+ |
90% | QUANTITY | 0.99+ |
6% | QUANTITY | 0.99+ |
4.7 billion | QUANTITY | 0.99+ |
Hock Tan | PERSON | 0.99+ |
60% | QUANTITY | 0.99+ |
44% | QUANTITY | 0.99+ |
40 day | QUANTITY | 0.99+ |
61% | QUANTITY | 0.99+ |
8 billion | QUANTITY | 0.99+ |
Michael Dell | PERSON | 0.99+ |
52% | QUANTITY | 0.99+ |
47% | QUANTITY | 0.99+ |
Wrap with Stephanie Chan | Red Hat Summit 2022
(upbeat music) >> Welcome back to theCUBE. We're covering Red Hat Summit 2022. We're going to wrap up now, Dave Vellante, Paul Gillin. We want to introduce you to Stephanie Chan, who's our new correspondent. Stephanie, one of your first events, your very first CUBE event. So welcome. >> Thank you. >> Up from NYC. Smaller event, but intimate. You got a chance to meet some folks last night at some of the after parties. What are your overall impressions? What'd you learn this week? >> So this has been my first in-person event in over two years. And even though, like you said, it's on the smaller scale, roughly around 1,000 attendees versus its usual eight to 10,000 attendees, there's so much energy, and excitement, and openness in these events and sessions. Even before and after the sessions people have been mingling and socializing and hanging out. So, I think a lot of people appreciate these in-person events and are really excited to be here. >> Cool. So, you also sat in some of the keynotes, right? Pretty technical, right? Which is kind of new to sort of your genre, right? I mean, I know you got a financial background but, so what'd you think of the keynotes? What'd you think of the format, the theater in the round? Any impressions of that? >> So, I think there's three things that are really consistent in these Red Hat Summit keynotes. There's always a history lesson. There's always, you know, emphasis on the culture of openness. And there are also inspirational stories about how people utilize open source. And I found a lot of those examples really compelling and interesting. For instance, people use open source in (indistinct), and even in space. So I really enjoyed, you know, learning about all these different people and stories. What about you guys? What do you think were the big takeaways and the best stories that came out of the keynotes? >> Paul, want to start? >> Clearly Red Hat Enterprise Linux 9 is a major rollout.
They do that only about every three years. So that's a big deal to this audience. I think what they did in the area of security, with rolling out sigstore, which is a major new, I think an important new project that was sort of incubated at Red Hat. And they're trying to create an open source ecosystem around that now. And the alliances. I'm usually not that much on partnerships, but the Accenture and the Microsoft partnerships do seem to be significant to the company. And, finally, the GM partnership, which I think was maybe kind of the bombshell that they sort of rushed in at the last minute. But I think it has the biggest potential impact on Red Hat and its partner ecosystem that is really going to anchor their edge architecture going forward. So I didn't see it so much on the product front, but the sense of Red Hat spreading its wings, and partnering with more companies, and seeing itself as really the center of an ecosystem indicates that they are, you know, they're in a very solid position in their business. >> Yeah, and also like the pandemic has really forced us into this new normal, right? So customer demand is changing. There has been the shift to remote. There's always going to be a new normal according to Paul, and open source carries us through that. So how do you guys think Red Hat has helped its portfolio through this new normal and the shift? >> I mean, when you think of Red Hat, you think of Linux. I mean, that's where it all started. You think OpenShift, which is the application development platform. Linux is the OS. OpenShift is the application development platform for Kubernetes. And then of course, Ansible is the automation framework. And I agree with you, ecosystem is really the other piece of this. So, I mean, I think you take those three pieces and extend that into the open source community. There's a lot of innovation that's going around each of those, but ecosystems are the key.
We heard from Stefanie Chiras that, fundamentally, I mean, you can't do this without those gap fillers and those partnerships. And then another thing that's notable here is, you know, this was, I mean, IBM was just another brand, right? I mean, if anything it was probably a sub-brand. I mean, you didn't hear much about IBM. You certainly had no IBM presence, even though they're right across the street running Think. No Arvind presence, no keynote from Arvind, no, you know, Big Blue washing. And so, I think that's a testament to Arvind himself. We heard that from Paul Cormier, he said, hey, this guy's been great, he's left us alone. And he's allowed us to continue innovating. It's good news. IBM has not polluted Red Hat. >> Yes, I think that Red Hat is, as I said at the opening, kind of the tail wagging the dog right now. And their position seems very solid in the market. Clearly the market has come to them in terms of their evangelism of open source. They've remained true to their business model. And I think that gives them credibility that, you know, a lot of other open source companies have lacked. They have stuck with the plan for over 20 years now and have really not changed it, and it's paying off. I think they're emerging as a company that you can trust to do business with.
What they can do is partner with those companies through APIs, through open source integrations, they can add them in as part of the ecosystem and maybe be the steward of that. Maybe that's the answer. They're never going to be the best at all those different security disciplines. There's no way in the world, Red Hat, that's going to happen. But they could be the integration point. And that would be, that would be a simplifying layer to the equation. >> And I think it's smart. You know, they're not pretending to be an identity in access management or an anti-malware company, or even a zero trust company. They are sticking to their knitting, which is operating system and developers. Evangelizing DevSecOps, which is a good thing. And, that's what they're going to do. You know, you have to admire this company. It has never gotten outside of its swim lane. I think it's understood well really what it wants to be good at. And, you know, in the software business knowing what not to do is more important than knowing what to do. Is companies that fail are usually the ones that get overextended, this company has never overextended itself. >> What else do you want to know? >> And a term that kept popping up was multicloud, or otherwise known as metacloud. We know what the cloud is, but- >> Oh, supercloud, metacloud. >> Supercloud, yeah, here we go. We know what the cloud is but, what does metacloud mean to you guys? And why has it been so popular in these conversations? >> I'm going to boot this to Dave, because he's the expert on this. >> Well, expert or not, but I mean, again, we've coined this term supercloud. And the idea behind the supercloud or what Ashesh called metacloud, I like his name, cause it allows Web 3.0 to come into the equation. But the idea is that instead of building on each individual cloud and have compatibility with that cloud, you build a layer across clouds. 
So you do the hard work as a platform supplier to hide the underlying primitives and APIs from the end customer, or the end developer; they can then add value on top of that. And that abstraction layer spans on-prem, clouds, across clouds, ultimately out to the edge. And it's a new value layer that builds on top of the hyperscale infrastructure, or existing data center infrastructure, or emerging edge infrastructure. And the reason why that is important is because it's so damn complicated, number one. Number two, every company's becoming a software company, a technology company. They're bringing their services through digital transformation to their customers. And you've got to have a cloud to do that. You're not going to build your own data center. That's like Charles Wang says, not Charles Wang. (Paul laughing) Charles Phillips. We were just talking about CA. Charles Phillips. Friends don't let friends build data centers. So that supercloud concept, or what Ashesh calls metacloud, is this new layer that's going to be powered by ecosystems and platform companies. And I think it's real. I think it's- >> And OpenShift, OpenShift is a great, you know, key card for them, or leverage for them, because it is perhaps the best known Kubernetes platform. And you can see here they're really doubling down on adding features to OpenShift, security features, scalability. And they see it as potentially this metacloud, this supercloud abstraction layer. >> And what we said is, in order to have a supercloud you got to have a superPaaS layer, and OpenShift is that superPaaS layer. >> So you had conversations with a lot of people within the past two days, including people from companies like Verizon, Intel, and Accenture. Which conversation stood out to you the most? >> Which, I'm sorry. >> Which conversation stood out to you the most? (Paul sighs) >> The conversation with Stu Miniman was pretty interesting because we talked about culture.
And really, he has a lot of credibility in that area because he's not a Red Hatter. You know, he hasn't been at Red Hat forever, he's fairly new to the company. And I got a sense from him that the culture there really is what they say it is. It's a culture of openness, and that's, you know, that's as important as technology for a company's success. >> I mean, this was really good content. I mean, there were a lot, I mean, Stefanie's awesome. Stefanie Chiras, we're talking about the ecosystem. Chris Wright, you know, digging into some of the CTO stuff. Ashesh, who coined metacloud, I love that. The whole in-vehicle operating system conversation was great. The security discussion that we just had. You know, the conversations with Accenture were super thoughtful. Of course, Paul Cormier was a highlight. I think that one's going to be a well-viewed interview, for sure. And, you know, I think that the customer conversations are great. Red Hat did a really good job of carrying the keynote conversations, which were abbreviated this year, to theCUBE. >> Right. >> I give 'em a lot of kudos for that. And because, theCUBE, it allows us to double-click, go deeper, peel the onion a little bit, you know, all the buzzwords and cliches. But it's true. You get to clarify some of the things you heard, which were, you know, the keynotes were scripted, but tight. And so we had some good follow-up questions. I thought it was super useful. I know I'm leaving somebody out, but- >> We were also able to interview representatives from Intel and Nvidia, which at a software conference you don't typically do. I mean, there's the assimilation, the combination of hardware and software. It's very clear, and this came out in the keynote, that Red Hat sees that hardware matters. It matters. It's important again. And it's going to be a source of innovation in the future. That came through clearly. >> Yeah.
The hardware-matters theme, you know, in the old days the operating system and the hardware were intrinsically linked. MVS on the mainframe; VAX/VMS on the Digital minicomputers. DG had its own operating system. Wang had its own operating system. Prime with Prime OS. You remember these days? >> Oh my God. >> Right? (Paul laughs) And then of course Microsoft. >> And then x86, everything got abstracted. >> Right. >> Everything became x86 and now it's all atomizing again. >> Although WinTel, right? I mean, MS-DOS and Windows were intrinsically linked for many, many years with Intel x86. And it wasn't until, you know, well, and then, you know, Sun Solaris, but it wasn't until Linux kind of blew that apart. And the internet is built on the LAMP stack. And of course, Linux is the fundamental foundation for Red Hat. So my point is that the operating system and the hardware have always been very closely tied together. Whether it's security, or IO, or registers and memory management, everything controlled by the OS is very close to the hardware. And so that's why I think you've got an affinity in Red Hat to hardware. >> But Linux is breaking that bond, don't you think? >> Yes, but it still has to understand the underlying hardware. >> Right. >> You heard today how they're taking advantage of Nvidia and the AI capabilities. You're seeing that with ARM, you're seeing that with Intel. How you can optimize the operating system to take advantage of new generations of CPU, and NPU, and DPU, and XPU, you know, across the board. >> Yep. >> Well, I really enjoyed this conference, and it really stressed how important open source is to a lot of different industries. >> Great. Well, thanks for coming on. Paul, thank you. Great co-hosting with you. And thank you. >> Always, Dave. >> For watching theCUBE. We'll be on the road; next week we're at KubeCon in Valencia, Spain. We're at VeeamON. We got a ton of stuff going on. Check out thecube.net.
Check out siliconangle.com for all the news, and Wikibon.com, where we publish our Breaking Analysis series weekly. Thanks for watching, everybody. Dave Vellante, for Paul Gillin and Stephanie Chan. Thanks to the crew. Shout out, Andrew, Alex, Sonya. Amazing job, Sonya. Steven, thank you guys for coming out here. Mark, good job corresponding. Go to SiliconANGLE; Mark's written some great stuff. And thank you for watching. We'll see you next time. (calm music)
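As a toy illustration of the supercloud idea discussed in this segment, one abstraction layer hiding each cloud's primitives behind a single API, here is a minimal Python sketch. Every class and method name is invented for the illustration; real adapters would call the S3 or Blob Storage SDKs rather than in-memory dicts:

```python
# Toy sketch of a "supercloud" abstraction layer: one put/get API,
# with per-cloud adapters hiding each provider's primitives.
# All names here are invented for illustration, not a real library.

class ObjectStore:
    """The cross-cloud interface the platform supplier exposes."""
    def put(self, key, data):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class AWSStore(ObjectStore):
    def __init__(self):
        self._s3 = {}          # stand-in for S3 calls
    def put(self, key, data):
        self._s3[key] = data
    def get(self, key):
        return self._s3[key]

class AzureStore(ObjectStore):
    def __init__(self):
        self._blobs = {}       # stand-in for Blob Storage calls
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def replicate(stores, key, data):
    """The value-add layer: one call, every cloud."""
    for store in stores:
        store.put(key, data)

clouds = [AWSStore(), AzureStore()]
replicate(clouds, "model.bin", b"weights")
print(all(s.get("model.bin") == b"weights" for s in clouds))
```

The point of the sketch is the shape, not the storage: the caller codes against one interface while the adapters absorb each provider's differences, which is the "hide the underlying primitives and APIs" role described above.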
Tushar Katarki & Justin Boitano | Red Hat Summit 2022
(upbeat music) >> We're back. You're watching theCUBE's coverage of Red Hat Summit 2022 here in the Seaport in Boston. I'm Dave Vellante with my co-host, Paul Gillin. Justin Boitano is here. He's the Vice President of Enterprise and Edge Computing at NVIDIA. Maybe you've heard of him. And Tushar Katarki, who's the Director of Product Management at Red Hat. Gentlemen, welcome to theCUBE, good to see you. >> Thank you. >> Great to be here, thanks. >> Justin, you were in the keynote this morning. You got interviewed and shared your thoughts on AI. You encouraged people to think bigger on AI. I know it's kind of self-serving, but why? Why should we think bigger? >> When you think of AI, I mean, it's a monumental change. It's going to affect every industry. And so when we think of AI, you step back, you're challenging companies to build intelligence and AI factories, factories that can produce intelligence. And so it, you know, forces you to rethink how you build data centers, how you build applications. It's a very data-centric process where you're bringing in, you know, an exponential amount of data. You have to label that data. You got to train a model. You got to test the model to make sure that it's accurate and delivers business value. Then you push it into production, it's going to generate more data, and you kind of work through that cycle over and over and over. So, you know, just as Red Hat talks about, you know, CI/CD of applications, we're talking about CI/CD of the AI model itself, right? So it becomes a continuous improvement of AI models in production, which is a big, big business transformation. >> Yeah, Chris Wright was talking about basically taking your typical application development, you know, pipeline and life cycle, and applying that type of thinking to AI. I was saying those two worlds have to come together. Actually, you know, the application stack and the data stack, including AI, need to come together. What's the role of Red Hat?
What's your sort of posture on AI? Where do you fit with OpenShift? >> Yeah, so we're really excited about AI. I mean, a lot of our customers obviously are looking to take that data and make meaning out of it, and using AI is definitely a big, important tool. And OpenShift, and our approach to Open Hybrid Cloud, really forms a successful platform to base all of your AI journey on, with partners such as NVIDIA, whom we are working very closely with. And so the idea really is, as Justin was saying, you know, the end to end, when you think about the life of a model: you've got data, you mine that data, you create models, you deploy them into production. That whole thing, what we call CI/CD, as he was saying, DevOps, DevSecOps, and the hybrid cloud that Red Hat has been talking about, with OpenShift at the center, forms a good basis for that. >> So somebody said the other day, I'm going to ask you, is NVIDIA a hardware company or a software company? >> We are a company that people know for our hardware, but, you know, predominantly now we're a software company. And that's what we were on stage talking about. I mean, ultimately, a lot of these customers know that they've got to embark on this journey to apply AI, to transform their business with it. It's such a big competitive advantage going into, you know, the next decade. And so the faster they get ahead of it, the more they're going to win, right? But some of them, they're just not really sure how to get going. And so a lot of this is we want to lower the barrier to entry. We built this program, we call it Launchpad, to basically make it so they get instant access to the servers, the AI servers, with OpenShift, with the MLOps tooling, with example applications. And then we walk them through examples like, how do you build a chatbot? How do you build a vision system for quality control? How do you build a price recommendation model?
And they can do hands-on labs and walk out of, you know, Launchpad with all the software they need, I'll say the blueprint for building their application. They've got a way to have the software and containers supported in production, and they know the blueprint for the infrastructure and operating that at scale with OpenShift. So more and more, you know, to come back to your question, we're focused on the software layers and making that easy to help, you know, either enterprises build their apps or work with our ecosystem and developers to buy, you know, solutions off the shelf. >> On the hardware side though, I mean, clearly NVIDIA has prospered on the backs of GPUs as the engines of AI development. Is that how it's going to be for the foreseeable future? Will GPUs continue to be core to building and training AI models, or do you see something more specific to AI workloads? >> Yeah, I mean, it's a good question. So I think for the next decade, well, plus, I mean not forever, we're going to always monetize hardware. It's a big, you know, market opportunity. I mean, Jensen talks about a $100 billion, you know, market opportunity for NVIDIA just on hardware. It's probably another $100 billion opportunity on the software. So the reality is we're getting going on the software side, so it's still kind of early days, but that's, you know, a big area of growth for us in the future, and we're making big investments in that area. On the hardware side, and in the data center, you know, the reality is since Moore's law has ended, acceleration is really the thing that's going to advance all data centers. So I think in the future, every server will have GPUs, every server will have DPUs, and we can talk a bit about what DPUs are. And so there's really kind of three primary processors that have to be there to form the foundation of the enterprise data center in the future.
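The "CI/CD of the AI model" loop Boitano described earlier, label, train, test, deploy, and repeat as new data arrives, can be sketched in a few lines of Python. This is an illustrative toy, not NVIDIA's or Red Hat's tooling; every function here is a hypothetical stand-in for a real labeling, training, or serving system:

```python
# Hypothetical sketch of the "CI/CD for AI models" loop described above.
# All function bodies are stand-ins; a real pipeline would call into
# labeling, training, and serving systems.

def label(raw):
    # Stand-in labeling step: tag each sample with a class.
    return [(x, x % 2) for x in raw]

def train(samples):
    # Stand-in training step: the "model" is just the majority class.
    ones = sum(y for _, y in samples)
    return 1 if ones * 2 >= len(samples) else 0

def evaluate(model, samples):
    # Fraction of samples the stand-in model predicts correctly.
    correct = sum(1 for _, y in samples if y == model)
    return correct / len(samples)

def ci_cd_cycle(raw_data, accuracy_gate=0.5):
    """One pass of the loop: label -> train -> test -> (maybe) deploy."""
    samples = label(raw_data)
    model = train(samples)
    accuracy = evaluate(model, samples)
    deployed = accuracy >= accuracy_gate
    return model, accuracy, deployed

model, acc, deployed = ci_cd_cycle(list(range(10)))
print(deployed)  # the accuracy gate decides whether the new model ships
```

The accuracy gate is the piece MLOps pipelines automate: each pass through the loop either promotes a new model into production or keeps the old one serving while more data accumulates.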
>> You did bring up an interesting point about DPUs and MPUs, and sort of the variations of GPUs that are coming about. Do you see those different PU types continuing to proliferate? >> Oh, absolutely. I mean, we've done a bunch of work with Red Hat, and we've got, I'll say, a beta of OpenShift 4.10 that now supports DPUs as the, I'll call it the control plane, like software-defined networking offload in the data center. So it takes all the software-defined networking off of CPUs. When everybody talks about, I'll call it software-defined, you know, networking in core data centers, you can think of that as just a CPU tax up to this point. So what's nice is it's all moving over to the DPU to, you know, offload and isolate it from the x86 cores. It increases security of the data center. It improves the throughput of your data center. And so, yeah, DPUs, we see everybody copying that model. And, you know, to give credit where credit is due, I think, you know, companies like AWS, you know, they bought Annapurna, they turned it into Nitro, which is the foundation of their data centers. And everybody wants the, I'll call it democratized version of that to run their data centers. And so every financial institution and bank around the world sees the value of this technology, but running in their data centers. >> Hey, everybody needs a Nitro. I've written about it. The Annapurna acquisition, 350 million. I mean, peanuts in the grand scheme of things. It's interesting, you said Moore's law is dead. You know, we have that conversation all the time. Pat Gelsinger promised that Moore's law is alive and well. But the interesting thing is when you look at the numbers, that's, you know, Moore's law, we all know it, doubling of the transistor densities every 18 to 24 months. Let's say that promise that he made is true.
What I think the industry maybe doesn't appreciate, and I'm sure you do, being at NVIDIA: when you combine what you were just saying, the CPU, the GPU, Paul, the MPU, accelerators, all the XPUs you're talking about, I mean, look at Apple with the M1, I mean, 6X in 15 months versus doubling every 18 to 24. The A15 is probably averaging over the last five years a 110% performance improvement each year, versus the historical Moore's law, which is 40%. It's probably down to the low 30s now. So it's a completely different world that we're entering now. And the new applications are going to be developed on these capabilities. It's just not your general-purpose market anymore. From an application development standpoint, what does that mean to the world? >> Yeah, I mean, yeah, it is a great point. I mean, from an application, I mean, first of all, I mean, just talk about AI. I mean, they are all very compute-intensive. They're data-intensive. And there's so much focus on moving data into compute to crunch those numbers. I mean, I'd say you need all the PUs that you mentioned in the world. And also there are other concerns that will augment that, right? Like, you know, security is so important, so we want to secure everything. Cryptography is going to take off to new levels, you know, that we are talking about. For example, in the case of DPUs, we are talking about, you know, can that be used to offload your encryption and firewalling, and so on and so forth. So I think there are a lot of opportunities even from an application point of view to take advantage of this capacity. So I'd say we've never run out of the need for PUs, if you will. >> So is OpenShift the layer that's going to simplify all that for the developer? >> That's right. You know, one of the things that we worked on with NVIDIA was, we developed this concept of an operator for GPUs, but you can use that pattern for any of the PUs.
And so the idea really is that, how do you, yeah-- (all giggle) >> That's a new term. >> Yeah, it's a new term. (all giggle) >> XPUs. >> XPUs, yeah. And so that pattern becomes very easy for GPUs, or any other such accelerators, to be easily added as capacity, and for the Kubernetes scheduler to understand that there is that capacity, so that for an application which says, I want to run on a GPU, it becomes very easy for it to run on that GPU. And so that's the abstraction, to your point about how we are making that happen. >> And to add to this, the operator model, it's this, you know, open source model that does the orchestration. So Kubernetes will say, oh, there's a GPU in that node, let me run the operator, and it installs our entire run time. And our run time now, you know, it's got a MIG configuration utility. It's got the driver. It's got, you know, telemetry and metering of the actual GPU and the workload, you know, along with a bunch of other components, right? They get installed in that Kubernetes cluster. So instead of somebody trying to chase down all the little pieces and parts, it just happens automatically in seconds. We've extended the operator model to DPUs and networking cards as well, and we have all of those in the OperatorHub. So for somebody that's running OpenShift in their data centers, it's really simple: you know, turn on Node Feature Discovery, you point to the operators, and when you see new accelerated nodes, the entire run time is automatically installed for you. So it really makes, you know, GPUs and our networking, our advanced networking capabilities, really first-class citizens in the data center. >> So you can kind of connect the dots and see how the NVIDIA and Red Hat partnership is sort of aiming at the enterprise. I mean, NVIDIA, obviously, they got the AI piece. I always thought maybe 25% of the compute cycles in the data center were wasted doing storage offloads or networking offloads, security.
I think Jensen says it's 30%, probably a better number than I have. But so now you're seeing a lot of new innovation in new hardware devices that are attacking that with alternative processors. And then my question is, what about the edge? Is that a BlueField out at the edge? What does that look like to NVIDIA, and where does OpenShift play? >> Yeah, so when we talk about the edge, we always start talking about, like, which edge are we talking about, 'cause it's everything outside the core data center. I mean, some of the trends that we see with regard to the edge is, you know, when you get to the far edge, it's single nodes. You don't have the guards, gates, and guns protection of the data center. So you start having to worry about physical security of the hardware. So you can imagine there's really stringent requirements on protecting the intellectual property of the AI model itself. You spend millions of dollars to build it. If I push that out to an edge data center, how do I make sure that that's fully protected? And that's the area where we just announced a new processor that we call Hopper H100. It supports confidential computing, so that you can basically ensure that the model is always encrypted in system memory, across the PCI bus to the GPU, and it's run in a confidential way on the GPU. So you're protecting your data, which is your model, plus the data flowing through it, you know, in transit, while it's stored, and in use. So that really adds to that edge security model. >> I wanted to ask you about the cloud. Correct me if I'm wrong, but it seems to me that AI workloads have been slower than most to make their way to the cloud. There are a lot of concerns about data transfer capacity and even cost. Do you see that? First of all, do you agree with that? And secondly, is that going to change in the short term? >> Yeah, so I think there's different classes of problems.
So we'll take, there's some companies where their data's generated in the cloud, and we see a ton of, I'll say, adoption of AI by cloud service providers, right? Recommendation engines, translation engines, conversational AI services that all the clouds are building. That's all, you know, our processors. There's also problems that enterprises have where now I'm trying to take some of these automation capabilities, but I'm trying to create an intelligent factory where I want to, you know, merge kind of AI with the physical world. And that really has to run at the edge, 'cause there's too much data being generated by cameras to bring that all the way back into the cloud. So, you know, I think we're seeing mass adoption in the cloud today. I think at the edge a lot of businesses are trying to understand how do I deploy that reliably and securely and scale it. So I do think, you know, there's different problems that are going to run in different places, and ultimately we want to help anybody apply AI where the business is generating the data. >> So obviously very memory-intensive applications as well. We've seen you, NVIDIA, architecturally kind of move away from the traditional, you know, x86 approach to take better advantage of memories, where obviously you have relationships with Arm. So you've got a very diverse set of capabilities, and then all these other components that come into use. It used to just be a kind of x86-centric world, and now it's all these other supporting components to support these new applications, and it's... How should we think about the future? >> Yeah, I mean, it's very exciting for sure, right? Like, you know, the future, the data is out there at the edge, the data can be in the data center. And so we are trying to weave a hybrid cloud footprint that spans that. I mean, you heard Paul come here, talk about it. But, you know, we've talked about it for some time now.
And so the paradigm really is that, be it an application, and when I say application, it could be even an AI model as a service, you can think about that as an application. How does an application span that entire paradigm from the core to the edge and beyond? That is where the future is. And, of course, there's a lot of technical challenges, you know, for us to get there. And I think partnerships like this are going to help us and our customers get there. So the world is very exciting. You know, I'm very bullish on how this will play out, right? >> Justin, we'll give you the last word, closing thoughts. >> Well, you know, I think a lot of this is, like I said, how do we reduce the complexity for enterprises to get started, which is why Launchpad is so fundamental. It gives, you know, access to the entire stack instantly, with, like, hands-on curated labs for both IT and data scientists. So they can, again, walk out with the blueprints they need to set this up and, you know, start on a successful AI journey. >> Just to position it, is Launchpad more of a sandbox, more of a school, or more of an actual development environment? >> Yeah, think of it as, again, it's really for trial, like hands-on labs to help people learn all the foundational skills they need to, like, build an AI practice and get it into production. And again, it's like, you don't need to go champion to your executive team that you need access to expensive infrastructure and, you know, bring in Red Hat to set up OpenShift. Everything's there for you, so you can instantly get started, do kind of a pilot project, and then use that to explain to your executive team everything that you need to then go do to get this into production and drive business value for the company. >> All right, great stuff, guys. Thanks so much for coming to theCUBE. >> Yeah, thanks. >> Thank you for having us. >> All right, thank you for watching. Keep it right there, Dave Vellante and Paul Gillin.
We'll be back right after this short break at the Red Hat Summit 2022. (upbeat music)
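For a workload, the operator model Katarki and Boitano describe boils down to a single resource request in a pod spec. A minimal sketch, built in Python for illustration, assuming the GPU operator is installed and advertising the extended resource name `nvidia.com/gpu` (the image name is a placeholder):

```python
# Minimal sketch: a pod spec that requests GPUs. With the GPU operator
# installed, Kubernetes schedules this onto a node advertising the
# extended resource "nvidia.com/gpu"; no driver setup in the image.

def gpu_pod_spec(name, image, gpus=1):
    """Build a plain-dict Kubernetes Pod manifest requesting `gpus` GPUs."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
            }]
        },
    }

spec = gpu_pod_spec("train-job", "example.com/trainer:latest", gpus=2)
print(spec["spec"]["containers"][0]["resources"]["limits"])
```

The point of the operator pattern is that this spec contains no driver, toolkit, or node-selection logic; the scheduler matches the resource request against nodes the operator has already prepared.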
Keynote Analysis | Red Hat Summit 2022
(upbeat music) >> This is theCUBE's coverage of Red Hat Summit 2022. theCUBE has been covering Red Hat Summit for a number of years; of course, the last two years were virtual coverage. Now, Red Hat Summit is one of the industry's most premier events, and typically Red Hat Summits are many thousands of people. I think the last one I went to was eight or nine thousand people, a very heavy developer conference. This year Red Hat has taken a different approach. It's a hybrid event, kind of a VIP event at the Westin in Boston, with a lot more executives here than we would normally expect versus developers, but a huge virtual audience. My name is Dave Vellante. I'm here with my co-host, Paul Gillin. Paul, this is a location that you and I have broadcast from many times, and of course in the summer of 2019 IBM acquired Red Hat, and we of course did Red Hat Summit that year. But now we're seeing a completely new Red Hat and a new IBM, and you wouldn't know IBM owned Red Hat from what they've been talking about at this conference. >> We just came out of the keynote, where in the hour-long keynote IBM was not mentioned once, and the logo only appeared once on the screen, in fact. So this is very much Red Hat being Red Hat, not being a subsidiary of IBM. And perhaps that's justified, given that IBM's track record with acquisitions is that they gradually envelop the acquired company and it becomes part of the IBM borg. >> Yeah, they blue-wash the whole thing, right? It's ironic because IBM Think is going on right across the street, Arvind Krishna is here, but no presence here, and I think that's by design. I mean, it reminds me of when EMC owned VMware. You know, the VMware team didn't want to publicize that they had an ecosystem of partners that they wanted to cater to, and they wanted to treat everybody equally, even though perhaps behind the scenes they were forced to do certain things that they might not have necessarily wanted to because they were owned by another company. And I think that, you know, certainly IBM's done a good job of leaving the brand separate. But when they talk on the conference calls, IBM's earnings calls, you certainly get a heavy dose of Red Hat. When Red Hat was acquired by IBM, it was just north of three billion dollars in revenue; obviously IBM paid 34 billion dollars for the company, actually by today's valuations probably a bargain, you know, despite the market sell-off in the last several months. But now we've heard public statements from Arvind Krishna that Red Hat is a 5 billion plus revenue company. It's a little unclear what's in there. Of course, when you listen to IBM earnings, you know consulting is their big business. Red Hat's growing at 21%. But I remember, Paul, when Red Hat was acquired, Stu Miniman and I did a session, and I said this is not about cloud, this is about consulting and modernizing applications. And sure, there's some cloud in there with OpenShift, but from a financial standpoint IBM was able to take Red Hat and jam it right into its application modernization initiatives. So it's hard to tell how much of that 5 billion is actually, you know, legacy Red Hat, but I guess it doesn't matter anymore. It's working. >> IBM mathematics is notoriously opaque. If the business isn't going well, it'll tend to be absorbed into another number in the earnings report that does show some growth. So certainly IBM talks a lot about Red Hat on its earnings calls; it's very clear that Red Hat is the growth engine within IBM. I'd say it's a bit of the tail wagging the dog right now, where Red Hat really is dictating where IBM goes with its hybrid cloud strategy, which is the foundation not only of its technology portfolio but of its consulting business. And so Red Hat is really in the driver's seat of hybrid cloud, and that's the future for IBM. And you see that very much at this conference, where Red Hat is putting out its series of announcements today about improvements to its hybrid cloud: the new release of RHEL 9, Red Hat Enterprise Linux 9, improvements to its hybrid cloud portfolio. It very much is going its own way with that, and I sense that IBM is going to go along with wherever Red Hat chooses to go. >> Yeah, I think you're absolutely right. By the way, if you go to siliconangle.com, Paul just published a piece on Red Hat's rollout of their parade, which of course is, as you pointed out, led by Enterprise Linux. But to your point about hybrid cloud, it is the linchpin of certainly IBM's strategy, but many companies' hybrid cloud strategies. If you think about it, OpenShift in particular is the modern application development environment for Kubernetes. You can get Kubernetes, you can buy EKS, you can get that for free in a lot of places, but you have to do dozens and dozens of things and acquire dozens of services to do what OpenShift does, to get the reliability, the recoverability, the security. And that's really Red Hat's play. And the thing about Red Hat, combining with their Linux heritage, they're doing that everywhere. It's going to be OpenShift everywhere, Red Hat everywhere, whether it's on-prem, in AWS, Azure, Google, out to the edge. You heard Paul Cormier today saying he expects that in the next several years hardware is going to become one of the most important factors. I agree. I think we're going to enter a hardware renaissance. You've seen the work that we've done on Arm; I think 2017 was when Red Hat and Arm announced kind of their initial collaboration, could have even been before that. Today we're hearing a lot about Intel and NVIDIA, and so affinity with all of these alternative processors. I think they did throw Power in today in the keynote, and I think I heard that was the other IBM branding they sort of tucked in there. But the point is Red Hat runs everywhere, so it's fundamental to building out hybrid cloud, and that is fundamental to a lot of company strategies. And Red Hat has been all over Kubernetes with OpenShift. >> I mean, it's a drumbeat here. The OpenShift strategy is what really makes hybrid cloud possible, because Kubernetes is what makes it possible to shift workloads seamlessly from platform to platform. You make an interesting point about hardware. We have seen kind of a renaissance in hardware these last couple of years, as these specific chipsets and even full-scale processors have come to market. We're seeing several in the AI area right now, where startups are developing full-blown chipsets and systems just for AI processing. And NVIDIA, of course, that's really kind of their stock-in-trade these days. So a company that can run across all of those different platforms, a platform like RHEL which can run all across those different platforms, is going to have a leg up on anybody else. >> And the implications for application development are considerable. When you think about it, we talk a lot about these alternative processors: when flash replaced the spinning disk, that had a huge impact on how applications are developed. Developers no longer had to wait for that disk to spin; even though it's spinning very fast, it's mechanical, and compared to electrons, forget it. And the second big piece here is how memory is actually utilized. In traditional x86, everything goes through that core processor. Intel for years grabbed more and more function, and you're seeing now that function become dispersed. In fact, a lot of people think we're moving from a processor-centric world to a connect-centric world, meaning connecting all these piece parts: alternative processors, memory controllers, storage controllers, IO, network interface cards, SmartNICs, and things like that, where the communication across those resources is now where a lot of the innovation is going. You're seeing a lot of that, and now of course applications can take advantage of that, especially now at the edge, which is just
a whole new frontier the edge certainly is part of that equation when you look at machine learning at training machine learning models the cpu actually does relatively little work most of it is happening in gpus in these parallel processes that are going on and the cpu is kind of acting as a traffic cop and you see that in the edge as well it's the same model at the edge where more of the intelligence is going to be out in discrete devices spread across the network and the cpu is going to be less of a uh you know less of a engine of intelligence at the same time though we've got cpus with we've got 100 core cpus are on the horizon and there are even 200 and 300 core cpus that we may see in the next uh in the next couple of years so cpus aren't standing still they are evolving to become really kind of super traffic cops for all of these other processors out in the network and on the edge so it's a very exciting time to be in hardware because so much innovation is happening really at the microprocessor level well we saw this you and i lived through the pc era and we saw a whole raft of applications come about as a result of the microprocessor the shift of the microprocessor-based economy we're going to see so we are seeing something similar with mobile and the edge you know just think about some of the numbers if you think about the traditional moore's law doubling a number of transistors every let's call it two years 18 to 24 months pat gelsinger at intel promises that intel is on that pace still but if you look at the apple m1 ultra they increased the transistor density 6x in the last 15 months okay so where is this another data point is the historical moore's law curve is 40 that's moderating to somewhere down you know down in the low 30s if you look at the apple a series i mean that thing is on average increasing performance at 110 a year when you add up into the combinatorial factors of the cpu the neural processing unit the gpu all the accelerators so we are 
seeing a new era the thing i i i wanted to bring up paul is you mentioned ai much of the ai work that's done today is modeling that's done in the cloud and when we talk about edge we think that the future of ai is ai inferencing in real time at the edge so you may not even be persisting that data but you're going to create a lot of data you're going to be operating on that data in streams and it's going to require a whole new new architectural thinking of hardware very low cost very low power very high performance to drive all that intelligence at the edge and a lot of that data is going to stay at the edge and and that's we're going to talk about some of that today with some of the ev innovations and the vehicle innovations and the intelligence in these vehicles yeah and in talking in its edge strategy which it outlined today and the announcements that are made today red hat very much uh playing to the importance of being able to run red hat enterprise linux at the edge the idea is you do these big machine learning models centrally and then you you take the you take what results from that and you move it out to smaller processors it's the only way we can cope with it with the explosion of data that will be uh that these sensors and other devices will be generating so some of the themes we're hearing in the uh announcements today that you wrote about paul obviously rel9 is huge uh red hat enterprise linux version nine uh new capabilities a lot of edge a lot of security uh new cross portfolio capabilities for the edge security in the software supply chain that's a big conversation especially post solar winds managed ansible when you think about red hat you really i think anyway about three things rel which is such as linux it powers the internet powers everything uh you think of openshift which is application development you think about ansible which is automation so itops so that's one of the announcements ansible on azure and then a lot of hybrid cloud talk and 
you're gonna hear a lot of talk this week about red hat's cloud services portfolio packaging red hat as services as managed services that's you know a much more popular delivery mechanism with clients because they're trying to make it easy and this is complicated stuff and it gets more complicated the more features they add and the more the more components of the red hat portfolio are are available it's it's gonna be complex to build these hybrid clouds so like many of these so thecube started doing physical events last summer by the way and so this is this is new to a lot of people uh they're here for the first time people are really excited we've definitely noticed a trend people are excited to be back together paul cormier talked about that he talked about the new normal you can define the new normal any way you want so paul cormier gave the uh the the intro keynote bidani interviewed amex stephanie cheris interviewed accenture both those firms are coming out stephanie's coming on with the in accenture as well matt hicks talked about product innovation i loved his reference to ada lovelace that was very cool he talked about uh serena uh ramyanajan a famous mathematician who nobody knew about when he was just a kid these were ignored individuals in the 1800s for years and years and years in the case of ada lovelace for a century even he asked the question what if we had discovered them earlier and acted on them and been able to iterate on them earlier and his point tied that to open source very brilliantly i thought and um keynotes which i appreciate are much shorter much shorter intimate they did a keynote in the round this time uh which i haven't seen before there's maybe a thousand people in there so a much smaller group much more intimate setting not a lot of back and forth but uh but there is there is a feeling of a more personal feel to this event than i've seen it past red hat summits yeah and i think that's a trend that we're going to see more of where 
the live audience is kind of the on the ground it's going to the vip audience but still catering to the virtual audience you don't want to lose them so that's why the keynotes are a lot tighter okay paul thank you for setting up red hat summit 2022 you're watching the cube's coverage we'll be right back wall-to-wall coverage for two days right after this short break [Music] you
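The transistor-growth figures Dave cites can be sanity-checked with simple compounding arithmetic. The sketch below is illustrative only; the 24-month doubling period and the 6x-in-15-months Apple figure are taken from the conversation above, and the function just annualizes them.

```python
# Annualize "grew by a factor f over m months" as f ** (12 / m) - 1.
def annualized_growth(factor: float, months: float) -> float:
    """Annual growth rate implied by a `factor` increase over `months` months."""
    return factor ** (12.0 / months) - 1.0

# Classic Moore's Law: 2x transistors every ~24 months works out to
# roughly 41% per year, in line with the "historical 40%" curve cited above.
moores_law = annualized_growth(2.0, 24)

# The Apple M1 Ultra figure cited above: 6x density in 15 months.
m1_ultra = annualized_growth(6.0, 15)

print(f"Moore's Law (2x / 24 mo): {moores_law:.1%} per year")
print(f"M1 Ultra (6x / 15 mo): {m1_ultra:.1%} per year")
```

The point of the exercise is that a roughly 40% annual curve and a 24-month doubling period are the same claim stated two ways, which is why the two numbers in the conversation line up.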
Gunnar Hellekson, Red Hat | Red Hat Summit 2022
(upbeat music) >> Welcome back to Boston, Massachusetts. We're here at the Seaport. You're watching theCUBE's coverage of Red Hat Summit 2022. My name is Dave Vellante and Paul Gillin is here. He's my cohost for the next day. We are going to dig in to the famous RHEL, Red Hat Enterprise Linux. Gunnar Hellekson is here, he's the Vice President and General Manager of Red Hat Enterprise Linux. Gunnar, welcome to theCUBE. Good to see you. >> Thanks for having me. Nice to be here, Dave, Paul. >> RHEL 9 is, wow, nine, holy cow. It's been a lot of iterations. >> It's the highest version of RHEL we've ever shipped. >> And now we're talking edge. >> Yeah, that's right. >> And so, what's inside, tell us. >> Yeah, so there are really three audiences we need to keep happy with a new RHEL release. The first is the hardware partners, right, because they rely on RHEL to light up all their delicious hardware that they're making, then you got application developers and the ISVs who rely on RHEL to be that kind of stable platform for innovation, and then you've got the operators, the people who are actually using the operating system itself and trying to keep it running every day. So I'll start with the hardware side, which is something, as you know, the success of RHEL really hinges on, our partnerships with the hardware partners, and I think you talked about this with Matt just a few sessions earlier. In this case, we've got, let's see, in RHEL 9 we've got all the usual hardware suspects, and we've added, just recently in January, we added support for ARM servers, general ARM server-class hardware. And so that's something customers have been asking for, delighted to be shipping that in RHEL 9. So now ARM is kind of a first-class citizen, right? Alongside x86, Power, Z and all the other usual suspects. And then of course, working with our favorite public cloud providers.
So making sure that RHEL 9 is available at AWS and Azure and GCP and all our other cloud friends, right? >> Yeah, you mentioned ARM, we're seeing ARM in the enterprise. We're obviously seeing ARM at the edge. You guys have been working with ARM for a long time. You're working with Intel, you're working with NVIDIA, you've got some announcements this week. Gunnar, how do you keep Linux from becoming Franken OS with all these capabilities? >> This is a great question. First, the most important thing is to be working closely with, I mean, the whole point of Linux, and the reason why Linux works, is because you have all these people working together to make the same thing, right? And so fighting that is a bad idea. Working together with everyone, leaning into that collaboration, that's an important part of making it work over time. The other one is having, just like in any good relationship, having healthy boundaries. And so making sure that we're clear about the things that we need to keep stable and the places where we're allowed to innovate, and striking the right balance between those two things, that allows us to continue to ship one coherent operating system while still keeping literally thousands of platforms happy. >> So you're not trying to suck in all the full function, you're trying to accommodate that function that the ecosystem is going to develop? >> Yeah, that's right. So the idea is that what we strive for is consistency across all of the infrastructures, while still allowing for optimizations and letting ourselves take advantage of whatever indigenous features might appear on, say, an ARM chip or in a particular cloud platform. But really, we're trying to deliver a uniform platform experience to the application developers, right? Because there can't be kind of one version of RHEL over here and another version of RHEL over there; the ecosystem wouldn't work.
The whole point of Linux and the whole point of Red Hat Enterprise Linux is to be the same so that everything else can be different. >> And what incentives do you use to keep customers current? >> To keep customers current? Well, so the best thing to do, I found, is to meet customers where they are. So a lot of people think about just the new release, but we release RHEL 9, and at the same time we have Red Hat Enterprise Linux 8, we have Red Hat Enterprise Linux 7, all of these running at the same time, and then we also have multiple minor release streams inside those. So at any given time, let's say a dozen different versions of RHEL are being maintained and kept up-to-date, and we do this precisely to make sure that we're not force-marching people into the new version. They have a Red Hat Enterprise Linux subscription; they should just be able to sit there and enjoy the minor version that they like. And we try and keep that going for as long as possible. >> Even if it's 10 years out of date? >> So, 10 years, interesting you chose that number, because that's the end of life. >> That's the end of the life cycle. >> Right. And so 10 years is about, that's the natural life of a given major release, but again, inside that you have several 10-year life cycles kind of cascading on each other, right? So nine is the start of the next 10-year cycle while we're still living inside the 10-year cycles of seven and eight. So lots of options for customers. >> How are you thinking about the edge? How do you define, let's not go to the definition, but at a high level. (Gunnar laughing) Like, I was at a conference last week. It was Dell Tech World, I'll just say it. The edge to them was sort of the retail store. >> Yeah. >> Lowe's, okay, cool, I guess that's edgy, I guess. But I think space is the edge. (Gunnar chuckling) >> Right, right, right. >> Or a vehicle. How do you think about the edge? All of the above? But the exciting stuff to me is that far edge, but I wonder if you can comment.
>> Yeah, so there's all kinds of taxonomies out there for the edge. For me, I'm a simple country product manager at heart, and so I try to keep it simple, right? And the way I think about the edge is, here's a use case in which somebody needs a small operating system that deploys on probably a small piece of hardware, usually varying sizes, but it could be pretty small. That thing needs to be updated without any human touching it, right? And it needs to be reliably maintained without any human touching it. Usually in the edge cases, actually touching the hardware is a very expensive proposition. So we're trying to be as hands-off as possible. >> No truck rolls. >> No truck rolls ever, right, exactly. (Dave chuckling) And then, now that I've got that stable base, I'm going to go take an application. I'll probably put it in a container for simplicity's sake, and same thing, I want to be able to deploy that application. If something goes wrong, I need to be able to roll back to a known good state, and then I need a set of management tools that allow me to touch things, make sure that everything is healthy, make sure that the updates roll out correctly, maybe do some AB testing, things like that. So I think about that as, when we talk about the edge case for RHEL, that's the horizontal use case, and then we can do specializations inside particular verticals or particular industries, but at bottom that's the use case we're talking about when we talk about the edge. >> And an assumption of connectivity at some point? >> Yeah. >> Right, it doesn't have to always be on. >> Intermittent, latent, eventual connectivity. >> Eventual connectivity. (chuckles) That's right, in some tech terms. >> Red Hat was originally a one trick pony. I mean, RHEL was it, and now you've got all of these other extensions and different markets that you expanded into. What's your role in coordinating what all those different functions are doing?
>> Yes, you look at all the innovations we've made, whether it's in storage, whether it's in OpenShift and elsewhere, RHEL remains the beating heart, right? It's the place where everything starts. And so a lot of what my team does is, yes, we're trying to make all the partners happy, but we're also trying to make our internal partners happy, right? So the OpenShift folks need stuff out of RHEL, just like any other software vendor. And so I really think about RHEL as: yes, we're a platform, yes, we're a product in our own right, but we're also a service organization for all the other parts of the portfolio. And the reason for that is we need to make sure all this stuff works together, right? Part of the whole reasoning behind the Red Hat portfolio at large is that each of these pieces builds on and complements the others, right? I think that's an important part of the Red Hat mission, the RHEL mission. >> There was an article in the Journal yesterday about how the tech industry was sort of pounding the drum on H-1B visas; there's a limit, I think it's been the same limit since 2005, 65,000 a year. Customers are facing, and you guys, I'm sure, as well, a real skills shortage; there's a lack of talent. How are you seeing companies deal with that? What are you advising them? What are you guys doing yourselves? >> Yeah, it's interesting, especially as everybody went through some flavor of digital transformation during the pandemic, and now, kind of connected to that, everybody's making a move to the public cloud. They're making operating system choices when they're making those platform choices, right? And I think what's interesting is that what they're coming to is, "Well, I have a Linux skills shortage, and for a thousand reasons the market has not provided enough Linux admins." I mean, these are very lucrative positions, right?
Which command a lot of money; you would expect their supply would eventually catch up, but for whatever reason, it's not catching up. So I can't solve this by throwing bodies at it, so I need to figure out a more efficient way of running my Linux operation. People are making a couple of choices. The first is they're ensuring that they have consistency in their operating system choices, whether it's on-premise or in the cloud, or even out on the edge. If I have to juggle three or four different operating systems as I'm going through these three or four different infrastructures, that doesn't make any sense, 'cause the one thing that is most precious to me is my Linux talent, right? And so I need to make sure that they're consistent, optimized and efficient. The other thing they're doing is tooling and automation, especially through tools like Ansible, right? Being able to take advantage of as much automation as possible, and as much consistency as possible, so that they can make the most of the Linux talent that they do have. And so with Red Hat Enterprise Linux 9, in particular, you see us make a big investment in things like more automation tools for things like SAP and SQL Server deployments, and you'll see us make investments in basic stuff like the web console, right? You should now be able to go point and click and do basic Linux administration tasks. That lowers the barrier to entry and makes it easier to find people to actually administer the systems that you have. >> As you move out onto these new platforms, particularly on the edge, many of them will be much smaller, limited function. How do you make the decisions about what features you're going to keep and what you're going to cut from RHEL when you're running on a thermostat?
>> You're running on, you're running on the GM. >> Yeah, no, that's right. And so the choice, the most important thing we can do, is give customers the tools that they need to make the choice that's appropriate for their deployment. I have learned over several years in this business that if I start choosing what content a customer wants on their operating system, I will always guess it wrong, right? So my job is to make sure that I have a library of reliable, secure software options for them that they can use as ingredients into their solution, and I give them tools that allow them to kind of curate the operating system that they need. So that's a tool like Image Builder, which we just announced. The Image Builder service lets a customer go in and point and click and kind of compose the edge operating system they need, hit a button, and now they have an atomic image that they can go deploy out on the edge reliably, right? >> Gunnar, can you clarify the cadence of releases? >> Oh yeah. >> You guys, the change that you made there. >> Yeah. >> Why that change occurred, and what's the standard today? >> Yeah, so back when we released RHEL 8, we were just talking about hardware, and you know, it's ARM and x86, all these different kinds of hardware. The hardware market, internally I tell everybody, just got real weird, right? The schedules are crazy, we got so many more entrants, everything is kind of out of sync from where it used to be. It used to be there was a metronome, right? You mentioned Moore's Law earlier. It was like an 18-month metronome everybody could kind of set their watch to. >> Right. >> So that's gone, and now we have so much hardware that we need to reconcile. The only way for us to provide the kind of stability and consistency that customers were looking for was to set our own clock.
So we said three years for every major release, six months for every minor release, and that we will ship a new minor release every six months and a new major release every three years, whether we need it or not. And that has value all by itself. It means that customers can now plan ahead of time and know, okay, in 36 months the next major release is going to come, and that's something I can plan my workload around, something I can plan a data center migration around, things like that. So the consistency of this, and it was a terrifying promise to make three years ago, I am now delighted to announce that we actually made good on it three years later, right? And we plan to do it again three years from now. >> As a follow-up, is it primarily the processor optionality and diversity? I was talking to a system architect the other day, and his premise was that we're moving from a processor-centric world to a connect-centric world, not just the processor, but the memories, the IO, the controllers, the NICs, and it's just keeping that system in balance. Does that affect you, or is it primarily the processor? >> Oh, it absolutely affects us, yeah. >> How so? >> Yeah, so the operating system is the thing that everyone relies on to hide all that stuff from everybody else, right? And so if we cannot offer that abstraction from all of these hardware choices that people need to make, then we're not doing our job. And so that means we have to encompass all the hardware configurations and all the hardware use cases that we can in order to make an application successful. So if people want to go disaggregate all of their components, we have to let 'em do that. If they want to have a kind of more traditional, boxed-up OEM experience, they should be able to do that too. So yeah, this is what I mean: it is RHEL's responsibility and our duty to make sure that people are insulated from all this chaos underneath, and that is a good chunk of the job, yeah.
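The cadence Gunnar commits to here (a major release every three years, a minor every six months, roughly ten years of support per major) is concrete enough to project forward. A minimal sketch, assuming the publicly stated May 2022 RHEL 9 GA as the starting point; the projected dates are simple arithmetic, not an actual Red Hat roadmap.

```python
from datetime import date

MAJOR_CADENCE_YEARS = 3  # "a new major release every three years"
LIFECYCLE_YEARS = 10     # "10 years is the natural life of a major release"

def next_majors(start: date, count: int) -> list:
    """Project future major-release dates forward from a known GA date."""
    return [date(start.year + MAJOR_CADENCE_YEARS * i, start.month, start.day)
            for i in range(1, count + 1)]

def end_of_life(release: date) -> date:
    """Approximate end of the ten-year lifecycle for a major release."""
    return date(release.year + LIFECYCLE_YEARS, release.month, release.day)

rhel9 = date(2022, 5, 1)  # RHEL 9 GA month (May 2022); the day is a placeholder
print("Projected next majors:", next_majors(rhel9, 2))  # ~May 2025, ~May 2028
print("RHEL 9 end of life:", end_of_life(rhel9))        # ~May 2032
```

Worked out this way, the three-year promise made at RHEL 8's 2019 launch does land on 2022, which is the "made good on it" Gunnar refers to.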
>> The hardware and the OS used to be inseparable, right, before (indistinct). Hence the importance of hardware. >> Yeah, that's right. >> I'm curious how your job changes. So every 36 months you roll out a new release, which you did today, you announced a new release. You go back into the workplace in two days; how is life different? >> Not at all, so the only constant is change, right? And to be honest, a major release, that's a big event for our release teams, that's a big event for our engineering teams, it's a big event for our product management teams, but all these folks have moved on, and now we're already planning RHEL 9.1 and 9.2 and 8.7 and the rest of the releases. And so it's kind of like a brief celebration and then right back to work. >> Okay, don't change so much. >> What can we look forward to? What's the future look like of RHEL, RHEL 10? >> Oh yeah, bigger, stronger, faster, more optimized for this and that, and, you get, >> Longer, lower, wider. >> Yeah, that's right, yeah, that's right, yeah. >> I am curious about CentOS Stream, because there was some controversy around the end of life for CentOS and the move to CentOS Stream. >> Yeah. >> A lot of people, including me, are not really clear on what Stream is and how it differs from CentOS. Can you clarify that? >> Absolutely. So when Red Hat Enterprise Linux was first created, this was back in the days of Red Hat Linux, right? And because we couldn't balance the needs of the hobbyist market with the needs of the enterprise market, we split into Red Hat Enterprise Linux and Fedora, okay? So then for 15 years, yeah, about 15 years, we had Fedora, which is where we took all of our risks. That was kind of our early program, where we started integrating new components, new open source projects and all the rest of it, and then eventually we would take that innovation and feed it into the next version of Red Hat Enterprise Linux.
The trick with that is that the Red Hat Enterprise Linux work that we did was largely internal to Red Hat and wasn't accessible to partners. And we've just spent a lot of time talking about how much we need to be collaborating with partners. A lot of them really had to wait until the beta came out before they actually knew what was going to be in the box. Okay, well, that was okay for a while, but now that the market is the way that it is, things are moving so quickly. We need a better way to allow partners to work together with us further upstream from the actual product development. So that's why we created CentOS Stream. So CentOS Stream is the place where we kind of host the party, and people can watch the next version of Red Hat Enterprise Linux get developed in real time; partners can come in and help, customers can come in and help. And we've been really proud of the fact that Red Hat Enterprise Linux 9 is the first release that came completely out of CentOS Stream. Another way of putting that is that Red Hat Enterprise Linux 9 is the first version of RHEL where 80, 90% of it was built completely in the open. >> Okay, so that's the new playground. >> Yeah, that's right. >> You took a lot of negative pushback when you made the announcement. Is that basically because the CentOS users didn't understand what you were doing? >> No, with CentOS Linux, when we brought CentOS Linux on, this was one of the things that we wanted to do: we wanted to create this space where we could start collaborating with people. Here's the lesson we learned. It is very difficult to collaborate when you are downstream of the product you're trying to improve, because you've already shipped the product. And so once you're collaborating downstream, any changes you make have to go all the way up the water slide before they can head all the way back down.
So this was the real pivot that we made: moving that partnership and that collaboration activity from downstream of Red Hat Enterprise Linux to putting it right in the critical path of Red Hat Enterprise Linux development. >> Great, well, thank you for that, Gunnar. Thanks for coming on theCUBE. It's great to, >> Yeah, my pleasure. >> See you, and have a great day tomorrow. Thanks, and we look forward to seeing you tomorrow. We start at 9:00 AM East Coast time with the keynotes, and we will be here right after that to break that down, Paul Gillin and myself. This is day one of theCUBE's coverage of Red Hat Summit 2022 from Boston. We'll see you tomorrow, thanks for watching. (upbeat music)
Ryan Fournier, Dell Technologies & Muneyb Minhazuddin, VMWare | Dell Technologies World 2022
>> theCUBE presents Dell Technologies World, brought to you by Dell. >> Hey everyone, welcome back to theCUBE's coverage of day one of Dell Technologies World 2022, live from The Venetian in Las Vegas. Lisa Martin, with Dave Vellante. We've been here the last couple of hours. You can probably hear the buzz behind me. Lots of folks here, we think around seven to eight thousand folks in this solution expo, and the vibe is awesome. We've got two guests helping to round out our day one coverage. Ryan Fournier joins us, senior director of product management, Edge Solutions, at Dell Technologies. And Muneyb Minhazuddin, vice president of Edge Computing at VMware. Guys, welcome to the program. >> Oh, glad to be here. >> Yeah. >> Isn't it great to be here in person? >> Oh man, yes. >> The vibe, the vibe of day one is awesome. >> Yes. >> Oh yeah. >> I think it's fantastic. >> Like people give energy off to each other, right? >> Absolutely. So lots of good news coming out today so far on day one. Let's talk about, Ryan, let's start with you. With Edge, it's not new. We've been talking about it for a while, but what are some of the things that are new? What are some of the key trends that you're seeing that are driving changes at the Edge? >> Great, good question. We've been talking to a lot of customers, okay, across the different verticals, and really the common theme is a massive digital transformation, really based on the pandemic, okay. It caused acceleration for some, but many are kind of laggards left behind. And one primary reason is the culture of OT versus IT, you know, the barriers there, or something like that. The OT side is obviously focused on the business outcomes, okay, where the IT side is more enabling the function. And take retail, for example. That's accelerated significant usage of an in-store frictionless experience, okay.
As well as supply chain automation, warehousing logistics, connected inventory, a lot of the new use cases in this new normal post-pandemic. It's really that new retail operating landscape. >> Consumers are so demanding; we want the same experience that we have online, and we want that in the store, and that's really driving a lot of this consumer demand. >> Oh yeah, no. I think, you know, in retail, the way you shop for milk and bread changed during the pandemic, right? Pre-pandemic, online shopping in the United States was only 5%, but during the pandemic and afterwards that's caught up to 25, 30%. That's huge. How do you bring new processes in? How do you create omnichannel consumer experiences where online as well as physical are blended together? It becomes a massive challenge for the retailers. So yes, Edge has been there for a long time, but innovation hasn't happened. A simple credit card swipe that you used to do pre-pandemic just to go check out has now turned into curbside pickup and the integrations that come with it; it's no longer just simple payment card processing. So people are forced to evolve, and that's happening in manufacturing too, because of supply chain issues. So a lot of that has accelerated this investment, and what's kind of driving Edge Computing is that if everything ran out of the cloud, then you'd almost need infinite bandwidth. So suddenly people are realizing that if everything runs out of the cloud, I can't process my video analytics in a store. That's a lot of video, right? >> So we often ask ourselves, okay, who's going to win the edge? You know, we have that conversation. The cloud guys? VMware? You know, Dell? How are they going to go at it? And so to your point, you're not going to do a round trip to the cloud; too expensive, too slow. Now the cloud guys will try to bring their cloud basically on-prem or out to the edge. You're kind of bringing it from the data center.
So how do you see that evolution? >> No, great question, as the edge market happens, right? So there's market data now which says enterprise edge workloads are going to be the fastest growing workloads over the next five years. But then you have different communities coming to solve that problem. Like you just said, John, you know, hyperscalers are going, hey, all of the new workloads were built on us, let's bring them to the edge. Data center workloads move to the edge. >> Now, an important community here are, you know, telcos and service providers, because they have assets that are highly distributed at the edge. However, they're networking assets, like cell towers and stuff like that. There's an opportunity to convert them into compute and storage assets, so you can provide edge computing POPs. So you're seeing a convergence of a lot of industry segments: traditional IT, hyperscalers, telcos, and then OT, like Ryan pointed out, is naturally transforming itself. There's almost this confluence, this pot where all these different technologies need to come together. From a VMware and Dell perspective, our mission is a multi-cloud edge. We want to be able to support multi-cloud services, because, you've heard this multiple times, at the edge consumers and customers will require services from all the hyperscalers. They don't want to buy a solution that ties them to one hyperscaler. They want to mix and match. So we're not bound; we want to be multi-cloud southbound to support IT and OT environments. So that becomes our value proposition in the middle. >> Yep. >> So Ryan, you were talking about that IT/OT schism. And we talk about that a lot. I wonder if you could help us parse that a little bit, because you were using, for instance, retail as an example. Sometimes I think about the industrial side. >> And I think the OT people kind of have an engineering mindset. Don't touch my stuff. Kind of like the IT guys too, but different, you know. So there's so much opportunity at the edge.
I wonder how you guys think about that? How do you segment it? How do you prioritize it? Obviously retail and telco are big enough. >> Yep. >> That you can get your hands around them, but then there's, to your point, all this data that's going to come to compute. It's going to come in pockets. And I wonder how you guys think about that schism and the other opportunity out there. >> Yeah. It's also a great question. You know, in manufacturing, there's the true OT persona. >> Yeah. >> Okay, and that really is focused on the business outcomes. Things like predictive maintenance use cases, overall equipment effectiveness, that's really around bottleneck analysis and the processes that go through that. If the plant goes down, they're fine, okay. They can still work on their own systems, but they're not needing that high-availability solution. But they're also the decision makers on where to buy the Edge Computing, okay. So we need to talk more to the OT persona from a Dell perspective, okay. >> And to add on to Ryan, right, industrial is an interesting challenge, right? So one of the things we did, and this is VMware and Dell working together at VMworld, it was virtual, we announced something called edge compute stack. And for the first time in 23 years of VMware history, we made the hypervisor layer real-time. >> Yep. >> What that means is, in order to capture some of these OT workloads, you need to get in and operate between the industrial PC and the programmable logic controllers at a sub-millisecond performance level, because now you're controlling robotic arms where you cannot miss a beat. So we actually created this real-time functionality. With that functionality, in the last six months we've been able to virtualize PLCs, IPCs. So what I'm getting at is we're opening up an entire wide space of operational technology workloads which was not accessible to our market for the last 20-plus years. >> Now we're talking. >> Yeah.
And that control plane infrastructure for edge compute, which is purpose-built for edge, allows us to pivot and do other solutions, like analytics, with the adoption of AI analytics with our recent announcement of Deep North, okay. That provides that in-store video analytics functionality. And then we also partner with PTC on a manufacturing solution, working with that same edge compute stack. Think of that as that control plane, where again, like I said, you can pivot off to different solutions. Okay, so we leverage PTC's ThingWorx. >> So, okay, great. So I wanted to go to that. So real-time's really interesting, 'cause much of AI today is modeling done in the cloud. >> Yes. >> The real opportunity is real-time inferencing at the edge. >> You got it. >> Okay, now this is why this gets so interesting. And I wonder if Project Monterey fits into this at all, because I feel like, so why did Intel win? Intel won, it crushed all the Unix systems out there, because it had PC volumes. And the edge volume is going to dwarf anything we've ever seen before. >> Yeah. >> So I feel like there's this new cocktail, you guys describe this convergence and this mixture, and it's unknown what's going to happen. That's why Project Monterey is so interesting. >> Of course. >> Yeah. >> Right? Because you're bringing together, kind of hedging a lot of bets, and serving a lot of different use cases. Maybe you could talk about where that might fit here. >> Oh absolutely. So the edge compute stack is made up of vSphere and Tanzu; vSphere is, you know, our VM platform, Tanzu's our container technology, and vSphere contains Monterey in it, right. And we've added vSAN for storage at the edge. And connectivity is SD-WAN, because a lot of the time it's a far location. So you're not having a large footprint; you have one or two hosts, it's more wide area than narrow area. So the edge compute stack supports real-time and non-real-time workloads.
VMs and containers, CPU, GPU, right. >> NPU, accelerators, >> NPU, DPU, all of them, right. Because what you're dealing with here is that real-time inferencing, because to Ryan's point, when you're doing predictive maintenance, you've got to pick these signals up in like milliseconds. >> Yes. >> So we've gone our stack down to microseconds, and we pick up and inform, because if I can catch this predictive maintenance signal in two seconds, I save millions of dollars in, you know, wastage of product, right? >> And you may not even persist that data, right? You might just let it go, I mean, how much data does Tesla save? Right? I mean. >> You're absolutely right. A lot of the times, all you're doing is this volume of data coming at you. You're matching it to an inferencing pattern. If it doesn't match, you just drop it, right. It's not persisted, but the moment you hit a trigger, immediately all the lights go off, you're logging, you're applying the outcome. So like super interesting at the edge. >> And the compute is going to go through the roof. So yeah, my premise is that, you know, general-purpose x86 running SAP is not going to be the architecture for the edge. >> You're absolutely right. >> It's going to be low cost, low power, super performance. 'Cause when you combine the CPU, GPU, NPU, you're going to blow away the performance that we've ever seen on the curves. >> There's also a new application pattern. I've called out something called edge-native applications. We went through this client-server architecture era. We built all this with, you know, a very clear architecture. We went through cloud native, where everything was hyperscaled in the cloud. Both of those times we optimized around compute. >> Yeah. >> At the edge, we've got to optimize around data, because it's not ephemeral compute like you have in hyperscale compute space; you have ephemeral data you've got to deal with. So a new nature of application workloads is emerging. We call it edge-native apps. >> Yep.
>> And those have very different characteristics, you know, to client-server apps or, you know, cloud-native apps, which is amazing. It's driven by data-analyst-like developers, not like .NET or Java developers. It's actually data analysts who are trying to mine this with fast patterns and come out with outcomes, right? >> Yeah, I love that. Edge-native apps, Lisa, that's a new term for me. >> Right, I just trademarked it. I made it up. (panel laughing) >> Can you guys talk about a joint customer that you've really helped to dramatically transform in the last six months? >> You want to shout or I can go-- >> I think my industry is fine. >> Yeah, yeah. So, you know, at VMworld we talked about Oshkosh, which is again, like in the manufacturing space, we have retailers and manufacturers, and we also brought in, you know, Procter & Gamble, et cetera, et cetera, right? So the customers look at us jointly, because, you know, edge doesn't happen in its own silo. It's a continuum from the data center to the cloud, to the edge, right. That continuum exists. If edge were in its own silo, you would do things differently. But the key thing about all of this: there's no one right place, it's about workload placement. Where do I place the workload for the most optimal business outcome? Now for real-time applications, it's at the edge. For non-real-time stuff it could be in the data center, it could be in a cloud. It doesn't really matter; where VMware and Dell's strength comes in, with Oshkosh or all of those folks, is we have the end-to-end. You want to place it in the data center, you want to place it in your choice of public cloud, you want to derive some of these applications, you want to place it at the far edge, which is a customer prem, or a near edge, which is a telco. We've done joint announcements with telcos, like South Dakota Telecom, where we've taken their cell towers and converted them into compute and storage.
So they can actually store it at the near edge, right. So this is 5G solutions. I also own the 5G part of the VMware business, but it doesn't matter. Compute, network, storage, we've got to find the right mix for placing the workload at the right place. >> You call that the near edge. I think of it as the far edge, but that's what you mean, right? >> Yeah, yeah. >> Way out there in the (mumbles), okay. >> It's all about just optimizing operations, reducing cost, increasing profitability for the customer. >> So you said edge, not its own silo. And I agree, it's not a silo. Is mobile a valid sort of example or a little test case? Because when we developed mobile apps, it drove a lot of things in the data center and in the cloud. Is that a way to think about it, as opposed to, like, PCs worked in their own silo? Yeah, we connect to the internet, but is mobile a reasonable proxy or no? >> Mobile is an interesting proxy. When you think about the application again, you know, you've got a platform, and by the way, you'll get excited by this. We've got mobile developers, mobile device manufacturers. You can count them on your fingers. They now want to have these devices sitting on factory floors, because now these devices are so smart. They have sensors, temperature controls. They can act like these multisensory devices at the edge, but the app landscape is quite interesting. I think, John, where you were going was they have a very thin shim app layer that can be pushed from anywhere. The notion of these edge-native applications: they could be virtual machines, could be containers, could be, you know, this new thing called WebAssembly (Wasm), which is a new type of technology, a very thin shim layer, which is a mobile-like app layer. But you know, all of these are combinations of how these applications may get expressed. The target platforms could be anywhere from mobile devices to IoT gateways, to IoT devices, to servers, to, you know, massive data centers.
So what's amazing is this thing can just go everywhere. And our goal is consistent infrastructure, consistent operations across the board. That's where VMware and Dell win together. >> Yeah. >> Yeah, excellent. And I was just talking to a customer today, a major airline manufacturer, okay, about their airport in the future, with the mobile device just being frictionless, okay; no one wants to touch anything anymore. You can use your mobile device to do your check-in, and you get to avoid kiosks, okay. So they're trying to figure out how to get rid of the kiosk. Now you need a kiosk for, like, checking baggage, okay. You can't get in the way of that, but at least that frictionless experience for that airport of the future. But it brings in some other issues. >> It does, but I like the sound of that. Last question, guys: where can customers go to learn more information about the joint solutions? >> So you can go to, like, our public websites, obviously; search on edge. And if you're here at the show, there's a lot of hands-on labs, okay. There's a booth over there, a lot of Edge Solutions that we offer. >> Yeah, no, I guess as Ryan pointed out, our websites have these. We've had a lot of partnership announcements together, because, you know, one of the things, as we've expressed, manufacturing, retail, you know, when you get into the use cases, they involve ISVs, right? So, you know, they bring the value of, you know, not just having a horizontal AI platform. We like opinionated models of fraud detection. So we're actually working with an ecosystem of partners to make this real. >> So we may even hear more. >> The rich vertical solutions; I call it the ISVs. They enrich our vertical solutions. >> Right. >> Oh, WeMo is going to be revolutionary. >> All right, can't wait. Guys, thank you so much for joining David and me today and talking about what Dell and VMware are doing together, and helping retailers and manufacturers really convert the edge to incredible success.
We appreciate your time. >> Thank you very much. Thanks Lisa, thanks John for having us. >> For Dave Vellante, I'm Lisa Martin. You're watching theCUBE. We are wrapping up day one of our coverage of Dell Technologies World 2022. We'll be back tomorrow; John Furrier and Dave Nicholson will join us. We'll see you then. (soft music)
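The real-time inferencing flow Muneyb describes, where each incoming reading is scored against a known pattern, non-matches are discarded without being persisted, and a match triggers an immediate action, can be sketched in a few lines of Python. This is a hypothetical illustration only: the threshold value, the pass-through `score` stand-in, and the alert handling are invented here, and a real deployment would run an actual model on GPU/NPU hardware at millisecond latencies.

```python
THRESHOLD = 0.9  # hypothetical match score that counts as a trigger

def score(reading: float) -> float:
    """Stand-in for an inference model: how closely a sensor reading
    matches a known fault pattern (0.0 = no match, 1.0 = exact match)."""
    return reading  # placeholder; a real model would run on a GPU/NPU

def process_stream(readings):
    """Match each reading against the pattern; drop non-matches
    (nothing is persisted), and alert immediately on a trigger."""
    alerts = []
    for r in readings:
        if score(r) >= THRESHOLD:
            alerts.append(r)  # trigger: log it, notify, stop the robot arm
        # else: the reading is simply discarded
    return alerts

print(process_stream([0.1, 0.5, 0.95, 0.3]))  # -> [0.95]
```

The interesting property is the asymmetry: almost all of the incoming data is dropped on the floor, so only triggers cost storage and bandwidth, which is why a store or factory can do this locally instead of shipping raw video or sensor streams to a cloud.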
Power Panel: Does Hardware Still Matter
(upbeat music) >> The ascendancy of cloud and SaaS has shone new light on how organizations think about, pay for, and value hardware. Once sought-after skills for practitioners with expertise in hardware troubleshooting, configuring ports, tuning storage arrays, and maximizing server utilization have been superseded by demand for cloud architects, DevOps pros, and developers with expertise in microservices, containers, application development, and the like. Even a company like Dell, the largest hardware company in enterprise tech, touts that it has more software engineers than those working in hardware. It begs the question: is hardware going the way of COBOL? Well, not likely. Software has to run on something, but the labor needed to deploy, troubleshoot, and manage hardware infrastructure is shifting. At the same time, we've seen the value flow also shifting in hardware. Once a world dominated by x86 processors, value is flowing to alternatives like Nvidia and Arm-based designs. Moreover, other componentry like NICs, accelerators, and storage controllers is becoming more advanced, integrated, and increasingly important. The question is, does it matter? And if so, why does it matter and to whom? What does it mean to customers, workloads, OEMs, and the broader society? Hello and welcome to this week's Wikibon theCUBE Insights powered by ETR. In this breaking analysis, we've organized a special power panel of industry analysts and experts to address the question, does hardware still matter? Allow me to introduce the panel. Bob O'Donnell is president and chief analyst at TECHnalysis Research. Zeus Kerravala is the founder and principal analyst at ZK Research. David Nicholson is a CTO and tech expert. Keith Townsend is CEO and founder of CTO Advisor. And Marc Staimer is the chief dragon slayer at Dragon Slayer Consulting and oftentimes a Wikibon contributor. Guys, welcome to theCUBE. Thanks so much for spending some time here. >> Good to be here. >> Thanks.
>> Thanks for having us. >> Okay, before we get into it, I just want to bring up some data from ETR. This is a survey that ETR does every quarter. It's a survey of about 1200 to 1500 CIOs and IT buyers, and I'm showing a subset of the taxonomy here. This is an XY axis, and the vertical axis is something called Net Score. That's a measure of spending momentum. It's essentially the percentage of customers that are spending more on a particular area than those spending less. You subtract the lesses from the mores and you get a Net Score. And the horizontal axis is pervasiveness in the data set. Sometimes they call it market share. It's not like IDC market share. It's just the percentage of activity in the data set as a percentage of the total. That red 40% line, anything over that is considered highly elevated. And for the past, I don't know, eight to 12 quarters, the big four have been AI and machine learning, containers, RPA, and cloud. And cloud of course is very impressive, because not only is it elevated on the vertical axis, but you know it's very highly pervasive on the horizontal. So what I've done is highlighted in red that historical hardware sector. The server, the storage, the networking, and even PCs, despite the work from home, are depressed in relative terms. And of course, data center colocation services. Okay, so you're seeing obviously hardware is not... People don't have the spending momentum today that they used to. They've got other priorities, et cetera. But I want to start and go kind of around the horn with each of you: what is the number one trend that each of you sees in hardware, and why does it matter? Bob O'Donnell, can you please start us off? >> Sure, Dave. So look, I mean, hardware is incredibly important, and one comment first I'll make on that slide is let's not forget that hardware, even though it may not be growing, the amount of money spent on hardware continues to be very, very high. It's just a little bit more stable.
It's not as subject to big jumps as we see certainly in other software areas. But look, the important thing that's happening in hardware is the diversification of the types of chip architectures we're seeing and how and where they're being deployed, right? You refer to this in your opening. We've moved from a world of x86 CPUs from Intel and AMD to things like obviously GPUs, DPUs. We've got VPU for, you know, computer vision processing. We've got AI-dedicated accelerators, we've got all kinds of other network acceleration tools and AI-powered tools. There's an incredible diversification of these chip architectures and that's been happening for a while but now we're seeing them more widely deployed and it's being done that way because workloads are evolving. The kinds of workloads that we're seeing in some of these software areas require different types of compute engines than traditionally we've had. The other thing is (coughs), excuse me, the power requirements based on where geographically that compute happens is also evolving. This whole notion of the edge, which I'm sure we'll get into a little bit more detail later is driven by the fact that where the compute actually sits closer to in theory the edge and where edge devices are, depending on your definition, changes the power requirements. It changes the kind of connectivity that connects the applications to those edge devices and those applications. So all of those things are being impacted by this growing diversity in chip architectures. And that's a very long-term trend that I think we're going to continue to see play out through this decade and well into the 2030s as well. >> Excellent, great, great points. Thank you, Bob. Zeus up next, please. 
>> Yeah, and I think the other thing to remember too when you look at this chart is, you know, through the pandemic and the work-from-home period a lot of companies did put their office modernization projects on hold, and you heard that echoed, you know, from really all the network manufacturers anyways. They always had projects underway to upgrade networks. They put 'em on hold. Now that people are starting to come back to the office, they're looking at that now. So we might see some change there, but Bob's right. The sizes of those markets are quite a bit different. I think the other big trend here is the hardware companies, at least in the areas that I look at, networking, are understanding now that it's a combination of hardware and software and silicon that works together that creates that optimum type of performance and experience, right? So some things are best done in silicon, some like data forwarding and things like that. Historically, when you look at the way network devices were built, you did everything in hardware. You configured in hardware, it did all the data forwarding for you, and did all the management. And that's been decoupled now. So more and more of the control element has been placed in software. A lot of the high-performance things, encryption, and as I mentioned, data forwarding, packet analysis, stuff like that, is still done in hardware, but not everything is done in hardware. And so it's a combination of the two. I think, for the people that work with the equipment as well, there's been more of a shift to understanding how to work with software. And this is a mistake I think the industry made for a while: we had everybody convinced they had to become a programmer. It's really more a software power user. Can you pull things out of software? Can you work through API calls and things like that? But I think the big frame here is, David, it's a combination of hardware and software working together that really makes a difference.
And you know, how much you invest in hardware versus software kind of depends on the performance requirements you have. And I'll talk about that later, but that's really the big shift that's happened here. It's the vendors that figured out how to optimize performance by leveraging the best of all of those. >> Excellent. You guys both brought up some really good themes that we can tap into. Dave Nicholson, please. >> Yeah, so just kind of picking up where Bob started off. Not only are we seeing the rise of a variety of CPU designs, but I think increasingly the connectivity that's involved, from a hardware perspective, from kind of a server or service design perspective, has become increasingly important. I think we'll get a chance to look at this in more depth a little bit later, but when you look at what happens on the motherboard, you know, we're not in so much a CPU-centric world anymore. Various application environments have various demands, and you can meet them by using a variety of components. And it's extremely significant when you start looking down at the component level. It's really important that you optimize around those components. So I guess my summary would be, I think we are moving out of the CPU-centric hardware model into more of a connectivity-centric model. We can talk more about that later. >> Yeah, great. And thank you, David. And Keith Townsend, I'm really interested in your perspectives on this. I mean, for years you worked in a data center surrounded by hardware. Now that we have the software-defined data center, please chime in here. >> Well, you know, I'm going to dig deeper into that software-defined data center nature of what's happening with hardware. Hardware is meeting software; infrastructure as code is a thing. What does that code look like?
We're still trying to figure that out, but in servicing up these capabilities that the previous analysts have brought up, how do I ensure that I can get the level of services needed for the applications that I need? Whether they're legacy, traditional data center workloads, AI/ML workloads, workloads at the edge. How do I codify that and consume that as a service? And hardware vendors are figuring this out. HPE, the big push into GreenLake as a service. Dell now with APEX, taking what we need, these bare-bones components, moving it forward with DDR5, CXL, et cetera, and surfacing that as code or as services. This is a very tough problem as we transition from consuming a hardware-based configuration to this infrastructure-as-code paradigm shift. >> Yeah, programmable infrastructure, really attacking that sort of labor discussion that we were having earlier, okay. Last but not least, Marc Staimer, please. >> Thanks, Dave. My peers raised really good points. I agree with most of them, but I'm going to disagree with the title of this session, which is, does hardware matter? It absolutely matters. You can't run software on the air. You can't run it in an ephemeral cloud, although there's the technical cloud, and that's a different issue. The cloud has kind of changed everything. And from a market perspective, in the 40-plus years I've been in this business, I've seen this perception that hardware has to go down in price every year. And part of that was driven by Moore's law. And we're coming to, let's say, a lag or an end, depending on who you talk to, of Moore's law. So we're not doubling our transistors every 18 to 24 months in a chip, and as a result of that, there's been a higher emphasis on software. From a market perception, there's no penalty. The market doesn't put the same pressure on software to reduce the cost every year that it does on hardware, which is kind of bass-ackwards when you think about it. Hardware costs are fixed. Software costs tend to be very low. 
It's kind of a weird thing that we do in the market. And what's changing is we're now starting to treat hardware like software from an OPEX-versus-CapEx perspective. So yes, hardware matters. And we'll talk about that more at length. >> You know, I want to follow up on that. And I wonder if you guys have a thought on this. Bob O'Donnell, you and I have talked about this a little bit. Marc, you just pointed out that Moore's law could be waning. Pat Gelsinger recently at their investor meeting promised that Moore's law is alive and well. And the point I made in "Breaking Analysis" was, okay, great. You know, Pat said doubling transistors every 18 to 24 months; let's say that Intel can do that, even though we know it's waning somewhat. Look at the M1 Ultra from Apple (chuckles). In about 15 months they increased transistor density on their package by 6x. So to your earlier point, Bob, we have these sort of alternative processors that are really changing things. And to Dave Nicholson's point, there's a whole lot of supporting components as well. Do you have a comment on that, Bob? >> Yeah, I mean, it's a great point, Dave. And one thing to bear in mind as well: not only are we seeing a diversity of these different chip architectures and different types of components, as a number of us have raised, but the other big point, and I think it was Keith that mentioned it, is CXL and interconnect on the chip itself, which is dramatically changing things. And a lot of the more interesting advances that are going to continue to drive Moore's law forward in terms of the way we think about performance, if perhaps not number of transistors per se, are the interconnects that become available. You're seeing the development of chiplets or tiles, people use different names, but the idea is you can have different components being put together, eventually in sort of a Lego-block style. 
And what that's also going to allow: not only is that going to give interesting performance possibilities because of the faster interconnect, so you can have shared memory between things, which for big workloads like AI with huge data sets can make a huge difference versus how you talk to memory over a network connection, for example, but you're also going to see more diversity in the types of solutions that can be built. So we're going to see even more choices in hardware from a silicon perspective, because you'll be able to piece together different elements. And oh, by the way, the other benefit of that is we've reached a point in chip architectures where not everything benefits from being smaller. We've been so focused and so obsessed, when it comes to Moore's law, with the size of each individual transistor, and yes, for certain architecture types, CPUs and GPUs in particular, that's absolutely true. But we've already hit the point where things like RF for 5G and Wi-Fi and other wireless technologies, and a whole bunch of other things, actually don't get any better with a smaller transistor size. They actually get worse. So the beauty of these chiplet architectures is you can actually combine different chip manufacturing sizes. You hear about four nanometer and five nanometer along with 14 nanometer on a single chip, each one optimized for its specific application, yet together they can give you the best of all worlds. And so we're just at the very beginning of that era, which I think is going to drive a ton of innovation. Again, it gets back to my comment about different types of devices located in geographically different places: at the edge, in the data center, you know, in a private cloud versus a public cloud. All of those things are going to be impacted, and there'll be a lot more options because of this silicon diversity and this interconnect diversity that we're just starting to see. >> Yeah. David Nicholson's got a graphic on that. 
We're going to show that later. Before we do, I want to introduce some data. I actually want to ask Keith to comment on this before we, you know, go on. This next slide is some data from ETR that shows the percent of customers that cited difficulty procuring hardware. And you can see the red is they had significant issues, and it's most pronounced in laptops and networking hardware on the far right-hand side, but virtually all categories, firewalls, peripherals, servers, storage, are having moderately difficult procurement issues. That's the sort of pinkish shading, versus the red significant challenges. So Keith, I mean, what are you seeing with your customers in the hardware supply chains and bottlenecks? And you know, we're seeing it with automobiles and appliances, so these semiconductor challenges go beyond IT. What's been the impact on the buyer community and society, and do you have any sense as to when it will subside? >> You know, I was just asked this question yesterday, and I'm feeling the pain. As kind of a side project within the CTO Advisor, we built a hybrid infrastructure, a traditional IT data center, where we're walking with the traditional customer and modernizing that data center. So it was, you know, kind of a snapshot in time, 2016, 2017: 10 gigabit, Arista switches, some older Dell R730xds, you know, speeds and feeds. And we said we would modernize that with the latest Intel stack and connect it to the public cloud, and then the pandemic hit, and we are experiencing a lot of the same challenges. I thought we'd easily migrate from 10 gig networking to 25 gig networking, the path that customers are going on. The 10 gig network switches that I bought used are now double the price, because you can't get legacy 10 gig network switches; all of the manufacturers are focusing their capacity on the more profitable 25 gig. Even the 25 gig switches, and we're focused on networking right now, are hard to procure. 
We're talking about nine to 12 months or more lead time. So we're seeing customers adjust by adopting cloud. But if you remember, early on in the pandemic Microsoft Azure kind of gated customers that didn't have a capacity agreement. So customers are keeping an eye on that. There's a desire to abstract away from the underlying vendor, to be able to control or provision your IT services in the way that we do with VMware vSphere or some other virtualization technology, where it doesn't matter who gets me the hardware; whoever can, can just get me the hardware, because it's critically impacting projects and timelines. >> So that's a great setup for you, Zeus, with Keith having mentioned earlier the software-defined data center, with software-defined networking and cloud. Do you see a day where networking hardware is commoditized and it's all about the software, or are we there already? >> No, we're not there already. And I don't see that really happening any time in the near future. I do think it's changed, though. And just to be clear, I mean, when you look at that data, this is saying customers have had problems procuring the equipment, right? And there's not a network vendor out there... I've talked to Norman Rice at Extreme, and I've talked to the folks at Cisco and Arista about this. They all said they could have had blowout quarters had they had the inventory to ship. So it's not like customers aren't buying this anymore, right? I do think, though, when it comes to networking, the network has certainly changed some, because there are a lot more controls, as I mentioned before, that you can do in software. And I think customers need to start thinking about the types of hardware they buy and, you know, where they're going to use it and, you know, what its purpose is. Because I've talked to customers that have tried to run software on commodity hardware where the performance requirements are very high, and it's bogged down, right? It just doesn't have the horsepower to run it. 
And, you know, even when you do that, you have to start thinking about the components you use, the NICs you buy. And I've talked to customers that have simply gone through the process of replacing a NIC card in a commodity box and had some performance problems and, you know, things like that. So if agility is more important than performance, then by all means try running software on commodity hardware. I think that works in some cases. If performance, though, is more important, that's when you need that kind of turnkey hardware system. And I've actually seen more and more customers reverting back to that model. In fact, when you talk to even some startups today about when they come to market, they're delivering things more on appliances, because that's what customers want. And so there's this kind of pendulum of agility and performance. And if performance absolutely matters, that's when you do need to buy these kind of turnkey, prebuilt hardware systems. If agility matters more, that's when you can go more to software, but the underlying hardware still does matter. So I think, you know, will we ever have a day where you can just run it on whatever hardware? Maybe, but I'll long be retired by that point. So I don't care. >> Well, you bring up a good point, Zeus. And I remember the early days of cloud, the narrative was, oh, the cloud vendors, they don't use EMC storage, they just run on commodity storage. And then of course, lo and behold, you know, they trotted out James Hamilton to talk about all the custom hardware that they were building. And you saw Google and Microsoft follow suit. >> Well, (indistinct) been calling for this forever, right? And I mean, all the way back to the turn of the century, we were calling for the commoditization of hardware. And it's never really happened. 
As long as you can drive innovation into it, customers will always lean towards the innovation cycles, 'cause they get more features faster and things like that. And so the vendors have done a good job of keeping that cycle up, but it'll be a long time before that changes. >> Yeah, and that's why you see companies like Pure Storage, a storage company, with 69% gross margins. All right, I want to jump ahead. We're going to bring up slide four. I want to go back to something that Bob O'Donnell was talking about, the sort of supporting act, the diversity of silicon. And we've marched to the cadence of Moore's law for decades. You know, we asked, you know, is Moore's law dead? We say it's moderating. Dave Nicholson, you want to talk about those supporting components, and you shared with us a slide on that shift. You call it a shift from a processor-centric world to a connectivity-centric world. What do you mean by that? Let's bring up slide four, and you can talk to that. >> Yeah, yeah. So first, I want to echo the sentiment that to the question, does hardware matter, the answer is of course it matters. Maybe the real question should be, should you care about it? And the answer to that is, it depends who you are. If you're an end user using an application on your mobile device, maybe you don't care how the architecture is put together. You just care that the service is delivered. But as you back away from that and you get closer and closer to the source, someone needs to care about the hardware, and it should matter. Why? Because essentially what hardware is doing is consuming electricity and dollars, and the more efficiently you can configure hardware, the more bang you're going to get for your buck. So it's not only a quantitative question in terms of how much can you deliver, but it also ends up being a qualitative change, as capabilities allow for things we couldn't do before, because we just didn't have the aggregate horsepower to do it. 
So this chart actually comes out of some performance tests that were done. It happens to be Dell servers with Broadcom components. And the point here was to peel off the top of the server and look at what's in that server, starting with, you know, the PCI interconnect: PCIe gen three, gen four, moving forward. What are the effects of the interconnect on application performance, translating into new orders per minute processed per dollar, et cetera, et cetera? If you look at the advances in CPU architecture mapped against the advances in interconnect and storage subsystem performance, you can see that CPU architecture is sort of lagging behind in a way. And Bob mentioned this idea of tiling and all of the different ways to get around that. When we do performance testing, we can actually peg CPUs just running the performance tests, without any actual database environments working. So right now we're at this sort of imbalance point, where you have to make sure you design things properly to get the most bang per kilowatt-hour of power, per dollar input. So the key thing this is highlighting, just as a very specific example: you take a card that's designed as a gen three PCIe device, and you plug it into a gen four slot. Now the card is the bottleneck. You plug a gen four card into a gen four slot. Now the gen four slot is the bottleneck. So we're constantly chasing these bottlenecks. Someone has to be focused on that; from an architectural perspective, it's critically important. So there's no question that it matters, but of course various people in this food chain won't care where it comes from. I guess a good analogy might be, where does our food come from? If I get a steak, it's a pink thing wrapped in plastic, right? Well, there are a lot of inputs that a lot of people have to care about to get that to me. Do I care about all of those things? No. Are they important? They're critically important. 
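To put rough numbers on that bottleneck-chasing point, here is a minimal back-of-envelope sketch in Python, using the published per-lane signaling rates for PCIe gen three and gen four; real-world throughput is lower once protocol overhead is counted:

```python
# A PCIe link trains to the slower of the two endpoints -- the
# "chasing the bottleneck" point above. Gen3 and gen4 both use
# 128b/130b line encoding, so usable bytes/s = GT/s * 128/130 / 8.

ENCODING = 128 / 130  # 128b/130b line-encoding efficiency

# Raw signaling rate per lane, in gigatransfers per second
GT_PER_LANE = {"gen3": 8.0, "gen4": 16.0}

def lane_gbytes_per_s(gen: str) -> float:
    """Usable GB/s per lane after line-encoding overhead."""
    return GT_PER_LANE[gen] * ENCODING / 8  # bits -> bytes

def link_throughput(card_gen: str, slot_gen: str, lanes: int = 16) -> float:
    """Effective GB/s of the link: bounded by the slower of card and slot."""
    return min(lane_gbytes_per_s(card_gen), lane_gbytes_per_s(slot_gen)) * lanes

# A gen3 card in a gen4 slot: the card is the bottleneck.
print(round(link_throughput("gen3", "gen4"), 1))  # 15.8
# A gen4 card in a gen4 slot: the link itself is the new ceiling.
print(round(link_throughput("gen4", "gen4"), 1))  # 31.5
```

The `min()` is the whole story: upgrading the slot alone buys nothing until the card catches up, which is exactly the constant rebalancing being described.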
>> So, okay, I want to get to the "so what." What does this all mean to customers? And so what I'm hearing from you is, to balance a system, it's becoming, you know, more complicated. And I've kind of been waiting for this day for a long time, because as we all know, the bottleneck was always the spinning disk, the last mechanical device. So people who wrote software knew that when they were doing it right, the disk had to go and do stuff, and so they were doing other things in the software. And now, with all these new interconnects and flash and things, you could do atomic writes. And so that opens up new software possibilities; combine that with alternative processors. But what's the so-what on this to the customer, and the application impact? Can anybody address that? >> Yeah, let me address that for a moment. I want to leverage some of the things that Bob said, Keith said, Zeus said, and David said. So I'm a bit of a contrarian on some of this. For example, on the chip side: as the chips get smaller, 14 nanometer, 10 nanometer, five nanometer, soon three nanometer, we talk about more cores, but the biggest problem on the chip is the interconnect on the chip, 'cause the wires get smaller. People don't realize, in 2004 the latency on those wires in the chips was 80 picoseconds. Today it's 1,300 picoseconds. That's on the chip. This is why they're not getting faster. So we may be getting a little bit of slowing down in Moore's law. But even as we kind of conquer that, you still have the interconnect problem, and the interconnect problem goes beyond the chip. It goes within the system, composable architectures. It goes to the point Keith made: ultimately you need a hybrid, because what we're seeing, what I'm seeing when I'm talking to customers, the biggest issue they have is moving data. Whether it be in a chip, in a system, in a data center, between data centers, moving data is now the biggest gating item in performance. 
So if you want to move it from, let's say, your transactional database to your machine learning, it's the bottleneck; it's moving the data. And so when you look at it from a distributed environment, now you've got to move the compute to the data. The only way to get around these bottlenecks today is to spend less time trying to move the data and more time taking the compute, the software running on hardware, closer to the data. Go ahead. >> So is this what you mean when Nicholson was talking about a shift from a processor-centric world to a connectivity-centric world? You're talking about moving the bits across all the different components; the processor, you're saying, is essentially becoming the bottleneck, or the memory, I guess. >> Well, that's one of them, and there are a lot of different bottlenecks, but it's the data movement itself. It's moving away from, wait, why do we need to move the data? Can we move the compute, the processing, closer to the data? Because if we keep them separate... and this has been a trend now where people are moving processing away from it. It's like the edge. I think it was Zeus or David, you were talking about the edge earlier. As you look at the edge, who defines the edge, right? Is the edge a closet, or is it a sensor? If it's a sensor, how do you do AI at the edge, when you don't have enough power, you don't have enough compute? People are inventing chips to do that, to do all that at the edge, to do AI within the sensor, instead of moving the data to a data center or a cloud to do the processing. Because the lag in latency is always limited by the speed of light. How fast can you move the electrons? And all this interconnect, all the processing, and all the improvement we're seeing in the PCIe bus from three, to four, to five, to CXL, to higher bandwidth on the network, that's all great, but none of it deals with the speed-of-light latency. And that's an-- Go ahead. 
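Marc's speed-of-light point is easy to make concrete. A rough Python sketch, where the fiber propagation figure and link speeds are illustrative assumptions rather than measurements:

```python
# Why "move the compute to the data": past a certain data size, transfer
# time dwarfs everything, and round-trip latency has a physical floor
# that no bus or NIC upgrade removes.

LIGHT_IN_FIBER_KM_PER_MS = 200.0  # light covers roughly 200 km per ms in fiber

def propagation_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay alone, before any queuing or processing."""
    return 2 * distance_km / LIGHT_IN_FIBER_KM_PER_MS

def transfer_hours(gigabytes: float, link_gbit_per_s: float) -> float:
    """Hours to push a dataset over a link, ignoring protocol overhead."""
    return gigabytes * 8 / link_gbit_per_s / 3600

# A 4,000 km fiber path has a ~40 ms round-trip floor, full stop.
print(propagation_rtt_ms(4_000))              # 40.0
# Shipping a 10 TB dataset over a 10 Gb/s link takes over two hours...
print(round(transfer_hours(10_000, 10), 1))   # 2.2
# ...versus under a second to ship back a 1 GB result from compute
# placed next to the data.
print(round(transfer_hours(1, 10) * 3600, 1))
```

Faster interconnects shrink the second number, but the first is physics, which is the argument for placing processing at the sensor or the edge in the first place.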
>> You know, Marc, I just want to, just because what you're referring to could be looked at at a macro level, which I think is what you're describing, you can also look at it at a more micro level, from a systems design perspective, right? I'm going to be the resident knuckle-dragging hardware guy on the panel today. But it's exactly right. Moving compute closer to data includes concepts like peripheral cards that have built-in intelligence, right? So again, in some of this testing that I'm referring to, we saw dramatic improvements when you basically took the horsepower, instead of using the CPU horsepower, for things like I/O. Now you have essentially offload engines in the form of storage controllers, RAID controllers, and of course Ethernet NICs, SmartNICs. And so when you can have these sorts of offload engines... and we've gone through these waves over time. People think, well, wait a minute, a RAID controller and NVMe flash storage devices? Does that make sense? It turns out it does. Why? Because you're actually, at a micro level, doing exactly what you're referring to. You're bringing compute closer to the data. Now, closer to the data meaning closer to the data storage subsystem. It doesn't solve the macro issue that you're referring to, but it is important. Again, going back to this idea of system design optimization: always chasing the bottleneck, plugging the holes. Someone needs to do that in this value chain in order to get the best value for every kilowatt-hour of power and every dollar. >> Yeah. >> Well, this whole drive for performance has created some really interesting architectural designs, right? Like Nicholson said, the rise of the DPU, right? It brings more processing power into systems that already had a lot of processing power. There's also been some really interesting, you know, kind of innovation in the area of systems architecture too. 
If you look at the way Nvidia goes to market, their DRIVE kit is a prebuilt piece of hardware, you know, optimized for self-driving cars, right? They partnered with Pure Storage and Arista to build that AI-ready infrastructure. I remember when I talked to Charlie Giancarlo, the CEO of Pure, about when the three companies rolled that out. He said, "Look, if you're going to do AI, you need good storage. You need fast storage, a fast processor, and a fast network." And so for customers to be able to put that together themselves was very, very difficult. There's a lot of software that needs tuning as well. So the three companies partnered together to create a fully integrated, turnkey hardware system with a bunch of optimized software that runs on it. And so in that case, in some ways the hardware was leading the software innovation. And so the variety of different architectures we have today around hardware has really exploded. And I think that's part of what Bob brought up at the beginning about the different chip designs. >> Yeah, Bob talked about that earlier. Bob, I mean, most AI today is modeling, you know, and a lot of that's done in the cloud, and it looks from my standpoint anyway that the future is going to be a lot of AI inferencing at the edge. And that's a radically different architecture, Bob, isn't it? >> It is, it's a completely different architecture. And just to follow up on a couple points, excellent conversation, guys. Dave talked about system architecture, and really that's what this boils down to, right? But it's looking at architecture at every level. I was talking about the individual different components, the new interconnect methods. There's this new thing called UCIe, universal chiplet interconnect... I forget exactly what it stands for, but it's a mechanism for doing chiplet architectures. But then again, you have to take it up to the system level, 'cause it's all fine and good 
if you have this SoC that's tuned and optimized, but it has to talk to the rest of the system. And that's where you see other issues, and you've seen things like CXL and other interconnect standards, you know. And nobody likes to talk about interconnect, 'cause it's really wonky and really technical and not that sexy, but at the end of the day it's incredibly important, exactly to the other points that were being raised, like Marc raised, for example, about getting that compute closer to where the data is. And that's where, again, a diversity of chip architectures helps. And exactly to your last comment there, Dave, putting that ability in an edge device is really at the cutting edge of what we're seeing in semiconductor design, and the ability to, for example, maybe it's an FPGA, maybe it's a dedicated AI chip, another kind of chip architecture that's being created to do that inferencing on the edge. Because again, the cost and the challenges of moving lots of data, whether it be from, say, a smartphone to a cloud-based application, or whether it be from a private network to a cloud, or any other kinds of permutations we can think of, really matter. And the other thing is, we're tackling bigger problems. So architecturally, not even just architecturally within a system, but when we think about DPUs and the sort of east-west data center movement conversation that we hear Nvidia and others talk about, it's about combining multiple sets of these systems to function together more efficiently, again with even bigger sets of data. So it really is about tackling where the processing is needed, having the interconnect and the ability to get the data you need to the right place at the right time. And because those needs are diversifying, we're just going to continue to see an explosion of different choices and options, which is going to make hardware even more essential, I would argue, than it is today. 
And so I think what we're going to see is not only does hardware matter, it's going to matter even more in the future than it does now. >> Great, yeah. Great discussion, guys. I want to bring Keith back into the conversation here. Keith, if your main expertise in tech is provisioning LUNs, you probably want to look for another job. So clearly maybe hardware matters, but with software-defined everything, do people with hardware expertise matter, outside of, for instance, component manufacturers or cloud companies? I mean, VMware certainly changed the dynamic in servers. Dell just spun off its most profitable asset in VMware, so it obviously thinks hardware can stand alone. How does an enterprise architect view the shift to software-defined hyperscale cloud, and how do you see the shifting demand for skills in enterprise IT? >> So I love the question, and I'll take a different view of it. If you're a data analyst and your primary value-add is that you do ETL transformation... I talked to a CDO, a chief data officer over a midsize bank, a little bit ago. He said 80% of his data scientists' time is spent on ETL. Super not value-add. He wants his data scientists to do data science work. Chances are, if your only value is that you do LUN provisioning, then you probably don't have a job now. The technologies have gotten much more intelligent. As infrastructure pros, we want to give infrastructure pros the opportunities to shine, and I think the software-defined nature and the automation that we're seeing vendors undertake, whether it's Dell, HPE, Lenovo, take your pick, or Pure Storage, NetApp, that are doing the automation and the ML needed, means these practitioners don't spend 80% of their time doing LUN provisioning and can focus on their true expertise, which is ensuring that data is stored, data is retrievable, data's protected, et cetera. 
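As an illustration of the automation Keith is describing, provisioning in that software-defined world looks less like carving LUNs by hand and more like composing declarative API requests. A minimal Python sketch follows; the endpoint, field names, and defaults are wholly hypothetical, not any specific vendor's API:

```python
import json

def provision_volume_request(name: str, size_gb: int,
                             tier: str = "performance") -> dict:
    """Build the body of a hypothetical 'create volume' REST call.

    The storage platform, not the operator, decides the LUN layout;
    the request only states intent: name, size, service tier, protection.
    """
    if size_gb <= 0:
        raise ValueError("size_gb must be positive")
    return {
        "method": "POST",
        "path": "/api/v1/volumes",  # hypothetical endpoint
        "body": {
            "name": name,
            "size_gb": size_gb,
            "tier": tier,
            "protected": True,  # snapshots/replication on by default
        },
    }

req = provision_volume_request("analytics-scratch", 500)
print(json.dumps(req["body"], sort_keys=True))
```

The practitioner's value shifts from executing the low-level steps to deciding the intent, which is the point about freeing up that 80% of the time.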
I think the shift is to focus on that part of the job, ensuring it no matter where the data's at, because my data is spread across the enterprise, hybrid, different types... you know, Dave, you talk about the supercloud a lot. If my data is in the supercloud, protecting that data and securing that data becomes much more complicated than when it was me just procuring or provisioning LUNs. So when you ask where the shift, or the focus, should be, you know, it's on the real value, which is making sure that customers can access data, can recover data, can get data at the performance levels they need, within the price point they need, to get at those datasets where they need them. We talked a lot about the where. One last point about this interconnecting: I have this vision, and I think we all do, of composable infrastructure, this idea that scale-out does not solve every problem. The cloud can give me infinite scale-out. Sometimes I just need a single OS with 64 terabytes of RAM and 204 GPUs or GPU instances. That single OS does not exist today. And the opportunity is to create composable infrastructure so that we solve a lot of these problems that simply don't scale out. >> You know, wow, so many interesting points there. I had just interviewed Zhamak Dehghani, who created Data Mesh, last week. And she made a really interesting point. She said, "Think about it: we have separate stacks. We have an application stack, and we have a data pipeline stack, and the transaction systems, the transaction database; we extract data from that," to your point, "we ETL it in, you know, it takes forever. And then we have this separate sort of data stack." If we're going to inject more intelligence and data and AI into applications, those two stacks, her contention is, have to come together. And when you think about, you know, supercloud bringing compute to data, that was what Hadoop was supposed to be. 
It ended up all sort of going into a central location, but it's almost a rhetorical question. I mean, it seems that that necessitates new thinking around hardware architectures, as kind of everything's the edge. And the other point is, to your point, Keith, it's really hard to secure that. So then you can think about offloads, right? You've heard the stats; you know, Nvidia talks about it, Broadcom talks about it, that 25 to 30% of CPU cycles are wasted on doing things like storage offloads, or networking, or security. It seems like, maybe, Zeus, you have a comment on this: it seems like new architectures need to come together to support, you know, all of that stuff that Keith and I just discussed. >> Yeah, and by the way, I do want to address, Keith, the question you were just asked. Keith, it's the point I made at the beginning too, about engineers needing to be more software-centric, right? They do need to have better software skills. In fact, I remember talking to Cisco about this last year, when they surveyed their engineer base; only about a third of 'em had ever made an API call, which, you know, kind of shows this big skillset change that has to come. But on the point of architectures, I think the big change here is edge, because it brings in distributed compute models. Historically, when you think about compute, even with multi-cloud, we never really had multi-cloud. We'd use multiple centralized clouds, but compute was always centralized, right? It was in a branch office, in a data center, in a cloud. With edge, what it creates is the rise of distributed computing, where we'll have an application that actually accesses different resources at different edge locations. And I think, Marc, you were talking about this: the edge could be in your IoT device. It could be your campus edge. It could be the cellular edge, it could be your car, right? 
And so we need to start thinkin' about how our applications interact with all those different parts of that edge ecosystem, you know, to create a single experience. A lot of consumer apps largely work that way. If you think of an app like Uber, right? It pulls in information from all kinds of different edge applications, edge services, and, you know, it creates a pretty cool experience. We're just starting to get to that point in the business world now. There are a lot of security implications and things like that, but I do think it drives more architectural decisions to be made about how I deploy what data where, and where I do my processing, where I do my AI, and things like that. It actually makes the world more complicated, even as in some ways we can do so much more with it. But I think it does drive us more towards turnkey systems, at least initially, in order to, you know, ensure performance and security. >> Right. Marc, I wanted to go to you. You had indicated to me that you wanted to chat about this a little bit. You've written quite a bit about the integration of hardware and software. You know, we've watched Oracle's move from, you know, buying Sun and then basically using that in a highly differentiated approach: engineered systems. What's your take on all that? I know you also have some thoughts on the shift from CapEx to OPEX; chime in on that. >> Sure. When you look at it, there are advantages to having one vendor who has the software and hardware. They can synergistically make them work together in ways you can't do on a commodity basis, where you own the software and somebody else has the hardware. An example would be Oracle. As you talked about, with their Exadata platform, they literally are leveraging microcode in the Intel chips, and now in AMD chips, and all the way down to Optane. They make, basically, AMD database servers work with Optane persistent memory, PMem, in their storage systems, not NVMe SSDs. I'm talking about the cards themselves. 
So there are advantages you can take advantage of if you own the stack, as you were pointing out earlier, Dave, of both the software and the hardware. Okay, that's great. But on the other side of that, that tends to give you better performance, but it tends to cost a little more. On the commodity side it costs less, but you get less performance. As Zeus said earlier, it depends where you're running your application. How much performance do you need? What kind of performance do you need? One of the things about moving to the edge, and I'll get to the OPEX-CapEx in a second, one of the issues about moving to the edge is what kind of processing do you need? If you're running on a CCTV camera on top of a traffic light, how much power do you have? How much cooling do you have that you can run this? And more importantly, do you have to take the data you're getting and move it somewhere else to get processed, with the information sent back? I mean, there are companies out there like BrainChip that have developed AI chips that can run on the sensor without a CPU, without any additional memory. So, I mean, there's innovation going on to deal with this question of data movement. There's companies out there like Tachyum that are combining GPUs, CPUs, and DPUs in a single chip. Think of it as a super composable architecture. They're looking at being able to do more in less. On the OPEX and CapEx issue... >> Hold that thought, hold that thought on the OPEX-CapEx, 'cause we're running out of time, and maybe you can wrap on that. I just wanted to pick up on something you said about integrated hardware and software. I mean, other than the fact that, you know, Michael Dell unlocked whatever $40 billion for himself and Silver Lake, I was always a fan of a spin-in with VMware, basically becoming the Oracle of hardware.
Now, I know it would've been a nightmare for the ecosystem, and culturally they probably would've had a VMware brain drain, but does anybody have any thoughts on that as a sort of thought exercise? I was always a fan of that on paper. >> I've got to eat a little crow. I did not like the Dell-VMware acquisition for the industry in general, and I think it hurt the industry in general. HPE, Cisco walked away a little bit from that VMware relationship. But when I talked to customers, they loved it. You know, I've got to be honest, they absolutely loved the integration. The VxRail, VxRack solution exploded. Nutanix became kind of an afterthought when it came to competing. So that spin-in, when we talk about the ability to innovate and the ability to create solutions that you just simply can't create because you don't have the full stack, Dell was well positioned to do that with a potential spin-in of VMware. >> Yeah, we're going to be-- Go ahead, please. >> Yeah, in fact, I think you're right, Keith, it was terrible for the industry. Great for Dell. And I remember talking to Chad Sakac when he was running, you know, VCE, which became VxRack and VxRail. Their ability to stay in lockstep with what VMware was doing, what was the number one workload running on hyperconverged forever? It was VMware. So their ability to remain in lockstep with VMware gave them a huge competitive advantage, and Dell came out of nowhere in, you know, the hyperconverged market and just started taking share because of that relationship. So, you know, from a Dell perspective, I thought it gave them a pretty big advantage that they didn't really exploit across their other properties, right? Networking and servers and things, like they could have, given the dominance that VMware had. From an industry perspective, though, I do think it's better to have them decoupled. So. >> I agree. I mean, they could.
I think they could have dominated in supercloud, and maybe they would've become the next Oracle, where everybody hates 'em, but they kick ass. But guys, we've got to wrap up here. And so what I'm going to do is go in reverse order this time. You know, big takeaways from this conversation today, which, guys, by the way, I can't thank you enough for, phenomenal insights. But big takeaways, any final thoughts, any research that you're working on that you want to highlight, or, you know, what you look for in the future? Try to keep it brief. We'll go in reverse order. Maybe Marc, you could start us off, please. >> Sure, on the research front, I'm working on a total-cost-of-ownership comparison of an integrated database, analytics, and machine learning system versus separate services. The other aspect I wanted to chat about real quickly is OPEX versus CapEx. The cloud changed the market perception of hardware in the sense that you can use hardware, or buy hardware, like you do software: as you use it, pay for what you use, in arrears. The good thing about that is you're only paying for what you use, period. You're not paying for what you don't use. I mean, it's compute time, everything else. The bad side of that is you have no predictability in your bill. It's elastic, but every user I've talked to says every month it's different. And from a budgeting perspective, it's very hard to set up your budget year to year, and it's causing a lot of nightmares. So it's just something to be aware of. From a CapEx perspective, you have no more CapEx if you're using that kind of usage-based system, but you lose a certain amount of control as well. So ultimately those are some of the issues. But my biggest takeaway from this is that the biggest issue right now, for everybody I talk to, in some shape or form comes down to data movement, whether it be the ETLs that you talked about, Keith, or other aspects: moving it between hybrid locations, moving it within a system, moving it within a chip.
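Marc's CapEx-versus-OPEX point about unpredictable elastic bills can be sketched with a toy cost model. Every dollar figure and usage number below is invented for illustration; only the shape of the comparison matters:

```python
# Toy model: fixed CapEx purchase vs. elastic pay-per-use OPEX.
# Illustrates the point that consumption billing means you pay only
# for what you use, but lose month-to-month predictability.
# All prices and hours are hypothetical.
import random

random.seed(7)  # deterministic "random" usage for the example

CAPEX_PURCHASE = 120_000  # one-time hardware buy (hypothetical)
OPEX_RATE = 3.50          # cloud price per compute-hour (hypothetical)

def opex_bill(hours: int) -> float:
    """Pay-in-arrears bill for one month of usage."""
    return hours * OPEX_RATE

# Elastic usage: a different number of hours every month.
monthly_hours = [random.randint(1_500, 4_500) for _ in range(12)]
bills = [opex_bill(h) for h in monthly_hours]

print(f"CapEx, fixed up front: ${CAPEX_PURCHASE:,}")
print(f"OPEX, year total:      ${sum(bills):,.2f}")
print(f"OPEX, cheapest month:  ${min(bills):,.2f}")
print(f"OPEX, priciest month:  ${max(bills):,.2f}")
```

The spread between the cheapest and priciest months is exactly the budgeting headache described above: the CapEx number is known on day one, while the OPEX total isn't known until the year is over.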
All those are key issues. >> Great, thank you. Okay, CTO Advisor, give us your final thoughts. >> All right. Really, really great commentary. Again, I'm going to point back to us taking the walk that our customers are taking, which is trying to do this conversion from an all-private data center to a hybrid, of which I have this hard-earned philosophy that enterprise IT is additive: when we add a service, we rarely subtract a service. So the landscape and surface area of what we support has to grow. So our research focuses on taking that walk. We are taking a monolithic application, decomposing that into containers, putting that in a public cloud, connecting that back to the private data center, and telling that story and walking that walk with our customers. This has been a super enlightening panel. >> Yeah, thank you. Real, real different world coming. David Nicholson, please. >> You know, it really hearkens back to the beginning of the conversation. You talked about momentum in the direction of cloud. I'm sort of spending my time under the hood, getting grease under my fingernails, focusing on where the lion's share of spend will still be in coming years, which is on-prem, and then, of course, data center infrastructure for cloud. But really diving under the covers and helping folks understand the ramifications of movement between generations of CPU architecture. I know we all know Sapphire Rapids pushed into the future. When's the next Intel release coming? Who knows? We think, you know, in 2023. There have been a lot of people standing by, from a practitioner's standpoint, asking, well, what do I do between now and then? Does it make sense to upgrade bits and pieces of hardware, or go from a last generation to a current generation when we know the next generation is coming? And so I've been very, very focused on looking at how these connectivity components, like RAID controllers and NICs.
I know it's not as sexy as talking about cloud, but just how these components completely change the game and actually can justify movement from, say, a 14th-generation architecture to a 15th-generation architecture today, even though gen 16 is coming, let's say, 12 months from now. So that's where I am. Keep my phone number in the Rolodex. I literally reference Rolodex intentionally, because, like I said, I'm in there under the hood, and it's not as sexy. But yeah, so that's what I'm focused on, Dave. >> Well, you know, to paraphrase, maybe a derivative paraphrase of, you know, Larry Ellison's rant on what is cloud, it's operating systems and databases, et cetera. RAID controllers and NICs live inside of clouds. All right. You know, one of the reasons I love working with you guys is 'cause you have such a wide observation space, and Zeus Kerravala, you of all people, you know, have your fingers in a lot of pies. So give us your final thoughts. >> Yeah, I'm not as propeller-heady as my chip counterparts here. (all laugh) So, you know, I look at the world a little differently, and a lot of the research I'm doing now is on the impact that distributed computing has on customer and employee experiences, right? You talk to every business, and the experiences they deliver to their customers is really differentiating how they go to market. And so they're looking at these different ways of serving up data and analytics and things like that in different places. And I think this is going to have a really profound impact on enterprise IT architecture. We're putting more data, more compute, in more places, all the way down to little micro edges in retailers and things like that. And so we need the variety. Historically, if you think back to when I was in IT, you know, pre-Y2K, we didn't have a lot of choice in things, right? We had a server that was rack-mount or standup, right? And there wasn't a whole lot of, you know, difference in choice.
But today we can deploy, you know, these really high-performance compute systems on little blades inside servers, or inside, you know, autonomous vehicles and things. I think the world from here gets... You know, just the choice of what we have, and the way hardware and software work together, is really going to, I think, change the way we do things. We're already seeing that, like I said, in the consumer world, right? There's so many things you can do from, you know, a smart home perspective, you know, natural language processing, stuff like that. And it's starting to hit businesses now. So just wait and watch the next five years. >> Yeah, totally. The computing power at the edge is just going to be mind-blowing. >> It's unbelievable what you can do at the edge. >> Yeah, yeah. Hey, Z, I just want to say that we know you're not a propeller head, and I, for one, would like to thank you for having your master's thesis hanging on the wall behind you, 'cause we know that you studied basket weaving. >> I was actually a physics and math major, so. >> Good man. Another math major. All right, Bob O'Donnell, you're going to bring us home. I mean, we've seen the importance of semiconductors and silicon in our everyday lives, but your last thoughts, please. >> Sure, and just to clarify, by the way, I was a Great Books major, and this was actually my final paper. And so I was into, like, philosophy and all that kind of stuff, and literature, but I still somehow got into tech. Look, it's been a great conversation, and I want to pick up a little bit on a comment Zeus made, which is this: it's the combination of the hardware and the software coming together, and the manner with which that needs to happen, that I think is critically important. And the other thing is, because of the diversity of the chip architectures and all those different pieces and elements, it's going to be how software tools evolve to adapt to that new world. So I look at things like what Intel's trying to do with oneAPI.
You know, what Nvidia has done with CUDA, what other platform companies are doing, trying to create tools that allow them to leverage the hardware, but also embrace the variety of hardware that is there. And so as those software development environments and software development tools evolve to take advantage of these new capabilities, that's going to open up a lot of interesting opportunities that can leverage all these new chip architectures, that can leverage all these new interconnects, that can leverage all these new system architectures. Figuring out ways to make that all happen, I think, is going to be critically important. And then finally, I'll mention the research I'm actually currently working on is on private 5G, how companies are thinking about deploying private 5G, and the potential for edge applications there. So I'm doing a survey of several hundred US companies as we speak, and really looking forward to getting that done in the next couple of weeks. >> Yeah, look forward to that. Guys, again, thank you so much. Outstanding conversation. Anybody going to be at Dell Tech World in a couple of weeks? Bob's going to be there. Dave Nicholson. Well, drinks on me, and guys, I really can't thank you enough for the insights and your participation today. Really appreciate it. Okay, and thank you for watching this special power panel episode of theCUBE Insights, powered by ETR. Remember, we publish each week on siliconangle.com and wikibon.com. All these episodes are available as podcasts. DM me or any of these guys. I'm at DVellante. You can email me at David.Vellante@siliconangle.com. Check out etr.ai for all the data. This is Dave Vellante. We'll see you next time. (upbeat music)