Opening Panel | Generative AI: Hype or Reality | AWS Startup Showcase S3 E1
(light airy music) >> Hello, everyone, welcome to theCUBE's presentation of the AWS Startup Showcase, AI and machine learning: "Top Startups Building Generative AI on AWS." This is season three, episode one of the ongoing series covering the exciting startups from the AWS ecosystem, talking about AI and machine learning. We have three great guests: Bratin Saha, Vice President of Machine Learning and AI Services at Amazon Web Services; Tom Mason, the CTO of Stability AI; and Aidan Gomez, CEO and co-founder of Cohere. Two practitioners doing startups, and AWS. Gentlemen, thank you for opening up this session, this episode. Thanks for coming on. >> Thank you. >> Thank you. >> Thank you. >> So the topic is hype versus reality. So I think we're all on the same page: the reality is great, hype is great, but the reality's here. I want to get into it. Generative AI's got all the momentum, it's going mainstream, it's kind of come out from behind the ropes, it's now mainstream. We saw the success of ChatGPT, which opened everyone's eyes, but there's so much more going on. Let's jump in and get your early perspectives on what should people be talking about right now? What are you guys working on? We'll start with AWS. What's the big focus right now for you guys as you come into this market that's highly active, highly hyped up, but people see value right out of the gate? >> You know, we have been working on generative AI for some time. In fact, last year we released CodeWhisperer, which is about using generative AI for software development, and a number of customers are using it and getting real value out of it. So generative AI is now something that's mainstream that can be used by enterprise users. And we have also been partnering with a number of other companies. So, you know, Stability AI, we've been partnering with them a lot. We want to be partnering with other companies as well, in seeing how we do three things. You know, first is providing the most efficient infrastructure for generative AI. And that is where, you know, things like Trainium, things like Inferentia, things like SageMaker come in. And then next is the set of models, and then the third is the kind of applications like CodeWhisperer and so on. So, you know, it's early days yet, but clearly there's a lot of amazing capabilities that will come out and something that, you know, our customers are starting to pay a lot of attention to. >> Tom, talk about your company and what your focus is and why the Amazon Web Services relationship's important for you? >> So yeah, we're primarily committed to making incredible open source foundation models, and obviously Stable Diffusion's been our kind of first big model there, which we trained all on AWS. We've been working with them over the last year and a half to develop, obviously, a big cluster, and bring all that compute to training these models at scale, which has been a really successful partnership. And we're excited to take it further this year as we develop the commercial strategy of the business and build out, you know, the ability for enterprise customers to come and get all the value from these models that we think they can get. So we're really excited about the future. We've got a hugely exciting pipeline for this year with new modalities and video models and wonderful things, and trying to solve images once and for all and get the kind of general value and value proposition correct for customers. So it's a really exciting time and very honored to be part of it.
>> It's great to see some of your customers doing so well out there. Congratulations to your team. Appreciate that. Aidan, let's get into what you guys do. What does Cohere do? What are you excited about right now? >> Yeah, so Cohere builds large language models, which are the backbone of applications like ChatGPT and GPT-3. We're extremely focused on solving the issues with adoption for enterprise. So it's great that you can make a super flashy demo for consumers, but it takes a lot to actually get it into billion user products and large global enterprises. So about six months ago, we released our command models, which are some of the best that exist for large language models. And in December, we released our multilingual text understanding models and that's on over a hundred different languages and it's trained on, you know, authentic data directly from native speakers. And so we're super excited to continue pushing this into enterprise and solving those barriers for adoption, making this transformation a reality. >> Just real quick, while I got you there on the new products coming out. Where are we in the progress? People see some of the new stuff out there right now. There's so much more headroom. Can you just scope out in your mind what that looks like? Like from a headroom standpoint? Okay, we see ChatGPT. "Oh yeah, it writes my papers for me, does some homework for me." I mean okay, yawn, maybe people say that, (Aidan chuckles) people excited or people are blown away. I mean, it's helped theCUBE out, it helps me, you know, feed up a little bit from my write-ups but it's not always perfect. >> Yeah, at the moment it's like a writing assistant, right? And it's still super early in the technologies trajectory. I think it's fascinating and it's interesting but its impact is still really limited. I think in the next year, like within the next eight months, we're going to see some major changes. You've already seen the very first hints of that with stuff like Bing Chat, where you augment these dialogue models with an external knowledge base. So now the models can be kept up to date to the millisecond, right? Because they can search the web and they can see events that happened a millisecond ago. But that's still limited in the sense that when you ask the question, what can these models actually do? Well they can just write text back at you. That's the extent of what they can do. And so the real project, the real effort, that I think we're all working towards is actually taking action. So what happens when you give these models the ability to use tools, to use APIs? What can they do when they can actually affect change out in the real world, beyond just streaming text back at the user? I think that's the really exciting piece. >> Okay, so I wanted to tee that up early in the segment 'cause I want to get into the customer applications. We're seeing early adopters come in, using the technology because they have a lot of data, they have a lot of large language model opportunities and then there's a big fast follower wave coming behind it. I call that the people who are going to jump in the pool early and get into it. They might not be advanced. Can you guys share what customer applications are being used with large language and vision models today and how they're using it to transform on the early adopter side, and how is that a tell sign of what's to come? 
>> You know, one of the things we have been seeing, both with the text models that Aidan talked about as well as the vision models that Stability AI does, Tom, is customers are really using it to change the way they interact with information. You know, one example of a customer that we have is someone who's kind of using that to query customer conversations and ask questions like, you know, "What was the customer issue? How did we solve it?" And trying to get those kinds of insights that were previously much harder to do. And then of course software is a big area. You know, generating software, making that, you know, just deploying it in production. Those have been really big areas that we have seen customers start to do. You know, looking at documentation, like instead of, you know, searching for stuff and so on, you know, you just have an interactive way in which you can just look at the documentation for a product. You know, all of this goes to where we need to take the technology. One of which is, you know, the models have to be there, but they have to work reliably in a production setting at scale, with privacy, with security, and, you know, making sure all of this is happening is going to be really key. That is what, you know, we at AWS are looking to do, which is work with partners like Stability and others in the open source community and really take all of these and make them available at scale to customers, where they work reliably. >> Tom, Aidan, what are your thoughts on this? Where are customers landing on these first use cases or set of low-hanging-fruit use cases or applications? >> Yeah, so I think like the first group of adopters that really found product market fit were the copywriting companies. So one great example of that is HyperWrite. Another one is Jasper. And so for Cohere, that's the tip of the iceberg, like there's a very long tail of usage from a bunch of different applications. HyperWrite is one of our customers, they help beat writer's block by drafting blog posts, emails, and marketing copy. We also have a global audio streaming platform, which is using us to power a search engine that can comb through podcast transcripts in a bunch of different languages. Then a global apparel brand, which is using us to transform how they interact with their customers through a virtual assistant, two dozen global news outlets who are using us for news summarization. So really like, these large language models, they can be deployed all over the place into every single industry sector, language is everywhere. It's hard to think of any company on Earth that doesn't use language. So it's, very, very- >> We're doing it right now. We got the language coming in. >> Exactly. >> We'll transcribe this puppy. All right. Tom, on your side, what do you see the- >> Yeah, we're seeing some amazing applications of it and, you know, I guess that's partly been because of the growth in the open source community, and some of these applications have come from there that are then triggering this secondary wave of innovation, which is coming a lot from, you know, controllability and explainability of the model. But we've got companies like, you know, Jasper, which Aidan mentioned, who are using Stable Diffusion for image generation in blog creation, content creation. We've got Lensa, you know, which exploded, and is built on top of Stable Diffusion for fine-tuning so people can bring themselves and their pets and, you know, everything into the models.
So we've now got fine-tuned Stable Diffusion at scale, which has democratized, you know, that process, which is really fun to see. You know, Lensa exploded. I think it was the fastest-growing app in the App Store at one point. And lots of other examples like NightCafe and Lexica and Playground. So seeing lots of cool applications. >> So many applications, we'll probably be a customer for all you guys. We'll definitely talk after. But the challenges are there for people adopting, they want to get into what you guys see as the challenges that turn into opportunities. How do you see the customers adopting generative AI applications? For example, we have massive amounts of transcripts, timed up to all the videos. I don't even know what to do. Do I just, do I code my API there? So, everyone has this problem, every vertical has these use cases. What are the challenges for people getting into this and adopting these applications? Is it figuring out what to do first? Or is it a technical setup? Do they stand up stuff, they just go to Amazon? What do you guys see as the challenges? >> I think, you know, the first thing is coming up with where you think you're going to reimagine your customer experience by using generative AI. You know, we talked about Ada, and Tom talked about a number of these ones, and you know, you pick up one or two of these to get that robust. And then once you have them, you know, we have models and we'll have more models on AWS, these large language models that Aidan was talking about. Then you go in and start using these models and testing them out and seeing whether they fit your use case or not. In many situations, like you said, John, our customers want to say, "You know, I know you've trained these models on a lot of publicly available data, but I want to be able to customize it for my use cases. Because, you know, there's some knowledge that I have created and I want to be able to use that." And then in many cases, and I think Aidan mentioned this, you know, you need these models to be up to date. Like you can't have them staying static. And in those cases, you augment it with a knowledge base, and you know you have to make sure that these models are not hallucinating. And so you need to be able to do the right kind of responsible AI checks. So, you know, you start with a particular use case, and there are a lot of them. Then, you know, you can come to AWS, and then look at one of the many models we have and, you know, we are going to have more models for other modalities as well. And then, you know, play around with the models. We have a playground kind of thing where you can test these models on some data and then you can probably, you will probably want to bring your own data, customize it to your own needs, do some of the testing to make sure that the model is giving the right output and then just deploy it. And you know, we have a lot of tools. >> Yeah. >> To make this easy for our customers. >> How should people think about large language models? Because do they think about it as something that they tap into with their IP or their data? Or is it a large language model that they apply to their system? Is the interface that way? What's the interaction look like? >> In many situations, you can use these models out of the box. But typically, in most of the other situations, you will want to customize it with your own data or with your own expectations. So the typical use case would be, you know, these models are exposed through APIs.
So the typical use case would be, you know, you're using these APIs a little bit for testing and getting familiar, and then there will be an API that will allow you to train this model further on your data. So you use that API, you know, make sure you augment it with the knowledge base. So then you use those APIs to customize the model and then just deploy it in an application. You know, like Tom was mentioning, there are a number of companies that are using these models. So once you have it, then you know, you again use an endpoint API and use it in an application. >> All right, I love the example. I want to ask Tom and Aidan, because like most of my experience with Amazon Web Services in 2007, I would stand up an EC2 instance, put my code on there, play around, if it didn't work out, I'd shut it down. Is that a similar dynamic we're going to see with machine learning, where developers just kind of log in and stand up infrastructure and play around and then have a cloud-like experience? >> So I can go first. So I mean, we obviously, with AWS, work really closely with the SageMaker team, a fantastic platform there for ML training and inference. And you know, going back to your point earlier, you know, where the data is, is hugely important for companies. For many companies, bringing their models to their data, in AWS or on-premise, is hugely important. Having the models be, you know, open source makes them explainable and transparent to the adopters of those models. So, you know, we are really excited to work with the SageMaker team over the coming year to bring companies to that platform and make the most of our models. >> Aidan, what's your take on developers? Do they just need to have a team in place, if we want to interface with you guys? Let's say, can they start learning? What do they got to do to set up? >> Yeah, so I think for Cohere, our product makes it much, much easier for people to get started and start building, it solves a lot of the productionization problems. But of course with SageMaker, like Tom was saying, I think that lowers the barrier even further because it solves problems like data privacy. So I want to underline what Bratin was saying earlier around when you're fine tuning or when you're using these models, you don't want your data being incorporated into someone else's model. You don't want it being used for training elsewhere. And so the ability to solve for enterprises, that data privacy and that security guarantee, has been hugely important for Cohere, and that's very easy to do through SageMaker. >> Yeah. >> But the barriers for using this technology are coming down super quickly. And so for developers, it's just becoming completely intuitive. I love this, there's this quote from Andrej Karpathy. He was saying like, "It really wasn't on my 2022 list of things to happen that English would become, you know, the most popular programming language." And so the barrier is coming down- >> Yeah. >> Super quickly and it's exciting to see. >> It's going to be awesome for all the companies here, and then we'll do more, we're probably going to see an explosion of startups, already seeing that, the maps, ecosystem maps, the landscape maps are happening. So this is happening and I'm convinced it's not yesterday's chat bot, it's not yesterday's AI Ops. It's a whole other ballgame. So I have to ask you guys for the final question before we kick off the companies showcasing here. How do you guys gauge success of generative AI applications?
Is there a lens to look through and say, okay, how do I see success? It could be just getting a win or is it a bigger picture? Bratin, we'll start with you. How do you gauge success for generative AI? >> You know, ultimately it's about bringing business value to our customers. And making sure that those customers are able to reimagine their experiences by using generative AI. Now the way to get there is, of course, to deploy those models in a safe, effective manner, and ensuring that all of the robustness and the security guarantees and the privacy guarantees are all there. And we want to make sure that this transitions from something that's great demos to actual at-scale products, which means making them work reliably all of the time, not just some of the time. >> Tom, what's your gauge for success? >> Look, I think we're seeing a completely new form of ways to interact with data, to make data intelligent, and directly to bring in new revenue streams into business. So if businesses can use our models to leverage that and generate completely new revenue streams and ultimately bring incredible new value to their customers, then that's fantastic. And we hope we can power that revolution. >> Aidan, what's your take? >> Yeah, reiterating Bratin and Tom's point, I think that value in the enterprise and value in market is like a huge, you know, it's the goal that we're striving towards. I also think that, you know, the value to consumers and actual users and the transformation of the surface area of technology to create experiences like ChatGPT that are magical, and it's the first time in human history we've been able to talk to something compelling that's not a human. I think that in itself is just extraordinary and so exciting to see. >> It really brings up a whole other category of markets. B2B, B2C, it's B2D, business to developer. Because I think this is kind of the big trend: the consumers have to win. The developers coding the apps, it's a whole other sea change. Reminds me of when everyone used the "Moneyball" movie as an example during the big data wave. Then you know, the value of data. There's a scene in "Moneyball" at the end, where Billy Beane's getting the offer from the Red Sox, and the Red Sox owner says, "If every team's not rebuilding their teams based upon your model, they'll be dinosaurs." I think that's the same with AI here. Every company will need to think about their business model and how they operate with AI. So it'll be a great run. >> Completely agree. >> It'll be a great run. >> Yeah. >> Aidan, Tom, thank you so much for sharing about your experiences at your companies and congratulations on your success, and it's just the beginning. And Bratin, thanks for coming on representing AWS. And thank you, appreciate what you do. Thank you. >> Thank you, John. Thank you, Aidan. >> Thank you John. >> Thanks so much. >> Okay, let's kick off season three, episode one. I'm John Furrier, your host. Thanks for watching. (light airy music)
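The workflow Bratin and Aidan describe above — test a model through an API, customize it on your own data, keep it current by augmenting it with a knowledge base, then call an endpoint from your application — maps roughly onto the sketch below. This is a minimal illustration only: the endpoint name and request payload shape are assumptions (they vary by model container), and the keyword-overlap retrieval step is a toy stand-in for a real knowledge-base lookup; the only real API used is the standard SageMaker runtime `invoke_endpoint` call.

```python
import json
import boto3

# Hypothetical endpoint name; any hosted text-generation model would do.
ENDPOINT_NAME = "my-fine-tuned-llm-endpoint"

smr = boto3.client("sagemaker-runtime")


def retrieve_context(question, knowledge_base):
    """Toy stand-in for a knowledge-base lookup: rank documents by how many
    words they share with the question and keep the top two."""
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(set(doc.lower().split()) & set(question.lower().split())),
        reverse=True,
    )
    return scored[:2]


def ask(question, knowledge_base):
    # Augment the prompt with retrieved context so the model isn't limited
    # to what it saw at training time.
    context = "\n".join(retrieve_context(question, knowledge_base))
    prompt = (
        f"Answer using only this context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    # Payload shape is an assumption; adjust to the specific model's contract.
    response = smr.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"inputs": prompt, "parameters": {"max_new_tokens": 200}}),
    )
    return json.loads(response["Body"].read())


if __name__ == "__main__":
    kb = [
        "Ticket 1042: customer could not log in; resolved by resetting MFA.",
        "Ticket 1055: billing page timeout; resolved by raising the API rate limit.",
    ]
    print(ask("What was the customer issue and how did we solve it?", kb))
```

The retrieval step reflects the point both guests raise: the model only sees what is in the prompt, so keeping it "up to date" usually means feeding it fresh, private context at request time rather than retraining it for every change.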
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Tom | PERSON | 0.99+ |
Tom Mason | PERSON | 0.99+ |
Aidan | PERSON | 0.99+ |
Red Sox | ORGANIZATION | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Andrej Karpathy | PERSON | 0.99+ |
Bratin Saha | PERSON | 0.99+ |
December | DATE | 0.99+ |
2007 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
Aidan Gomez | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Amazon Web Services | ORGANIZATION | 0.99+ |
Billy Beane | PERSON | 0.99+ |
Bratin | PERSON | 0.99+ |
Moneyball | TITLE | 0.99+ |
one | QUANTITY | 0.99+ |
Ada | PERSON | 0.99+ |
last year | DATE | 0.99+ |
two | QUANTITY | 0.99+ |
Earth | LOCATION | 0.99+ |
yesterday | DATE | 0.99+ |
Two practitioners | QUANTITY | 0.99+ |
ChatGPT | TITLE | 0.99+ |
next year | DATE | 0.99+ |
Code Whisperer | TITLE | 0.99+ |
third | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
App Store | TITLE | 0.99+ |
first time | QUANTITY | 0.98+ |
first | QUANTITY | 0.98+ |
Inferentia | TITLE | 0.98+ |
EC2 | TITLE | 0.98+ |
GPT-3 | TITLE | 0.98+ |
both | QUANTITY | 0.98+ |
Lensa | TITLE | 0.98+ |
SageMaker | ORGANIZATION | 0.98+ |
three things | QUANTITY | 0.97+ |
Cohere | ORGANIZATION | 0.96+ |
over a hundred different languages | QUANTITY | 0.96+ |
English | OTHER | 0.96+ |
one example | QUANTITY | 0.96+ |
about six months ago | DATE | 0.96+ |
One | QUANTITY | 0.96+ |
first use | QUANTITY | 0.96+ |
SageMaker | TITLE | 0.96+ |
Bing Chat | TITLE | 0.95+ |
one point | QUANTITY | 0.95+ |
Trainium | TITLE | 0.95+ |
Lexica | TITLE | 0.94+ |
Playground | TITLE | 0.94+ |
three great guests | QUANTITY | 0.93+ |
HyperWrite | TITLE | 0.92+ |
Adam Wenchel, Arthur.ai | CUBE Conversation
(bright upbeat music) >> Hello and welcome to this Cube Conversation. I'm John Furrier, host of theCUBE. We've got a great conversation featuring Arthur AI. I'm your host. I'm excited to have Adam Wenchel who's the Co-Founder and CEO. Thanks for joining us today, appreciate it. >> Yeah, thanks for having me on, John, looking forward to the conversation. >> I got to say, it's been an exciting world in AI or artificial intelligence. Just an explosion of interest kind of in the mainstream with the language models, which people don't really get, but they're seeing the benefits of some of the hype around OpenAI. Which kind of wakes everyone up to, "Oh, I get it now." And then of course the pessimism comes in, all the skeptics are out there. But this breakthrough in the generative AI field is just awesome, it's really a shift, it's a wave. We've been calling it probably the biggest inflection point, more than the others combined, of what this can do from a surge standpoint, applications. I mean, all aspects of what we used to know as the computing industry, software industry, hardware, are completely going to get turbocharged. So we're totally obviously bullish on this thing. So, this is really interesting. So my first question is, I got to ask you, what's your guys' take? 'Cause you've been doing this, you're in it, and now all of a sudden you're at the beach where the big waves are. What's the explosion of interest there? What are you seeing right now? >> Yeah, I mean, it's amazing, so for starters, I've been in AI for over 20 years and just seeing this amount of excitement and the growth, and like you said, the inflection point we've hit in the last six months has just been amazing. And, you know, what we're seeing is like people are getting applications into production using LLMs. I mean, really all this excitement just started a few months ago, with ChatGPT and other breakthroughs, and the amount of activity and the amount of new systems that we're seeing hitting production already so soon after that is just unlike anything we've ever seen. So it's pretty awesome. And, you know, these language models are just, they could be applied in so many different business contexts, and the amount of value that's being created is, again, like unprecedented compared to anything. >> Adam, you know, you've been in this for a while, so it's an interesting point you're bringing up, and this is a good point. I was talking with my friend John Markoff, former New York Times journalist, and he was talking about, there's been a lot of work done on ethics. So there's been, it's not like it's new. It's like been, there's a lot of stuff that's been baking over many, many years and, you know, decades. So now everyone wakes up in the season, so I think that is a key point I want to get into some of your observations. But before we get into it, I want you to explain for the folks watching, just so we can kind of get a definition on the record. What's an LLM, what's a foundational model and what's generative AI? Can you just quickly explain the three things there? >> Yeah, absolutely. So an LLM or a large language model, it's just, as the name would imply, a large language model that's been trained on a huge amount of data typically pulled from the internet. And it's a general purpose language model that can be built on top of for all sorts of different things, that includes traditional NLP tasks like document classification and sentiment understanding.
But the thing that's gotten people really excited is it's used for generative tasks. So, you know, asking it to summarize documents or asking it to answer questions. And these aren't new techniques, they've been around for a while, but what's changed is just this new class of models that's based on new architectures. They're just so much more capable that they've gone from sort of science projects to something that's actually incredibly useful in the real world. And there's a number of companies that are making them accessible to everyone so that you can build on top of them. So that's the other big thing is, this kind of access to these models that can power generative tasks has been democratized in the last few months and it's just opening up all these new possibilities. And then the third one you mentioned foundation models is sort of a broader term for the category that includes LLMs, but it's not just language models that are included. So we've actually seen this for a while in the computer vision world. So people have been building on top of computer vision models, pre-trained computer vision models for a while for image classification, object detection, that's something we've had customers doing for three or four years already. And so, you know, like you said, there are antecedents to like, everything that's happened, it's not entirely new, but it does feel like a step change. >> Yeah, I did ask ChatGPT to give me a riveting introduction to you and it gave me an interesting read. If we have time, I'll read it. It's kind of, it's fun, you get a kick out of it. "Ladies and gentlemen, today we're a privileged "to have Adam Wenchel, Founder of Arthur who's going to talk "about the exciting world of artificial intelligence." And then it goes on with some really riveting sentences. So if we have time, I'll share that, it's kind of funny. It was good. >> Okay. >> So anyway, this is what people see and this is why I think it's exciting 'cause I think people are going to start refactoring what they do. And I've been saying this on theCUBE now for about a couple months is that, you know, there's a scene in "Moneyball" where Billy Beane sits down with the Red Sox owner and the Red Sox owner says, "If people aren't rebuilding their teams on your model, "they're going to be dinosaurs." And it reminds me of what's happening right now. And I think everyone that I talk to in the business sphere is looking at this and they're connecting the dots and just saying, if we don't rebuild our business with this new wave, they're going to be out of business because there's so much efficiency, there's so much automation, not like DevOps automation, but like the generative tasks that will free up the intellect of people. Like just the simple things like do an intro or do this for me, write some code, write a countermeasure to a hack. I mean, this is kind of what people are doing. And you mentioned computer vision, again, another huge field where 5G things are coming on, it's going to accelerate. What do you say to people when they kind of are leaning towards that, I need to rethink my business? >> Yeah, it's 100% accurate and what's been amazing to watch the last few months is the speed at which, and the urgency that companies like Microsoft and Google or others are actually racing to, to do that rethinking of their business. And you know, those teams, those companies which are large and haven't always been the fastest moving companies are working around the clock. 
And the pace at which they're rolling out LLMs across their suite of products is just phenomenal to watch. And it's not just the big, the large tech companies as well, I mean, we're seeing the number of startups, like we get, every week a couple of new startups get in touch with us for help with their LLMs and you know, there's just a huge amount of venture capital flowing into it right now because everyone realizes the opportunities for transforming like legal and healthcare and content creation in all these different areas is just wide open. And so there's a massive gold rush going on right now, which is amazing. >> And the cloud scale, obviously horizontal scalability of the cloud brings us to another level. We've been seeing data infrastructure since the Hadoop days where big data was coined. Now you're seeing this kind of take fruit, now you have vertical specialization where data shines, large language models all of a set up perfectly for kind of this piece. And you know, as you mentioned, you've been doing it for a long time. Let's take a step back and I want to get into how you started the company, what drove you to start it? Because you know, as an entrepreneur you're probably saw this opportunity before other people like, "Hey, this is finally it, it's here." Can you share the origination story of what you guys came up with, how you started it, what was the motivation and take us through that origination story. >> Yeah, absolutely. So as I mentioned, I've been doing AI for many years. I started my career at DARPA, but it wasn't really until 2015, 2016, my previous company was acquired by Capital One. Then I started working there and shortly after I joined, I was asked to start their AI team and scale it up. And for the first time I was actually doing it, had production models that we were working with, that was at scale, right? And so there was hundreds of millions of dollars of business revenue and certainly a big group of customers who were impacted by the way these models acted. And so it got me hyper-aware of these issues of when you get models into production, it, you know. So I think people who are earlier in the AI maturity look at that as a finish line, but it's really just the beginning and there's this constant drive to make them better, make sure they're not degrading, make sure you can explain what they're doing, if they're impacting people, making sure they're not biased. And so at that time, there really weren't any tools to exist to do this, there wasn't open source, there wasn't anything. And so after a few years there, I really started talking to other people in the industry and there was a really clear theme that this needed to be addressed. And so, I joined with my Co-Founder John Dickerson, who was on the faculty in University of Maryland and he'd been doing a lot of research in these areas. And so we ended up joining up together and starting Arthur. >> Awesome. Well, let's get into what you guys do. Can you explain the value proposition? What are people using you for now? Where's the action? What's the customers look like? What do prospects look like? Obviously you mentioned production, this has been the theme. It's not like people woke up one day and said, "Hey, I'm going to put stuff into production." This has kind of been happening. There's been companies that have been doing this at scale and then yet there's a whole follower model coming on mainstream enterprise and businesses. So there's kind of the early adopters are there now in production. 
What do you guys do? I mean, 'cause I think about it, just driving the car off the lot is not enough, you got to manage operations. I mean, that's a big thing. So what do you guys do? Talk about the value proposition and how you guys make money? >> Yeah, so what we do is, listen, when you go to validate ahead of deploying these models in production, it starts at that point, right? So you want to make sure that if you're going to be upgrading a model, if you're going to be replacing one that's currently in production, that you've proven that it's going to perform well, that it's going to perform ethically and that you can explain what it's doing. And then when you launch it into production, traditionally data scientists would spend 25, 30% of their time just manually checking in on their model day-to-day, babysitting as we call it, just to make sure that the data hasn't drifted, the model performance hasn't degraded, or whether a programmer made a change in an upstream data system. You know, there's all sorts of reasons why the world changes and that can have a real adverse effect on these models. And so what we do is bring the same kind of automation that you have for other kinds of, let's say infrastructure monitoring, application monitoring, we bring that to your AI systems. And that way if there ever is an issue, it's not like weeks or months till you find it, and you find it before it has an effect on your P&L and your balance sheet, which, too often before they had tools like Arthur, was the way they were detected. >> You know, I was talking to Swami at Amazon, who I've known for a long time, for 13 years, and been on theCUBE multiple times, and you know, I watched Amazon try to pick up on that with SageMaker about six years ago and so much has happened since then. And he and I were talking about this wave, and I kind of brought up this analogy to how when cloud started, it was, hey, I don't need a data center. 'Cause when I did my startup back then with Amazon, one of my startups at that time, my choice was put a box in the colo, get all the configuration done before I could write one line of code. So the cloud became the benefit for that and you can stand up stuff quickly and then it grew from there. Here it's kind of the same dynamic, you don't want to have to provision a large language model or do all this heavy lifting. So we're seeing companies coming out there saying, you can get started faster, there's like a new way to get it going. So it's kind of like the same vibe of limiting that heavy lifting. >> Absolutely. >> How do you look at that? Because this seems to be a wave that's going to be coming in, and how do you guys help companies who are going to move quickly and start developing? >> Yeah, so I think in the race, this kind of gold rush mentality, race to get these models into production, we're starting to see more sort of examples and evidence that there are a lot of risks that go along with it. Either your model says things, your system says things that are just wrong, you know, whether it's hallucination or just making things up, there's lots of examples. If you go on Twitter and the news, you can read about those, as well as sort of times when there could be toxic content coming out of things like that. And so there's a lot of risks there that you need to think about and be thoughtful about when you're deploying these systems. But you know, you need to balance that with the business imperative of getting these things into production and really transforming your business.
And so that's where we help people, we say go ahead, put them in production, but just make sure you have the right guardrails in place so that you can do it in a smart way that's going to reflect well on you and your company. >> Let's frame the challenge for the companies now that you have, obviously there's the people who doing large scale production and then you have companies maybe like as small as us who have large linguistic databases or transcripts for example, right? So what are customers doing and why are they deploying AI right now? And is it a speed game, is it a cost game? Why have some companies been able to deploy AI at such faster rates than others? And what's a best practice to onboard new customers? >> Yeah, absolutely. So I mean, we're seeing across a bunch of different verticals, there are leaders who have really kind of started to solve this puzzle about getting AI models into production quickly and being able to iterate on them quickly. And I think those are the ones that realize that imperative that you mentioned earlier about how transformational this technology is. And you know, a lot of times, even like the CEOs or the boards are very personally kind of driving this sense of urgency around it. And so, you know, that creates a lot of movement, right? And so those companies have put in place really smart infrastructure and rails so that people can, data scientists aren't encumbered by having to like hunt down data, get access to it. They're not encumbered by having to stand up new platforms every time they want to deploy an AI system, but that stuff is already in place. There's a really nice ecosystem of products out there, including Arthur, that you can tap into. Compared to five or six years ago when I was building at a top 10 US bank, at that point you really had to build almost everything yourself and that's not the case now. And so it's really nice to have things like, you know, you mentioned AWS SageMaker and a whole host of other tools that can really accelerate things. >> What's your profile customer? Is it someone who already has a team or can people who are learning just dial into the service? What's the persona? What's the pitch, if you will, how do you align with that customer value proposition? Do people have to be built out with a team and in play or is it pre-production or can you start with people who are just getting going? >> Yeah, people do start using it pre-production for validation, but I think a lot of our customers do have a team going and they're starting to put, either close to putting something into production or about to, it's everything from large enterprises that have really sort of complicated, they have dozens of models running all over doing all sorts of use cases to tech startups that are very focused on a single problem, but that's like the lifeblood of the company and so they need to guarantee that it works well. And you know, we make it really easy to get started, especially if you're using one of the common model development platforms, you can just kind of turn key, get going and make sure that you have a nice feedback loop. So then when your models are out there, it's pointing out, areas where it's performing well, areas where it's performing less well, giving you that feedback so that you can make improvements, whether it's in training data or futurization work or algorithm selection. 
There's a number of, you know, depending on the symptoms, there's a number of things you can do to increase performance over time and we help guide people on that journey. >> So Adam, I have to ask, since you have such a great customer base and they're smart and they got teams and you're on the front end, I mean, early adopters is kind of an overused word, but they're killing it. They're putting stuff in the production's, not like it's a test, it's not like it's early. So as the next wave comes of fast followers, how do you see that coming online? What's your vision for that? How do you see companies that are like just waking up out of the frozen, you know, freeze of like old IT to like, okay, they got cloud, but they're not yet there. What do you see in the market? I see you're in the front end now with the top people really nailing AI and working hard. What's the- >> Yeah, I think a lot of these tools are becoming, or every year they get easier, more accessible, easier to use. And so, you know, even for that kind of like, as the market broadens, it takes less and less of a lift to put these systems in place. And the thing is, every business is unique, they have their own kind of data and so you can use these foundation models which have just been trained on generic data. They're a great starting point, a great accelerant, but then, in most cases you're either going to want to create a model or fine tune a model using data that's really kind of comes from your particular customers, the people you serve and so that it really reflects that and takes that into account. And so I do think that these, like the size of that market is expanding and its broadening as these tools just become easier to use and also the knowledge about how to build these systems becomes more widespread. >> Talk about your customer base you have now, what's the makeup, what size are they? Give a taste a little bit of a customer base you got there, what's they look like? I'll say Capital One, we know very well while you were at there, they were large scale, lot of data from fraud detection to all kinds of cool stuff. What do your customers now look like? >> Yeah, so we have a variety, but I would say one area we're really strong, we have several of the top 10 US banks, that's not surprising, that's a strength for us, but we also have Fortune 100 customers in healthcare, in manufacturing, in retail, in semiconductor and electronics. So what we find is like in any sort of these major verticals, there's typically, you know, one, two, three kind of companies that are really leading the charge and are the ones that, you know, in our opinion, those are the ones that for the next multiple decades are going to be the leaders, the ones that really kind of lead the charge on this AI transformation. And so we're very fortunate to be working with some of those. And then we have a number of startups as well who we love working with just because they're really pushing the boundaries technologically and so they provide great feedback and make sure that we're continuing to innovate and staying abreast of everything that's going on. >> You know, these early markups, even when the hyperscalers were coming online, they had to build everything themselves. That's the new, they're like the alphas out there building it. This is going to be a big wave again as that fast follower comes in. 
And so when you look at the scale, what advice would you give folks out there right now who want to tee it up, and what's your secret sauce that will help them get there? >> Yeah, I think that the secret to teeing it up is just dive in and start. Like, I think, there's not really a secret. I think it's amazing how accessible these are. I mean, there's all sorts of ways to access LLMs, either via API access or downloadable in some cases. And so, you know, go ahead and get started. And then our secret sauce really is the way that we provide that performance analysis of what's going on, right? So we can tell you in a very actionable way, like, hey, here's where your model is doing good things, here's where it's doing bad things. Here's something you want to take a look at, here's some potential remedies for it. We can help guide you through that. And that way when you're putting it out there, A, you're avoiding a lot of the common pitfalls that people see and B, you're able to really kind of make it better in a much faster way with that tight feedback loop. >> It's interesting, we've been kind of riffing on this supercloud idea because it was just a different name than multicloud, and you see apps like Snowflake built on top of AWS without even spending any CapEx, you just ride that cloud wave. This next AI, super AI wave is coming. I don't want to call it AIOps because I think there's a different distinction. You know, MLOps and AIOps seem a little bit old, almost a few years back. How do you view that? Because everyone's like, "Is this AIOps?" And like, "No, not kind of, but not really." How would you, you know, when someone says, just shoots from the hip, "Hey Adam, aren't you doing AIOps?" Do you say, yes we are, or do you say, yes, but we do it differently? Because it doesn't seem like it's the same old AIOps. What's your- >> Yeah, it's a good question. AIOps has been a term that was co-opted for other things, and MLOps also, people have used it for different meanings. So I like the term just AI infrastructure, I think it kind of like describes it really well and succinctly.
Several of us come from enterprise backgrounds and we're used to doing things enterprise grade at scale and so, you know, we're seeing more and more companies, I think they started out deploying AI and sort of, you know, important but not necessarily like the crown jewel area of their business, but now they're deploying AI right in the heart of things and yeah, the scale that some of our companies are operating at is pretty impressive. >> John: Well, super exciting, great to have you on and congratulations. I got a final question for you, just random. What are you most excited about right now? Because I mean, you got to be pretty pumped right now with the way the world is going and again, I think this is just the beginning. What's your personal view? How do you feel right now? >> Yeah, the thing I'm really excited about for the next couple years now, you touched on it a little bit earlier, but is a sort of convergence of AI and AI systems with sort of turning into AI native businesses. And so, as you sort of do more, get good further along this transformation curve with AI, it turns out that like the better the performance of your AI systems, the better the performance of your business. Because these models are really starting to underpin all these key areas that cumulatively drive your P&L. And so one of the things that we work a lot with our customers is to do is just understand, you know, take these really esoteric data science notions and performance and tie them to all their business KPIs so that way you really are, it's kind of like the operating system for running your AI native business. And we're starting to see more and more companies get farther along that maturity curve and starting to think that way, which is really exciting. >> I love the AI native. I haven't heard any startup yet say AI first, although we kind of use the term, but I guarantee that's going to come in all the pitch decks, we're an AI first company, it's going to be great run. Adam, congratulations on your success to you and the team. Hey, if we do a few more interviews, we'll get the linguistics down. We can have bots just interact with you directly and ask you, have an interview directly. >> That sounds good, I'm going to go hang out on the beach, right? So, sounds good. >> Thanks for coming on, really appreciate the conversation. Super exciting, really important area and you guys doing great work. Thanks for coming on. >> Adam: Yeah, thanks John. >> Again, this is Cube Conversation. I'm John Furrier here in Palo Alto, AI going next gen. This is legit, this is going to a whole nother level that's going to open up huge opportunities for startups, that's going to use opportunities for investors and the value to the users and the experience will come in, in ways I think no one will ever see. So keep an eye out for more coverage on siliconangle.com and theCUBE.net, thanks for watching. (bright upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John Markoff | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Adam Wenchel | PERSON | 0.99+ |
John | PERSON | 0.99+ |
Red Sox | ORGANIZATION | 0.99+ |
John Dickerson | PERSON | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Adam | PERSON | 0.99+ |
John Furrier | PERSON | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
2015 | DATE | 0.99+ |
Capital One | ORGANIZATION | 0.99+ |
five | QUANTITY | 0.99+ |
100% | QUANTITY | 0.99+ |
2016 | DATE | 0.99+ |
13 years | QUANTITY | 0.99+ |
Snowflake | TITLE | 0.99+ |
three | QUANTITY | 0.99+ |
first question | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
five | DATE | 0.99+ |
today | DATE | 0.99+ |
one | QUANTITY | 0.99+ |
four years | QUANTITY | 0.99+ |
Billy Beane | PERSON | 0.99+ |
over 20 years | QUANTITY | 0.99+ |
DARPA | ORGANIZATION | 0.99+ |
third one | QUANTITY | 0.98+ |
AWS | ORGANIZATION | 0.98+ |
siliconangle.com | OTHER | 0.98+ |
University of Maryland | ORGANIZATION | 0.97+ |
first time | QUANTITY | 0.97+ |
US | LOCATION | 0.97+ |
first | QUANTITY | 0.96+ |
six years ago | DATE | 0.96+ |
New York Times | ORGANIZATION | 0.96+ |
ChatGPT | ORGANIZATION | 0.96+ |
Swami | PERSON | 0.95+ |
ChatGPT | TITLE | 0.95+ |
hundreds of models | QUANTITY | 0.95+ |
25, 30% | QUANTITY | 0.95+ |
single problem | QUANTITY | 0.95+ |
hundreds of millions of dollars | QUANTITY | 0.95+ |
10 | QUANTITY | 0.94+ |
Moneyball | TITLE | 0.94+ |
wave | EVENT | 0.91+ |
three things | QUANTITY | 0.9+ |
AIOps | TITLE | 0.9+ |
last six months | DATE | 0.89+ |
few months ago | DATE | 0.88+ |
big | EVENT | 0.86+ |
next couple years | DATE | 0.86+ |
DevOps | TITLE | 0.85+ |
Arthur | PERSON | 0.85+ |
CUBE | ORGANIZATION | 0.83+ |
dozens of models | QUANTITY | 0.8+ |
a few years back | DATE | 0.8+ |
six years ago | DATE | 0.78+ |
theCUBE | ORGANIZATION | 0.76+ |
SageMaker | TITLE | 0.75+ |
decades | QUANTITY | 0.75+ |
ORGANIZATION | 0.74+ | |
MLOps | TITLE | 0.74+ |
supercloud | ORGANIZATION | 0.73+ |
super AI wave | EVENT | 0.73+ |
a couple months | QUANTITY | 0.72+ |
Arthur | ORGANIZATION | 0.72+ |
100 customers | QUANTITY | 0.71+ |
Cube Conversation | EVENT | 0.69+ |
theCUBE.net | OTHER | 0.67+ |
Tony Jeffries, Dell Technologies & Honoré LaBourdette, Red Hat | MWC Barcelona 2023
>> theCUBE's live coverage is made possible by funding from Dell Technologies: "Creating technologies that drive human progress." >> Good late afternoon from Barcelona, Spain at the Theater of Barcelona. It's Lisa Martin and Dave Nicholson of "theCUBE" covering MWC23. This is our third day of continuous wall-to-wall coverage on theCUBE. And you know we're going to be here tomorrow as well. We've been having some amazing conversations about the ecosystem. And we're going to continue those conversations next. Honoré LaBourdette is here, the VP of Global Partner Ecosystem Success Team, Telco, Media and Entertainment at Red Hat. And Tony Jeffries joins us as well, a Senior Director of Product Management, Telecom Systems Business at Dell. Welcome to theCUBE. >> Thank you. >> Thank you. >> Great to have both of you here. So we're going to be talking about the evolution of the telecom stack. We've been talking a lot about disaggregation the last couple of days. Honoré, starting with you, talk about the evolution of the telecom stack. You were saying before we went live this is at least your 15th MWC. So you've seen a lot of evolution, but what are some of the things you're seeing right now? >> Well, I think the interesting thing about disaggregation, which is a key topic, right? 'Cause it's so relative to 5G and the 5G core and the benefits and the features of 5G core around disaggregation. But one thing we have to remember, when you disaggregate, you separate things. You have to bring those things back together again in a different way. And that's predominantly what we're doing in our partnership with Dell, is we're bringing those disaggregated components back together in a cohesive way that takes advantage of the new technology, at the same time taking out the complexity and making it easier for our Telco customers to deploy and to scale and to get a much more accelerated time to revenue. So the trend now is, what we're seeing is two things I would say. One is how do we solve for the complexity with the disaggregation? And how do we leverage the ecosystem as a partner in order to help solve for some of those challenges? >> Tony, jump on in, talk about what you guys announced last week, Dell and Red Hat, and how it's addressing the complexities that Honoré was saying, "Hey, they're there."
So you have folks in the marketplace who are diehard, you know, dyed red, Red Hat folks. Is it primarily a pull from them? How does that work? How do you approach that to your, what are your end user joint customers? What does that look like from your perspective? >> Sure, well, interestingly enough both Red Hat and Dell have been in the marketplace for a very long time, right? So we do have the brand with those Telco customers for these solutions. What we're seeing with this solution is, it's an emerging market. It's an emerging market for a new technology. So there's an opportunity for both Red Hat and Dell together to leverage our brands with those customers with no friction in the marketplace as we go to market together. So our field sales teams will be motivated to, you know, take advantage of the solution for their customers, as will the Dell team. And I'll let Tony speak to the Dell, go to market. >> Yeah. You know, so we really co-sell together, right? We're the key partners. Dell will end up fulfilling that order, right? We send these engineered systems through our factories and we send that out either directly to a customer or to a OTEL lab, like an intermediate lab where we can further refine and customize that offer for that particular customer. And so we got a lot of options there, but we're essentially co-selling. And Dell is fulfilling that from an infrastructure perspective, putting Red Hat software on top and the licensing for that support. So it's a really good mix. >> And I think, if I may, one of the key differentiators is the actual capabilities that we're bringing together inside of this pre-integrated solution. So it includes the Red Hat OpenShift which is the container software, but we also add our advanced cluster management as well as our Ansible automation. And then Dell adds their orchestration capability along with the features and functionalities of the platform. And we put that together and we offer capability, remote automation orchestration and management capabilities that again reduces the operating expense, reduces the complexity, allows for easy scale. So it's, you know, certainly it's all about the partnership but it's also the capabilities of the combined technology. >> I was just going to ask about some of the numbers, and you mentioned some of them. Reduction of TCO I imagine is also a big capability that this solution enables besides reducing OpEx. Talk about the TCO reduction. 'Cause I know there's some numbers there that Dell and Red Hat have already delivered to the market. >> Yeah. You know, so these infrastructure blocks are designed specifically for Core, or for RAN, or for the Edge. We're starting out initially in the Core, but we've done some market research with a company called ACG. And ACG has looked at day zero, day one and day two TCO, FTE hours saved. And we're looking at over 40 to 50% TCO savings over you know, five year period, which is quite significant in terms of cost savings at a TCO level. But also we have a lot of numbers around power consumption and savings around power consumption. But also just that experience for our operator that says, hey, I'm going to go to one company to get the best in class from Red Hat and Dell together. That saves a lot of time in procurement and that entire ordering process as well. So you get a lot of savings that aren't exactly seen in the FTE hours around TCO, but just in that overall experience by talking to one company to get the best of both from both Red Hat and Dell together. 
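The day zero, day one, day two framing Tony uses here can be made concrete with a little arithmetic. The sketch below is a minimal Python model of labor-driven TCO; every number in it (hours per site, hourly rate, site count) is a hypothetical placeholder rather than a figure from the ACG study he cites, and the point is only to show how one-time design and deployment hours plus recurring operations hours roll up into a five-year savings percentage.

```python
# Illustrative only: the hour counts, rate, and site count below are hypothetical
# placeholders, not figures from the ACG study referenced in the interview.

FTE_RATE = 75.0          # assumed loaded cost per engineering hour (USD)
YEARS = 5                # evaluation window mentioned in the interview

# Hypothetical FTE hours per site for a do-it-yourself stack vs. a pre-integrated block.
diy_hours = {"day0_design": 40.0, "day1_deploy": 60.0, "day2_operate_per_year": 40.0}
block_hours = {"day0_design": 10.0, "day1_deploy": 20.0, "day2_operate_per_year": 25.0}

def five_year_labor_cost(hours: dict, sites: int) -> float:
    """Labor cost over the window: one-time day-0/day-1 work plus recurring day-2 work."""
    one_time = hours["day0_design"] + hours["day1_deploy"]
    recurring = hours["day2_operate_per_year"] * YEARS
    return (one_time + recurring) * FTE_RATE * sites

if __name__ == "__main__":
    sites = 1_000  # hypothetical footprint
    diy = five_year_labor_cost(diy_hours, sites)
    block = five_year_labor_cost(block_hours, sites)
    savings = 1.0 - block / diy
    print(f"DIY 5-year labor cost:   ${diy:,.0f}")
    print(f"Block 5-year labor cost: ${block:,.0f}")
    print(f"Labor savings: {savings:.0%}")
```

With these made-up inputs the result lands in the same general range as the 40 to 50% cited above, which is meant to show the shape of the calculation rather than validate the numbers.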
>> I think the comic book character Charlie Brown once said, "The most discouraging thing in the world is having a lot of potential." (laughing) >> Right. >> And so when we talk about disaggregating and then reaggregating or reintegrating, that means choice. >> Tony: Yeah. >> How does an operator approach making that choice? Because, yeah, it sounds great. We have this integration lab and you have all these choices. Well, how do I decide, how does a person decide? This is a question for Honore from a Red Hat perspective, what's the secret sauce that you believe differentiates the Red Hat-infused stack versus some other assemblage of gear? >> Well, there's a couple of key characteristics, and the one that I think is most prevalent is that we're open, right? So "open" is in Red Hat's DNA because we're an open source technology company, and with that open source technology and that open platform, our customers can now add workloads. They have options to choose the workloads that they want to run on that open source platform. As they choose those workloads, they can be confident that those workloads have been certified and validated on our platform because we have a very robust ecosystem of ISVs that have already completed that process with open source, with Red Hat OpenShift. So then we take the Red Hat OpenShift and we put it on the Dell platform, which is market leader platform, right? Combine those two things, the customers can be confident that they can put those workloads on the combined platform that we're offering and that those workloads would run. So again, it goes back to making it simpler, making it easy to procure, easy to run workloads, easy to deploy, easy to operate. And all of that of course equates to saving time always equates to saving money. >> Yeah. Absolutely. >> Oh, I thought you wanted to continue. >> No, I think Honore sort of, she nailed it. You know, Red Hat is so dominant in 5G, and what they're doing in the market, especially in the Core and where we're going into the RAN, you know, next steps are to validate those workloads, those workload vendors on top of a stack. And the Red Hat leader in the Core is key, right? It's instant credibility in the core market. And so that's one of the reasons why we, Dell, want to partner with with Red Hat for the core market and beyond. We're going to be looking at not only Core but moving into RAN very soon. But then we do, we take that validated workload on top of that to optimize that workload and then be able to instantiate that in the core and the RAN. It's just a really streamlined, good experience for our operators. At the end of the day, we want happy customers in between our mutual customer base. And that's what you get whenever you do that combined stack together. >> Were operators, any operators, and you don't have to mention them by name, involved in the evolution of the infra blocks? I'm just curious how involved they were in helping to co-develop this. I imagine they were to some degree. >> Yeah, I could take that one. So, in doing so, yeah, we can't be myopic and just assume that we nailed it the first time, right? So yeah, we do work with partners all the way up and down the stack. A lot of our engineering work with Red Hat also brings in customer experience that is key to ensure that you're building and designing the right architecture for the Core. I would like to use the names, I don't know if I should, but a lot of those names are big names that are leaders in our industry. 
But yeah, their footprints, their fingerprints are all over those design best practices, those architectural designs that we build together. And then we further that by doing those validated workloads on top of that. So just to really prove the point that it's optimized for the Core, RAN, Edge kind of workload. >> And it's a huge added value for Red Hat to have a partner like Dell who can take all of those components, take the workload, take the Red Hat software, put it on the platform, and deliver that out to the customers. That's really, you know, a key part of the partnership and the value of the partnership because nobody really does that better than Dell. That center of excellence around delivery and support. >> Can you share any feedback from any of those nameless operators in terms of... I'm even kind of wondering what the catalyst was for the infra block. Was it operators saying, "Ah, we have these challenges here"? Was it the evolution of the Telco stack and Dell said, "We can come in with Red Hat and solve this problem"? And what's been some of their feedback? >> Yeah, it really comes down to what Honore said about, okay, you know, when we are looking at day zero, which is primarily your design, how much time savings can we do by creating that stack for them, right? We have industry experts designing that Core stack that's optimized for different levels of spectrum. When we do that we save a lot of time in terms of FTE hours for our architects, our operators, and then it goes into day one, right? Which is the deployment aspect for saving tons of hours for our operators by being able to deploy this. Speed to market is key. That ultimately ends up in, you know, faster time to revenue for our customers, right? So it's, when they see that we've already done the pre-work that they don't have to, that's what really resonates for them in terms of that, yeah. >> Honore, Lisa and I happen to be veterans of the Cloud native space, and what we heard from a lot of the folks in that ecosystem is that there is a massive hunger for developers to be able to deploy and manage and orchestrate environments that consist of Cloud native application infrastructure, microservices. >> Right. >> What we've heard here is that 5G equals Cloud native application stacks. Is that a fair assessment of the environment? And what are you seeing from a supply and demand for that kind of labor perspective? Is there still a hunger for those folks who develop in that space? >> Well, there is, because the very nature of an open source, Kubernetes-based container platform, which is what OpenShift is, the very nature of it is to open up that code so that developers can have access to the code to develop the workloads to the platform, right? And so, again, the combination of bringing together the Dell infrastructure with the Red Hat software, it doesn't change anything. The developer, the development community still has access to that same container platform to develop to, you know, Cloud native types of application. And you know, OpenShift is Red Hat's hybrid Cloud platform. So it runs on-prem, it runs in the public Cloud, it runs at the edge, it runs at the far edge. So any of the development community that's trying to develop Cloud native applications can develop it on this platform as they would if they were developing on an OpenShift platform in the public Cloud. >> So in "The Graduate", the advice to the graduate was, "Plastics." Plastics. 
As someone who has more children than I can remember, I forget how many kids I have. >> Four. >> That's right, I have four. That's right. (laughing) Three in college and grad school already at this point. Cloud native, I don't know. Kubernetes definitely a field that's going to, it's got some legs? >> Yes. >> Okay. So I can get 'em off my payroll quickly. >> Honore: Yes, yes. (laughing) >> Okay, good to know. Good to know. Any thoughts on that open Cloud native world? >> You know, there's so many changes that's going to happen in Kubernetes and services that you got to be able to update quickly. CICD, obviously the topic is huge. How quickly can we keep these systems up to date with new releases, changes? That's a great thing about an engineered system is that we do provide that lifecycle management for three to five years through this engagement with our customers. So we're constantly keeping them up with the latest and the greatest. >> David: Well do those customers have that expertise in-house, though? Do they have that now? Or is this a seismic cultural shift in those environments? >> Well, you know, they do have a lot of that experience, but it takes a lot of that time, and we're taking that off of their plate and putting that within us on our system, within our engineered system, and doing that automatically for them. And so they don't have to check in and try to understand what the release certification matrix is. Every quarter we're providing that to them. We're communicating out to the operator, telling them what's coming up latest and greatest, not only in terms of the software but the hardware and how to optimize it all together. That's the beauty of these systems. These are five year relationships with our operators that we're providing that lifecycle management end to end, for years to come. >> Lisa: So last question. You talked about joint GTM availability. When can operators get their hands on this? >> Yes. Yes. It's currently slated for early September release. >> Lisa: Awesome. So sometime this year? >> Yes. >> Well guys, thank you so much for talking with us today about Dell, Red Hat, what you're doing to really help evolve the telecom stack. We appreciate it. Next time come back with a customer, we can dig into it. That'd be fun. >> We sure will, absolutely. That may happen today actually, a little bit later. Not to let the cat out the bag, but good news. >> All right, well, geez, you're going to want to stick around. Thank you so much for your time. For our guests and for Dave Nicholson. This is Lisa Martin of theCUBE at MWC23 from Barcelona, Spain. We'll be back after a short break. (calm music)
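Tony's point about quarterly lifecycle management comes down to keeping every site on a combination of firmware, platform, and workload versions that has actually been certified together. Below is a minimal sketch of that kind of release-matrix check; the version strings, matrix entries, and function names are all invented for illustration, since the real certification matrix and tooling are Dell's and Red Hat's own.

```python
# Hypothetical sketch of the "release certification matrix" idea described above:
# before a quarterly update is pushed to a site, check that the combination of
# firmware, platform, and workload versions has been certified together.

CERTIFIED_COMBINATIONS = {
    # (server_firmware, container_platform) -> set of certified workload releases
    ("2.11", "openshift-4.12"): {"packet-core-7.1", "packet-core-7.2"},
    ("2.12", "openshift-4.13"): {"packet-core-7.2", "packet-core-7.3"},
}

def is_certified(firmware: str, platform: str, workload: str) -> bool:
    """Return True if this exact firmware/platform/workload combination is certified."""
    return workload in CERTIFIED_COMBINATIONS.get((firmware, platform), set())

def plan_upgrade(site: dict, target: dict) -> str:
    """Decide whether a site can move to the target combination in one step."""
    if is_certified(target["firmware"], target["platform"], target["workload"]):
        return f"site {site['name']}: upgrade approved"
    return f"site {site['name']}: hold - combination not in certification matrix"

if __name__ == "__main__":
    site = {"name": "core-dc-east", "firmware": "2.11", "platform": "openshift-4.12"}
    target = {"firmware": "2.12", "platform": "openshift-4.13", "workload": "packet-core-7.3"}
    print(plan_upgrade(site, target))
```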
Dell Technologies MWC 2023 Exclusive Booth Tour with David Nicholson
>> And I'm here at Dell's Presence at MWC with vice president of marketing for telecom and Edge Computing, Aaron Chaisson. Aaron, how's it going? >> Doing great. How's it going today, Dave? >> It's going pretty well. Pretty excited about what you've got going here and I'm looking forward to getting the tour. You ready to take a closer look? >> Ready to do it. Let's go take a look! For us in the telecom ecosystem, it's really all about how we bring together the different players that are innovating across the industry to drive value for our CSP customers. So, it starts really, for us, at the ecosystem layer, bringing partners, bringing telecommunication providers, bringing (stutters) a bunch of different technologies together to innovate together to drive new value. So Paul, take us a little bit through what we're doing to- to develop and bring in these partnerships and develop our ecosystem. >> Uh, sure. Thank you Aaron. Uh, you know, one of the things that we've been focusing on, you know, Dell is really working with many players in the open telecom ecosystem. Network equipment providers, independent software vendors, and the communication service providers. And, you know, through our lines of business or open telecom ecosystem labs, what we want to do is bring 'em together into a community with the goal of really being able to accelerate open innovation and, uh, open solutions into the market. And that's what this community is really about, is being able to, you know, have those communications, develop those collaborations whether it's through, you know, sharing information online, having webinars dedicated to sharing Dell information, whether it's our next generation hardware portfolio we announced here at the show, our use case directory, our- how we're dealing with new service opportunities, but as well as the community to share, too, which I think is an exciting way for us to be able to, you know- what is the knowledge thing? As well as activities at other events that we have coming up. So really the key thing I think about, the- the open telecom ecosystem community, it's collaboration and accelerating the open industry forward. >> So- So Aaron, if I'm hearing this correctly you're saying that you can't just say, "Hey, we're open", and throw a bunch of parts in a box and have it work? >> No, we've got to work together to integrate these pieces to be able to deliver value, and, you know, we opened up a- (stutters) in our open ecosystem labs, we started a- a self-certification process a couple of months back. We've already had 13 partners go through that, we've got 16 more in the pipeline. Everything you see in this entire booth has been innovated and worked with partnerships from Intel to Microsoft to, uh, to (stutters) Wind River and Red Hat and others. You go all the way around the booth, everything here has partnerships at its core. And why don't we go to the next section here where we're going to be showing how we're pulling that all together in our open ecosystems labs to drive that innovation? >> So Aaron, you talked about the kinds of validation and testing that goes on, so that you can prove out an open stack to deliver the same kinds of reliability and performance and availability that we expect from a wireless network. But in the opens- in the open world, uh, what are we looking at here? >> Yeah absolutely. So one of the- one of the challenges to a very big, broad open ecosystem is the complexity of integrating, deploying, and managing these, especially at telecom scale. 
You're not talking about thousands of servers in one site, you're talking about one server in thousands of sites. So how do you deploy that predictable stack and then also manage that at scale? I'm going to show you two places where we're talkin' about that. So, this is actually representing an area that we've been innovating in recently around creating an integrated infrastructure and virtualization stack for the telecom industry. We've been doing this for years in IT with VxBlocks and VxRails and others. Here what you see is we got, uh, Dell hardware infrastructure, we've got, uh, an open platform for virtualization providers, in this case we've created an infrastructure block for Red Hat to be able to supply an infrastructure for core operations and Packet Cores for telecoms. On the other side of this, you can actually see what we're doing with Wind River to drive innovation around RAN and being able to simplify RAN- vRAN and O-RAN deployments. >> What does that virtualization look like? Are we talking about, uh, traditional virtual machines with OSs, or is this containerized cloud native? What does it look like? >> Yeah, it's actually both, so it can support, uh, virtual, uh-uh, software as well as containerized software, so we leverage the (indistinct) distributions for these to be able to deploy, you know, cloud native applications, be able to modernize how they're deploying these applications across the telecom network. So in this case with Red Hat, uh, (stutters) leveraging OpenShift in order to support containerized apps in your Packet Core environments. >> So what are- what are some of the kinds of things that you can do once you have infrastructure like this deployed? >> Yeah, I mean by- by partnering broadly across the ecosystem with VMware, with Red Hat, uh, with- with Wind River and with others, it gives them the ability to be able to deploy the right virtualization software in their network for the types of applications they're deploying. They might want to use Red Hat in their core, they may want to use Wind River in their RAM, they may want to use, uh, Microsoft or VMware for their- for their Edge workloads, and we allow them to be able to deploy all those, but centrally manage those with a common user interface and a common set of APIs. >> Okay, well I'm dying to understand the link between this and the Lego city that the viewers can't see, yet, but it's behind me. Let's take a look. >> So let's take a look at the Lego city that shows how we not deploy just one of these, but dozens or hundreds of these at scale across a cityscape. >> So Aaron, I know we're not in Copenhagen. What's all the Lego about? >> Yeah, so the Lego city here is to show- and, uh, really there's multiple points of Presence across an entire Metro area that we want to be able to manage if we're a telecom provider. We just talked about one infrastructure block. What if I wanted to deploy dozens of these across the city to be able to manage my network, to be able to manage, uh, uh- to be able to deploy private mobility potentially out into a customer enterprise environment, and be able to manage all of these, uh, very simply and easily from a common interface? >> So it's interesting. Now I think I understand why you are VP of marketing for both telecom and Edge. Just heard- just heard a lot about Edge and I can imagine a lot of internet of things, things, hooked up at that Edge. >> Yeah, so why don't we actually go over to another area? 
We're actually going to show you how one small microbrewery (stutters) in one of our cities nearby, uh, (stutters) my hometown in Massachusetts is actually using this technology to go from more of an analyzed- analog world to digitizing their business to be able to brew better beer. >> So Aaron, you bring me to a brewery. What do we have- what do we have going on here? >> Yeah, so, actually (stutters) about- about a year ago or so, I- I was able to get my team to come together finally after COVID to be able to meet each other and have a nice team event. One of those nights, we went out to dinner at a- at a brewery called "Exhibit 'A'" in Massachusetts, and they actually gave us a tour of their facilities and showed us how they actually go through the process of brewing beer. What we saw as we were going through it, interestingly, was that everything was analog. They literally had people with pen and paper walking around checking time and temperature and the process of brewing the beer, and they weren't asking for help, but we actually saw an opportunity where what we're doing to help businesses digitize what they're doing in their manufacturing floor can actually help them optimize how they build whatever product they're building, in this case it was beer. >> Hey Warren, good to meet you! What do we have goin' on? >> Yeah, it's all right. So yeah, basically what we did is we took some of their assets in the, uh, brewery that were completely manually monitored. People were literally walking around the floor with clipboards, writing down values. And we sensorized the asset, in this case fermentation tanks, and we measured the, uh, pressure and the temperature, which in fermentation are very key to monitor, because if they get out of range the entire batch of beer can go bad or you don't get the consistency from batch to batch if you don't tightly monitor those. So we sensorized the fermentation tank, brought that into an industrial I/O network, and then brought that into a Dell gateway which is connected 5G up to the cloud, which then that data comes to a tablet or a phone, which they, rather than being out on the floor to monitor it, can look at this data remotely at any time. >> So I'm not sure the exact date, the first time we have evidence of beer being brewed by humanity... >> Yep. >> But I know it's thousands of years ago. So it's taken that long to get to the point where someone had to come along, namely Dell, to actually digitally transform the beer business. Is this sort of proof that if you can digitally transform this, you can digitally transform anything? >> Absolutely. You name it, anything that's being manufactured, sold, uh, uh, taken care of, (stutters) any business out there that's looking to be able to modernize and deliver better service to their customers can benefit from technologies like this. >> So we've taken a look at the ecosystem, the way that you validate architectures, we've seen an example of that kind of open architecture. Now we've seen a real world use case. Do you want to take a look a little deeper under the covers and see what's powering all of this? >> We just this week announced a new line of servers that power Edge and RAN use cases, and I want to introduce Mike to kind of take us through what we've been working on and really the power of what this is providing. >> Hey Mike, welcome to theCube. >> Oh, glad- glad to be here.
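Before the conversation moves on to the new servers, the monitoring flow Warren just described (sensorized fermentation tanks, readings collected over an industrial I/O network and a gateway, alerts surfaced on a phone or tablet) can be sketched in a few lines. The thresholds, tank names, and payload format below are assumptions for illustration; the interview does not describe the real system's values or transport.

```python
# A minimal sketch of the brewery monitoring loop: tank pressure and temperature
# readings are checked against acceptable ranges, and anything out of range is
# flagged in a payload the cloud side could display on a phone or tablet.
# Thresholds, tank names, and the payload format are invented for illustration.

import json
from datetime import datetime, timezone

# Hypothetical acceptable fermentation ranges.
LIMITS = {"temperature_c": (18.0, 22.0), "pressure_psi": (10.0, 15.0)}

def evaluate(tank: str, reading: dict) -> dict:
    """Turn a raw sensor reading into a status message for remote monitoring."""
    violations = [
        metric
        for metric, (lo, hi) in LIMITS.items()
        if not lo <= reading[metric] <= hi
    ]
    return {
        "tank": tank,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reading": reading,
        "status": "alert" if violations else "ok",
        "out_of_range": violations,
    }

if __name__ == "__main__":
    samples = {
        "fermenter-1": {"temperature_c": 19.5, "pressure_psi": 12.0},
        "fermenter-2": {"temperature_c": 23.1, "pressure_psi": 14.5},  # too warm
    }
    for tank, reading in samples.items():
        # In the real deployment this payload would leave the gateway over 5G;
        # here we just print the JSON that would be sent.
        print(json.dumps(evaluate(tank, reading)))
```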
So, what I'd really like to talk about are the three new XR series servers that we just announced last week and we're showing here at Mobile World Congress. They are all short depth, ruggedized, uh, very environmentally tolerant, and able to withstand, you know, high temperatures, high humidities, and really be deployed to places where traditional data center servers just can't handle, you know, due to one fact or another, whether it's depth or the temperature. And so, the first one I'd like to show you is the XR7620. This is, uh, 450 millimeters deep, it's designed for, uh, high levels of acceleration so it can support up to 2-300 watt, uh, GPUs. But what I really want to show you over here, especially for Mobile World Congress, is our new XR8000. The XR8000 is based on Intel's latest Sapphire Rapids technology, and this is- happens to be one of the first, uh, vRAN Boost processors that is out, and basically what it is (stutters) an embedded accelerator that makes, uh, the- the processing of vRAN loads very, uh, very efficient. And so they're actually projecting a, uh, 3x improvement, uh, of processing per watt over the previous generation of processors. This particular unit is also sledded. It's very much like, uh, today's traditional baseband unit, so it's something that is designed for low TCO and easy maintenance in the field. This is the FRU. When anything fails, you'll pull one out, you pop a new one in, it comes back into service, and the- the, uh, you know, your radio is- is, uh, minimally disrupted. >> Yeah, would you describe this as quantitative and qualitative in terms of the kinds of performance gains that these underlying units are delivering to us? I mean, this really kind of changes the game, doesn't it? It's not just about more, is it about different also in terms of what we can do? >> Well we are (stutters) to his point, we are able to bring in new accelerator technologies. Not only are we doing it with the Intel, uh, uh, uh, of the vRAN Boost technologies, but also (stutters) we can bring it, too, but there's another booth here where we're actually working with our own accelerator cards and other accelerator cards from our partners across the industry to be able to deliver the price and performance capabilities required by a vRAN or an O-RAN deployment in the network. So it's not- it's not just the chip technology, it's the integration and the innovation we're doing with others, as well as, of course, the unique power cooling capabilities that Dell provides in our servers that really makes these the most efficient way of being able to power a network. >> Any final thoughts recapping the whole picture here? >> Yeah, I mean I would just say if anybody's, uh, i- is still here in Mobile World Congress, wants to come and learn what we're doing, I only showed you a small section of the demos we've got here. We've got 13 demos across on the 8th floor here. Uh, for those of you who want to talk to us (stutters) and have meetings with us, we've got 13 meeting rooms back there, over 500 customer partner meetings this week, we've got some whisper suites for those of you who want to come and talk to us but we're innovating on going forward. So, you know, there's a lot that we're doing, we're really excited, there's a ton of passion at this event, and, uh, we're really excited about where the industry is going and our role in it. >> 'Preciate the tour, Aaron. Thanks Mike. >> Mike: Thank you! >> Well, for theCube... Again, Dave Nicholson here.
Thanks for joining us on this tour of Dell's Presence here at MWC 2023.
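One way to read Mike's "3x improvement of processing per watt" claim is as a power and energy question for a fixed vRAN workload. The back-of-the-envelope sketch below does that arithmetic; only the 3x ratio comes from the conversation, while the baseline wattage, electricity price, and site count are hypothetical placeholders.

```python
# Back-of-the-envelope sketch of what "3x processing per watt" means for a fixed
# vRAN workload. Only the 3x ratio comes from the interview; the baseline power
# draw, electricity price, and site count are hypothetical placeholders.

PERF_PER_WATT_GAIN = 3.0       # quoted generational improvement
BASELINE_WATTS_PER_SITE = 600  # assumed draw of the previous-generation DU server
SITES = 10_000                 # hypothetical RAN footprint
PRICE_PER_KWH = 0.15           # assumed electricity price (USD)
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(watts_per_site: float) -> float:
    kwh = watts_per_site * SITES * HOURS_PER_YEAR / 1000.0
    return kwh * PRICE_PER_KWH

if __name__ == "__main__":
    old_cost = annual_energy_cost(BASELINE_WATTS_PER_SITE)
    # Same workload at 3x the work per watt needs roughly one third the power.
    new_cost = annual_energy_cost(BASELINE_WATTS_PER_SITE / PERF_PER_WATT_GAIN)
    print(f"Previous generation: ${old_cost:,.0f} per year")
    print(f"New generation:      ${new_cost:,.0f} per year")
    print(f"Estimated saving:    ${old_cost - new_cost:,.0f} per year")
```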
Andy Sheahen, Dell Technologies & Marc Rouanne, DISH Wireless | MWC Barcelona 2023
>> (Narrator) The CUBE's live coverage is made possible by funding by Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Welcome back to Fira Barcelona. It's theCUBE live at MWC23, our third day of coverage of this great, huge event continues. Lisa Martin and Dave Nicholson here. We've got Dell and Dish here, we are going to be talking about what they're doing together. Andy Sheahen joins as global director of Telecom Cloud Core and Next Gen Ops at Dell. And Marc Rouanne, one of our alumni is back, EVP and Chief Network Officer at Dish Wireless. Welcome guys. >> Great to be here. >> (Both) Thank you. >> (Lisa) Great to have you. Marc, talk to us about what's going on at Dish Wireless. Give us the update. >> Yeah so we've built a network from scratch in the US, that covered the US, we use a cloud-based, cloud-native approach, so from the bottom of the tower all the way to the internet it uses cloud, distributed cloud, emits it, so there are a lot of things about that. But it's unique, and now it's working, so we're starting to play with it and that's pretty cool. >> What's some of the proof points, proof in the pudding? >> Well, for us, first of all it was to do basic voice and data on a smartphone and for me the success would be that you won't see the difference for a smartphone. That's baseline. The next step is bringing this to the enterprise for their use case. So we've covered- now we have services for smartphones. We use our brand, Boost brand, and we are distributing that across the US. But as I said, the real good stuff is when you start making, you know, the machines and all the data and the applications for the enterprise. >> Andy, how is Dell a facilitator of what Marc just described and the use cases and what they're able to deliver? >> We're providing a number of the servers that are being used out in their radio access network. The virtual DU servers, we're also providing some bare metal orchestration capabilities to help automate the process of deploying all these hundreds and thousands of nodes out in the field. Both of these, the servers and the bare metal orchestration product, are things that we developed in concert with Dish, working together to understand the best way to automate, based on the tooling they're using in other parts of their network, and we've been with you guys since day one, really. >> (Marc) Absolutely, yeah. >> Making each other's solutions better the whole way. >> Marc, why Dell? >> So, the way the networks work is you have a cloud, and you have a distributed edge, you need someone who understands the diversity of the edge in order to bring the cloud software to the edge, and Dell is the best there, you know, you can, we can ask them to mix and match accelerators, processors, memory, it's very diverse distributed edge. We are building twenty thousand sites so you imagine the size and the complexity, and Dell was the right partner for that. >> (Andy) Thank you. >> So you mentioned addressing enterprise leads, which is interesting because there's nothing that would prevent you from going after consumer wireless technically, right, but it sounds like you have taken a look at the market and said "we're going to go after this segment of the market." >> (Marc) Yeah. >> At least for now. Are there significant differences between what an enterprise expects from a 5G network versus a consumer? >> Yeah.
>> (Dave) They have higher expectations, maybe, number one I guess is, if my bill is 150 dollars a month I can have certain levels of expectations, whereas a large enterprise that may be making a much more significant investment, are their expectations greater? >> (Marc) Yeah. >> Do you have a higher bar to get over? >> So first, I mean first we use our network for consumers, but for us it's an enterprise. That's the consumer segment, an enterprise. So we expose the network like we would to a car manufacturer, or to a distributor of goods of food and beverage. But what you expect when you are an enterprise, you expect managed services. You expect to control the goodness of your services, and for this you need to observe what's happening. Are you delivering the right service? What is the feedback from the enterprise users, and that's what we call the observability. We have a data centric network, so our enterprises are saying "Yeah, connecting is enough, but show us how it works, and show us how we can learn from the data, improve, improve, and become more competitive." That's the big difference. >> So what would you say, Marc, are some of the outcomes you achieved working with Dell? TCO, ROI, CapEx, OpEx, what are some of the outcomes so far that you've been able to accomplish? >> Yeah, so obviously we don't share our numbers, but we're very competitive. Both on the CapEx and the OpEx. And the second thing is that we are much faster in terms of innovation, you know one of the things that telcos would not do was to tap into the IT industry. So we have access to the silicon and we have access to the software and at a scale that none of the telcos could ever do and for us it's like "wow" and it's a very powerful industry and we've been driving the consist- it's a bit technical but all the silicon, the accelerators, the processors, the GPUs, the TPUs and it's like wow. It's really a transformation. >> Andy, is there anything analogous that you've dealt with in the past to the situation where you have this true core-edge environment where you have to instrument the devices that you provide to give that level of observation or observability, whatever the new word is, that we've invented for that? >> Yeah, yeah. >> I mean has there, is there anything- >> Yeah absolutely. >> Is this unprecedented? >> No, no not at all. I mean Dell's been really working at the edge since before the edge was called the edge, right, we've been selling our hardware and infrastructure out to retail shops, branch office locations, you know just smaller form factors outside of data centers for a very long time, and so that's sort of the consistency from what we've been doing for 30 years to now. The difference is the volume, the different number of permutations as Marc was saying. The different type of accelerator cards, the different SKUs of different server types, the sheer volume of nodes that you have in a nationwide wireless network. So the volumes are much different, the amount of data is much different, but the process is really the same. It's about having the infrastructure in the right place at the right time and being able to understand if it's working well or if it's not, and it's not just about a red light or a green light but healthy and unhealthy conditions and predicting when the red light's going to come on. And we've been doing that for a while, it's just a different scale, and a different level of complexity when you're trying to piece together all these different components from different vendors.
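Andy's distinction between a red light, a green light, and "predicting when the red light's going to come on" is essentially the difference between a binary up/down check and trend-aware health scoring. The sketch below illustrates the idea on a toy fleet; the metric, thresholds, and the simple linear trend are assumptions for illustration, not how any particular Dell or Dish system actually scores node health.

```python
# A minimal sketch of "more than red light / green light" monitoring: instead of a
# binary up/down check, look at a short history of a node's telemetry and flag nodes
# that are trending toward a failure threshold before they actually cross it.
# Metric names, thresholds, and the simple trend heuristic are illustrative assumptions.

from statistics import mean

FAIL_THRESHOLD_C = 85.0   # assumed temperature at which the node is considered failed
WARN_MARGIN_C = 10.0      # start warning this far below the threshold

def slope(samples: list[float]) -> float:
    """Average change per sample - a crude stand-in for real trend analysis."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return mean(deltas) if deltas else 0.0

def health(samples: list[float]) -> str:
    latest = samples[-1]
    if latest >= FAIL_THRESHOLD_C:
        return "red"          # already failed / out of service
    if latest >= FAIL_THRESHOLD_C - WARN_MARGIN_C or slope(samples) > 1.0:
        return "degrading"    # predict the red light before it comes on
    return "healthy"

if __name__ == "__main__":
    fleet = {
        "cell-site-0001": [62.0, 62.5, 63.0],   # stable
        "cell-site-0002": [70.0, 74.0, 78.5],   # rising quickly
        "cell-site-0003": [84.0, 86.0, 87.0],   # over the line
    }
    for node, temps in fleet.items():
        print(node, health(temps))
```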
>> So we talk a lot about ecosystem, and sometimes, because of the desire to talk about the outcomes and what the end users, customers, really care about, sometimes we will stop at the layer where say a Dell lives, and we'll see that as the sum total of the component when really, when you talk about a server that Dish is using, that in and of itself is an ecosystem >> Yep, yeah >> (Dave) or there's an ecosystem behind it, you just mentioned it, the kinds of components and the choices that you make when you optimize these devices determine how much value Dish, >> (Andy) Absolutely. >> Can get out of that. How deep are you on that hardware? I'm a knuckle dragging hardware guy. >> Deep, very deep, I mean just the number of permutations that we're working through with Dish and other operators as well, different accelerator cards that we talked about, different techniques for timing, obviously there's different SKUs with the silicon itself, different chip sets, different chips from different providers, all those things have to come together, and we build the basic foundation and then we also started working with our cloud partners, Red Hat, Wind River, all these guys, VMware, of course, and that's the next layer up, so you've got all the different hardware components, you've got the abstraction layer, with your virtualization layer and/or Kubernetes layer, and all of that stuff together has to be managed, compatibility matrices that get very deep and very big, very quickly, and that's really the foundational challenge we think of open RAN, is making sure all these different pieces are going to fit together and not just work today but work every day as everything gets updated much more frequently than in the legacy world. >> So you care about those things, so we don't have to. >> That's right. >> That's the beauty of it. >> Yes. >> Well thank you. (laughter) >> You're welcome. >> I want to understand, you know some of the things that we've been talking about, every company is a data company, regardless of whether it's telco, it's a retailer, if it's my bank, it's my grocery store, and they have to be able to use data as quickly as possible to make decisions. One of the things they've been talking about here is the monetization of data, the monetization of the network. How do you, how does Dell help, like a Dish, be able to achieve the monetization of their data? >> Well as Marc was saying before, the enterprise use cases are what we are all kind of betting on for 5G, right? And enterprises expect to have access to data and to telemetry to do whatever use cases they want to execute in their particular industry, so you know, if it's a health care provider, if it's a factory, an agricultural provider that's leveraging this network, they need to get the data from the network, from the devices, they need to correlate it, in order to do things like automatically turn on a watering system at a certain time, right, they need to know the weather around, make sure it's not too windy and you're going to waste a lot of water. All that has data, it's going to leverage data from the network, it's going to leverage data from devices, it's going to leverage data from applications and that's data that can be monetized. When you have all that data and it's all correlated there's value inherent to it and you can even go onto a forward looking state where you can intelligently move workloads around, based on the data.
Based on the clarity of the traffic of the network, where is the right place to put it, and even based on current pricing for things like on-demand instances from cloud providers. So having all that data correlated allows any enterprise to make an intelligent decision about how to move a workload around a network and get the most efficient placing of that workload. >> Marc, Andy mentions things like data and networks and moving data across the networks. You have on your business card, Chief Network Officer, what potentially either keeps you up at night in terror or gets you very excited about the future of your network? What's out there in the frontier and what are those key obstacles that have to be overcome that you work with? >> Yeah, I think we have the network, we have the baseline, but we don't yet have the consumption that is easy by the enterprise, you know an enterprise likes to say "I have a 4K camera, I connect it to my software." Click, click, right? And that's where we need to be, so we're talking about APIs that are so simple that they become a click, and we engineers, we have a tendency to want to explain, but we should not, it should become a click. You know, and the phone revolution with the apps became those clicks, we have to do the same for the enterprise, for video, for surveillance, for analytics, it has to be clicks. >> While balancing flexibility, and agility of course, because you know the folks who were fans of CLIs, command line interfaces, who hate GUIs, it's because they feel they have the ability to go down to another level, so obviously that's a balancing act. >> But that's our job. >> Yeah. >> Our job is to hide the complexity, but of course there is complexity. It's like in the cloud, a hyperscaler, they manage complex things but it's successful if they hide it. >> (Dave) Yeah. >> It's the same. You know we have to be a hyperscaler of connectivity but hide it. >> Yeah. >> So that people connect everything, right? >> Well it's Andy's servers, we're all magicians hiding it all. >> Yeah. >> It really is. >> It's like don't worry about it, just know, >> Let us do it. >> Sit down, we will serve you the meal. Don't worry how it's cooked. >> That's right, the enterprises want the outcome. >> (Dave) Yeah. >> They don't want to deal with that bottom layer. But it is tremendously complex and we want to take that on and make it better for the industry. >> That's critical. Marc, I'd love to go back to you, and just, I know that you've been in telco for such a long time and here we are, day three of MWC, the name changed this year from Mobile World Congress, reflecting that mobile isn't the only thing, obviously it was the catalyst, but what are some of the things that you've heard at the event, maybe seen at the event, that give you the confidence that the right players are here to help move Dish Wireless forward, for example? >> You know this is the first, I've been here for decades, it's the first time, and I'm a Chief Network Officer, first time we don't talk about the network. >> (Andy) Yeah. >> Isn't that surprising? People don't tell me about speed, or latency, they talk about consumption. Apps, you know video surveillance, or analytics, so I love that, because now we're starting to talk about how we can consume and monetize, but that's the first time. We used to talk about gigabytes and this and that, none of that, not once. >> What does that signify to you, in terms of the evolution?
>> Well you know, we've seen that the demand for the healthcare, for the smart cities, has been here for a decade, proof of concepts for a decade, but the consumption has been behind, and for me this is the whole ecosystem waking up to, we are going to make it easy, so that the consumption can take off. The demand is there, we have to serve it. And the fact that people are starting to say we hide the complexity, that's our problem, but don't even mention it, I love it. >> Yep. Drop the mic. >> (Andy and Marc) Yeah, yeah. >> Andy, last question for you, some of the things we know, Dell has a big and emerging presence in telco, we've had a chance to see the booth, see the cool things you guys are featuring there, Dave did a great tour of it, talk about some of the things you've heard, maybe even from customers at this event, that demonstrate to you that Dell is going in the right direction with its telco strategy. >> Yeah, I mean personally for me this has been an unbelievable event for Dell, we've had tons and tons of customer meetings of course, and the feedback we're getting is that the things we're bringing to market, whether it's infrablocks, or purposeful servers that are designed for the telecom network, are what our customers need and have always wanted. We get a lot of wows, right? >> (Lisa) That's nice. >> "Wow we didn't know Dell was doing this, we had no idea." And the other part of it is that not everybody was sure that we were going to move as fast as we have, so the speed with which we've been able to bring some of these things to market, and part of that was working with Dish, you know, a pioneer, to make sure we were building the right things, and I think a lot of the customers that we talked to really appreciate the fact that we're doing it with the industry, >> (Lisa) Yeah. >> You know, not at the industry, and that comes across in the way they are responding and what they're talking to us about now. >> And that came across in the interview that you just did. Thank you both for joining Dave and me. >> Thank you >> Talking about what Dell and Dish are doing together, the proof is in the pudding, and you did a great job at explaining that, thanks guys, we appreciate it. >> Thank you. >> All right, our pleasure. For our guest and for Dave Nicholson, I'm Lisa Martin, you're watching theCUBE live from MWC 23 day three. We will be back with our next guest, so don't go anywhere. (upbeat music)
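Earlier in this conversation Andy describes intelligently moving workloads around based on correlated network data and even current on-demand pricing from cloud providers. A minimal sketch of that placement decision is below; the candidate sites, weights, and numbers are invented for illustration and the scoring function is deliberately simplistic.

```python
# A minimal sketch of data-driven workload placement: given telemetry about candidate
# locations (edge site, regional DC, public cloud), score each one on latency, spare
# capacity, and current price, then pick the best fit. All values are hypothetical.

CANDIDATES = {
    "edge-site-barcelona": {"latency_ms": 4, "spare_capacity": 0.20, "usd_per_hour": 0.90},
    "regional-dc-madrid":  {"latency_ms": 12, "spare_capacity": 0.55, "usd_per_hour": 0.60},
    "public-cloud-region": {"latency_ms": 35, "spare_capacity": 0.95, "usd_per_hour": 0.45},
}

def score(site: dict, max_latency_ms: float) -> float:
    """Higher is better; disqualify sites that cannot meet the latency target."""
    if site["latency_ms"] > max_latency_ms or site["spare_capacity"] <= 0.0:
        return float("-inf")
    # Arbitrary illustrative weighting: prefer headroom, penalize cost.
    return site["spare_capacity"] * 10.0 - site["usd_per_hour"]

def place(workload: str, max_latency_ms: float) -> str:
    best = max(CANDIDATES, key=lambda name: score(CANDIDATES[name], max_latency_ms))
    if score(CANDIDATES[best], max_latency_ms) == float("-inf"):
        return f"{workload}: no candidate meets the latency target"
    return f"{workload}: place at {best}"

if __name__ == "__main__":
    print(place("4k-video-analytics", max_latency_ms=10))     # forces the edge site
    print(place("nightly-batch-report", max_latency_ms=100))  # roomiest, cheapest option wins
```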
Peter Fetterolf, ACG Business Analytics & Charles Tsai, Dell Technologies | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (light airy music) >> Hi, everybody, welcome back to the Fira in Barcelona. My name is Dave Vellante. I'm here with my co-host Dave Nicholson. Lisa Martin is in the house. John Furrier is pounding the news from our Palo Alto studio. We are super excited to be talking about cloud at the edge, what that means. Charles Tsai is here. He's the Senior Director of product management at Dell Technologies and Peter Fetterolf is the Chief Technology Officer at ACG Business Analytics, a firm that goes deep into the TCO and the telco space, among other things. Gents, welcome to theCUBE. Thanks for coming on. Thank you. >> Good to be here. >> Yeah, good to be here. >> So I've been in search all week of the elusive next wave of monetization for the telcos. We know they make great money on connectivity, they're really good at that. But they're all talking about how they can't let this happen again. Meaning we can't let the over the top vendors yet again, basically steal our cookies. So we're going to not mess it up this time. We're going to win in the monetization. Charles, where are those monetization opportunities? Obviously at the edge, the telco cloud at the edge. What is that all about and where's the money? >> Well, Dave, I think from Dell's perspective, what we want to be able to enable operators is a solution that enables them to roll out services much quicker, right? We know there's a lot of innovation around IoT, MEC and so on and so forth, but they continue to rely on traditional technology and a way of operations that is going to take them years to enable new services. So what Dell is doing now is creating the entire vertical stack from the hardware through CaaS and automation that enables them not only to push out services very quickly, but to operate them using cloud principles. >> So when you say the entire vertical stack, it's the integrated hardware components with like, for example, Red Hat on top- >> Right. >> Or a Wind River? >> That's correct. >> Okay, and then open APIs, so the developers can create workloads, I presume data companies. We just had a data conversation 'cause that was part of the original stack- >> That's correct. >> So through an open ecosystem, you can actually sort of recreate that value, correct? >> That's correct. >> Okay. >> So one thing Dell is doing is we are offering an infrastructure block where we are taking over the overhead of certifying every release coming from the Red Hat or the Wind River of the world, right? We want telcos to spend their resources on what is going to generate them revenue. Not the overhead of creating this cloud stack. >> Dave, I remember when we went through this in the enterprise and you had companies like, you know, IBM with the AS400 and the mainframe saying it's easier to manage, which it was, but it's still, you know, it was subsumed by the open systems trend. >> Yeah, yeah. And I think that's an important thing to probe on, is this idea of what is, what exactly does it mean to be cloud at the edge in the telecom space? Because it's a much used term. >> Yeah. >> When we talk about cloud and edge, in sort of generalized IT, but what specifically does it mean? >> Yeah, so when we talk about telco cloud, first of all it's kind of different from what you're thinking about public cloud today. And there's a couple differences.
One, if you look at the big hyperscaler public cloud today, they tend to be centralized in huge data centers. Okay, telco cloud, there are big data centers, but then there's also regional data centers. There are edge data centers, which are your typical like access central offices that have turned into data centers, and then now even cell sites are becoming mini data centers. So it's distributed. I mean like you could have like, even in a country like say Germany, you'd have 30,000 cell sites, each one of them being a data center. So it's a very different model. Now the other thing, I want to go back to the question of monetization, okay? So how do you do monetization? The only way to do that is to be able to offer new services, like Charles said. How do you offer new services? You have to have an open ecosystem that's going to be very, very flexible. And if we look at where telcos are coming from today, they tend to be very inflexible 'cause they're all kind of single vendor solutions. And even as we've moved to virtualization, you know, if you look at packet core for instance, a lot of them are these vertical stacks of say a Nokia or Ericsson or Huawei where you know, you can't really put any other vendors or any other solutions into that. So basically the idea is this kind of horizontal architecture, right? Where now across, not just my central data centers, but across my edge data centers, which would be traditionally my access COs, as well as my cell sites, I have an open environment. And we're kind of starting with, you know, packet core obviously, with UPFs being distributed, but now open RAN or virtual RAN, where I can have CUs and DUs and I can split CUs, they could be at the cell site, they could be in edge data centers. But then moving forward, we're going to have like MEC, which are, you know, which are new kinds of services, you know, could be, you know, remote cars, it could be gaming, it could be the Metaverse. And these are going to be a multi-vendor environment. So one of the things you need to do is you need to have, you know, this cloud layer, and that's what Charles was talking about with the infrastructure blocks, is helping the service providers do that, but they still own their infrastructure. >> Yeah, so it's still not clear to me how the service providers win that game but we can maybe come back to that because I want to dig into TCO a little bit. >> Sure. >> Because I have a lot of friends at Dell. I don't have a lot of friends at HPE. I've always been critical when they take an x86 server, put a name on it that implies edge and they throw it over the fence to the edge, that's not going to work, okay? We're now seeing, you know we were just at the Dell booth yesterday, you did the booth crawl, which was awesome. Purpose-built servers for this environment. >> Charles: That's right. >> So there's two factors here that I want to explore in TCO. One is, how those next gen servers compare to the previous gen, especially in terms of power consumption but other factors, and then how these sort of open RAN, open ecosystem stacks compare to proprietary stacks. Peter, can you help us understand those? >> Yeah, sure. And Charles can comment on this as well. But I mean there, there's a couple areas. One is just moving to the next generation. So especially on the Intel side, moving from Ice Lake to the Sapphire Rapids is a big deal, especially when it comes to the DU. And you know, with the radios, right?
There's the radio unit, the RU, and then there's the DU, the distributed unit, and the CU. The DU is really like part of the radio, but it's virtualized. When we moved from Ice Lake to Sapphire Rapids, which is third-generation Intel to fourth-generation Intel, we're literally almost doubling the performance in the DU. And that's really important 'cause it means like almost half the number of servers, and we're talking like 30, 40, 50,000 servers in some cases. So, you know, being able to divide that by two, that's really big, right? In terms of not only the cost but all the TCO and the OpEx. Now another area that's really important, when I was talking about moving from these vertical silos to the horizontal, the issue with the vertical silos is, you can't place any other workloads into those silos. So it's kind of inefficient, right? Whereas when we have the horizontal architecture, now you can place workloads wherever you want, which basically also means fewer servers but also more flexibility, more service agility. And then, you know, I think Charles can comment more, specifically on the XR8000, some things Dell's doing, 'cause it's really exciting relative to- >> Sure. >> What's happening in there. >> So, you know, when we start looking at putting compute at the edge, right? We recognize the first thing we have to do is understand the environment we are going into. So we spent a lot of time with telcos going to the cell site, going to the edge data center, looking at operations, how do the engineers today deal with maintenance and replacement at those locations? Then based on understanding the operational constraints at those sites, we create innovation and take a traditional server, remodel it to make sure that we minimize the disruption to the operations, right? Just because we are helping them go from appliances to open compute, we do not want to disrupt what has been a very efficient operation on the remote sites. So we created a lot of new ideas and developed them on general compute, where we believe we can save a lot of headache and disruptions and still provide the same level of availability, resiliency, and redundancy on an open compute platform. >> So when we talk about open, we don't mean generic? Fair? See what I mean? >> Open is more from the software workload perspective, right? A Dell server can run any type of workload that the customer intends. >> But it's engineered for this? >> Environment. >> Environment. >> That's correct. >> And so what are some of the environmental issues that are dealt with in the telecom space that are different than the average data center? >> The most basic one is, in most of the traditional cell towers, they are deployed within cabinets instead of racks. So there are depth constraints, and you just have no access to the rear of the chassis. So that means on a server, everything you need to access needs to be in the front, nothing should be in the back. Then you need to consider how labor unions come into play, right? There's a lot of constraint on who can go to a cell tower and touch power, who can go there and touch compute, right? So we minimize all that disruption through a modular design and make it very efficient. >> So when we took a look at the XR8000, literally right here, sitting on the desk. >> Uh-huh. >> Took it apart, don't panic, just pulled out some sleds and things. >> Right, right. >> One of the interesting demonstrations was how it compared to the size of a shoe.
Now apparently you hired someone at Dell specifically because they wear a size 14 shoe, (Charles laughs) so it was even more dramatic. >> That's right. >> But when you see it, and I would suggest that viewers go back and take a look at that segment, specifically on the hardware. You can see exactly what you just referenced. This idea that everything is accessible from the front. Yeah. >> So I want to dig in a couple things. So I want to push back a little bit on what you were saying about the horizontal 'cause there's the benefit, if you've got the horizontal infrastructure, you can run a lot more workloads. But I compare it to the enterprise 'cause I, that was the argument, I've made that argument with converged infrastructure versus say an Oracle vertical stack, but it turned out that actually Oracle ran Oracle better, okay? Is there an analog in telco or is this new open architecture going to be able to not only service the wide range of emerging apps but also be as resilient as the proprietary infrastructure? >> Yeah and you know, before I answer that, I also want to say that we've been writing a number of white papers. So we have actually three white papers we've just done with Dell looking at infrastructure blocks and looking at vertical versus horizontal and also looking at moving from the previous generation hardware to the next generation hardware. So all those details, you can find the white papers, and you can find them either in the Dell website or at the ACG research website >> ACGresearch.com? >> ACG research. Yeah, if you just search ACG research, you'll find- >> Yeah. >> Lots of white papers on TCO. So you know, what I want to say, relative to the vertical versus horizontal. Yeah, obviously in the vertical side, some of those things will run well, I mean it won't have issues. However, that being said, as we move to cloud native, you know, it's very high performance, okay? In terms of the stack, whether it be a Red Hat or a VMware or other cloud layers, that's really become much more mature. It now it's all CNF base, which is really containerized, very high performance. And so I don't think really performance is an issue. However, my feeling is that, if you want to offer new services and generate new revenue, you're not going to do it in vertical stacks, period. You're going to be able to do a packet core, you'll be able to do a ran over here. But now what if I want to offer a gaming service? What if I want to do metaverse? What if I want to do, you have to have an environment that's a multi-vendor environment that supports an ecosystem. Even in the RAN, when we look at the RIC, and the xApps and the rApps, these are multi-vendor environments that's going to create a lot of flexibility and you can't do that if you're restricted to, I can only have one vendor running on this hardware. >> Yeah, we're seeing these vendors work together and create RICs. That's obviously a key point, but what I'm hearing is that there may be trade offs, but the incremental value is going to overwhelm that. Second question I have, Peter is, TCO, I've been hearing a lot about 30%, you know, where's that 30% come from? Is it Op, is it from an OpEx standpoint? Is it labor, is it power? Is it, you mentioned, you know, cutting the number of servers in half. If I can unpack the granularity of that TCO, where's the benefit coming from? >> Yeah, the answer is yes. (Peter and Charles laugh) >> Okay, we'll do. >> Yeah, so- >> One side that, in terms of, where is the big bang for the bucks? 
>> So I mean, so you really need to look at the white paper to see details, but definitely power, definitely labor, definitely reducing the number of servers, you know, reducing the CapEx. The other thing is, is as you move to this really next generation horizontal telco cloud, there's the whole automation and orchestration, that is a key component as well. And it's enabled by what Dell is doing. It's enabled by the, because the thing is you're not going to have end-to-end automation if you have all this legacy stuff there or if you have these vertical stacks where you can't integrate. I mean you can automate that part and then you have separate automation here, you separate. you need to have integrated automation and orchestration across the whole thing. >> One other point I would add also, right, on the hardware perspective, right? With the customized hardware, what we allow operator to do is, take out the existing appliance and push a edge optimized server without reworking the entire infrastructure. There is a significant saving where you don't have to rethink about what is my power infrastructure, right? What is my security infrastructure? The server is designed to leverage the existing, what is already there. >> How should telco, Charles, plan for this transformation? Are there specific best practices that you would recommend in terms of the operational model? >> Great question. I think first thing is do an inventory of what you have. Understand what your constraints are and then come to Dell, we will love to consult with you, based on our experience on the best practices. We know how to minimize additional changes. We know how to help your support engineer, understand how to shift appliance based operation to a cloud-based operation. >> Is that a service you offer? Is that a pre-sales freebie? What is maybe both? >> It's both. >> Yeah. >> It's both. >> Yeah. >> Guys- >> Just really quickly. >> We're going to wrap. >> The, yeah. Dave loves the TCO discussion. I'm always thinking in terms of, well how do you measure TCO when you're comparing something where you can't do something to an environment where you're going to be able to do something new? And I know that that's always the challenge in any kind of emerging market where things are changing, any? >> Well, I mean we also look at, not only TCO, but we look at overall business case. So there's basically service at GLD and revenue and then there's faster time to revenues. Well, and actually ACG, we actually have a platform called the BAE or Business Analytics Engine that's a very sophisticated simulation cloud-based platform, where we can actually look at revenue month by month. And we look at what's the impact of accelerating revenue by three months. By four months. >> So you're looking into- >> By six months- >> So you're forward looking. You're just not consistently- >> So we're not just looking at TCO, we're looking at the overall business case benefit. >> Yeah, exactly right. There's the TCO, which is the hard dollars. >> Right. >> CFO wants to see that, he or she needs to see that. But you got to, you can convince that individual, that there's a business case around it. >> Peter: Yeah. >> And then you're going to sign up for that number. >> Peter: Yeah. >> And they're going to be held to it. That's the story the world wants. >> At the end of the day, telcos have to be offered new services 'cause look at all the money that's been spent. >> Dave: Yeah, that's right. >> On investment on 5G and everything else. 
>> 0.5 trillion over the next seven years. All right, guys, we got to go. Sorry to cut you off. >> Okay, thank you very much. >> But we're wall to wall here. All right, thanks so much for coming on. >> Dave: Fantastic. >> All right, Dave Vellante, for Dave Nicholson. Lisa Martin's in the house. John Furrier in Palo Alto Studios. Keep it right there. MWC 23 live from the Fira in Barcelona. (light airy music)
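The TCO argument Peter sketches in this segment, roughly doubling DU performance per server and therefore roughly halving the server count, plus the power and OpEx that scale with it, can be expressed as simple arithmetic. The sketch below is only an illustration of that reasoning; every input number is a hypothetical placeholder, not a figure from the ACG white papers or from Dell.

```python
import math

# Rough, illustrative TCO arithmetic for the DU server-count discussion above.
# All inputs are hypothetical placeholders; real figures live in the ACG white papers.

def du_server_count(cell_sites: int, dus_per_site: float, dus_per_server: float) -> int:
    """Servers needed to host the DUs for a given footprint."""
    return math.ceil(cell_sites * dus_per_site / dus_per_server)

def five_year_opex(servers: int, watts_per_server: float,
                   usd_per_kwh: float, maint_per_server_per_yr: float) -> float:
    """Five-year power plus maintenance cost for a server fleet (assumed inputs)."""
    power_cost = servers * watts_per_server / 1000 * 24 * 365 * 5 * usd_per_kwh
    return power_cost + servers * maint_per_server_per_yr * 5

# Previous generation: assume one DU per server.
prev_gen = du_server_count(cell_sites=30_000, dus_per_site=1.0, dus_per_server=1.0)

# Next generation with ~2x DU performance: assume two DUs per server.
next_gen = du_server_count(cell_sites=30_000, dus_per_site=1.0, dus_per_server=2.0)

print(f"servers: {prev_gen:,} -> {next_gen:,}")
print(f"5-yr OpEx: ${five_year_opex(prev_gen, 700, 0.15, 500):,.0f} -> "
      f"${five_year_opex(next_gen, 800, 0.15, 500):,.0f}")
```

The same shape of model extends to the business-case point made later in the interview: adding a month-by-month revenue curve and shifting it forward by three or four months shows the acceleration benefit on top of the hard-dollar TCO savings.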
John Kreisa, Couchbase | MWC Barcelona 2023
>> Narrator: TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music intro) (logo background tingles) >> Hi everybody, welcome back to day three of MWC23, my name is Dave Vellante and we're here live at the Theater of Barcelona, Lisa Martin, David Nicholson, John Furrier's in our studio in Palo Alto. Lot of buzz at the show, the Mobile World Daily Today, front page, Netflix chief hits back in fair share row, Greg Peters, the co-CEO of Netflix, talking about how, "Hey, you guys want to tax us, the telcos want to tax us, well, maybe you should help us pay for some of the content. Your margins are higher, you have a monopoly, you know, we're delivering all this value, you're bundling Netflix in, from a lot of ISPs so hold on, you know, pump the brakes on that tax," so that's the big news. Lockheed Martin, FOSS issues, AI guidelines, says, "AI's not going to take over your job anytime soon." Although I would say, your job's going to be AI-powered for the next five years. We're going to talk about data, we've been talking about the disaggregation of the telco stack, part of that stack is a data layer. John Kreisa is here, the CMO of Couchbase, John, you know, we've talked about all week, the disaggregation of the telco stacks, they got, you know, Silicon and operating systems that are, you know, real time OS, highly reliable, you know, compute infrastructure all the way up through a telemetry stack, et cetera. And that's a proprietary block that's really exploding, it's like the big bang, like we saw in the enterprise 20 years ago and we haven't had much discussion about that data layer, sort of that horizontal data layer, that's the market you play in. You know, Couchbase obviously has a lot of telco customers- >> John: That's right. >> We've seen, you know, Snowflake and others launch telco businesses. What are you seeing when you talk to customers at the show? What are they doing with that data layer? >> Yeah, so they're building applications to drive and power unique experiences for their users, but of course, it all starts with where the data is. So they're building mobile applications where they're stretching it out to the edge and you have to move the data to the edge, you have to have that capability to deliver that highly interactive experience to their customers or for their own internal use cases out to that edge, so seeing a lot of that with Couchbase and with our customers in telco. >> So what do the telcos want to do with data? I mean, they've got the telemetry data- >> John: Yeah. >> Now they frequently complain about the over-the-top providers that have used that data, again like Netflix, to identify customer demand for content and they're mopping that up in a big way, you know, certainly Amazon and shopping Google and ads, you know, they're all using that network. But what do the telcos do today and what do they want to do in the future? They're all talking about monetization, how do they monetize that data? >> Yeah, well, by taking that data, there's insight to be had, right? So by usage patterns and what's happening, just as you said, so they can deliver a better experience. It's all about getting that edge, if you will, on their competition and so taking that data, using it in a smart way, gives them that edge to deliver a better service and then grow their business. 
>> We're seeing a lot of action at the edge and, you know, the edge can be a Home Depot or a Lowe's store, but it also could be the far edge, could be a, you know, an oil drilling, an oil rig, it could be a racetrack, you know, certainly hospitals and certain, you know, situations. So let's think about that edge, where there's maybe not a lot of connectivity, there might be private networks going in, in the future- >> John: That's right. >> Private 5G networks. What's the data flow look like there? Do you guys have any customers doing those types of use cases? >> Yeah, absolutely. >> And what are they doing with the data? >> Yeah, absolutely, we've got customers all across, so telco and transportation, all kinds of service delivery and healthcare, for example, we've got customers who are delivering healthcare out at the edge where they have a remote location, they're able to deliver healthcare, but as you said, there's not always connectivity, so they need to have the applications, need to continue to run and then sync back once they have that connectivity. So it's really having the ability to deliver a service, reliably and then know that that will be synced back to some central server when they have connectivity- >> So the processing might occur where the data- >> Compute at the edge. >> How do you sync back? What is that technology? >> Yeah, so there's, so within, so Couchbase and Couchbase's case, we have an autonomous sync capability that brings it back to the cloud once they get back to whether it's a private network that they want to run over, or if they're doing it over a public, you know, wifi network, once it determines that there's connectivity and, it can be peer-to-peer sync, so different edge apps communicating with each other and then ultimately communicating back to a central server. >> I mean, the other theme here, of course, I call it the software-defined telco, right? But you got to have, you got to run on something, got to have hardware. So you see companies like AWS putting Outposts, out to the edge, Outposts, you know, doesn't really run a lot of database to mind, I mean, it runs RDS, you know, maybe they're going to eventually work with companies like... I mean, you're a partner of AWS- >> John: We are. >> Right? So do you see that kind of cloud infrastructure that's moving to the edge? Do you see that as an opportunity for companies like Couchbase? >> Yeah, we do. We see customers wanting to push more and more of that compute out to the edge and so partnering with AWS gives us that opportunity and we are certified on Outpost and- >> Oh, you are? >> We are, yeah. >> Okay. >> Absolutely. >> When did that, go down? >> That was last year, but probably early last year- >> So I can run Couchbase at the edge, on Outpost? >> Yeah, that's right. >> I mean, you know, Outpost adoption has been slow, we've reported on that, but are you seeing any traction there? Are you seeing any nibbles? >> Starting to see some interest, yeah, absolutely. And again, it has to be for the right use case, but again, for service delivery, things like healthcare and in transportation, you know, they're starting to see where they want to have that compute, be very close to where the actions happen. >> And you can run on, in the data center, right? >> That's right. >> You can run in the cloud, you know, you see HPE with GreenLake, you see Dell with Apex, that's essentially their Outposts. >> Yeah. >> They're saying, "Hey, we're going to take our whole infrastructure and make it as a service." 
>> Yeah, yeah. >> Right? And so you can participate in those environments- >> We do. >> And then so you've got now, you know, we call it supercloud, you've got the on-prem, you've got the, you can run in the public cloud, you can run at the edge and you want that consistent experience- >> That's right. >> You know, from a data layer- >> That's right. >> So is that really the strategy for a data company is taking or should be taking, that horizontal layer across all those use cases? >> You do need to think holistically about it, because you need to be able to deliver as a, you know, as a provider, wherever the customer wants to be able to consume that application. So you do have to think about any of the public clouds or private networks and all the way to the edge. >> What's different John, about the telco business versus the traditional enterprise? >> Well, I mean, there's scale, I mean, one thing they're dealing with, particularly for end user-facing apps, you're dealing at a very very high scale and the expectation that you're going to deliver a very interactive experience. So I'd say one thing in particular that we are focusing on, is making sure we deliver that highly interactive experience but it's the scale of the number of users and customers that they have, and the expectation that your application's always going to work. >> Speaking of applications, I mean, it seems like that's where the innovation is going to come from. We saw yesterday, GSMA announced, I think eight APIs telco APIs, you know, we were talking on theCUBE, one of the analysts was like, "Eight, that's nothing," you know, "What do these guys know about developers?" But you know, as Daniel Royston said, "Eight's better than zero." >> Right? >> So okay, so we're starting there, but the point being, it's all about the apps, that's where the innovation's going to come from- >> That's right. >> So what are you seeing there, in terms of building on top of the data app? >> Right, well you have to provide, I mean, have to provide the APIs and the access because it is really, the rubber meets the road, with the developers and giving them the ability to create those really rich applications where they want and create the experiences and innovate and change the way that they're giving those experiences. >> Yeah, so what's your relationship with developers at Couchbase? >> John: Yeah. >> I mean, talk about that a little bit- >> Yeah, yeah, so we have a great relationship with developers, something we've been investing more and more in, in terms of things like developer relations teams and community, Couchbase started in open source, continue to be based on open source projects and of course, those are very developer centric. So we provide all the consistent APIs for developers to create those applications, whether it's something on Couchbase Lite, which is our kind of edge-based database, or how they can sync that data back and we actually automate a lot of that syncing which is a very difficult developer task which lends them to one of the developer- >> What I'm trying to figure out is, what's the telco developer look like? Is that a developer that comes from the enterprise and somebody comes from the blockchain world, or AI or, you know, there really doesn't seem to be a lot of developer talk here, but there's a huge opportunity. >> Yeah, yeah. 
>> And, you know, I feel like, the telcos kind of remind me of, you know, a traditional legacy company trying to get into the developer world, you know, even Oracle, okay, they bought Sun, they got Java, so I guess they have developers, but you know, IBM for years tried with Bluemix, they had to end up buying Red Hat, really, and that gave them the developer community. >> Yep. >> EMC used to have a thing called EMC Code, which was a, you know, good effort, but eh. And then, you know, VMware always trying to do that, but, so as you move up the stack obviously, you have greater developer affinity. Where do you think the telco developer's going to come from? How's that going to evolve? >> Yeah, it's interesting, and I think they're... To kind of get to your first question, I think they're fairly traditional enterprise developers and when we break that down, we look at it in terms of what the developer persona is, are they a front-end developer? Like they're writing that front-end app, they don't care so much about the infrastructure behind or are they a full stack developer and they're really involved in the entire application development lifecycle? Or are they living at the backend and they're really wanting to just focus in on that data layer? So we lend towards all of those different personas and we think about them in terms of the APIs that we create, so that's really what the developers are for telcos is, there's a combination of those front-end and full stack developers and so for them to continue to innovate they need to appeal to those developers and that's technology, like Couchbase, is what helps them do that. >> Yeah and you think about the Apples, you know, the app store model or Apple sort of says, "Okay, here's a developer kit, go create." >> John: Yeah. >> "And then if it's successful, you're going to be successful and we're going to take a vig," okay, good model. >> John: Yeah. >> I think I'm hearing, and maybe I misunderstood this, but I think it was the CEO or chairman of Ericsson on the day one keynotes, was saying, "We are going to monetize the, essentially the telemetry data, you know, through APIs, we're going to charge for that," you know, maybe that's not the best approach, I don't know, I think there's got to be some innovation on top. >> John: Yeah. >> Now maybe some of these greenfield telcos are going to do like, you take like a dish networks, what they're doing, they're really trying to drive development layers. So I think it's like this wild west open, you know, community that's got to be formed and right now it's very unclear to me, do you have any insights there? >> I think it is more, like you said, Wild West, I think there's no emerging standard per se for across those different company types and sort of different pieces of the industry. So consequently, it does need to form some more standards in order to really help it grow and I think you're right, you have to have the right APIs and the right access in order to properly monetize, you have to attract those developers or you're not going to be able to monetize properly. >> Do you think that if, in thinking about your business and you know, you've always sold to telcos, but now it's like there's this transformation going on in telcos, will that become an increasingly larger piece of your business or maybe even a more important piece of your business? Or it's kind of be steady state because it's such a slow moving industry? 
>> No, it is a big and increasing piece of our business, I think telcos like other enterprises, want to continue to innovate and so they look to, you know, technologies like, Couchbase document database that allows them to have more flexibility and deliver the speed that they need to deliver those kinds of applications. So we see a lot of migration off of traditional legacy infrastructure in order to build that new age interface and new age experience that they want to deliver. >> A lot of buzz in Silicon Valley about open AI and Chat GPT- >> Yeah. >> You know, what's your take on all that? >> Yeah, we're looking at it, I think it's exciting technology, I think there's a lot of applications that are kind of, a little, sort of innovate traditional interfaces, so for example, you can train Chat GPT to create code, sample code for Couchbase, right? You can go and get it to give you that sample app which gets you a headstart or you can actually get it to do a better job of, you know, sorting through your documentation, like Chat GPT can do a better job of helping you get access. So it improves the experience overall for developers, so we're excited about, you know, what the prospect of that is. >> So you're playing around with it, like everybody is- >> Yeah. >> And potentially- >> Looking at use cases- >> Ways tO integrate, yeah. >> Hundred percent. >> So are we. John, thanks for coming on theCUBE. Always great to see you, my friend. >> Great, thanks very much. >> All right, you're welcome. All right, keep it right there, theCUBE will be back live from Barcelona at the theater. SiliconANGLE's continuous coverage of MWC23. Go to siliconangle.com for all the news, theCUBE.net is where all the videos are, keep it right there. (cheerful upbeat music outro)
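The edge pattern John describes, applications that keep working without connectivity and sync back once a link returns, is the classic offline-first design. The sketch below is a generic Python illustration of that pattern, not the actual Couchbase Lite or Sync Gateway API; the class, the endpoint, and the document names are invented for illustration.

```python
import json
import time
import urllib.request

class OfflineFirstStore:
    """Minimal offline-first store: write locally, replay to a central server when reachable.
    Illustrative only; a real deployment would use an embedded edge database and a sync
    protocol such as Couchbase Lite with Sync Gateway or App Services."""

    def __init__(self, sync_url: str):
        self.sync_url = sync_url      # hypothetical central sync endpoint
        self.local_docs = {}          # stands in for the embedded edge database
        self.pending = []             # doc ids written while offline

    def write(self, doc_id: str, doc: dict) -> None:
        self.local_docs[doc_id] = doc  # local write always succeeds, even offline
        self.pending.append(doc_id)

    def try_sync(self) -> int:
        """Push pending docs; on any network failure, keep them queued for the next attempt."""
        synced = 0
        while self.pending:
            doc_id = self.pending[0]
            body = json.dumps({doc_id: self.local_docs[doc_id]}).encode()
            req = urllib.request.Request(self.sync_url, data=body,
                                         headers={"Content-Type": "application/json"})
            try:
                urllib.request.urlopen(req, timeout=5)
            except OSError:
                break                  # still offline; retry later
            self.pending.pop(0)
            synced += 1
        return synced

store = OfflineFirstStore("https://example.invalid/sync")   # placeholder URL
store.write("patient:42", {"vitals": {"hr": 72}, "ts": time.time()})
print("synced:", store.try_sync())     # 0 while disconnected; docs stay queued locally
```

Peer-to-peer sync between edge nodes follows the same idea, with each node treating its neighbors as additional sync targets until a central server becomes reachable.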
SiliconANGLE News | Red Hat Collaborates with Nvidia, Samsung and Arm on Efficient, Open Networks
(upbeat music) >> Hello, everyone; I'm John Furrier with SiliconANGLE NEWS and host of theCUBE, and welcome to our SiliconANGLE NEWS MWC NEWS UPDATE in Barcelona, where MWC is the premier event for the cloud telecommunication industry, and in the news here is Red Hat, Red Hat announcing a collaboration with NVIDIA, Samsung and Arm on efficient, open networks. Red Hat announced updates across various fields including advanced 5G telecommunications cloud, industrial edge, artificial intelligence, and radio access network (RAN) efficiency. Red Hat's enterprise Kubernetes platform, OpenShift, has added support for NVIDIA's converged accelerators and Aerial SDK, facilitating RAN deployments on industry-standard servers across hybrid and multicloud platforms. This composable infrastructure enables telecom firms to support heavier compute demands for edge computing, AI, private 5G, and more, and also helps network operators adopt open architectures, allowing them to choose non-proprietary components from multiple suppliers. In addition to the NVIDIA collaboration, Red Hat is working with Samsung to offer a new vRAN solution for service providers to better manage their open RAN networks. They're also working with UK chip designer Arm to create new, energy-efficient networking solutions. Red Hat's open source, Kubernetes-based Efficient Power Level Exporter project, or Kepler, has been donated to the Cloud Native Computing Foundation, allowing enterprises to better understand their cloud native workloads and power consumption. Kepler can also help in the development of sustainable software by creating less power-hungry applications. Again, Red Hat continuing to provide open source and open RAN, and contributing an open source project to the CNCF, continuing to create innovation for developers, and, of course, Red Hat knows a lot about operating systems, and the telco could be the next frontier. That's SiliconANGLE NEWS. I'm John Furrier; thanks for watching. (monotone music)
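For readers who want to see what Kepler exposes in practice: it exports per-workload energy counters as Prometheus metrics, which can be queried like any other metric. The sketch below assumes a reachable Prometheus instance that is already scraping Kepler; the Prometheus HTTP API call is standard, but the metric name and labels are assumptions that may differ across Kepler versions, so check your exporter's /metrics output.

```python
import json
import urllib.parse
import urllib.request

PROM_URL = "http://prometheus.example.invalid:9090"   # placeholder Prometheus endpoint

# Assumed Kepler counter; verify the exact name and labels on your Kepler exporter.
QUERY = 'sum by (container_namespace) (rate(kepler_container_joules_total[5m]))'

def namespace_power_watts(prom_url: str, promql: str) -> dict:
    """Approximate watts per namespace (joules/second) from a Prometheus instant query."""
    url = f"{prom_url}/api/v1/query?{urllib.parse.urlencode({'query': promql})}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.load(resp)
    return {r["metric"].get("container_namespace", "unknown"): float(r["value"][1])
            for r in payload["data"]["result"]}

if __name__ == "__main__":
    for ns, watts in sorted(namespace_power_watts(PROM_URL, QUERY).items()):
        print(f"{ns:30s} {watts:8.2f} W")
```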
Danielle Royston, TelcoDR | MWC Barcelona 2023
>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat music) >> Hi everybody. Welcome back to Barcelona. We're here at the Fira Live, theCUBE's ongoing coverage of day two of MWC 23. Back in 2021 was my first Mobile World Congress. And you know what? It was actually quite an experience because there was nobody there. I talked to my friend, who's now my co-host, Chris Lewis, about what to expect. He said, Dave, I don't think a lot of people are going to be there, but Danielle Royston is here and she's the CEO of TelcoDR. And that year, when Ericsson tapped out of its space, she took out 60,000 square feet and built out Cloud City. If it weren't for Cloud City, there would've been no Mobile World Congress in June and July of 2021. DR is back. Great to see you. Thanks for coming on. >> It's great to see you. >> Chris. Awesome to see you. >> Yeah, Chris. Yep. >> Good to be back. Yep. >> You guys remember the narrative back then. There was this lady running around, this crazy lady that I met at Google Cloud Next, saying >> Yeah. Yeah. >> the cloud's going to take over Telco. And everybody's like, well, this lady's nuts. The cloud's been leaning in, you know? >> Yeah. >> So what do you think, I mean, what's changed since you first caused all those ripples? >> I mean, I have to say that I think that I caused a lot of change in the industry. I was talking to leaders over at AWS yesterday and they were like, we've never seen someone push like you have and change so much in a short period of time. And Telco moves slow. It's known for that. And they're like, you are pushing buttons and you're getting people to change, and thank you, and keep going. And so it's been great. It's awesome. >> Yeah. I mean, it was interesting, Chris, we heard on the keynotes we had Microsoft, Satya came in, Thomas Kurian came in. There was no AWS. And now, I asked the CMO of GSMA about that. She goes, hey, we got a great relationship with AWS. >> Danielle: Yeah. >> But why do you think they weren't here? >> Well, they, I mean, they are here. >> I mean, not here. Why do you think they weren't profiled?
Are you working with just the disruptors or how's that? >> No, I think they're finding it, right? So my talk at MWC 21 was all about the cloud is a double-edged sword, right? There's two sides to it, and you definitely need to proceed through it with caution, but also I don't know that you have a choice, right? I mean, the multicloud, you know, is there another industry that spends more on CapEx than Telco? >> No. >> Right. The hyperscalers are doing it, right? They spend, you know, easily approaching $100 billion in CapEx, which rivals this industry. And so when you have a player like that driving, you know, and investing so much in an industry, Telco, you're always complaining how everyone's riding your coattails. This is the opportunity to ride someone else's coattails. So jump on, right? I think you don't have a choice, especially if other Telco competitors are using hyperscalers and you don't, they're going to be left behind. >> So you advise these companies all the time, but >> I mean, the issue is they're all using all the hyperscalers, right? So there are the multiple relationships. And as Danielle said, the multi-layer of relationship, they're using the hyperscalers to change their own internal operational environments to become more IT-centric, to move to that software-centric Telco. And they're also then, with the hyperscalers, going to market in different ways, sometimes with them, sometimes competing with them. What it means from an analyst point of view is you're suddenly changing the dynamic of a market where we used to have nicely well-defined markets previously. Now everyone's in it together, you know, it's great. And it's making people change the way they think about services. What I really hope it changes more than anything else is the way the customers at the end of the supply, the value chain, think: this is what we can get hold of, this stuff. Now we can go into the network through the cloud and we can get those APIs. We can draw on the mechanisms we need to run our personal lives, to run our business lives. And frankly, society as a whole. It's really exciting. >> Then your premise is basically you were saying they should ride on the top, over the top of the cloud vendor. >> Yeah. Right? >> No. Okay. But don't they lose all the data if they do that? >> I don't know. I mean, I think the hyperscalers are not going to take their data, right? I mean, that would be a really, really bad business move if Google Cloud and Azure and AWS start to take over that data. >> But they can't take it. >> They can't. >> From regulate, from sovereignty and regulation. >> They can't because of regulation, but also just like business, right? If they started taking their data, like no enterprises would use them. So I think the data is safe. Obviously every country is different. You got to understand the different rules and regulations for data privacy and how you keep it. But I think as we look at the long term, right, and we always talk about 10 and 20 years, there's going to be a hyperscaler region in every country, right? And there will be a way for every Telco to use it. I think their data will be safe. And I think you're going to be able to stand on the shoulders of someone else for once and use the building blocks of software that these guys provide to make better experiences for subscribers.
>> You guys got to explain this to me because when I say data I'm not talking about, you know, personal information. I'm talking about all the telemetry, you know, all the, you know, the plumbing. >> Danielle: Yeah. >> Data, which is- >> It will increasingly be shared because you need to share it in order to deliver the services in the streamlined, efficient way that they need to be delivered. >> Did I hear the CEO of Ericsson right, where basically he said, we're going to charge developers for access to that data through APIs? >> What Ericsson have done, obviously with the Vonage acquisition, is they want to get into APIs. So the idea is you're exposing features, quality, policy-on-demand type features, for example, or even pulling, we still use a lot of SMS, right? So pulling those out using those APIs. So it will be charged in some way. Whether- >> Man: Like Twitter's charging me for APIs now, API calls, you- >> Know what it is? I think it's Twilio. >> Man: Oh, okay. >> Right. >> Man: No, no, that's sure. >> There's no reason why telcos couldn't provide a Twilio-like service themselves. >> It's a horizontal play though, right? >> Danielle: Correct, because developers need to be charged by the API. >> But doesn't there need to be an industry standard to do that as- >> Well, I think that's what they just announced. >> Industry standard. >> Danielle: I think they just announced that. Yeah. Right now I haven't looked at that API set, right? >> There's like eight of them. >> There's eight of them. Twilio has, it's a start, you got to start somewhere, Dave. (crosstalk) >> And there's all, the TM Forum is all the other standards >> Right? Eight is better than zero- >> Right? >> Haven't got plenty. >> I mean, for an industry that didn't really understand APIs as a feature, as a product, as a service, right? For Mats Granryd, the Director General of GSMA, to stand on the keynote stage and say we partnered and we're unveiling, right, pay-by-the-use APIs. I was for it. I was like, that is insane. >> I liked his keynote actually, because I thought he was going to talk about how many attendees and how much economic benefit >> Danielle: We're super diverse. >> He said, I would usually talk about that and, you know, greening in the network, which you did talk about a little bit. But, but that's, that surprised me. >> Yeah. >> But I've seen in the enterprise, this is not my space as, you know, you guys don't live this, but I've seen Oracle try to get developers. IBM had to pay $35 billion for Red Hat to get developers, right? EMC used to have a thing called EMC Code, failed. >> I mean, they got to do something, right? So 4G, they didn't really make the business case, the ROI on the investment in the network. Here we are with 5G, the same discussion is happening: where's the use case? How are we going to monetize and make the ROI on this massive investment? And now they're starting to talk about 6G. Same fricking problem is going to happen again. And so I think they need to start experimenting with new ideas. I don't know if it's going to work. I don't know if this new API network gateway theme that Mats talked about yesterday will work. But they need to start unbundling that unlimited plan. They need to start charging people who are using the network more, more money. Those who are using it less, less. They need to figure this out. This is a crisis for them.
>> Yeah our own CEO, I mean she basically said, Hey, I'm for net neutrality, but I want to be able to charge the people that are using it more and more >> To make a return on, on a capital. >> I mean it costs billions of dollars to build these networks, right? And they're valuable. We use them and we talked about this in Cloud City 21, right? The ability to start building better metaverses. And I know that's a buzzword and everyone hates it, but it's true. Like we're working from home. We need- there's got to be a better experience in Zoom in 2D, right? And you need a great network for that metaverse to be awesome. >> You do. But Danielle, you don't need cellular for doing that, do you? So the fixed network is as important. >> Sure. >> And we're at mobile worlds. But actually what we beginning to hear and Crystal Bren did say this exactly, it's about the comp the access is sort of irrelevant. Fixed is better because it's more the cost the return on investment is better from fiber. Mobile we're going to change every so many years because we're a new generation. But we need to get the mechanism in place to deliver that. I actually don't agree that we should everyone should pay differently for what they use. It's a universal service. We need it as individuals. We need to make it sustainable for every user. Let's just not go for the biggest user. It's not, it's not the way to build it. It won't work if you do that you'll crash the system if you do that. And, and the other thing which I disagree on it's not about standing on the shoulders and benefiting from what- It's about cooperating across all levels. The hyperscalers want to work with the telcos as much as the telcos want to work with the hyperscalers. There's a lot of synergy there. There's a lot of ways they can work together. It's not one or the other. >> But I think you're saying let the cloud guys do the heavy lifting and I'm - >> Yeah. >> Not at all. >> And so you don't think so because I feel like the telcos are really good at pipes. They've always been good at pipes. They're engineers. >> Danielle: Yeah. >> Are they hanging on to the to the connectivity or should they let that go and well and go toward the developer. >> I mean AWS had two announcements on the 21st a week before MWC. And one was that telco network builder. This is literally being able to deploy a network capability at AWS with keystrokes. >> As a managed service. >> Danielle: Correct. >> Yeah. >> And so I don't know how the telco world I felt the shock waves, right? I was like, whoa, that seems really big. Because they're taking something that previously was like bread and butter. This is what differentiates each telco and now they've standardized it and made it super easy so anyone can do it. Now do I think the five nines of super crazy hardcore network criteria will be built on AWS this way? Probably not, but no >> It's not, it's not end twin. So you can't, no. >> Right. But private networks could be built with this pretty easily, right? And so telcos that don't have as much funding, right. Smaller, more experiments. I think it's going to change the way we think about building networks in telcos >> And those smaller telcos I think are going to be more developer friendly. >> Danielle: Yeah. >> They're going to have business models that invite those developers in. And that's, it's the disruption's going to come from the ISVs and the workloads that are on top of that. >> Well certainly what Dish is trying to do, right? 
Dish is trying to build a- they launched at re:Invent a developer experience. >> Dave: Yeah. >> Right. Built around their network and, you know, again, I don't know, they were not part of this group that designed these eight APIs, but I'm sure they're looking with great intent on what does this mean for them. They'll probably adopt them because they want people to consume the network as APIs. That's their whole thing that Marc Rouanne is trying to do. >> Okay, and then they're doing open RAN. But is it- they're not really cons- They're not as concerned as Rakuten with the reliability, and is that the right play? >> In this discussion? Open RAN is not an issue. It really is irrelevant. It's relevant for the longer term future of the industry by disaggregating and being able to share, especially RAN sharing, for example, in the short term in rural environments. But we'll see some of that happening and it will change, but it will also influence the way the other, the existing RAN providers build their services and offer their value. Look, you've got to remember, the relationships between the equipment providers and the telcos vary dramatically. Whether it's Ericsson, Nokia, Samsung, Huawei, whoever. So those relations really, and the managed services element of that, depend on what skills people have in-house within the telco and what service they're trying to deliver. So there's never one size fits all in this industry. >> You're very balanced in your analysis and I appreciate that. >> I try to be. >> But I am not. (chuckles) >> So when DR went off, this is my question. When DR went off a couple years ago on the cloud's going to take over the world, you were skeptical. You gave an approach. Have you? >> I still am. >> Have you moderated your thoughts on that or-
We're going to come out >> Great. >> Check out your venue. >> Yeah the Togi buses that are outside. >> The big buses. You got a great setup there. We're going to see you on Wednesday. Thanks again. >> Awesome. Thanks. >> All right. Keep it right there. We'll be back to wrap up day two from MWC 23 on theCUBE. (upbeat music)
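The pay-by-use network APIs discussed in this segment, the GSMA announcement and the Twilio comparison, come down to developers calling a metered REST endpoint instead of negotiating directly with a carrier. The sketch below is a hypothetical quality-on-demand request plus a toy metering calculation; the endpoint, payload fields, and prices are invented for illustration and are not the published CAMARA or Open Gateway schema.

```python
import json
import urllib.request

API_BASE = "https://api.operator.example.invalid/qod/v1"   # hypothetical operator gateway
API_KEY = "demo-key"                                        # placeholder credential

def request_qod_session(device_ip: str, profile: str, duration_s: int) -> dict:
    """Ask the network for a temporary quality-on-demand boost for one device (illustrative)."""
    body = json.dumps({"device": {"ipv4Address": device_ip},
                       "qosProfile": profile,
                       "duration": duration_s}).encode()
    req = urllib.request.Request(f"{API_BASE}/sessions", data=body, method="POST",
                                 headers={"Authorization": f"Bearer {API_KEY}",
                                          "Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

def monthly_api_bill(calls: int, price_per_call: float = 0.002) -> float:
    """Toy usage-based pricing: the 'charge by the API call' model discussed above."""
    return calls * price_per_call

# Example call (would fail against the placeholder host; shown for shape only):
# session = request_qod_session("203.0.113.7", "QOS_E", duration_s=600)
print(f"1M calls/month at $0.002/call -> ${monthly_api_bill(1_000_000):,.2f}")
```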
Dave Duggal, EnterpriseWeb & Azhar Sayeed, Red Hat | MWC Barcelona 2023
>> theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (ambient music) >> Lisa: Hey everyone, welcome back to Barcelona, Spain. It's theCUBE Live at MWC 23. Lisa Martin with Dave Vellante. This is day two of four days of cube coverage but you know that, because you've already been watching yesterday and today. We're going to have a great conversation next with EnterpriseWeb and Red Hat. We've had great conversations the last day and a half about the Telco industry, the challenges, the opportunities. We're going to unpack that from this lens. Please welcome Dave Duggal, founder and CEO of EnterpriseWeb, and Azhar Sayeed is here, Senior Director Solution Architecture at Red Hat. >> Guys, it's great to have you on the program. >> Yes. >> Thank you, Lisa. >> Great being here with you. >> Dave, let's go ahead and start with you. Give the audience an overview of EnterpriseWeb. What kind of business is it? What's the business model? What do you guys do? >> Okay so, EnterpriseWeb is reinventing middleware, right? So the historic middleware was to build vertically integrated stacks, right? And those stacks are now becoming the rate limiters for interoperability, for the end-to-end solutions that everybody's looking for, right? Red Hat's talking about the unified platform. You guys are talking about Supercloud. EnterpriseWeb addresses that: we've built middleware based on serverless architecture, so lightweight, low latency, high performance middleware. And we're working with the world's biggest, we sell through channels and we work through partners like Red Hat, Intel, Fortinet, Keysight, Tech Mahindra. So working with some of the biggest players that have recognized the value of our innovation, to deliver transformation to the Telecom industry. >> So what are you guys doing together? Is this, is this an OpenShift play? >> Is it? >> Yeah. >> Yeah, so we've got two projects right here on the floor at MWC throughout the various partners, where EnterpriseWeb is actually providing an application layer, sorry, application middleware over Red Hat's OpenShift, and we're essentially generating operators, so Red Hat operators, so that all our vendors, sorry, vendors that we onboard into our catalog can be deployed easily through the OpenShift platform. And we allow those vendors to be flexibly composed into network services. So the real challenge for operators historically is that they have challenges onboarding the vendors. It takes a long time. Each one of them is a snowflake. They, you know, even though there's standards, they don't all observe or follow the same standards. So we make it easier using models, right? In a model-driven process, to onboard or streamline that onboarding process, compose functions into services, deploy those services seamlessly through Red Hat's OpenShift, and then manage the lifecycle, like the quality of service and the SLAs for those services. >> So Red Hat obviously has a pretty prominent Telco business, has for a while. Red Hat OpenStack actually is pretty popular within the Telco business. People thought, "Oh, OpenStack, that's dead." Actually, no, it's actually doing quite well. We see it all over the place where for whatever reason people want to build their own cloud.
And, and so, so what's happening in the industry because you have the traditional Telcos we heard in the keynotes that kind of typical narrative about, you know, we can't let the over the top vendors do this again. We're, we're going to be Apifi everything, we're going to monetize this time around, not just with connectivity but the, but the fact is they really don't have a developer community. >> Yes. >> Yet anyway. >> Then you have these disruptors over here that are saying "Yeah, we're going to enable ISVs." How do you see it? What's the landscape look like? Help us understand, you know, what the horses on the track are doing. >> Sure. I think what has happened, Dave, is that the conversation has moved a little bit from where they were just looking at IS infrastructure service with virtual machines and OpenStack, as you mentioned, to how do we move up the value chain and look at different applications. And therein comes the rub, right? You have applications with different requirements, IT network that have various different requirements that are there. So as you start to build those cloud platform, as you start to modernize those set of applications, you then start to look at microservices and how you build them. You need the ability to orchestrate them. So some of those problem statements have moved from not just refactoring those applications, but actually now to how do you reliably deploy, manage in a multicloud multi cluster way. So this conversation around Supercloud or this conversation around multicloud is very >> You could say Supercloud. That's okay >> (Dave Duggal and Azhar laughs) >> It's absolutely very real though. The reason why it's very real is, if you look at transformations around Telco, there are two things that are happening. One, Telco IT, they're looking at partnerships with hybrid cloud, I mean with public cloud players to build a hybrid environment. They're also building their own Telco Cloud environment for their network functions. Now, in both of those spaces, they end up operating two to three different environments themselves. Now how do you create a level of abstraction across those? How do you manage that particular infrastructure? And then how do you orchestrate all of those different workloads? Those are the type of problems that they're actually beginning to solve. So they've moved on from really just putting that virtualizing their application, putting it on OpenStack to now really seriously looking at "How do I build a service?" "How do I leverage the catalog that's available both in my private and public and build an overall service process?" >> And by the way what you just described as hybrid cloud and multicloud is, you know Supercloud is what multicloud should have been. And what, what it originally became is "I run on this cloud and I run on this cloud" and "I run on this cloud and I have a hybrid." And, and Supercloud is meant to create a common experience across those clouds. >> Dave Duggal: Right? >> Thanks to, you know, Supercloud middleware. >> Yeah. >> Right? And, and so that's what you guys do. >> Yeah, exactly. Exactly. Dave, I mean, even the name EnterpriseWeb, you know we started from looking from the application layer down. If you look at it, the last 10 years we've looked from the infrastructure up, right? And now everybody's looking northbound saying "You know what, actually, if I look from the infrastructure up the only thing I'll ever build is silos, right?" 
And those silos get in the way of the interoperability and the agility the businesses want. So we take the perspective as high level abstractions, common tools, so that if I'm a CXO, I can look down on my environments, right? When I'm really not, I honestly, if I'm an, if I'm a CEO I don't really care or CXO, I don't really care so much about my infrastructure to be honest. I care about my applications and their behavior. I care about my SLAs and my quality of service, right? Those are the things I care about. So I really want an EnterpriseWeb, right? Something that helps me connect all my distributed applications all across all of the environments. So I can have one place a consistency layer that speaks a common language. We know that there's a lot of heterogeneity down all those layers and a lot of complexity down those layers. But the business doesn't care. They don't want to care, right? They want to actually take their applications deploy them where they're the most performant where they're getting the best cost, right? The lowest and maybe sustainability concerns, all those. They want to address those problems, meet their SLAs meet their quality service. And you know what, if it's running on Amazon, great. If it's running on Google Cloud platform, great. If it, you know, we're doing one project right here that we're demonstrating here is with with Amazon Tech Mahindra and OpenShift, where we took a disaggregated 5G core, right? So this is like sort of latest telecom, you know net networking software, right? We're deploying pulling elements of that network across core, across Amazon EKS, OpenShift on Red Hat ROSA, as well as just OpenShift for cloud. And we, through a single pane of deployment and management, we deployed the elements of the 5G core across them and then connected them in an end-to-end process. That's Telco Supercloud. >> Dave Vellante: So that's an O-RAN deployment. >> Yeah that's >> So, the big advantage of that, pardon me, Dave but the big advantage of that is the customer really doesn't care where the components are being served from for them. It's a 5G capability. It happens to sit in different locations. And that's, it's, it's about how do you abstract and how do you manage all those different workloads in a cohesive way? And that's exactly what EnterpriseWeb is bringing to the table. And what we do is we abstract the underlying infrastructure which is the cloud layer. So if, because AWS operating environment is different then private cloud operating environment then Azure environment, you have the networking is set up is different in each one of them. If there is a way you can abstract all of that and present it in a common operating model it becomes a lot easier than for anybody to be able to consume. >> And what a lot of customers tell me is the way they deal with multicloud complexity is they go with mono cloud, right? And so they'll lose out on some of the best services >> Absolutely >> If best of, so that's not >> that's not ideal, but at the end of the day, agree, developers don't want to muck with all the plumbing >> Dave Duggal: Yep. >> They want to write code. >> Azhar: Correct. >> So like I come back to are the traditional Telcos leaning in on a way that they're going to enable ISVs and developers to write on top of those platforms? Or are there sort of new entrance and disruptors? And I know, I know the answer is both >> Dave Duggal: Yep. 
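The multicloud placement Dave Duggal and Azhar Sayeed describe above, a disaggregated 5G core spread across Amazon EKS, Red Hat ROSA and on-prem OpenShift behind a single abstraction layer, can be pictured with a minimal sketch. Everything here is invented for illustration, the cluster names, the scoring weights and the classes themselves; it is not EnterpriseWeb's or Red Hat's actual API, only a hint of what "abstract the underlying infrastructure and present a common operating model" could look like in code.

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    """One deployment target behind the abstraction layer (hypothetical)."""
    name: str             # e.g. "eks-us-east-1", "rosa-prod", "onprem-openshift"
    latency_ms: float     # measured latency to the served region
    cost_index: float     # relative cost of running a workload here
    energy_index: float   # relative energy draw (lower is better)

@dataclass
class NetworkFunction:
    """A disaggregated 5G core element to be placed somewhere."""
    name: str             # e.g. "amf", "smf", "upf"
    optimize_for: str     # "latency", "cost" or "energy"

def score(cluster: Cluster, nf: NetworkFunction) -> float:
    # Toy scoring: lower is better, weighted by what the element cares about.
    w_latency, w_cost, w_energy = {
        "latency": (0.7, 0.2, 0.1),
        "cost":    (0.2, 0.7, 0.1),
        "energy":  (0.1, 0.2, 0.7),
    }[nf.optimize_for]
    return (w_latency * cluster.latency_ms
            + w_cost * cluster.cost_index
            + w_energy * cluster.energy_index)

def place(nfs, clusters):
    """Map each network function to the cluster that best matches its intent."""
    return {nf.name: min(clusters, key=lambda c: score(c, nf)).name for nf in nfs}

if __name__ == "__main__":
    clusters = [
        Cluster("eks-us-east-1", latency_ms=28, cost_index=1.0, energy_index=0.8),
        Cluster("rosa-prod", latency_ms=22, cost_index=1.2, energy_index=0.9),
        Cluster("onprem-openshift", latency_ms=6, cost_index=1.5, energy_index=1.1),
    ]
    core = [
        NetworkFunction("upf", "latency"),  # user plane wants to sit near the RAN
        NetworkFunction("amf", "cost"),
        NetworkFunction("smf", "energy"),
    ]
    print(place(core, clusters))
```

The point of the sketch is the shape, not the numbers: the caller states what each element cares about, and one placement layer decides which of the heterogeneous clusters it lands on, which is the "single pane of deployment and management" idea in the conversation.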
>> but I feel as though the Telcos still haven't, traditional Telcos haven't tuned in to that developer affinity, but you guys sell to them. >> What, what are you seeing? >> Yeah, so >> What we have seen is that Telcos fall into several categories there. If you look at the most mature ones, you know, they are very eager to move up the value chain. There are some smaller, very nimble ones that are actually doing something really interesting. For example, they've provided sandbox environments to developers to say, "Go develop your applications in the sandbox environment." We'll use that to build a net service with you. I can give you some interesting examples across the globe where that is happening, right? In AsiaPac, particularly in Australia, the ANZ region, there are a couple of providers who have done this, in a very interesting way. But the challenge to them, why it's not completely open or public yet, is primarily because they haven't figured out how to exactly monetize that. And that's the reason why. So in the absence of that, what will happen is they have to rely on the ISV ecosystem to be able to build those capabilities, which they can then bring on as part of the catalog. But in Latin America, I was talking to one of the providers and they said, "Well look, we have a public cloud, we have our own public cloud, right?" What we want to do is use that to offer localized services, not just bring everything in from the top. >> But, but we heard from Ericsson's CEO they're basically going to monetize it by what I call "gouge", the developers >> (Azhar laughs) >> access to the network telemetry as opposed to saying, "Hey, here's an open platform, develop on top of it and it will maybe create something like an app store and we'll take a piece of the action." >> So ours, >> to me is a better model. >> Yeah. So that's perfect. Our second project that we're showing here is with Intel, right? So Intel came to us because we have a reputation for doing advanced automation solutions. They gave us carte blanche in their labs. So this is Intel Network Builders; they said pick your partners. And we went with Red Hat, Fortinet, Keysight, this company KX doing AI/ML. But to address your DevX point, here Intel explicitly wants to get closer to the developers by exposing their APIs, open APIs over their infrastructure. Just like Red Hat has APIs, right? And so they can expose them northbound to developers so developers can leverage and tune their applications, right? But the challenge there is what Intel is doing at the low level network infrastructure, right? Is fundamentally complex, right? What you want is an abstraction layer where, and this gets to your point, Dave, where you just said like, "The developers just want to get their job done," or really they want to focus on the business logic and accelerate that service delivery, right? So the idea here is, in EnterpriseWeb, they can literally declaratively compose their services, express their intent. "I want this to run optimized for low latency. I want this to run optimized for energy consumption." Right? And that's all they say, right? That's a very high level statement. And then the runtime translates it between all the elements that are participating in that service to realize the developer's intent, right? No hands, right? Zero touch, right? So that's now a movement in telecom. So you're right, it's taking a while because these are pretty fundamental shifts, right?
But it's intent based networking, right? So it's almost two parts, right? One is you have to have the open APIs, right? So that the infrastructure has to expose its capabilities. Then you need abstractions over the top that make it simple for developers to take, you know, make use of them. >> See, one of the demonstrations we are doing is around AIOps. And I've had literally here on this floor, two conversations around what I call network as a platform. Although it sounds like a cliche term, that's exactly what Dave was describing in terms of exposing APIs from the infrastructure and utilizing them. So once you get that data, then now you can do analytics and do machine learning to be able to build models and figure out how you can orchestrate better, how you can monetize better, how you can utilize better, right? So all of those things become important. It's not just about internal optimization, but it's also about how do you expose it to the third-party ecosystem to translate that into better delivery mechanisms or IoT capability and so on. >> But if they're going to charge me for every API call in the network I'm going to go broke (team laughs) >> And I'm going to get really pissed. I mean, I feel like, I'm just running down, Oracle. IBM tried it. Oracle, okay, they got Java, but they don't have developer jobs. VMware, okay? They got Aria. EMC used to have a thing called code. IBM had to buy Red Hat to get to the developer community. (Lisa laughs) >> So I feel like the telcos don't today have those developer shops. So, so they have to partner. [Azhar] Yes. >> With guys like you and then be more open and let a zillion flowers bloom, or else they're going to get disrupted in a big way, and it's going to be a repeat of the over the top, in a different model that I can't predict. >> Yeah. >> Absolutely true. I mean, look, they cannot be in the connectivity business. Telcos cannot be just in the connectivity business. It's, I think so, you know, >> Dave Vellante: You had a fry a frozen hand (Dave Duggal laughs) >> off that, you know. >> Well, you know, think about, they almost have to go become over the top on themselves, right? That's what the cloud guys are doing, right? >> Yeah. >> They're riding over their backbone, by creating a high level abstraction, they in turn abstract away the infrastructure underneath them, right? And that's really the end game >> Right? >> Dave Vellante: Yeah. >> Is because now, >> they're over the top, it's their network, it's their infrastructure, right? They don't want to become bit pipes. >> Yep. >> Now you, they can take OpenShift, run that in any cloud. >> Yep. >> Right? >> You can run that in hybrid cloud, EnterpriseWeb can do the application layer configuration and management. And together we're running, you know, OSI layers one through seven, east to west, north to south. We're running across the RAN, the core and the transport. And that is telco supercloud, my friend. >> Yeah. Well, >> (Dave Duggal laughs) >> I'm dominating the conversation 'cause I love talking supercloud. >> I knew you would. >> So speaking of superpowers, when you're in customer or prospective customer conversations with providers, and obviously they're in this transformative state right now, how, what do you describe as the superpower between Red Hat and EnterpriseWeb in terms of really helping these Telcos transform?
But at the end of the day, the connectivity's there, the end user gets what they want, which is, I want this to work wherever I am. >> Yeah, yeah. That's a great question, Lisa. So I think the way you could look at it is most software has evolved to be specialized, right? So in Telcos it's no different, right? We have this in the enterprise, right? All these specialized stacks, all these components that they wire together. And you think of Telco as sort of a superset of enterprise problems, right? They have all those problems, like, magnified manyfold, right? And so you have specialized, let's say, orchestrators and other tools for every Telco domain, for every Telco layer. Now you have a zoo of orchestrators, right? None of them were designed to work together, right? They all speak a specific language, let's say quote unquote, for doing a specific purpose. But everything that's interesting in the 21st century is across layers and across domains, right? Siloed, static applications, those are dead, right? Nobody's doing those anymore. Even developers don't do those; developers are doing composition today. Nobody wants to hear about 6 million lines of code, right? They want to hear, "How did you take these five things and bring 'em together for productive use?" >> Lisa: Right. How did you deliver faster for my enterprise? How did you save me money? How did you create business value? And that's what we're doing together. >> I mean, just to add on to Dave, I was talking to one of the providers, they have more than 30,000 nodes in their infrastructure. When I say node, it's your servers running, you know, Kubernetes, running OpenStack, running different components. If you try managing that as one single entity, if you will, not possible. You got to fragment, you got to segment in some way. Now the question is, if you are not exposing that particular infrastructure and the appropriate KPIs and appropriate things, you will not be able to efficiently utilize that across the board. So you need almost a construct that creates like a manager of managers, a hierarchical structure, which would allow you to be more intelligent in terms of how you place those, how you manage that. And so when you ask the question about what's the secret sauce between the two, well this is exactly where EnterpriseWeb brings in that capability to analyze information, be more intelligent about it. And what we do is provide an abstraction of the cloud layer so that they can, you know, then do the right job in terms of making sure that it's appropriate and it's consistent. >> Consistency is key. Guys, thank you so much. It's been a pleasure really digging through EnterpriseWeb. >> Thank you. >> What you're doing >> with Red Hat. How you're helping the organization transform and Supercloud, we can't forget Supercloud. (Dave Vellante laughs) >> Fight Supercloud. Guys, thank you so much for your time. >> Thank you so much, Lisa. >> Thank you. >> Thank you guys. >> Very nice. >> Lisa: We really appreciate it. >> For our guests and for Dave Vellante, I'm Lisa Martin. You're watching theCUBE, the leader in live tech coverage, coming to you live from MWC 23. We'll be back after a short break.
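Before moving on, the intent-driven composition described in this segment ("I want this to run optimized for low latency... and the runtime translates it") can be made a little more concrete. The sketch below is a guess at the shape of such a resolver, not EnterpriseWeb's implementation: the profile names, the settings and the component list are all invented.

```python
# Minimal sketch of intent resolution: one declarative intent is expanded into
# concrete per-component settings. Profiles and fields are illustrative only.
PROFILES = {
    "low-latency": {"placement": "edge",     "cpu_governor": "performance",
                    "replicas": 3, "allow_sleep_states": False},
    "low-energy":  {"placement": "regional", "cpu_governor": "powersave",
                    "replicas": 1, "allow_sleep_states": True},
}

def resolve(service_intent, components):
    """Expand a high-level intent into a config for every component in the service."""
    try:
        profile = PROFILES[service_intent]
    except KeyError:
        raise ValueError(f"unknown intent: {service_intent!r}") from None
    return {component: dict(profile) for component in components}

if __name__ == "__main__":
    config = resolve("low-latency", ["upf", "amf", "smf"])
    for component, settings in config.items():
        print(component, settings)
```

In a real system each component type would get its own translation of the intent (a user plane function and a control plane function do not scale the same way), but the zero-touch idea is the same: the developer states the goal once and the runtime works out the per-element settings.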
Udayan Mukherjee, Intel & Manish Singh, Dell Techhnologies | MWC Barcelona 2023
(soft corporate jingle) >> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies. Creating technologies that drive human progress. (upbeat jingle intro) >> Welcome back to Barcelona. We're here live at the Fira. (laughs) Just amazing day two of MWC23. It's packed today. It was packed yesterday. It's even more packed today. All the news is flowing. Check out siliconangle.com. John Furrier is in the studio in Palo Alto breaking all the news. And, we are here live. Really excited to have Udayan Mukherjee, who's the Senior Fellow and Chief Architect of wireless product at Network and Edge for Intel. And, Manish Singh is back. He's the CTO of Telecom Systems Business at Dell Technologies. Welcome. >> Thank you. >> Thank you >> We're going to talk about greening the network. I wonder, Udayan, if you could just set up why that's so important. I mean, it's obvious that it's an important thing, great for the environment, but why is it extra important in Telco? >> Yeah, thank you. Actually, I'll tell you, this morning I had a discussion with an operator. The first thing he said, that the electricity consumption is more expensive nowadays than the total real estate that he's spending money on. So, it's like that is the number one thing, that if you can change that, bring that power consumption down. And, if you talk about sustainability, look what is happening in Europe, what's happening in all the electricity areas. That's the critical element that we need to address. Whether we are defining chips, platforms, storage systems, that's the number one mantra right now. You know, reduce the power. Electricity consumption, because it's a sustainable planet that we are living in. >> So, you got CapEx and OpEx. We're talking about the big piece of OpEx now being power consumption? >> Power consumption. >> That's the point. Okay, so in my experience, servers are the big culprit for power consumption, which is powered by core semiconductors and microprocessors. So, what's the strategy to reduce the power consumption? You're probably not going to reduce the bill overall. You maybe just can keep pace, but from a technical standpoint, how do you attack that? >> Yeah, there are multiple defined ways of addressing it. Obviously the process technology, that micro (indistinct) itself is evolving to make it more low-power systems. But, even within the silicon, the server that we develop, if you look in a CPU, there are a lot of power states. So, if you have a 32-core platform, as an example, every core you can vary the frequency and the C-states, power states. So, if you look into any traffic, whether it's a radio access network, packet core, at any given time the load is not peak. So, your power consumption, the actual power we are drawing from the wall, it also needs to vary with that. So, if you look into this, there's a huge savings. If you go to the Intel booth or Ericsson booth or anyone, you will see right now every possible, the packet core, radio access network, everything in the network, they're talking about their energy consumption, how they're lowering this. These states, as we call it power states, C-state, P-state, they've been built into Intel chips for a long time. The cloud providers are taking advantage of it. But Telcos, even two generations before, they used to actually switch it off in the BIOS. I say no, we need peak. Now, that thing is changing. Now, it's all like, how do I take advantage of the built-in technologies? >> I remember the enterprise virtualization, Manish, was a big play.
I remember PG&E used to give rebates to customers that would install virtualized software, VMware. >> And SSDs. >> Yeah. And SSDs, you know, yes. Because, the spinning disc was, but, nowhere near with a server consumption. So, how virtualized is the telco network? And then, what I'm saying is there other things, other knobs, you can of course turn. So, what's your perspective on this as a server player? >> Yeah, absolutely. Let me just back up a little bit and start at the big picture to share what Udayan said. Here, day two, every conversation I've had yesterday and today morning with every operator, every CTO, they're coming in and first topic they're talking about is energy. And, the reason is, A, it's the right thing to do, sustainability, but, it's also becoming a P&L issue. And, the reason it's becoming a P&L issue is because we are in this energy inflationary environment where the energy costs are constantly going up. So, it's becoming really important for the service providers to really drive more efficiency onto their networks, onto their infrastructure. Number one. Two, then to your question on what all knobs need to be turned on, and what are the knobs? So, Udayan talked about within the intel, silicon, the C-states, P-states and all these capabilities that are being brought up, absolutely important. But again, if we take a macro view of it. First of all, there are opportunities to do infrastructure audit. What's on, why is it on, does it need to be on right now? Number two, there are opportunities to do infrastructure upgrade. And, what I mean by that is as you go from previous generation servers to next generation servers, better cooling, better performance. And through all of that you start to gain power usage efficiency inside a data center. And, you take that out more into the networks you start to achieve same outcomes on the network site. Think about from a cooling perspective, air cooling but for that matter, even liquid cooling, especially inside the data centers. All opportunities around PUE, because PUE, power usage efficiency and improvement on PUE is an opportunity. But, I'll take it even further. Workloads that are coming onto it, core, RAN, these workloads based on the dynamic traffic. Look, if you look at the traffic inside a network, it's not constant, it's varied. As the traffic patterns change, can you reduce the amount of infrastructure you're using? I.e. reduce the amount of power that you're using and when the traffic loads are going up. So, the workloads themselves need to become more smarter about that. And last, but not the least. From an orchestration layer if you think about it, where you are placing these workloads, and depending on what's available, you can start to again, drive better energy outcomes. And, not to forget acceleration. Where you need acceleration, can you have the right hardware infrastructures delivering the right kind of accelerations to again, improve those energy efficiency outcomes. So, it's a complex problem. But, there are a lot of levers, lot of tools that are in place that the service providers, the technology builders like us, are building the infrastructure, and then the workload providers all come together to really solve this problem. >> Yeah, Udayan, Manish mentioned this idea of moving from one generation to a new generation and gaining benefits. Out there on the street, if you will. Most of the time it's an N plus 2 migration. 
It's not just moving from last generation to this next generation, but it's really a generation ago. So, those significant changes in the dynamics around power density and cooling are meaningful? You talk about where performance should be? We start talking about the edge. It's hard to have a full-blown raised data center floor edge everywhere. Do these advances fundamentally change the kinds of things that you can do at the base of a tower? >> Yeah, absolutely. Manish talked about that, the dynamic nature of the workload. So, we are using a lot of this AIML to actually predict. Like for example, your multiple force in a systems. So, why is the 32 core as a system, why is all running? So, your traffic profile in the night times. So, you are in the office areas, in the night has gone home and nowadays everybody's working from remote anyway. So, why is this thing a full blown, spending the TDP, the total power and extreme powers. You bring it down, different power states, C-states. We talked about it. Deeper C-states or P-states, you bring the frequency down. So, lot of those automation, even at the base of the tower. Lot of our deployment right now, we are doing a whole bunch of massive MIMO deployment. Virtual RAN in Verizon network. All actually cell-site deployment. Those eight centers are very close to the cell-site. And, they're doing aggressive power management. So, you don't have to go to a huge data centers, even there's a small rack of systems, four to five, 10 systems, you can do aggressive power management. And, you built it up that way. >> Okay. >> If I may just build on what Udayan said. I mean if you look at the radio access network, right? And, let's start at the bottom of the tower itself. The infrastructure that's going in there, especially with Open RAN, if you think about it, there are opportunities now to do a centralized RAN where you could do more BBU pooling. And, with that, not only on a given tower but across a given given coverage area, depending on what the traffics are, you can again get the infrastructure to become more efficient in terms of what traffic, what needs are, and really start to benefit. The pooling gains which is obviously going to give you benefit on the CapEx side, but from an energy standpoint going to give you benefits on the OpEx side of things. So that's important. The second thing I will say is we cannot forget, especially on the radio access side of things, that it's not just the bottom of the tower what's happening there. What's happening on the top of the tower especially with the radio, that's super important. And, that goes into how do you drive better PA efficiency, how do you drive better DPD in there? This is where again, applying AI machine learning there is a significant amount of opportunity there to improve the PA performance itself. But then, not only that, looking at traffic patterns. Can you do sleep modes, micro sleep modes to deep sleep modes. Turning down the cells itself, depending on the traffic patterns. So, these are all areas that are now becoming more and more important. And, clearly with our ecosystem of partners we are continuing to work on these. >> So we hear from the operators, it's an OpEx issue. It's hitting the P&L. They're in search of PUE of one. And, they've historically been wasteful, they go full throttle. And now, you're saying with intelligence you can optimize that consumption. So, where does the intelligence live? Is it in the rig. Where is it all throughout the network? Is it in the silicon? 
Maybe you could paint a picture as to where those smarts exist. >> I can start. It's across the stack. It starts, we talked about the C-states, P-states. If you want to take advantage of that, that intelligence is in the workload, which has to understand when can I really start to clock things down or turn off the cores. If you really look at it from a traffic pattern perspective, you start to really look at a RIC level where you can manage power. And, we are working with the ecosystem partners who are looking at applying machine learning on that to see what can we really start to turn on, turn off, throttle things down, depending on what the, so yes, it's across the stack. And lastly, again, I'll go back to, cannot forget orchestration, where you again have the ability to move some of these workloads and look at where your workload placements are happening, depending on what the infrastructure is and what the traffic needs are at that point in time. So it's, again, there's no silver bullet. It has to be looked at across the stack. >> And, this is where actually, if I may, the last two years a sea change has happened. People used to say, okay, there are C-states and P-states, there's silicon, every core. The OS, the operating system, has a governor built in. We rely on that. So, that used to be the way. Now that applications are getting smarter, if you look at a radio access network or the packet core, on the control plane signaling application, they're more aware of what underlying silicon power states and sleep states are available. So, every time they find some of these areas where there's not enough traffic, they immediately go through a transition. So, the workload has become more intelligent. The RIC applications we talked about, every possible RIC application right now, rApps and xApps, most of them are on energy efficiency. How are they using it? So, I think a lot more has happened even in the last two years. >> Can I just say one more thing there, right? >> Yeah. >> We cannot forget the infrastructure as well, right? I mean, that's the most important thing. That's where the energy is really getting drawn in. And, constant improvement on the infrastructure. And, I'll give you some data points, right? If you really look at the power at servers, right? From 2013 to 2023, like a decade, 85% energy intensity improvement, right? So, these gains are coming from performance with better cooling, better technology applications. So, that's super critical, that's important. And, also to just give you another data point. Apart from the infrastructure, what CaaS layers we are running and how much CPU and compute requirements are there, that's also important. So, looking at it from a CaaS perspective, are we optimizing the required infrastructure blocks for radio access versus core? And again, really taking that back to energy efficiency outcomes. So, some of the work we've been doing with Wind River and Red Hat and some of our ecosystem partners around that, for radio access network versus core, really again, optimizing for those different use cases, and the outcomes of those start to come in from an energy utilization perspective. >> So, 85% improvement in power consumption. Of course you're doing, I don't know, 2, 300% more work, right? So, let's say, and I'm just sort of spitballing numbers but, let's say that historically power on the P&L has been, I don't know, single digits, maybe 10%. Now, it's popping up much higher. >> Udayan: Huge >> Right? >> I mean, I don't know what the number is.
Is it over 20% in some cases or is it, do you have a sense of that? Or let's say it is. The objective I presume is you're probably not going to lower the power bill overall, but you're going to be able to lower the percent of cost on the OpEx as you grow, right? I mean, we're talking about 5G networks. So much more data >> Capacity increasing. >> Yeah, and so, am I right that the carriers, the best they can hope for is to sort of stay even on that percentage or maybe somewhat lower that percentage? Or, do you think they can actually cut the bill? What's the goal? What are they trying to do? >> The goal is to cut the bill. >> It is! >> And the way you get started to cut the bill is, as I said, first of all on the radio side. Start to see where the improvements are and look, there's not a whole lot there to be done. I mean, the PAs are as efficient as they can be, but as I said, there are things in DPD and all that still can be improved. But then, sleep modes and all, yes, there are efficiencies in there. But, I'll give you one important, another interesting data point. We did work with ACG Research on our 16G platform, the PowerEdge servers that we have recently launched based on Intel's Sapphire Rapids. And, if you look at the study there: 30% TCO reduction, 10% in CapEx gains, 30% in OpEx gains from moving away from these legacy monolithic architectures to cloud native architectures. And, a large part of that OpEx gain really starts to come from energy, to the point of 800 metric tonnes of carbon reduction, and if you really translate that, to around 160 homes' electricity use per year, right? So yes, I mean the opportunity there is to reduce the bill. >> Wow, that's a big, big goal, guys. We got to run. But, thank you for informing the audience on the importance and how you get there. So, appreciate that. >> One thing that bears mentioning really quickly before we wrap, a lot of these things we're talking about are happening in remote locations. >> Oh, back to that point of the distributed nature of telecom. >> Yes, we talked about a BBU being at the base of a tower that could be up on a mountain somewhere. >> No, you made the point. You can't just say, oh, hey, we're going to go find ambient air or going to go... >> They don't necessarily... >> Go next to a waterfall. >> We don't necessarily have the greatest hydro tower. >> All right, we got to go. Thanks, you guys. Alright, keep it right there. Wall to wall coverage, day two of theCUBE's coverage of MWC 23. Stay right there, we'll be right back. (corporate outro jingle)
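For readers curious what the C-state and P-state knobs described in this segment look like on a plain Linux server, here is a minimal sketch that switches the standard cpufreq governor based on measured CPU utilization. It is only an illustration of the idea: production RAN and packet core stacks drive this from inside the workload and through RIC applications rather than an external loop, the thresholds below are invented, and writing the governor files requires root.

```python
import glob
import time

GOVERNOR_GLOB = "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"

def cpu_busy_total():
    """Return (busy, total) jiffies aggregated over all CPUs from /proc/stat."""
    with open("/proc/stat") as f:
        fields = [int(x) for x in f.readline().split()[1:]]
    idle = fields[3] + fields[4]          # idle + iowait columns
    total = sum(fields)
    return total - idle, total

def utilization(interval=1.0):
    """Sample /proc/stat twice and return the busy fraction over the interval."""
    b1, t1 = cpu_busy_total()
    time.sleep(interval)
    b2, t2 = cpu_busy_total()
    return (b2 - b1) / max(t2 - t1, 1)

def set_governor(governor):
    """Apply the chosen cpufreq governor to every core (requires root)."""
    for path in glob.glob(GOVERNOR_GLOB):
        with open(path, "w") as f:
            f.write(governor)

if __name__ == "__main__":
    LOW, HIGH = 0.25, 0.70   # illustrative thresholds, not tuned values
    while True:
        busy = utilization()
        if busy < LOW:
            set_governor("powersave")     # let cores drop into deeper power states off-peak
        elif busy > HIGH:
            set_governor("performance")   # hold higher frequencies through the busy hour
        print(f"cpu busy {busy:.0%}")
        time.sleep(10)
```

The same pattern, measure the traffic and then pick a power policy, is what the machine-learning-driven rApps and xApps mentioned above do with far more context (cell load forecasts, radio sleep modes, workload placement), and it is also why the savings depend so heavily on how far off peak the network actually runs.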
Tibor Fabry Asztalos, Dell Technologies & Gautam Bhagra, Dell Technologies | MWC Barcelona 2023
>> Announcer: "theCUBE's" live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Good evening, everyone. Live from Barcelona, Spain, it's "theCUBE". We are at Mobile World, MWC, excuse me, '23. New name this year. I'm Lisa Martin with Dave Vellante. Dave, we have had some great conversations. This is only day one of four days of coverage from "theCUBE" but one of the things that we've been talking about is disaggregation. You wrote about it in your breaking analysis. We've been talking about it. Today is a big thing that's happening. We're going to be talking about that next. >> Yeah, open ecosystems require integration. Integration requires certification. And so, you got to have labs. We're going to talk about that and what value that brings to the community. >> Right. Please welcome Tibor Fabry-Asztalos, senior vice president of telecom systems and product engineering at Dell. >> Hi. >> And back to "theCUBE" after a couple of hours, Gautam Bhagra, vice president of partnerships at Dell. Guys, great to have you here. >> I love to be here. Thank you. >> Great to be here. >> So, day one, I'm sure lots of conversations, lots of meetings, lots of jet lag that we're all trying to get over. Talk about, Gautam, we'll start with you. Talk about the disaggregation era. What is it intended to support? What is it intended to enable? >> Yeah, so I mean, I think to be honest with you, Lisa, we spoke about this earlier also, like the whole vision with the disaggregation is to make sure our telco providers can take the benefits of having the innovation that comes along with it, right? So currently, we all know they're tied into like locked systems, which kind of constricts them in going after this whole innovative space. So, our hope is by working with our operators and our partners, we can help make that disaggregation journey a lot easier and work on some of these challenges, and make it easier for the telcos to innovate and consolidate going forward. So, we're working very closely and we talked about the community this morning. We're working very closely with Tibor and his team from an engineering perspective to help build those solutions with our partners and we're excited about the announcements we made this morning. >> When you hear challenges from this ecosystem, can you stack rank 'em? What are you hearing? Kind of what's top of mind? And so, the top three, if you would. >> Some of the challenges are just in moving from a closed system to an open system, making sure that there's acceptance of that, seeing what the value proposition is for an open system, and then for the carriers to see the path going from a closed system to an open system. Of course, at the end, people realize the value at the end and the speed of innovation, that you're going to get all the new technologies and new features, functionality you get in an open system. But then the challenge comes with it: how you actually integrate those, then validate them, and how you deploy them. So in a sense, that's the opportunity and also some of the challenge along the way. And that's where, as Gautam said, that's where we are also looking at playing the key role with the OTEL lab, the Open Telecom Ecosystem Lab, where we take these pieces of the open ecosystem, combine them, validate them, and provide the pipeline to the customer. Pre-integration and then full integration into the production network.
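The onboard, validate, integrate flow Tibor just outlined can be pictured with a toy example. The component fields, the checks and the output format below are all assumptions invented for illustration; they are not Dell's actual OTEL or lab tooling, just a sketch of what a pre-integration gate that emits a simple bill of materials might look like.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Component:
    """One vendor element being onboarded (fields are hypothetical)."""
    name: str
    vendor: str
    version: str
    interfaces: tuple   # e.g. ("o-ran-o1", "o-ran-f1")

def validate(component, required_interfaces):
    """Return the list of problems found for this component (empty means it passes)."""
    problems = []
    if not component.version:
        problems.append("missing version")
    missing = set(required_interfaces) - set(component.interfaces)
    if missing:
        problems.append("missing interfaces: " + ", ".join(sorted(missing)))
    return problems

def pre_integrate(components, required_interfaces):
    """Validate each component and emit a bill of materials for the combined solution."""
    report = {"bom": [asdict(c) for c in components], "issues": {}}
    for c in components:
        problems = validate(c, required_interfaces)
        if problems:
            report["issues"][c.name] = problems
    report["status"] = "ready-for-lab" if not report["issues"] else "needs-rework"
    return report

if __name__ == "__main__":
    stack = [
        Component("du", "vendor-a", "2.1.0", ("o-ran-o1", "o-ran-f1")),
        Component("cu", "vendor-b", "", ("o-ran-o1",)),
    ]
    print(json.dumps(pre_integrate(stack, required_interfaces=["o-ran-o1"]), indent=2))
```

The value of a lab like the one described here is that checks of this kind, and far deeper ones such as interoperability runs, performance, security and supply chain attestation, happen once, before the combination ever reaches a carrier's production pipeline.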
>> Those challenges, I presume, vary whether you're talking to a greenfield network operator versus somebody who's got a 40, 50 year history, a hundred-year history in the business, right? I mean migration is a big issue for them, right? Whereas the greenfield, we heard from DISH earlier, they want to drive innovation so they might be willing to sacrifice some other areas. So, is that a fair summarization and what are you hearing? >> [Tibor and Gautam] Yeah. >> Absolutely it is. I mean, that's where you see that DISH being kind of a leader in the space, as they were deploying in greenfield, they defined what the open ecosystem should look like, defined all the components of it, how you integrate them, validate them, and they were able to, well, go through it and deploy it. To your point, for an open, closed systems, as how you actually start transforming the existing network into the open one, that's going to go to a different process, right? You need to figure out how these new open systems can interrupt and work together with existing networks. So, that's one likely some of those carriers will start in an isolated area and grow from there. Deploy an open system in a rural area, for example, and then build from there. >> So, what a bank would do is they say, "Okay, we're going to write in our own abstraction layer." >> Gautam: Yeah. >> Right? "Using microservices, we're going to connect to the cloud. And we're going to, you know, put maybe some lower risk applications in the cloud first and then we're going to create our own cloud." Is there a similar dynamic here? >> Yeah, I mean, so I think you're spot on, right? Like, I think one of the things that we are seeing with the telco operators that we've spoken to is they're very risk averse. >> Yep. >> Right, they have very strong SLA requirements. They cannot go down even for a second. So, what that basically means is the innovation aspect is constrained by the risks that they perceive on any changes that you want to make on the architecture. So, the question that comes up is how do we make it easier for them to not worry about the bare minimum requirements of making sure the network's running and working while thinking about the new innovative technologies and solutions you want to build on the start. So, back to your bank example, nine years ago, no one in a bank even was thinking about like applications that will run on the cloud. Like for them, it was like a side project. They'll try and test something, see if it works, and then they'll think about cloud in the future, right? But now, core applications on banks are actually being built on public cloud. I think we see the same happening with the telco operators as well. Right now, they're understanding the move from a closed ecosystem to an open ecosystem. They understand the value proposition. On the core side, it's already happening a lot. And I think they are slowly moving there and that's where I think Tibor and team have been doing a great job working with our customers to make the transition happen. >> But there are so many permutations. >> Right. >> And integration points. How is Dell addressing that across the ecosystem? >> So, to give you an example, we talked about OTEL, which is our brand new, kind of 13,000 square feet lab that we kind of inaugurated last year based in Round Rock, Texas. >> Dave: Open Telecom. >> Dave and Tibor: Ecosystem Lab. >> Correct, great. 
And so, as part of that, that's a physical lab but more importantly, that's kind of a community where partners, customers come together to actually, and collaborate and work on these solutions. And as part of this, we also develop what we call the SIP, or Solution Integration Platform, to enable exactly what you just said. Making sure that we have a platform that actually can take all these various components, validate them individually, combine them, and then provide a DevOps and GitOps model, how you actually combine them, provide the BOM or SBOM, and then push that to pre-production and deployments for our customers. So, that's part of the challenge as we talked earlier. And that's how Dell and we are looking at actually enabling this basically, the validation of this disaggregated wall. >> Oh. >> Sorry, I just wanted to- >> Go ahead. >> just going to add one more point, right? So, when we look at the partners that we are working with as well in the OTEL and there are three ways we are working with them. At the bare minimum, we want to make sure that solution will run on the Dell infrastructure and the hardware, right? So, we have the self-certification process. We had a lot of good uptake on it and we are seeing a lot more come in. In fact, I had a check-in with "theCUBE" this morning in our side and it's more than a hundred plus partners already interested in going through that. Awesome. Then we have other places where we work on with partners to build reference architectures together, right? So, we want some sort of validated solution that will work together that we can take to the market. And then we also have engineered solutions that we are building with partners like the infrastructure block offering that we have taken where it's all pre-packaged, pre-built by Dell, working very closely with our partners. So, the telcos don't have to worry about deployment, integration, and everything else that comes along. >> And I presume the security supply chain is part of that- >> Yes. >> bill of materials- >> Absolutely. >> you just described. >> Yeah. >> Exactly. >> And that would include all those levels, the engineered systems, the reference architectures as well? And how do you decide like candidates, we can't do it all, right? So, it's the big markets get the engineered system, is that right? How do you adjudicate there? >> Yeah, so I mean, I think there are a couple of angles to look at it, right? I think the first and foremost is where we see the biggest demand is coming from the customers in terms of the stack they already have and where they have the pain points. >> Dave: Okay. >> Right, so this is why we are working with Red Hat and Wind River, as an example, because they are in most of the deployments that we are aware of with the customers and where we see an opportunity for Dell to partner with these partners. I think we are seeing a lot of new players also coming up the stack. And as they come up the stack and we find opportunities to co-build and co-innovate, absolutely we'll be building joint solutions with them as well. >> Where are you on, from a partnership perspective, on the strategic vision? You mentioned a number of things that have already been accomplished, quite a few. But from your journey perspective on that strategy, where are you? >> Yeah, so it's a really good question. I think we really want to be the partner of choice for all technology and services company within the telecom space. We're looking to drive the transformation in the network area, right? 
So, that's the vision that we have in the telecom system business from a partnership side. We have created some really good strategic partnerships with key providers, with independent software vendors, the network equipment providers. We're having some really good, strategic conversations with them. You've heard some of the announcement come out today, the work we are doing with Nokia, with Samsung, the Red Hat announcement, the Wind River, and so on and so forth. And there's a lot more in the pipeline. But more importantly, we want to grow the impact of the ecosystem. So, that's why we are launching the partner community today as well to make that happen. >> How does the lab work? Who has access to it? Can I self-certify? If I can self-certify, how do you make sure that I'm following the rules, all of the stuff- >> Sure. >> that you would- >> Absolutely. >> expect. >> So yes, you can self-certify, that's Gautam just mentioned. We already had quite a few ISVs go through that self-certification. And then there's also, there's reference architecture that's being done and other engineered solutions that we talked about earlier. And the lab is set up in a way that when needed, test lines can be isolated. So, only certain set of partners have access to it. So, it's made up in a way that enables collaborations. At the same times, it kind of enables a certain set of customers and partners working together without having challenges of having a completely open system. >> Okay, but so, if I want to do something with you guys and let's say, I am a candidate for an engineered system, so how does it work? Somebody's got to buy the equipment, right? He's got to ship it, right? There's a lot of Dell equipment involved. >> Tibor: That's correct. >> There's other third-party CapEx software, et cetera. So, you fund that, the partners fund that, it's a hybrid funding model, how does that all get done? >> So today, for obviously, we work closely with those partners. The engineered solutions we've developed so far, we've been funding it largely and as you said, is Dell infrastructure plus the cast layers and the cloud players we work with. So, we actually put those in place. We funded them, of course, with participation from them. And that's being done through those labs. >> Okay, great. So, you guys are providing that benefit to the ecosystem. Writing checks, bringing engineering talent to the table. >> Gautam: Yeah. >> Okay. >> And at the same time, I mean, it's a partnership at the end of the day, right? So, depending on the kind of partnership we are. So, if you're an ISV, it's fairly simple. Come into our labs. You don't have to worry about the infrastructure. >> Sure. >> Run it all in our labs and you're good. If you're a hardware vendor or a NEP, network equipment provider, that's where it gets interesting where they need to send us stuff, we need to send them stuff. And usually, like Tibor mentioned, it's a joint collaboration. We all put in our chips on the table and we work together. >> So, when you're having conversations with prospective partners, obviously different types of partners, Gautam, that you just talked about, what's in it for them? What's the value proposition? What does this community- >> Gautam: Yeah. >> give them from a competitive advantage standpoint? >> Yeah, so I mean there are, so the way I think about it, right? There are three things that Dell is bringing to the table. 
The first one is our experience and expertise in doing this transformation within the enterprise space and the learnings we have from there that we're bringing to telco now, right? So, Dell's been working with enterprises for many, many years. We are one of the big providers there. We all know what transformation the enterprise went through. >> Tibor: Telco transformation, IT transformation. >> Exactly. And that's the experience we have, which we're bringing to telco. The second one is our investment, both from a go-to-market side, the way we are working with our sales and marketing, and so on and so forth, as well as the engineering side. And finally, I think, and this for me is the best one, is that Dell is a very partner-centric organization. >> Lisa: Yes. >> Our strategy is built around partnerships. So, that's the other piece that we bring to the table. >> Where are the labs? Oh, go ahead. >> Just one more note on that: we are talking about the engineered solutions, and there's also the supply chain, because that's basically an appliance and then that goes to Dell's supply chain, which is best in class. >> Dave: And where are the labs? How many are there? >> So Round Rock, Texas is the biggest one, at 13,000 square feet. We also have an extension to it. We just announced opening one in Cork for the EMEA market to make sure that we can cover any regulatory challenges, but also, basically, any test lines that we need to cover that have latency challenges. That's why we want to make sure that we have labs in other areas as well. >> And the go-to-market, is it an overlay organization, a dedicated organization? >> Yeah, so it's a bit of both, as you know. But yeah, in the telecom business unit, we have a dedicated sales organization as well as an alliance organization working very closely with product and engineering to take it to market. >> Given the strength and the breadth of the partner program and the community, and this is only day one of MWC, is there anything that you've heard today that excites you about where telecom is going and where Dell and its ecosystem is going and really burgeoning? >> Oh, I've had I don't know how many meetings since 6:00 AM this morning. So, it's been an amazing event and we're just having so many great conversations with partners, our customers. And I think a lot of today is all about figuring out what our strategy and our vision is, where each side is going and what the overlap is. I think the end result's going to be follow-up conversations with a lot of these partners that we are working with or will be working with soon. And then thinking about, do we build engineered solutions together? Do we go the validated route? Like, we're going to figure that out. But I mean, for me, this is like the perfect place to come and share your vision and strategy and understand what we are trying to solve for. >> To me, what's been interesting is that all the interactions and discussions are about how to get to an open ecosystem. That's great to see, that the focus is on how to make it work versus still questioning it, and I think that's pretty good. >> Well, you guys launched this business I think during the pandemic, right? >> Yes. >> Yeah, that's right. >> So I mean, you could do a lot over Zoom, but as we were talking about earlier, having the face-to-face interaction, there's no replacement for it. The 6:00 AM meetings versus the 30-minute Zoom calls and your body language, I mean, you learn so much that you can take away from these events. >> Absolutely. 
Seeing someone in 3D is so different and it's good to build that relationship and rapport as well with the folks. >> I agree. >> It is. There's so much value in the hallway conversations that you can't have over Zoom. So, I guess last question for you as we head into day two, what are some of the things that we can be on the lookout for from Dell and its ecosystem? >> Hmm. >> Interesting. (Tibor chuckling) >> I mean, all our announcements are out. I think what you can look for is for us to really be leading in this segment, taking a leadership role, continuously looking at how we can really enable the open ecosystem, how we can provide more value there, and how we can lead in this space. >> How you can lead in this space. >> Yeah, I mean for me, I mean, day two is like, I have a lot more meetings in day two than day one, so I don't know if it's like people flying in today or what, but it's amazing to just meet the partners and customers. >> So, that theme of velocity for you is going to keep going. >> Oh, it's not stopping. (Lisa laughing) That's for sure. We are excited about it. >> Well, thank you for carving out some time to talk with us on "theCUBE" about the partner program, the open ecosystem and the commitment to growing that and enabling partners to really differentiate their services with Dell. We appreciate it. >> We appreciate it as well. >> Thank you very much. >> Thank you for having us. >> Thanks. >> Our pleasure. For our guests and for Dave Vellante, I'm Lisa Martin. You're watching "theCUBE" live in Barcelona, Spain at MWC '23. Day one of our coverage. Be right back with our final guest of the day so stick around. (upbeat music continues)
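The Solution Integration Platform workflow described in this segment, validating components individually, combining them, producing a BOM or SBOM, and then pushing the result toward pre-production, can be illustrated with a short sketch. The component names, versions, and checks below are hypothetical stand-ins, not Dell's actual SIP implementation; the point is only to show how per-component validation, combined validation, bill-of-materials generation, and promotion gate on one another.

```python
# Illustrative sketch of a SIP-style validation and promotion flow.
# Component names, versions, and checks are hypothetical; a real platform
# would plug into DevOps/GitOps tooling and partner test suites.
from dataclasses import dataclass, field, asdict
import hashlib
import json


@dataclass
class Component:
    name: str       # e.g. a CaaS layer, a RAN workload, a server profile
    vendor: str
    version: str

    def validate(self) -> bool:
        # Stand-in for per-component tests (the self-certification tier).
        return bool(self.name and self.version)


@dataclass
class Solution:
    name: str
    components: list[Component] = field(default_factory=list)

    def validate_combination(self) -> bool:
        # Stand-in for interoperability tests on the combined stack.
        return all(c.validate() for c in self.components)

    def software_bom(self) -> dict:
        # Minimal SBOM-like manifest plus a digest that deployment
        # tooling can verify downstream.
        entries = [asdict(c) for c in self.components]
        digest = hashlib.sha256(
            json.dumps(entries, sort_keys=True).encode()
        ).hexdigest()
        return {"solution": self.name, "components": entries, "digest": digest}


def promote(solution: Solution) -> str:
    # Gate promotion to pre-production on the combined validation result.
    if not solution.validate_combination():
        return "blocked: component or interoperability validation failed"
    bom = solution.software_bom()
    return f"promoted to pre-production with SBOM digest {bom['digest'][:12]}"


if __name__ == "__main__":
    stack = Solution(
        name="example-open-ran-stack",
        components=[
            Component("caas-layer", "ExampleCaaSVendor", "1.2.0"),
            Component("ran-du-workload", "ExampleNEP", "4.7.1"),
            Component("telecom-server-profile", "Dell", "xr-series"),
        ],
    )
    print(promote(stack))
```

In practice these gates would be driven by the GitOps tooling mentioned in the conversation rather than a single script; the sketch only shows the ordering of the gates.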
Manish Singh, Dell Technologies & Doug Wolff, Dell Technologies | MWC Barcelona 2023
>> Announcer: theCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (upbeat music) >> Welcome to the Fira in Barcelona, everybody. This is theCUBE's coverage of MWC 23, day one of that coverage. We have four days of wall-to-wall action going on, the place is going crazy. I'm here with Dave Nicholson, Lisa Martin is also in the house. Today's ecosystem day, and we're really excited to have Manish Singh who's the CTO of the Telecom Systems Business unit at Dell Technologies. He's joined by Doug Wolf who's the head of strategy for the Telecom Systems Business unit at Dell. Gents, welcome. What a show. I mean really the first major MWC or used to be Mobile World Congress since you guys have launched your telecom business, you kind of did that sort of in the Covid transition, but really exciting, obviously a huge, huge venue to match the huge market. So Manish, how did you guys get into this? What did you see? What was the overall thinking to get Dell into this business? >> Manish: Yeah, well, I mean just to start with you know, if you look at the telecom ecosystem today, the service providers in particular, they are looking for network transformation, driving more disaggregation into their network so that they can get better utilization of the infrastructure, but then also get more agility, more cloud native characteristics onto their, for their networks in particular. And then further on, it's important for them to really start to accelerate the pace of innovation on the networks itself, to start more supply chain diversity, that's one of the challenges that they've been having. And so there've been all these market forces that have been really getting these service providers to really start to transform the way they have built the infrastructure in the past, which was legacy monolithic architectures to more cloud native disaggregated. And from a Dell perspective, you know, that really gives us the permission to play, to really, given all the expertise on the work we have done in the IT with all the IT transformations to leverage all that expertise and bring that to the service providers and really help them in accelerating their network transformation. So that's where the journey started. We've been obviously ever since then working on expanding the product portfolio on our compute platforms to bring Teleco great compute platforms with more capabilities than we can talk about that. But then working with partners and building the ecosystem to again create this disaggregated and open ecosystem that will be more cloud native and really meet the objective that the service providers are after. >> Dave Vellante: Great, thank you. So, Doug the strategy obviously is to attack this market, as Manish said, from an open standpoint, that's sort of new territory. It's like a little bit like the wild, wild west. So maybe you could double click on what Manish was saying from a, from a strategy standpoint, yes, the Telecos need to be more flexible, they need to be more open, but they also need this reliability piece. So talk about that from a strategy standpoint of what you guys saw. >> Doug: Yeah, absolutely. As Manish mentioned, you know, Dell getting into open systems isn't something new. You know, Dell has been kind of playing in that world for years and years, but the opportunity in Telecom that came was opening of the RAN, the core network, the edge, all of these with 5G really created a wide opening for us. 
So we started developing products and solutions, you know, built our first Telecom grade servers for open RAN over the last year, we'll talk about those at the show. But you know, as, as Manish mentioned, an open ecosystem is new to Telecom. I've been in the Telecom business along with Manish for, you know, 25 plus years and this is a new thing that they're embarking on. So it started with virtualization about five, six years ago, and now moving to cloud native architectures on the core, suddenly there's this need to have multiple parties partner really well, share specifications, and put that together for an operator to consume. And I think that's just the start of really where all the challenges are and the opportunities that we see. >> Where are we in this transition cycle? When the average consumer hears 5G, it feels like it's been around for a long time because it was hyped beforehand. >> Yeah. >> If you're talking about moving to an open infrastructure model from a proprietary closed model, when is the opportunity for Dell to become part of that? Is it, are there specific sites that have already transitioned to 5G, therefore they've either made the decision to be open or not? Or are there places where the 5G transition has taken place, and they might then make a transition to open RAN with 5G? Where, where are we in that cycle? What does the opportunity look like? >> I'll kind of take it from the typology of the operator, and I'm sure Manish will build on this, but if I look back on the core, it started to get virtualized, you know, back around 2015-16 with some of the lead operators like AT&T et cetera. So Dell has been partnering with those operators for some years. So it really, it's happening on the core, but it's moving with 5G to more of a cloud-like architecture, number one. And number two, they're going beyond just virtualizing the network. You know, they previously had used OpenStack and most of them are migrating to more of a cloud native architecture that Manish mentioned. And that is a bit different in terms of there's more software vendors in that ecosystem because the software is disaggregated also. So Dell's been playing in the core for a number of years, but we brought out new solutions we've announced at the show for the core. And the parts that are really starting that transition of maybe where the core was back in 2015 is on the RAN and on the edge in particular. >> Because NFV kind of predated the ascendancy of cloud. >> Exactly, yeah. >> Right, so it really didn't have the impact that people had hoped. And there's some, when you look back, 'cause it's not the same wine, new bottle as the open systems movement, there are a lot of similarities but you know, you mentioned cloud, and cloud native, you really didn't have, back in the nineties, true engineered systems. You didn't really have AI, you know, to speak of at the sort of volume of the data that we have. So Manish, from a CTO's perspective, how are you attacking some of those differences in bringing that to market? >> Manish: Yeah, I mean, I think you touched on some very important points there. So first of all, to Doug's point, a lot of this transformation started in the core, right? And as the technology evolution progressed, the opportunities opened up. It has now come into the edge and the radio access network as well, in particular with open RAN. 
And so when we talk about the disaggregation of the infrastructure from the software itself and an open ecosystem, this now starts to create the opportunity to accelerate innovation. And I really want to pick up on the point that you'd said on AI, for example. AI and machine learning bring a whole new set of capabilities and opportunities for these service providers to drive better optimization, better performance, better sustainability and energy efficiency on their infrastructure, on and on and on. But to really tap into these technologies, they really need to open that up to the third-party implementations and solutions that are coming up. And again, the end objective remains to accelerate that innovation. Now that said, all these things need to be brought together, right? And delivered and deployed in the network without any degradation in the KPIs, and actually improving the performance on different vectors, right? So this is what the current state of play is. And with this disaggregation, I'm definitely a believer in all these new technologies, including AI and machine learning, and there's a whole host of problems that can be solved and attacked, and are actually getting attacked, by applying AI and machine learning onto these networks. >> Open obviously is good. Nobody's ever going to, you know, argue that open is a bad thing. It's like democracy is a good thing, right? At least amongst us. And so, but, the RAN, the open RAN, has to be as reliable and performant, right, as these closed networks. Or maybe not, maybe it doesn't have to be identical. It just has to be close enough in order for that tipping point to occur. Is that a fair summarization? What are you guys hearing from carriers in terms of their willingness to sort of put their toe in the water and, and what could we expect in terms of the maturity model of, of open RAN and adoption? >> Right, so I mean I think on, on performance, that's a tough one. I think the operators will demand performance and you've seen experiments, you've really seen more of the Greenfield operators kind of launch. >> Okay. >> Doug: Open RAN or vRAN type solutions. >> So they're going to disrupt. >> Doug: Yeah, they're going to disrupt. >> Yeah. >> Doug: And there's flexibility in an open RAN architecture also for 5G that they, that they're interested in, and I think the Brownfield operators are too, but let's say maybe the Greenfield operators jump first in terms of doing that from a mass deployment perspective. But I still think that it's going to be critical to meet very similar SLAs and end user performance. And, you know, I think that's where, you know, maturity of that model is what's required. I think Brownfield operators are conservative in terms of, you know, going with something they know, but the opportunities and the benefits of that architecture and building new flexible, potentially cost advantaged over time solutions, that's where the real interest is going forward. >> And new services that you can introduce much more quickly. You know, the interesting thing about Dell to me, you don't compete with the carriers, the public cloud vendors though, the carriers are concerned about them sort of doing an end run on them. So you provide a potential partnership for the carriers that's non-threatening, right? 'Cause you're, you're an arms dealer, you're selling hardware and software, right? But, but how do you see that? 
Because we heard in the keynote today, one of the Telecos, I think it was the chairman of Telefonica, said, you know, cloud guys can't do this alone. You know, they need, you know, this massive, you know, build out. And so, what do you think about that in terms of your relationship with the carriers not being threatening? I mean versus say potentially the cloud guys, who are also your partners, I understand, it's a really interesting dynamic, isn't it? >> Manish: Yeah, I mean I think, you know, I mean, the way I look at it, the carriers actually need someone like Dell who can really come in, bring in the right capabilities, the right infrastructure, but also bring the ecosystem together and deliver a performant solution that they can deploy and that they can trust, number one. Number two, to your point on cloud, I mean, from a Dell perspective, you know, we announced our Dell Telecom Multicloud Foundation and as part of that, last year in September, we announced what we call the Dell Telecom Infrastructure Blocks. The first one we announced with Wind River, and this is, think of it as the, you know, hardware and the CaaS layer all pre-integrated with a lot of automation around it, factory integrated, you know, delivered to customers in an integrated model with all the licenses, everything. And so it starts to solve the day zero, day one, day two integration, deployment and then lifecycle management for them. So to broaden the discussion, our view is it's a multicloud world, the future is multicloud where you can have different clouds which can be optimized for different workloads. So for example, while our work with Wind River initially was very focused on virtualization of the radio access network, we just announced our infrastructure block with Red Hat, which is very much targeted and optimized for core network and edge, right? So, you know, there are different workflows which will require different capabilities also. And so, you know, again, we are bringing those things to these service providers to again, bring those cloud characteristics and cloud native architecture for their network. >> And it's going to be hybrid, to your point. >> David N.: And you just hit on something, you said cloud characteristics. >> Yeah. >> If you look at this through the lens of kind of the general world of IT, sometimes when people hear the word cloud, they immediately leap to the idea that it's a hyperscale cloud provider. In this scenario we're talking about radio towers that have intelligence living on them and physically at the base. And so the cloud characteristics that you're delivering might be living physically in these remote locations all over the place, is that correct? >> Yeah, I mean that, that's true. That will definitely happen over time. But I think, I think we've seen the hyperscalers enter, you know, public cloud providers, enter at the edge and they're dabbling maybe with private, but I think the public RAN is another further challenge. I think that's maybe a little bit down the road for them. So I think that is a different characteristic that you're talking about, managing the macro RAN environment. >> Manish: If I may just add one more perspective on this cloud, and I mean, again, the hyperscale cloud, right? I mean that world's been great when you can centralize a lot of compute capability and you can then start to, you know, do workload aggregation and use the infrastructure more efficiently. 
When it comes to Telecom, it is inherently a distributed architecture where you have access, you talked about radio access, your port, and it is inherently distributed because it has to provide the coverage and capacity. And so, you know, it does require different kinds of capabilities when you're going out and about, and this is where I was talking about things like, you know, we just talked about it, we have been working on our bare metal orchestration, right? What we are bringing is a capability where you can actually have distributed infrastructure, you can deploy, you can actually manage, do lifecycle management, in a distributed multicloud form. So it does require, you know, a different set of capabilities that need to be enabled. >> Some, when talking about cloud, would argue that it's always been information technology, it always will be information technology, and especially as what we might refer to as public cloud or hyperscale cloud providers are delivering things essentially on premises. It's like, well, is that cloud? Because it feels like some of those players are going to be delivering physical infrastructure outside of their own data centers in order to address this. It seems the nature, the nature of the beast is that some of these things need to be distributed. So it seems perfectly situated for Dell. That's why you guys are both at Dell now and not working for other Telecom places, right? >> Exactly. Exactly, yes. >> It's definitely an exciting space. It's transformed, the networks are under transformation and I do think that Dell's very well positioned to, to really help the customers, the service providers in accelerating their transformation journey with an open ecosystem. >> Dave V.: You've got the brand, and the breadth, and the resources to actually attract an ecosystem. But I wonder if you could sort of take us through your strategy of ecosystem, the challenges that you've seen in developing that ecosystem and what the vision is, ultimately, what's the outcome going to be of that open ecosystem? >> Yeah, I can start. So maybe just to give you the big picture, right? I mean the big picture is disaggregation with performance, right, and TCO models for the service providers, right? And it starts at the infrastructure layer, builds on bringing these cloud capabilities, the CaaS layer, right? Bringing the right accelerators. All of this requires us to pull the ecosystem together. So to give you an example on the infrastructure: telecom grade servers like the XR8000 with Sapphire, the new Intel processors that we've just announced, and an extended array of servers. These are telecom grade, short depth, et cetera. You know, the telecom grade characteristics. Working with partners like Marvell for bringing in the accelerators in there, that's important to again, drive the performance and optimize for the TCO. Working then with partners like Wind River, Red Hat, et cetera, to bring in the CaaS capabilities, so you can start to see how this ecosystem starts to build up. And then very recently we announced our private 5G solution with AirSpan and Expeto on the core side. So bringing those workloads together. Similarly, we have an open RAN solution we announced with Fujitsu. So it's, it's open, it's disaggregated, but bringing all these together. 
And one of the last things I would say is, you know, to make all this happen and make all of this work, we've also been putting together our OTEL, our Open Telecom Ecosystem Lab, which is very much geared to give this open ecosystem a playground where they can come in and do all that heavy lifting, which is anyways required, to do the integration, optimization, and onboarding. So we put all these capabilities in place, but the end goal, the end vision again, is that cloud native disaggregated infrastructure that starts to innovate at the speed of software and scales at the speed of cloud. >> And this is different than the nineties. You didn't have something like OTEL back then, you know, you didn't have the developer ecosystem that you have today, because on top of everything that you just said, Manish, are new workloads and new applications that are going to be developed. Doug, anything you'd add to what Manish said? >> Doug: Yeah, I mean, as Manish said, I think adding to the infrastructure layers, which are, you know, critical for us to, to help integrate, right? Because we kind of took a vertical Teleco stack and we've disaggregated it, and it's gotten a little bit more complex. So our solutions, the Dell Technologies infrastructure blocks, and our lab infrastructure with OTEL, help put those pieces together. But without the software players in this, you know, that's what we really do, I think, in OTEL. And that's just starting to grow. So integrating with those software providers, that integration is something that the operators need. So we fill a gap there in terms of either providing engineered solutions they can readily build on or actually bringing in that software provider. And I think what you're going to see more from us going forward is just extending that ecosystem even further. More software players effectively. >> In thinking about O-RAN, is it possible to have the low latency, the high performance, the reliability capabilities that carriers are used to, and the flexibility? Or can you sort of prioritize one over the other from a go-to-market and rollout standpoint and optimize one, maybe get a foothold in the market? How do you see that balance? >> Manish: Oh, the answer is absolutely yes, you can have both. We are on that journey, we are on that journey. This is where all these things I was talking about come in, in terms of the right kind of accelerators, the right kind of capabilities on the infrastructure, obviously retargeting the software, there are certain changes, et cetera, that need to be done on the software itself to make it more cloud native. And then building all the surrounding capabilities around the CI/CD pipeline and all, where it's not just day zero or day one, you're doing the cloud-like lifecycle management of this infrastructure. But the answer to your point, yes, absolutely. It's possible, the technology is there, and the ecosystem is coming together, and that's the direction. Now, are there challenges? Absolutely there are challenges, but directionally that's the direction the industry is moving to. >> Dave V.: I guess my question, Manish, is do they have to go in lockstep? Because I would argue that the public cloud when it first came out wasn't nearly as functional as what I could get from my own data center in terms of recovery, you know, backup and recovery is a perfect example, and it took, you know, a decade plus to get there. But it was the flexibility, and the openness, and the developer affinity, the programmability, that attracted people. 
Do you see O-RAN following a similar path? Or does it, my question is, does it have to have that carrier class reliability today? >> David N.: Everything on day one, does it have to have everything on day one? >> Yeah, I mean, I would say, you know, like again, the Greenfield operators, I think, are willing to do a little bit more experimentation. I think the operators, Brownfield operators that have existing, you know, deployments, they're going to want to be closer. But I think there's room for innovation here. And clearly, you know, Manish came from, from Meta and we're, we've been very involved with TIP, we're very involved with the O-RAN alliance, and as Manish mentioned, with all those accelerators that we're working with on our infrastructure, that is a space that we're trying to help move the ball forward. So I think you're seeing deployments from mainstream operators, but it's maybe not, you know, a downtown New York deployment, they're more rural deployments. I think that's getting at, you know, kind of your question, there's maybe a little bit more flexibility there, they get to experiment with the technology and the flexibility, and then I think it will start to evolve. >> Dave V.: And that's where the disruption's going to come from, I think. >> David N.: Well, where was the first place you could get reliable 4K streaming of video content? It wasn't ABC, CBS, NBC. It was YouTube. >> Right. >> So is it possible that when you say Greenfield, a lot of those are going to be what we refer to as private 5G networks, where someone may set up a private 5G network that has more functions and capabilities than the public network? >> That's exactly where I was going, is that, you know, that that's why you're seeing us getting very active in 5G solutions that Manish mentioned with, you know, Expeto and AirSpan. There's more of those that we haven't publicly announced. So I think you'll be seeing more announcements from us, but that is really, you know, a new opportunity. And there's spectrum there also, right? I mean, there's public and private spectrum. We plan to work directly with the operators and do it in their spectrum when needed. But we also have solutions that will do it, you know, on non-public spectrum. >> So let's close out, oh go ahead. You have something to add there? >> I'm just going to add one more point to Doug's point, right? If you look at private 5G, the end customer is the enterprise, right? And they're, they're not a service provider. They're not a carrier. They're more used to deploying, you know, enterprise infrastructure, maintaining, managing that. So, you know, private 5G, especially with this open ecosystem and with all the open RAN capabilities, it naturally tends to, you know, lend itself very well to meet those requirements that the enterprise would have. >> And people should not think of private 5G as a sort of a replacement for wifi, right? It's to deal with those, you know, intense situations that can afford the additional cost, but absolutely require the reliability and the performance and, you know, a never-go-down type of scenario. Is that right? >> Doug: And low latency is usually the primary characteristic, you know, for things like Industry 4.0 manufacturing requirements, those are tough SLAs. They're just, they're different than the operator SLAs for coverage and, you know, cell performance. They're now, you know, five-nines type characteristics, but on a manufacturing floor. 
>> That's why we don't use wifi on theCUBE to broadcast, we need a hard line. >> Yeah, but why wouldn't it replace wifi over time? I mean, you know, I still have a home phone number that's hardwired to a line, but it goes to a voicemail. We don't even have a handset for it anymore, yeah. >> I think, well, unless the cost can come down, but I think that wifi is flexible, it's cheap. It's, it's kind of perfect for that. >> Manish: And it's good technology. >> Dave V.: And it works great. >> David N.: For now, for now. >> Dave V.: But you wouldn't want it in those situations, and you're arguing that maybe. >> I'm saying eventually, what, put a sim in a device, I don't know, you know, but why not? >> Yeah, I mean, you know, and Dell offers, you know, from our laptop, you know, our client side, we do offer wifi, we do offer 4G and 5G solutions. And I think those, you know, it's a volume and scale issue, I think, for the cost structure you're talking about. >> Manish: Come to our booth and see the connected laptop. >> Dave V.: Well let's, let's close on that. Why don't you guys talk a little bit about what you've got going on at the show, I did go by the booth, you got a whole big lineup of servers. You got some, you know, cool devices going on. So give us the rundown and you know, let's end with the takeaways here. >> The simple rundown: a broad range of new PowerEdge servers, a broad range addressing core, edge, RAN, optimized for those with all the different kinds of acceleration capabilities. You can see that, you can see infrastructure blocks. These are with Wind River, with Red Hat. You can see OTEL, the Open Telecom Ecosystem Lab, where all that playground work, the integration, the real work, the real sausage making is happening. And then you will see some interesting solutions in terms of co-creation that we are doing, right? So you, you will see all of that, and not to forget the connected laptops. >> Dave V.: Yeah, yeah, cool. >> Doug: Yeah and, we mentioned it before, but just to add on, I think, you know, for private 5G, you know, we've announced a few offers here at the show with partners. So with Expeto and AirSpan in particular, and I think, you know, I just want to emphasize the partnerships that we're doing. You know, we're doing some, you know, fundamental integration on infrastructure, bare metal and different options for the operators to get engineered systems. But building on that ecosystem is really, the move to cloud native is what Dell is trying to get in front of. And we're offering solutions and a much larger ecosystem to go after it. >> Dave V.: Great. Manish and Doug, thanks for coming on the program. It was great to have you, awesome discussion. >> Thank you for having us. >> Thanks for having us. >> All right, Dave Vellante for Dave Nicholson and Lisa Martin. We're seeing the disaggregation of the Teleco network into open ecosystems with integration from companies like Dell and others. Keep it right there for theCUBE's coverage of MWC 23. We'll be right back. (upbeat tech music)
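Manish's point about bare metal orchestration across an inherently distributed network, declaring what each site should run and then deploying, managing, and lifecycling it remotely, can be pictured as a declare-and-reconcile loop. The site names, desired-state fields, and actions in the sketch below are invented for illustration; this is not Dell's bare metal orchestrator, only a minimal sketch of the idea behind that class of tooling.

```python
# Toy reconciliation loop for distributed telecom sites (illustrative only).
# A real orchestrator would handle firmware, OS provisioning, CaaS rollout,
# and day-2 operations at scale; this only shows declare-and-reconcile.
from dataclasses import dataclass


@dataclass
class SiteState:
    firmware: str
    os_image: str
    workload: str   # e.g. "vran-du", "core-upf", "private-5g-core"


def reconcile(site: str, desired: SiteState, observed: SiteState) -> list[str]:
    """Return the ordered actions needed to converge one site."""
    actions = []
    if observed.firmware != desired.firmware:
        actions.append(f"{site}: update firmware -> {desired.firmware}")
    if observed.os_image != desired.os_image:
        actions.append(f"{site}: reprovision OS -> {desired.os_image}")
    if observed.workload != desired.workload:
        actions.append(f"{site}: redeploy workload -> {desired.workload}")
    return actions or [f"{site}: in sync"]


if __name__ == "__main__":
    desired = {
        "cell-site-001": SiteState("2.1", "os-2024.1", "vran-du"),
        "edge-site-017": SiteState("2.1", "os-2024.1", "private-5g-core"),
    }
    observed = {
        "cell-site-001": SiteState("2.0", "os-2024.1", "vran-du"),
        "edge-site-017": SiteState("2.1", "os-2023.4", "core-upf"),
    }
    for name in desired:
        for action in reconcile(name, desired[name], observed[name]):
            print(action)
```

The design point is simply that desired state is declared centrally while convergence happens per site, which is why the conversation stresses distributed, multicloud lifecycle management rather than data-center-style operations.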
Breaking Analysis: MWC 2023 goes beyond consumer & deep into enterprise tech
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR, this is Breaking Analysis with Dave Vellante. >> While never really meant to be a consumer tech event, the rapid ascendancy of smartphones sucked much of the air out of Mobile World Congress over the years, now MWC. And while the device manufacturers continue to have a major presence at the show, the maturity of intelligent devices, longer life cycles, and the disaggregation of the network stack, have put enterprise technologies front and center in the telco business. Semiconductor manufacturers, network equipment players, infrastructure companies, cloud vendors, software providers, and a spate of startups are eyeing the trillion dollar plus communications industry as one of the next big things to watch this decade. Hello, and welcome to this week's Wikibon CUBE Insights, powered by ETR. In this Breaking Analysis, we bring you part two of our ongoing coverage of MWC '23, with some new data on enterprise players specifically in large telco environments, a brief glimpse at some of the pre-announcement news and corresponding themes ahead of MWC, and some of the key announcement areas we'll be watching at the show on theCUBE. Now, last week we shared some ETR data that showed how traditional enterprise tech players were performing, specifically within the telecoms vertical. Here's a new look at that data from ETR, which isolates the same companies, but cuts the data for what ETR calls large telco. The N in this cut is 196, down from 288 last week when we included all company sizes in the dataset. Now remember the two dimensions here, on the y-axis is net score, or spending momentum, and on the x-axis is pervasiveness in the data set. The table insert in the upper left informs how the dots and companies are plotted, and that red dotted line, the horizontal line at 40%, that indicates a highly elevated net score. Now while the data are not dramatically different in terms of relative positioning, there are a couple of changes at the margin. So just going down the list and focusing on net score. Azure is comparable, but slightly lower in this sector, in large telco, than it was overall. Google Cloud comes in at number two, and basically swapped places with AWS, which drops slightly in large telco relative to overall telco. Snowflake is also slightly down by one percentage point, but maintains its position. Remember Snowflake, overall, its net score is much, much higher when measuring across all verticals. Snowflake comes down in telco, and relative to overall, a little bit down in large telco, but it's making some moves to attack this market that we'll talk about in a moment. Next are Red Hat OpenStack and Databricks. About the same in large telco as they were in overall telco. Then there's Dell, which has a big presence at MWC and is getting serious about driving 16G adoption, and new servers, and edge servers, and other partnerships. Cisco and Red Hat OpenShift basically swapped spots when moving from all telco to large telco, as Cisco drops and Red Hat bumps up a bit. And VMware dropped about four percentage points in large telco. Accenture moved up dramatically, about nine percentage points in large telco relative to all telco. HPE dropped a couple of percentage points. Oracle stayed about the same. And IBM surprisingly dropped by about five points. 
So look, I understand not a ton of change in terms of spending momentum in the large sector versus telco overall, but some deltas. The bottom line for enterprise players is one, they're just getting started in this new disruption journey that they're on as the stack disaggregates. Two, all these players have experience in delivering horizontal solutions, but are now working with partners and identifying big problems to be solved, and three, many of these companies are generally not the fastest moving firms relative to smaller disruptors. Now, cloud has been an exception in fairness. But the good news for the legacy infrastructure and IT companies is that the telco transformation and the 5G buildout is going to take years. So it's moving at a pace that is very favorable to many of these companies. Okay, so looking at just some of the pre-announcement highlights that have hit the wire this week, I want to give you a glimpse of the diversity of innovation that is occurring in the telecommunication space. You got semiconductor manufacturers, device makers, network equipment players, carriers, cloud vendors, enterprise tech companies, software companies, startups. Now we've included, you'll see in this list, we've included OpenRAN, that logo, because there's so much buzz around the topic and we're going to come back to that. But suffice it to say, there's no way we can cover all the announcements from the 2000 plus exhibitors at the show. So we're going to cherry pick here and make a few call outs. Hewlett Packard Enterprise announced an acquisition of an Italian private cellular network company called AthoNet. Zeus Kerravala wrote about it on SiliconANGLE if you want more details. Now interestingly, HPE has a partnership with Celona, which also does private 5G. But according to Zeus, Celona is more of an out-of-the-box solution, whereas AthoNet is designed for the core and requires more integration. And as you'll see in a moment, there's going to be a lot of talk at the show about private networks. There's going to be a lot of news there from other competitors, and we're going to be watching that closely. And while many are concerned about the P5G, private 5G, encroaching on wifi, Kerravala doesn't see it that way. Rather, he feels that these private networks are really designed for more industrial and, you know, mission critical environments, like factories, and warehouses that are run by robots, et cetera. 'Cause these can justify the increased expense of private networks. Whereas wifi remains a very low cost and flexible option for, you know, offices and homes. Now, over to Dell. Dell announced its intent to go hard after opening up the telco network with the announcement that in the second half of this year it's going to begin shipping its infrastructure blocks for Red Hat. Remember, it's kind of like the converged infrastructure for telco with a more open ecosystem and sort of more flexible, you know, more mature engineered system. Dell has also announced a range of PowerEdge servers for a variety of use cases. A big wide line bringing forth its 16G portfolio and aiming squarely at the telco space. Dell also announced, here we go, a private wireless offering with AirSpan and Expeto, and a solution with AthoNet, the company HPE announced it was purchasing. So I guess Dell and HPE are now partnering up in the private wireless space, and yes, hell is freezing over folks. We'll see where that relationship goes in the mid- to long-term. 
Dell also announced new lab and certification capabilities, which we said last week were going to be critical for the further adoption of open ecosystem technology. So props to Dell for, you know, putting real emphasis and investment in that. AWS also made a number of announcements in this space including private wireless solutions and associated managed services. AWS named Deutsche Telekom, Orange, T-Mobile, Telefonica, and some others as partners. And AWS announced a stepped-up partnership, specifically with T-Mobile, to bring AWS services to T-Mobile's network portfolio. Snowflake, back to Snowflake, announced its telecom data cloud. Remember, we showed the data earlier; Snowflake is not as strong in the telco sector, but they're continuing to move toward this go-to market alignment within key industries, realigning their go-to market by vertical. It also announced that AT&T, and a number of other partners, are collaborating to break down data silos specifically in telco. Look, essentially, this is Snowflake taking its core value prop to the telco vertical and forming key partnerships that resonate in the space. So think simplification, breaking down silos, data sharing, eventually data monetization. Samsung previewed its future capability to allow smartphones to access satellite services, something Apple has previously done. AMD, Intel, Marvell, Qualcomm, are all in the act, all the semiconductor players. Qualcomm for example, announced along with Telefonica and Ericsson, a 5G millimeter wave network that will be showcased in Spain at the event this coming week using the Qualcomm Snapdragon chipset platform, based on none other than Arm technology. Of course, Arm we said is going to dominate the edge, and is clearly doing so. It's got the volume advantage over, you know, traditional Intel, you know, x86 architectures. And it's no surprise that Microsoft is touting its OpenAI relationship. You're going to hear a lot of AI talk at this conference, as AI is now, you know, the topic of the moment. All right, we could go on and on and on. There's just so much going on at Mobile World Congress or MWC, that we just wanted to give you a glimpse of some of the highlights that we've been watching. Which brings us to the key topics and issues that we'll be exploring at MWC next week. We touched on some of this last week. A big topic of conversation will of course be, you know, 5G. Is it ever going to become real? Is it, is anybody ever going to make money at 5G? There's so much excitement and anticipation around 5G. It has not lived up to the hype, but that's because the rollout, as we've previously reported, is going to take years. And part of that rollout is going to rely on the disaggregation of the hardened telco stack, as we reported last week and in previous Breaking Analysis episodes. OpenRAN is a big component of that evolution. You know, as are RAN intelligent controllers, or RICs, which are essentially the brain of OpenRAN, if you will. Now as we build out 5G networks at massive scale and accommodate unprecedented volumes of data and apply compute-hungry AI to all this data, the issue of energy efficiency is going to be front and center. It has to be. Not only is it a, you know, hot political issue, the reality is that improving power efficiency is compulsory or the whole vision of telco's future is going to come crashing down. So chip manufacturers, equipment makers, cloud providers, everybody is going to be doubling down and clicking on this topic. Let's talk about AI. 
AI as we said, it is the hot topic right now, but it is happening not only in consumer, with things like ChatGPT. And think about the theme of this Breaking Analysis in the enterprise, AI in the enterprise cannot be ChatGPT. It cannot be error prone the way ChatGPT is. It has to be clean, reliable, governed, accurate. It's got to be ethical. It's got to be trusted. Okay, we're going to have Zeus Kerravala on the show next week and definitely want to get his take on private networks and how they're going to impact wifi. You know, will private networks cannibalize wifi? If not, why not? He wrote about this again on SiliconANGLE if you want more details, and we're going to unpack that on theCUBE this week. And finally, as always we'll be following the data flows to understand where and how telcos, cloud players, startups, software companies, disruptors, legacy companies, end customers, how are they going to make money from new data opportunities? 'Cause we often say in theCUBE, don't ever bet against data. All right, that's a wrap for today. Remember theCUBE is going to be on location at MWC 2023 next week. We got a great set. We're in the walkway in between halls four and five, right in Congress Square, stand CS-60. Look for us, we got a full schedule. If you got a great story or you have news, stop by. We're going to try to get you on the program. I'll be there with Lisa Martin, co-hosting, David Nicholson as well, and the entire CUBE crew, so don't forget to come by and see us. I want to thank Alex Myerson, who's on production and manages the podcast, and Ken Schiffman, as well, in our Boston studio. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at SiliconANGLE.com. He does some great editing. Thank you. All right, remember all these episodes they are available as podcasts wherever you listen. All you got to do is search Breaking Analysis podcasts. I publish each week on Wikibon.com and SiliconANGLE.com. All the video content is available on demand at theCUBE.net, or you can email me directly if you want to get in touch David.Vellante@SiliconANGLE.com or DM me @DVellante, or comment on our LinkedIn posts. And please do check out ETR.ai for the best survey data in the enterprise tech business. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching. We'll see you next week at Mobile World Congress '23, MWC '23, or next time on Breaking Analysis. (bright music)
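Two of the themes flagged in this episode, the RAN intelligent controller as the brain of OpenRAN and the pressure on energy efficiency, tend to meet in closed-loop optimization. The sketch below is a deliberately simplified, hypothetical control loop; the cell metrics, thresholds, and actions are invented for illustration, and a real RIC application would consume O-RAN interfaces, far richer telemetry, and operator SLA constraints.

```python
# Hypothetical closed-loop energy policy, loosely in the spirit of a RIC app.
# Metrics, thresholds, and actions are invented for illustration; a real
# implementation would work against O-RAN telemetry and enforce operator SLAs.
from dataclasses import dataclass


@dataclass
class CellMetrics:
    cell_id: str
    prb_utilization: float   # physical resource block utilization, 0.0-1.0
    connected_users: int


def energy_policy(m: CellMetrics, low_util: float = 0.05,
                  high_util: float = 0.7) -> str:
    """Pick a per-cell action from very coarse utilization thresholds."""
    if m.prb_utilization < low_util and m.connected_users == 0:
        return f"{m.cell_id}: enter deep sleep"
    if m.prb_utilization < low_util:
        return f"{m.cell_id}: switch off secondary carriers"
    if m.prb_utilization > high_util:
        return f"{m.cell_id}: keep all carriers active, consider load balancing"
    return f"{m.cell_id}: no change"


if __name__ == "__main__":
    snapshot = [
        CellMetrics("cell-a", 0.02, 0),
        CellMetrics("cell-b", 0.04, 3),
        CellMetrics("cell-c", 0.81, 240),
    ]
    for metrics in snapshot:
        print(energy_policy(metrics))
```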
Breaking Analysis: MWC 2023 highlights telco transformation & the future of business
>> From the Cube Studios in Palo Alto in Boston, bringing you data-driven insights from The Cube and ETR. This is "Breaking Analysis" with Dave Vellante. >> The world's leading telcos are trying to shed the stigma of being monopolies lacking innovation. Telcos have been great at operational efficiency and connectivity and living off of transmission, and the costs and expenses or revenue associated with that transmission. But in a world beyond telephone poles and basic wireless and mobile services, how will telcos modernize and become more agile and monetize new opportunities brought about by 5G and private wireless and a spate of new innovations and infrastructure, cloud data and apps? Hello, and welcome to this week's Wikibon CUBE Insights powered by ETR. In this breaking analysis and ahead of Mobile World Congress or now, MWC23, we explore the evolution of the telco business and how the industry is in many ways, mimicking transformations that took place decades ago in enterprise IT. We'll model some of the traditional enterprise vendors using ETR data and investigate how they're faring in the telecommunications sector, and we'll pose some of the key issues facing the industry this decade. First, let's take a look at what the GSMA has in store for MWC23. GSMA is the host of what used to be called Mobile World Congress. They've set the theme for this year's event as "Velocity" and they've rebranded MWC to reflect the fact that mobile technology is only one part of the story. MWC has become one of the world's premier events highlighting innovations not only in Telco, mobile and 5G, but the collision between cloud, infrastructure, apps, private networks, smart industries, machine intelligence, and AI, and more. MWC comprises an enormous ecosystem of service providers, technology companies, and firms from virtually every industry including sports and entertainment. And as well, GSMA, along with its venue partner at the Fira Barcelona, have placed a major emphasis on sustainability and public and private partnerships. Virtually every industry will be represented at the event because every industry is impacted by the trends and opportunities in this space. GSMA has said it expects 80,000 attendees at MWC this year, not quite back to 2019 levels, but trending in that direction. Of course, attendance from Chinese participants has historically been very high at the show, and obviously the continued travel issues from that region are affecting the overall attendance, but still very strong. And despite these concerns, Huawei, the giant Chinese technology company. has the largest physical presence of any exhibitor at the show. And finally, GSMA estimates that more than $300 million in economic benefit will result from the event which takes place at the end of February and early March. And The Cube will be back at MWC this year with a major presence thanks to our anchor sponsor, Dell Technologies and other supporters of our content program, including Enterprise Web, ArcaOS, VMware, Snowflake, Cisco, AWS, and others. And one of the areas we're interested in exploring is the evolution of the telco stack. It's a topic that's often talked about and one that we've observed taking place in the 1990s when the vertically integrated IBM mainframe monopoly gave way to a disintegrated and horizontal industry structure. And in many ways, the same thing is happening today in telecommunications, which is shown on the left-hand side of this diagram. 
Historically, telcos have relied on a hardened, integrated, and incredibly reliable, and secure set of hardware and software services that have been fully vetted and tested, and certified, and relied upon for decades. And at the top of that stack on the left are the crown jewels of the telco stack, the operational support systems and the business support systems. For the OSS, we're talking about things like network management, network operations, service delivery, quality of service, fulfillment assurance, and things like that. For the BSS systems, these refer to customer-facing elements of the stack, like revenue, order management, what products they sell, billing, and customer service. And what we're seeing is telcos have been really good at operational efficiency and making money off of transport and connectivity, but they've lacked the innovation in services and applications. They own the pipes and that works well, but others, be the over-the-top content companies, or private network providers and increasingly, cloud providers have been able to bypass the telcos, reach around them, if you will, and drive innovation. And so, the right-most diagram speaks to the need to disaggregate pieces of the stack. And while the similarities to the 1990s in enterprise IT are greater than the differences, there are things that are different. For example, the granularity of hardware infrastructure will not likely be as high where competition occurred back in the 90s at every layer of the value chain with very little infrastructure integration. That of course changed in the 2010s with converged infrastructure and hyper-converged and also software defined. So, that's one difference. And the advent of cloud, containers, microservices, and AI, none of that was really a major factor in the disintegration of legacy IT. And that probably means that disruptors can move even faster than did the likes of Intel and Microsoft, Oracle, Cisco, and the Seagates of the 1990s. As well, while many of the products and services will come from traditional enterprise IT names like Dell, HPE, Cisco, Red Hat, VMware, AWS, Microsoft, Google, et cetera, many of the names are going to be different and come from traditional network equipment providers. These are names like Ericsson and Huawei, and Nokia, and other names, like Wind River, and Rakuten, and Dish Networks. And there are enormous opportunities in data to help telecom companies and their competitors go beyond telemetry data into more advanced analytics and data monetization. There's also going to be an entirely new set of apps based on the workloads and use cases ranging from hospitals, sports arenas, race tracks, shipping ports, you name it. Virtually every vertical will participate in this transformation as the industry evolves its focus toward innovation, agility, and open ecosystems. Now remember, this is not a binary state. There are going to be greenfield companies disrupting the apple cart, but the incumbent telcos are going to have to continue to ensure newer systems work with their legacy infrastructure, in their OSS and BSS existing systems. And as we know, this is not going to be an overnight task. Integration is a difficult thing, transformations, migrations. So that's what makes this all so interesting because others can come in with Greenfield and potentially disrupt. There'll be interesting partnerships and ecosystems will form and coalitions will also form. 
Now, we mentioned that several traditional enterprise companies are or will be playing in this space. Now, ETR doesn't have a ton of data on specific telecom equipment and software providers, but it does have some interesting data that we cut for this breaking analysis. What we're showing here in this graphic is some of the names that we've followed over the years and how they're faring. Specifically, we did the cut within the telco sector. So the Y-axis here shows net score or spending velocity. And the horizontal axis, that shows the presence or pervasiveness in the data set. And that table insert in the upper left, that informs as to how the dots are plotted. You know, the two columns there, net score and the ends. And that red-dotted line, that horizontal line at 40%, that is an indicator of a highly elevated level. Anything above that, we consider quite outstanding. And what we'll do now is we'll comment on some of the cohorts and share with you how they're doing in telecommunications, and that sector, that vertical relative to their position overall in the data set. Let's start with the public cloud players. They're prominent in every industry. Telcos, telecommunications is no exception and it's quite an interesting cohort here. On the one hand, they can help telecommunication firms modernize and become more agile by eliminating the heavy lifting and you know, all the cloud, you know, value prop, data center costs, and the cloud benefits. At the same time, public cloud players are bringing their services to the edge, building out their own global networks and are a disruptive force to traditional telcos. All right, let's talk about Azure first. Their net score is basically identical to telco relative to its overall average. AWS's net score is higher in telco by just a few percentage points. Google Cloud platform is eight percentage points higher in telco with a 53% net score. So all three hyperscalers have an equal or stronger presence in telco than their average overall. Okay, let's look at the traditional enterprise hardware and software infrastructure cohort. Dell, Cisco, HPE, Red Hat, VMware, and Oracle. We've highlighted in this chart just as sort of indicators or proxies. Dell's net score's 10 percentage points higher in telco than its overall average. Interesting. Cisco's is a bit higher. HPE's is actually lower by about nine percentage points in the ETR survey, and VMware's is lower by about four percentage points. Now, Red Hat is really interesting. OpenStack, as we've previously reported is popular with telcos who want to build out their own private cloud. And the data shows that Red Hat OpenStack's net score is 15 percentage points higher in the telco sector than its overall average. OpenShift, on the other hand, has a net score that's four percentage points lower in telco than its overall average. So this to us talks to the pace of adoption of microservices and containers. You know, it's going to happen, but it's going to happen more slowly. Finally, Oracle's spending momentum is somewhat lower in the sector than its average, despite the firm having a decent telco business. IBM and Accenture, heavy services companies are both lower in this sector than their average. And real quickly, snowflake's net score is much lower by about 12 percentage points relative to its very high average net score of 62%. But we look for them to be a player in this space as telcos need to modernize their analytics stack and share data in a governed manner. 
Databricks' net score is also much lower than its average by about 13 points. And same, I would expect them to be a player as open architectures and cloud gains steam in telco. All right, let's close out now on what we're going to be talking about at MWC23 and some of the key issues that we'll be unpacking. We've talked about stack disaggregation in this breaking analysis, but the key here will be the pace at which it will reach the operational efficiency and reliability of closed stacks. Telcos, you know, in a large part, they're engineering heavy firms and much of their work takes place, kind of in the basement, in the dark. It's not really a big public hype machine, and they tend to move slowly and cautiously. While they understand the importance of agility, they're going to be careful because, you know, it's in their DNA. And so at the same time, if they don't move fast enough, they're going to get hurt and disrupted by competitors. So that's going to be a topic of conversation, and we'll be looking for proof points. And the other comment I'll make is around integration. Telcos because of their conservatism will benefit from better testing and those firms that can innovate on the testing front and have labs and certifications and innovate at that level, with an ecosystem are going to be in a better position. Because open sometimes means wild west. So the more players like Dell, HPE, Cisco, Red Hat, et cetera, that do that and align with their ecosystems and provide those resources, the faster adoption is going to go. So we'll be looking for, you know, who's actually doing that, Open RAN or Radio Access Networks. That fits in this discussion because O-RAN is an emerging network architecture. It essentially enables the use of open technologies from an ecosystem and over time, look at O-RAN is going to be open, but the questions, you know, a lot of questions remain as to when it will be able to deliver the operational efficiency of traditional RAN. Got some interesting dynamics going on. Rakuten is a company that's working hard on this problem, really focusing on operational efficiency. Then you got Dish Networks. They're also embracing O-RAN. They're coming at it more from service innovation. So that's something that we'll be monitoring and unpacking. We're going to look at cloud as a disruptor. On the one hand, cloud can help drive agility, as we said earlier and optionality, and innovation for incumbent telcos. But the flip side is going to also do the same for startups trying to disrupt and cloud attracts startups. While some of the telcos are actually embracing the cloud, many are being cautious. So that's going to be an interesting topic of discussion. And there's private wireless networks and 5G, and hyperlocal private networks, they're being deployed, you know, at the edge. This idea of open edge is also a really hot topic and this trend is going to accelerate. You know, the importance here is that the use cases are going to be widely varied. The needs of a hospital are going to be different than those of a sports venue are different from a remote drilling location, and energy or a concert venue. Things like real-time AI inference and data flows are going to bring new services and monetization opportunities. And many firms are going to be bypassing traditional telecommunications networks to build these out. Satellites as well, we're going to see, you know, in this decade, you're going to have, you're going to look down at Google Earth and you're going to see real-time. 
You know, today you see snapshots and so, lots of innovations going in that space. So how is this going to disrupt industries and traditional industry structures? Now, as always, we'll be looking at data angles, right? 'Cause it's in The Cube's DNA to follow the data and what opportunities and risks data brings. The Cube is going to be on location at MWC23 at the end of the month. We got a great set. We're in the walkway between halls four and five, right in Congress Square, it's booth CS60. So we'll have a full, they're called stand CS60. We have a full schedule. I'm going to be there with Lisa Martin, Dave Nicholson and the entire Cube crew, so don't forget to stop by. All right, that's a wrap. I want to thank Alex Myerson, who's on production and manages the podcast, Ken Schiffman as well. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at Silicon Angle, does some great stuff for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, just search "Breaking Analysis" podcasts. I publish each week on wikibon.com and siliconangle.com. And all the video content is available on demand at thecube.net. You can email me directly at david.vellante@siliconangle.com. You can DM me at dvellante or comment on my LinkedIn post. Please do check out etr.ai for the best survey data in the enterprise tech business. This is Dave Vellante for The Cube Insights powered by ETR. Thanks for watching and we'll see you at Mobile World Congress, and/or next time on "Breaking Analysis." (bright music) (bright music fades)
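Throughout this segment, vendors are compared by how their net score in the telco cut differs from their overall average. For readers who want to see the mechanics, the sketch below works through that kind of sector-versus-overall comparison. It assumes a simplified reading of net score, roughly the percentage of respondents spending more minus the percentage spending less, and uses made-up responses; ETR's actual methodology is proprietary and more nuanced than this.

```python
from collections import defaultdict

# Hypothetical survey responses: (vendor, sector, spend direction).
RESPONSES = [
    ("VendorA", "telco", "up"), ("VendorA", "telco", "up"), ("VendorA", "telco", "down"),
    ("VendorA", "retail", "up"), ("VendorA", "retail", "flat"), ("VendorA", "retail", "down"),
    ("VendorB", "telco", "up"), ("VendorB", "telco", "flat"), ("VendorB", "telco", "down"),
    ("VendorB", "finance", "up"), ("VendorB", "finance", "up"), ("VendorB", "finance", "flat"),
]

def net_score(rows):
    """Simplified net score: % of respondents spending more minus % spending less."""
    if not rows:
        return 0.0
    up = sum(1 for _, _, direction in rows if direction == "up")
    down = sum(1 for _, _, direction in rows if direction == "down")
    return 100.0 * (up - down) / len(rows)

def scores_by_vendor(rows, sector=None):
    """Group responses by vendor, optionally restricted to one sector cut."""
    groups = defaultdict(list)
    for vendor, sec, direction in rows:
        if sector is None or sec == sector:
            groups[vendor].append((vendor, sec, direction))
    return {vendor: net_score(group) for vendor, group in groups.items()}

overall = scores_by_vendor(RESPONSES)
telco = scores_by_vendor(RESPONSES, sector="telco")

for vendor in sorted(overall):
    delta = telco.get(vendor, 0.0) - overall[vendor]
    print(f"{vendor}: overall {overall[vendor]:+.1f}, telco {telco.get(vendor, 0.0):+.1f}, "
          f"delta {delta:+.1f} points")
```

The takeaway is simply that statements like "Dell's net score is 10 percentage points higher in telco" are deltas between a sector cut and the full sample, computed along these lines.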
SUMMARY :
Ahead of MWC23, Dave Vellante examines how telcos are disaggregating their traditionally closed stacks, uses ETR survey data to gauge how traditional enterprise vendors are faring in the telecommunications sector, and previews the key issues theCUBE will be unpacking at the event in Barcelona.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Alex Myerson | PERSON | 0.99+ |
Lisa Martin | PERSON | 0.99+ |
Dave Nicholson | PERSON | 0.99+ |
IBM | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Ericsson | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Huawei | ORGANIZATION | 0.99+ |
Ken Schiffman | PERSON | 0.99+ |
Kristin Martin | PERSON | 0.99+ |
Cheryl Knight | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
Rakuten | ORGANIZATION | 0.99+ |
Rob Hof | PERSON | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
GSMA | ORGANIZATION | 0.99+ |
Accenture | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
2019 | DATE | 0.99+ |
53% | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Wind River | ORGANIZATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
more than $300 million | QUANTITY | 0.99+ |
40% | QUANTITY | 0.99+ |
Telcos | ORGANIZATION | 0.99+ |
Congress Square | LOCATION | 0.99+ |
First | QUANTITY | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Telco | ORGANIZATION | 0.99+ |
Dish Networks | ORGANIZATION | 0.99+ |
telco | ORGANIZATION | 0.99+ |
2010s | DATE | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
MWC23 | EVENT | 0.99+ |
1990s | DATE | 0.99+ |
62% | QUANTITY | 0.99+ |
Mobile World Congress | EVENT | 0.99+ |
two columns | QUANTITY | 0.99+ |
each week | QUANTITY | 0.99+ |
Seagates | ORGANIZATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
today | DATE | 0.99+ |
early March | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
thecube.net | OTHER | 0.99+ |
MWC | EVENT | 0.99+ |
ETR | ORGANIZATION | 0.98+ |
this year | DATE | 0.98+ |
Cube Studios | ORGANIZATION | 0.98+ |
one part | QUANTITY | 0.98+ |
Chinese | OTHER | 0.98+ |
Boston | LOCATION | 0.98+ |
decades ago | DATE | 0.97+ |
three | QUANTITY | 0.97+ |
90s | DATE | 0.97+ |
about 13 points | QUANTITY | 0.97+ |
Daren Brabham & Erik Bradley | What the Spending Data Tells us About Supercloud
(gentle synth music) (music ends) >> Welcome back to Supercloud 2, an open industry collaboration between technologists, consultants, analysts, and of course practitioners to help shape the future of cloud. At this event, one of the key areas we're exploring is the intersection of cloud and data. And how building value on top of hyperscale clouds and across clouds is evolving, a concept of course we call "Supercloud". And we're pleased to welcome our friends from Enterprise Technology research, Erik Bradley and Darren Brabham. Guys, thanks for joining us, great to see you. we love to bring the data into these conversations. >> Thank you for having us, Dave, I appreciate it. >> Yeah, thanks. >> You bet. And so, let me do the setup on what is Supercloud. It's a concept that we've floated, Before re:Invent 2021, based on the idea that cloud infrastructure is becoming ubiquitous, incredibly powerful, but there's a lack of standards across the big three clouds. That creates friction. So we defined over the period of time, you know, better part of a year, a set of essential elements, deployment models for so-called supercloud, which create this common experience for specific cloud services that, of course, again, span multiple clouds and even on-premise data. So Erik, with that as background, I wonder if you could add your general thoughts on the term supercloud, maybe play proxy for the CIO community, 'cause you do these round tables, you talk to these guys all the time, you gather a lot of amazing information from senior IT DMs that compliment your survey. So what are your thoughts on the term and the concept? >> Yeah, sure. I'll even go back to last year when you and I did our predictions panel, right? And we threw it out there. And to your point, you know, there's some haters. Anytime you throw out a new term, "Is it marketing buzz? Is it worth it? Why are you even doing it?" But you know, from my own perspective, and then also speaking to the IT DMs that we interview on a regular basis, this is just a natural evolution. It's something that's inevitable in enterprise tech, right? The internet was not built for what it has become. It was never intended to be the underlying infrastructure of our daily lives and work. The cloud also was not built to be what it's become. But where we're at now is, we have to figure out what the cloud is and what it needs to be to be scalable, resilient, secure, and have the governance wrapped around it. And to me that's what supercloud is. It's a way to define operantly, what the next generation, the continued iteration and evolution of the cloud and what its needs to be. And that's what the supercloud means to me. And what depends, if you want to call it metacloud, supercloud, it doesn't matter. The point is that we're trying to define the next layer, the next future of work, which is inevitable in enterprise tech. Now, from the IT DM perspective, I have two interesting call outs. One is from basically a senior developer IT architecture and DevSecOps who says he uses the term all the time. And the reason he uses the term, is that because multi-cloud has a stigma attached to it, when he is talking to his business executives. (David chuckles) the stigma is because it's complex and it's expensive. So he switched to supercloud to better explain to his business executives and his CFO and his CIO what he's trying to do. And we can get into more later about what it means to him. 
But the inverse of that, of course, is a good CSO friend of mine for a very large enterprise says the concern with Supercloud is the reduction of complexity. And I'll explain, he believes anything that takes the requirement of specific expertise out of the equation, even a little bit, as a CSO worries him. So as you said, David, always two sides to the coin, but I do believe supercloud is a relevant term, and it is necessary because the cloud is continuing to be defined. >> You know, that's really interesting too, 'cause you know, Darren, we use Snowflake a lot as an example, sort of early supercloud, and you think from a security standpoint, we've always pushed Amazon and, "Are you ever going to kind of abstract the complexity away from all these primitives?" and their position has always been, "Look, if we produce these primitives, and offer these primitives, we can move as the market moves. When you abstract, then it becomes harder to peel the layers." But Darren, from a data standpoint, like I say, we use Snowflake a lot. I think of like Tim Berners-Lee when Web 2.0 came out, he said, "Well this is what the internet was always supposed to be." So in a way, you know, supercloud is maybe what multi-cloud was supposed to be. But I mean, you think about data sharing, Darren, across clouds, it's always been a challenge. Snowflake always, you know, obviously trying to solve that problem, as are others. But what are your thoughts on the concept? >> Yeah, I think the concept fits, right? It is reflective of, it's a paradigm shift, right? Things, as a pendulum have swung back and forth between needing to piece together a bunch of different tools that have specific unique use cases and they're best in breed in what they do. And then focusing on the duct tape that holds 'em all together and all the engineering complexity and skill, it shifted from that end of the pendulum all the way back to, "Let's streamline this, let's simplify it. Maybe we have budget crunches and we need to consolidate tools or eliminate tools." And so then you kind of see this back and forth over time. And with data and analytics for instance, a lot of organizations were trying to bring the data closer to the business. That's where we saw self-service analytics coming in. And tools like Snowflake, what they did was they helped point to different databases, they helped unify data, and organize it in a single place that was, you know, in a sense neutral, away from a single cloud vendor or a single database, and allowed the business to kind of be more flexible in how it brought stuff together and provided it out to the business units. So Snowflake was an example of one of those times where we pulled back from the granular, multiple points of the spear, back to a simple way to do things. And I think Snowflake has continued to kind of keep that mantle to a degree, and we see other tools trying to do that, but that's all it is. It's a paradigm shift back to this kind of meta abstraction layer that kind of simplifies what is the reality, that you need a complex multi-use case, multi-region way of doing business. And it sort of reflects the reality of that. >> And you know, to me it's a spectrum. As part of Supercloud 2, we're talking to a number of practitioners, Ionis Pharmaceuticals, US West, we got Walmart. And it's a spectrum, right? In some cases the practitioner's saying, "You know, the way I solve multi-cloud complexity is mono-cloud, I just do one cloud."
(laughs) Others like Walmart are saying, "Hey, you know, we actually are building an abstraction layer ourselves, take advantage of it." So my general question to both of you is, is this a concept, is the lack of standards across clouds, you know, really a problem, you know, or is supercloud a solution looking for a problem? Or do you hear from practitioners that "No, this is really an issue, we have to bring together a set of standards to sort of unify our cloud estates." >> Allow me to answer that at a higher level, and then we're going to hand it over to Dr. Brabham because he is a little bit more detailed on the realtime streaming analytics use cases, which I think is where we're going to get to. But to answer that question, it really depends on the size and the complexity of your business. At the very large enterprise, Dave, Yes, a hundred percent. This needs to happen. There is complexity, there is not only complexity in the compute and actually deploying the applications, but the governance and the security around them. But for lower end or, you know, business use cases, and for smaller businesses, it's a little less necessary. You certainly don't need to have all of these. Some of the things that come into mind from the interviews that Darren and I have done are, you know, financial services, if you're doing real-time trading, anything that has real-time data metrics involved in your transactions, is going to be necessary. And another use case that we hear about is in online travel agencies. So I think it is very relevant, the complexity does need to be solved, and I'll allow Darren to explain a little bit more about how that's used from an analytics perspective. >> Yeah, go for it. >> Yeah, exactly. I mean, I think any modern, you know, multinational company that's going to have a footprint in the US and Europe, in China, or works in different areas like manufacturing, where you're probably going to have on-prem instances that will stay on-prem forever, for various performance reasons. You have these complicated governance and security and regulatory issues. So inherently, I think, large multinational companies and or companies that are in certain areas like finance or in, you know, online e-commerce, or things that need real-time data, they inherently are going to have a very complex environment that's going to need to be managed in some kind of cleaner way. You know, they're looking for one door to open, one pane of glass to look at, one thing to do to manage these multi points. And, streaming's a good example of that. I mean, not every organization has a real-time streaming use case, and may not ever, but a lot of organizations do, a lot of industries do. And so there's this need to use, you know, they want to use open-source tools, they want to use Apache Kafka for instance. They want to use different megacloud vendors offerings, like Google Pub/Sub or you know, Amazon Kinesis Firehose. They have all these different pieces they want to use for different use cases at different stages of maturity or proof of concept, you name it. They're going to have to have this complexity. And I think that's why we're seeing this need, to have sort of this supercloud concept, to juggle all this, to wrangle all of it. 'Cause the reality is, it's complex and you have to simplify it somehow. >> Great, thanks you guys. All right, let's bring up the graphic, and take a look. 
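Darren's point about juggling Apache Kafka, Google Pub/Sub, and Amazon Kinesis Firehose is where a thin abstraction layer, the supercloud idea in miniature, earns its keep. The sketch below is one hypothetical way to shape that: a single publish interface with interchangeable adapters per backend. It assumes the confluent-kafka, google-cloud-pubsub, and boto3 client libraries; the topic, project, and stream names are placeholders, and a production version would add batching, retries, and schema handling.

```python
from typing import Protocol

class EventSink(Protocol):
    def publish(self, key: str, payload: bytes) -> None: ...

class KafkaSink:
    def __init__(self, bootstrap_servers: str, topic: str):
        from confluent_kafka import Producer  # assumes confluent-kafka is installed
        self._producer = Producer({"bootstrap.servers": bootstrap_servers})
        self._topic = topic

    def publish(self, key: str, payload: bytes) -> None:
        self._producer.produce(self._topic, key=key, value=payload)
        self._producer.flush()

class PubSubSink:
    def __init__(self, project_id: str, topic_id: str):
        from google.cloud import pubsub_v1  # assumes google-cloud-pubsub is installed
        self._publisher = pubsub_v1.PublisherClient()
        self._topic_path = self._publisher.topic_path(project_id, topic_id)

    def publish(self, key: str, payload: bytes) -> None:
        # Pub/Sub has no native record key; pass it as a message attribute instead.
        self._publisher.publish(self._topic_path, data=payload, key=key).result()

class FirehoseSink:
    def __init__(self, stream_name: str):
        import boto3  # assumes boto3 is installed and AWS credentials are configured
        self._client = boto3.client("firehose")
        self._stream = stream_name

    def publish(self, key: str, payload: bytes) -> None:
        # Firehose records are unkeyed; fold the key into the payload upstream if needed.
        self._client.put_record(DeliveryStreamName=self._stream, Record={"Data": payload})

def emit_order_event(sink: EventSink, order_id: str) -> None:
    """Application code talks to the interface, not to any one cloud's messaging service."""
    sink.publish(key=order_id, payload=b'{"event": "order_created"}')

# Hypothetical wiring; swap the adapter without touching emit_order_event().
# emit_order_event(KafkaSink("broker-1:9092", "orders"), "o-123")
# emit_order_event(PubSubSink("my-gcp-project", "orders"), "o-123")
# emit_order_event(FirehoseSink("orders-stream"), "o-123")
```

The design choice being illustrated is the one discussed above: the complexity of multiple streaming backends does not go away, it just gets wrangled behind one consistent interface.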
Anybody who follows the breaking analysis, which is co-branded with ETR Cube Insights powered by ETR, knows we like to bring data to the table. ETR does amazing survey work every quarter, 1200 plus 1500 practitioners that answer a number of questions. The vertical axis here is net score, which is ETR's proprietary methodology, which is a measure of spending momentum, spending velocity. And the horizontal axis here is overlap, but it's the presence pervasiveness, and the dataset, the ends, that table insert on the bottom right shows you how the dots are plotted, the net score and then the ends in the survey. And what we've done is we've plotted a bunch of the so-called supercloud suspects, let's start in the upper right, the cloud platforms. Without these hyperscale clouds, you can't have a supercloud. And as always, Azure and AWS, up and to the right, it's amazing we're talking about, you know, 80 plus billion dollar company in AWS. Azure's business is, if you just look at the IaaS is in the 50 billion range, I mean it's just amazing to me the net scores here. Anything above 40% we consider highly elevated. And you got Azure and you got Snowflake, Databricks, HashiCorp, we'll get to them. And you got AWS, you know, right up there at that size, it's quite amazing. With really big ends as well, you know, 700 plus ends in the survey. So, you know, kind of half the survey actually has these platforms. So my question to you guys is, what are you seeing in terms of cloud adoption within the big three cloud players? I wonder if you could comment, maybe Erik, you could start. >> Yeah, sure. Now we're talking data, now I'm happy. So yeah, we'll get into some of it. Right now, the January, 2023 TSIS is approaching 1500 survey respondents. One caveat, it's not closed yet, it will close on Friday, but with an end that big we are over statistically significant. We also recently did a cloud survey, and there's a couple of key points on that I want to get into before we get into individual vendors. What we're seeing here, is that annual spend on cloud infrastructure is expected to grow at almost a 70% CAGR over the next three years. The percentage of those workloads for cloud infrastructure are expected to grow over 70% in three years as well. And as you mentioned, Azure and AWS are still dominant. However, we're seeing some share shift spreading around a little bit. Now to get into the individual vendors you mentioned about, yes, Azure is still number one, AWS is number two. What we're seeing, which is incredibly interesting, CloudFlare is number three. It's actually beating GCP. That's the first time we've seen it. What I do want to state, is this is on net score only, which is our measure of spending intentions. When you talk about actual pervasion in the enterprise, it's not even close. But from a spending velocity intention point of view, CloudFlare is now number three above GCP, and even Salesforce is creeping up to be at GCP's level. So what we're seeing here, is a continued domination by Azure and AWS, but some of these other players that maybe might fit into your moniker. And I definitely want to talk about CloudFlare more in a bit, but I'm going to stop there. But what we're seeing is some of these other players that fit into your Supercloud moniker, are starting to creep up, Dave.
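To picture the chart being described without the actual graphic, here is a minimal matplotlib sketch of the same layout: net score on the vertical axis, pervasiveness on the horizontal, and a dashed line at the 40% "highly elevated" threshold. The plotted positions are placeholder values, not ETR's survey results.

```python
import matplotlib.pyplot as plt

# Placeholder positions only: (pervasiveness, i.e. N in the survey cut, net score %).
vendors = {
    "Cloud platform A": (700, 62),
    "Cloud platform B": (680, 58),
    "Edge/network player C": (220, 55),
    "Data platform D": (180, 48),
    "Infrastructure vendor E": (300, 28),
}

fig, ax = plt.subplots(figsize=(8, 5))
for name, (pervasiveness, score) in vendors.items():
    ax.scatter(pervasiveness, score)
    ax.annotate(name, (pervasiveness, score), textcoords="offset points", xytext=(6, 4))

ax.axhline(40, color="red", linestyle="--", linewidth=1, label="40% highly elevated line")
ax.set_xlabel("Overlap / pervasiveness in the data set (N)")
ax.set_ylabel("Net score (spending velocity, %)")
ax.set_title("Net score vs. pervasiveness (illustrative data)")
ax.legend()
plt.tight_layout()
plt.show()
```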
So as you also know, we track IaaS and PaaS revenue and we try to extract, so AWS reports in its quarterly earnings, you know, they're just IaaS and PaaS, they don't have a SaaS play, a little bit maybe, whereas Microsoft and Google include their applications and so we extract those out and if you do that, AWS is bigger, but in the surveys, you know, customers, they see cloud, SaaS to them as cloud. So that's one of the reasons why you see, you know, Microsoft as larger in pervasion. If you bring up that survey again, Alex, the survey results, you see them further to the right and they have higher spending momentum, which is consistent with what you see in the earnings calls. Now, interesting about CloudFlare because the CEO of CloudFlare actually, and CloudFlare itself uses the term supercloud basically saying, "Hey, we're building a new type of internet." So what are your thoughts? Do you have additional information on CloudFlare, Erik that you want to share? I mean, you've seen them pop up. I mean this is a really interesting company that is pretty forward thinking and vocal about how it's disrupting the industry. >> Sure, we've been tracking 'em for a long time, and even from the disruption of just a traditional CDN where they took down Akamai and what they're doing. But for me, the definition of a true supercloud provider can't just be one instance. You have to have multiple. So it's not just the cloud, it's networking aspect on top of it, it's also security. And to me, CloudFlare is the only one that has all of it. That they actually have the ability to offer all of those things. Whereas you look at some of the other names, they're still piggybacking on the infrastructure or platform as a service of the hyperscalers. CloudFlare does not need to, they actually have the cloud, the networking, and the security all themselves. So to me that lends credibility to their own internal usage of that moniker Supercloud. And also, again, just what we're seeing right here that their net score is now creeping above AGCP really does state it. And then just one real last thing, one of the other things we do in our surveys is we track adoption and replacement reasoning. And when you look at Cloudflare's adoption rate, which is extremely high, it's based on technical capabilities, the breadth of their feature set, it's also based on what we call the ability to avoid stack alignment. So those are again, really supporting reasons that makes CloudFlare a top candidate for your moniker of supercloud. >> And they've also announced an object store (chuckles) and a database. So, you know, that's going to be, it takes a while as you well know, to get database adoption going, but you know, they're ambitious and going for it. All right, let's bring the chart back up, and I want to focus Darren in on the ecosystem now, and really, we've identified Snowflake and Databricks, it's always fun to talk about those guys, and there are a number of other, you know, data platforms out there, but we use those too as really proxies for leaders. We got a bunch of the backup guys, the data protection folks, Rubric, Cohesity, and Veeam. They're sort of in a cluster, although Rubric, you know, ahead of those guys in terms of spending momentum. And then VMware, Tanzu and Red Hat as sort of the cross cloud platform. But I want to focus, Darren, on the data piece of it. We're seeing a lot of activity around data sharing, governed data sharing. 
Databricks is using Delta Sharing as their sort of play, Snowflake is sort of this walled garden like the app store. What are your thoughts on, you know, in the context of Supercloud, cross cloud capabilities for the data platforms? >> Yeah, good question. You know, I think Databricks is an interesting player because they sort of have made some interesting moves, with their Data Lakehouse technology. So they're trying to kind of complicate, or not complicate, they're trying to take away the complications of, you know, the downsides of data warehousing and data lakes, and trying to find that middle ground, where you have the benefits of a managed, governed, you know, data warehouse environment, but you have sort of the lower cost, you know, capability of a data lake. And so, you know, Databricks has become really attractive, especially by data scientists, right? We've been tracking them in the AI machine learning sector for quite some time here at ETR, attractive for a data scientist because it looks and acts like a lake, but can have some managed capabilities like a warehouse. So it's kind of the best of both worlds. So in some ways I think you've seen sort of a data science driver for the adoption of Databricks that has now become a little bit more mainstream across the business. Snowflake, maybe the other direction, you know, it's a cloud data warehouse that you know, is starting to expand its capabilities and add on new things like Streamlit is a good example in the analytics space, with apps. So you see these tools starting to branch and creep out a bit, but they offer that sort of neutrality, right? We heard one IT decision maker we recently interviewed that referred to Snowflake and Databricks as the quote unquote Switzerland of what they do. And so there's this desirability from an organization to find these tools that can solve the complex multi-headed use-case of data and analytics, which every business unit needs in different ways. And figure out a way to do that, an elegant way that's governed and centrally managed, that federated kind of best of both worlds that you get by bringing the data close to the business while having a central governed instance. So these tools are incredibly powerful and I think there's only going to be room for growth, for those two especially. I think they're going to expand and do different things and maybe, you know, join forces with others and a lot of the power of what they do well is trying to define these connections and find these partnerships with other vendors, and try to be seen as the nice add-on to your existing environment that plays nicely with everyone. So I think that's where those two tools are going, but they certainly fit this sort of label of, you know, trying to be that supercloud neutral, you know, layer that unites everything.
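Since Delta Sharing comes up as the open, cross-cloud way to expose governed data, a short sketch of what consuming a share looks like may help. It assumes the open-source delta-sharing Python client; the profile file and the share, schema, and table names are hypothetical placeholders for whatever a provider actually grants.

```python
import delta_sharing

# A provider hands the recipient a profile file with an endpoint and bearer token.
profile_path = "retail-supplier.share"  # hypothetical profile file

# Discover what has been shared with this recipient.
client = delta_sharing.SharingClient(profile_path)
for table in client.list_all_tables():
    print(table.share, table.schema, table.name)

# Load one shared table into pandas: no copy pipeline, no cloud-specific client.
table_url = f"{profile_path}#sales_share.q4.orders"  # hypothetical share.schema.table
orders = delta_sharing.load_as_pandas(table_url)
print(orders.head())
```

The same table URL works regardless of which cloud or region the underlying Delta tables live in, which is the neutral-layer point being made here.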
I guess what I would say guys, and I'll I'll leave you with this is that, you know, just like all players today are cloud players, I feel like anybody in the business or most companies are going to be so-called supercloud players. In other words, they're going to have a cross-cloud strategy, they're going to try to build connections if they're coming from on-prem like a Dell or an HPE, you know, or Pure or you know, many of these other companies, Cohesity is another one. They're going to try to connect to their on-premise states, of course, and create a consistent experience. It's natural that they're going to have sort of some consistency across clouds. You know, the big question is, what's that spectrum look like? I think on the one hand you're going to have some, you know, maybe some rudimentary, you know, instances of supercloud or maybe they just run on the individual clouds versus where Snowflake and others and even beyond that are trying to go with a single global instance, basically building out what I would think of as their own cloud, and importantly their own ecosystem. I'll give you guys the last thought. Maybe you could each give us, you know, closing thoughts. Maybe Darren, you could start and Erik, you could bring us home on just this entire topic, the future of cloud and data. >> Yeah, I mean I think, you know, two points to make on that is, this question of these, I guess what we'll call legacy on-prem players. These, mega vendors that have been around a long time, have big on-prem footprints and a lot of people have them for that reason. I think it's foolish to assume that a company, especially a large, mature, multinational company that's been around a long time, it's foolish to think that they can just uproot and leave on-premises entirely full scale. There will almost always be an on-prem footprint from any company that was not, you know, natively born in the cloud after 2010, right? I just don't think that's reasonable anytime soon. I think there's some industries that need on-prem, things like, you know, industrial manufacturing and so on. So I don't think on-prem is going away, and I think vendors that are going to, you know, go very cloud forward, very big on the cloud, if they neglect having at least decent connectors to on-prem legacy vendors, they're going to miss out. So I think that's something that these players need to keep in mind is that they continue to reach back to some of these players that have big footprints on-prem, and make sure that those integrations are seamless and work well, or else their customers will always have a multi-cloud or hybrid experience. And then I think a second point here about the future is, you know, we talk about the three big, you know, cloud providers, the Google, Microsoft, AWS as sort of the opposite of, or different from this new supercloud paradigm that's emerging. But I want to kind of point out that, they will always try to make a play to become that and I think, you know, we'll certainly see someone like Microsoft trying to expand their licensing and expand how they play in order to become that super cloud provider for folks. So also don't want to downplay them. I think you're going to see those three big players continue to move, and take over what players like CloudFlare are doing and try to, you know, cut them off before they get too big. So, keep an eye on them as well. 
>> Great points, I mean, I think you're right, the first point, if you're Dell, HPE, Cisco, IBM, your strategy should be to make your on-premise state as cloud-like as possible and you know, make those differences as minimal as possible. And you know, if you're a customer, then the business case is going to be low for you to move off of that. And I think you're right. I think the cloud guys, if this is a real problem, the cloud guys are going to play in there, and they're going to make some money at it. Erik, bring us home please. >> Yeah, I'm going to revert back to our data and this on the macro side. So to kind of support this concept of a supercloud right now, you know Dave, you and I know, we check overall spending and what we're seeing right now is total year spent is expected to only be 4.6%. We ended 2022 at 5% even though it began at almost eight and a half. So this is clearly declining and in that environment, we're seeing the top two strategies to reduce spend are actually vendor consolidation with 36% of our respondents saying they're actively seeking a way to reduce their number of vendors, and consolidate into one. That's obviously supporting a supercloud type of play. Number two is reducing excess cloud resources. So when I look at both of those combined, with a drop in the overall spending reduction, I think you're on the right thread here, Dave. You know, the overall macro view that we're seeing in the data supports this happening. And if I can real quick, couple of names we did not touch on that I do think deserve to be in this conversation, one is HashiCorp. HashiCorp is the number one player in our infrastructure sector, with a 56% net score. It does multiple things within infrastructure and it is completely agnostic to your environment. And if we're also speaking about something that's just a singular feature, we would look at Rubric for data, backup, storage, recovery. They're not going to offer you your full cloud or your networking of course, but if you are looking for your backup, recovery, and storage Rubric, also number one in that sector with a 53% net score. Two other names that deserve to be in this conversation as we watch it move and evolve. >> Great, thank you for bringing that up. Yeah, we had both of those guys in the chart and I failed to focus in on HashiCorp. And clearly a Supercloud enabler. All right guys, we got to go. Thank you so much for joining us, appreciate it. Let's keep this conversation going. >> Always enjoy talking to you Dave, thanks. >> Yeah, thanks for having us. >> All right, keep it right there for more content from Supercloud 2. This is Dave Valente for John Ferg and the entire Cube team. We'll be right back. (gentle synth music) (music fades)
SUMMARY :
Erik Bradley and Daren Brabham of Enterprise Technology Research join Dave Vellante to examine what spending data says about supercloud: the continued dominance of Azure and AWS, Cloudflare's rising spending momentum, the cross-cloud positions of Snowflake, Databricks, HashiCorp and others, and why vendor consolidation and abstraction layers are gaining traction as budgets tighten.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
IBM | ORGANIZATION | 0.99+ |
Cisco | ORGANIZATION | 0.99+ |
Erik | PERSON | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
John Ferg | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Walmart | ORGANIZATION | 0.99+ |
Erik Bradley | PERSON | 0.99+ |
David | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
Dave Valente | PERSON | 0.99+ |
January, 2023 | DATE | 0.99+ |
China | LOCATION | 0.99+ |
US | LOCATION | 0.99+ |
HPE | ORGANIZATION | 0.99+ |
50 billion | QUANTITY | 0.99+ |
Ionis Pharmaceuticals | ORGANIZATION | 0.99+ |
Darren Brabham | PERSON | 0.99+ |
56% | QUANTITY | 0.99+ |
4.6% | QUANTITY | 0.99+ |
Europe | LOCATION | 0.99+ |
Oracle | ORGANIZATION | 0.99+ |
53% | QUANTITY | 0.99+ |
36% | QUANTITY | 0.99+ |
Tanzu | ORGANIZATION | 0.99+ |
Darren | PERSON | 0.99+ |
1200 | QUANTITY | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
VMware | ORGANIZATION | 0.99+ |
Amazon | ORGANIZATION | 0.99+ |
Friday | DATE | 0.99+ |
Rubric | ORGANIZATION | 0.99+ |
last year | DATE | 0.99+ |
two sides | QUANTITY | 0.99+ |
Databricks | ORGANIZATION | 0.99+ |
5% | QUANTITY | 0.99+ |
Cohesity | ORGANIZATION | 0.99+ |
two tools | QUANTITY | 0.99+ |
Veeam | ORGANIZATION | 0.99+ |
CloudFlare | TITLE | 0.99+ |
two | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
2022 | DATE | 0.99+ |
One | QUANTITY | 0.99+ |
Daren Brabham | PERSON | 0.99+ |
three years | QUANTITY | 0.99+ |
TSIS | ORGANIZATION | 0.99+ |
Brabham | PERSON | 0.99+ |
CloudFlare | ORGANIZATION | 0.99+ |
1500 survey respondents | QUANTITY | 0.99+ |
second point | QUANTITY | 0.99+ |
first point | QUANTITY | 0.98+ |
Snowflake | TITLE | 0.98+ |
one | QUANTITY | 0.98+ |
Supercloud | ORGANIZATION | 0.98+ |
ETR | ORGANIZATION | 0.98+ |
Snowflake | ORGANIZATION | 0.98+ |
Akamai | ORGANIZATION | 0.98+ |
Ramesh Prabagaran, Prosimo.io | Defining the Network Supercloud
(upbeat music) >> Hello, and welcome to Supercloud2. I'm John Furrier, host of theCUBE here. We're exploring all the new Supercloud trends around multiple clouds, hyper scale gaps in their systems, new innovations, new applications, new companies, new products, new brands emerging from this big inflection point. Got a great guest who's going to unpack it with me today, Ramesh Prabagaran, who's the co-founder and CEO of Prosimo, CUBE alumni. Ramesh, legend in the industry, you've been around. You've seen many cycles. Welcome to Supercloud2. >> Thank you. You're being too kind. >> Well, you know, you guys have been a technical, great technical founding team, multiple ventures, multiple times around the track as they say, but now we're seeing something completely different. This is our second event, kind of we're doing to start the ball rolling around unpacking this idea of Supercloud which evolved from a riff with me and Dave to now a working group paper, multiple definitions. People are saying they're Supercloud. CloudFlare says this is their version. Someone says they're over there. Fitzi over there in the blog is always, you know, challenging us on our definitions, but it's, the consensus is though something's happening. >> Ramesh: Absolutely. >> And what's your take on this kind of big inflection point? >> Absolutely, so if you just look at kind of this in layers right, so you have hyper scalers that are innovating really quickly on underlying capabilities, and then you have enterprises adopting these technologies, right, there is a layer in the middle that I would say is largely missing, right? And one that addresses the gaps introduced by these new capabilities, by the hyper scalers. At the same time, one that actually spans, let's say multiple regions, multiple clouds and so forth. So that to me is kind of the Supercloud layer of sorts. One that helps enterprises adopt the underlying hyper scaler capabilities a lot faster, and at the same time brings a certain level of consistency and homogeneity also.
>> It wasn't your grandfather's networking anymore or storage. The game is still the same, but the play, the components are acting differently. What's your take on this? >> Absolutely. No, absolutely. That's a very key important point, and it's one that we always ask our customers right at the front end, right? Because your starting assumptions matter. If you have workloads of workloads in the cloud and data center is something that you want to connect into, then you'll make decisions kind of keeping cloud in the center and then kind of bolt on technologies for what that means to extend it to the data center. If your center of gravity is in the data center, and then cloud is let's say 10% right now, but you see that growing, then what choices do you have? Right, do you want to bring your data center technologies into the cloud because you want that consistency in operations? Or do you want to start off fresh, right? So this is a really key, important question, and one that many of our customers are actually are grappling with, right? They have this notion that going cloud native is the right approach, but at the same time that means I have a bifurcation in kind of how do I operate my data center versus my cloud, right? Two different operating models, and slowly it'll shift over to one. But you're going to have to deal with dual reality for a while. >> I was talking to an old friend of mine, CIO, very experienced CIO. Big time company, large deployment, a lot of IT. I said, so what's the big trend everyone's telling me about IT's going. He goes no, not really. IT's not going away for me. It's going everywhere in the company. >> Ramesh: Exactly. >> So I need to scale my IT-like capabilities everywhere and then make it invisible. >> Ramesh: Correct. >> Which is essentially code words for saying it's going to be completely cloud native everywhere. This is what is happening. Do you agree? >> Absolutely right, and so if you look at what do enterprises care about it? The reason to go to the cloud is to get speed of operations, and it's apps, apps, apps, right? Do you ever have a conversation on networking and infrastructure first? No, that kind of gets brought into the conversation because you want to deal with users, applications and services, right? And so the end goal is essentially how do users communicate with apps and get the right experience, security and whatnot, and how do apps talk to each other and make sure that you get all of the connectivity and security requirements? Underneath the covers, what does this mean for infrastructure, networking, security and whatnot? It's actually going to be someone else's job, right? And you shouldn't have to think too much about it. So this whole notion of kind of making that transparent is real actually, right? But at the same time, us and all the guys that we talk to on the customer side, that's their job, right? Like we have to work towards making that transparent. Some are going to be in the form of capability, some are going to be driven by data, but that's really where the two worlds are going to come together. >> Lots of debates going on. We just heard from Bob Muglia here on Supercloud2. He said Supercloud's a platform that provides programmatically consistent services hosted on heterogeneous cloud providers. So the question that's being debated is is Supercloud a platform or an architecture in your view? >> Okay, that's a tough one actually. 
I'm going to side on the side on kind of the platform side right, and the reason for that is architectural choices are things that you make ahead of time. And you, once you're in, there really isn't a fork in the road, right? Platforms continue to evolve. You can iterate, innovate and so on and so forth. And so I'm thinking Supercloud is more of a platform because you do have a choice. Hey, am I going AWS, Azure, GCP. You make that choice. What is my center of gravity? You make that choice. That's kind of an architectural decision, right? Once you make that, then how do I make things work consistently across like two or three clouds? That's a platform choice. >> So who's responsible for the architecture as the platform, the vendor serving the platform or is the platform vendor agnostic? >> You know, this is where you have to kind of peel the onion in layers, right? If you talk about applications, you can't go to a developer team or an app team and say I want you to operate on Google or AWS. They're like I'll pick the cloud that I want, right? Now who are we talking to? The infrastructure guys and the networking guys, right? They want to make sure that it's not bifurcated. It's like, hey, I want to make sure whatever I build for AWS I can equally use that on Azure. I can equally use that on GCP. So if you're talking to more of the application centric teams who really want infrastructure to be transparent, they'll say, okay, I want to make this choice of whether this is AWS, Azure, GCP, and stick to that. And if you come kind of down the layers of the stack into infrastructure, they are thinking a little more holistically, a little more Supercloud, a little more multicloud, and that. >> That's a good point. So that brings up the deployment question. >> Ramesh: Exactly! >> I want to ask you the next question, okay, what is the preferred deployment in your opinion for a Supercloud narrative? Is it single instance, spread it around everywhere? What's the, do you have a single global instance or do you have everything synchronized? >> So I would say first layer of that Supercloud really kind of fix the holes that have been introduced as a result of kind of adopting the hyper scaler technologies, right? So each, the hyper scalers have been really good at innovating and providing really massive scale elastic capabilities, right? But once you start to build capabilities on top of that to help serve the application, there's a few holes start to show up. So first job of Supercloud really is to plug those holes, right? Second is can I get to an operating model, so that I can replicate this not just in a single region, but across multiple regions, same cloud, and then across multiple clouds, right? And so both of those need to be solved for in order to be (cross talking). >> So is that multiple instantiations of the stack or? >> Yeah, so this again depends on kind of the capability, right? So if you take a more solution view, and so I can speak for kind of networking security combined right? There you always take a solution view. You don't ever look at, you know, what does this mean for a single instance in a single region. You take a macro view, and then you then break it down into what does this mean for region, what does it mean for instance, what does this mean for AZs? And so on and so forth. So you kind of have to go top to bottom. >> Okay, welcome you down into the trap now. Okay, synchronizing the data, latency, these are all questions. So what does the network Supercloud look like to you? 
Because networking is big here. >> Ramesh: Yes, absolutely. >> This is what you guys do. >> Exactly, yeah. So the different set of problems as you go up the stack, right? So if you have hundreds of workloads in a single region, the set of problems you're dealing with there are kind of app native connectivity, how do I go from kind of east/west, all of those fun things, right? Which are usually bound in terms of latency. You don't have those challenges as much, but can you build your entire enterprise application architecture in one region? No, you're going to have to create multiple instances, right? So my data lake is invariably going to be in one place. My business logic is going to be spread across a few places. What does that bring in? I need to go across regions. Am I going to put those two regions right next to each other? No, I'm not going to, right? I'm going to have places in Europe. I'm going to have APAC, and I'm going to have a North American presence, and I need to bring all these things together. So this is where, back to your point, latency really matters, right? Because I need to be able to find out not just best path but also how do I reduce the millisecond, microseconds that my application cares about, which brings in a layer of optimization and then so on and so on and so forth. So this is what we call kind of to borrow the Prosimo language full stack networking, right? Because I'm not just dealing with how do I go from one region to another because that's laws of physics. I can only control so much. But there are a few elements up the application stack in software that you can tweak to actually bring these things closer and closer. >> And on that point, you're seeing security being talked a lot more at the network layer. So how do you secure the Supercloud at the network layer? What's that look like? >> Yeah, we've been grappling with essentially is security kind of foundational, and then is the network on top. And then we had an alternative viewpoint which is kind of network and then security on top. And the answer is actually it's neither, right? It's almost like a meshed up sandwich of sorts. So you need to have networking security work really well together, right? Case in point, I mean we were talking to a customer yesterday. He said, hey, I have my data lake in one region that needs to talk to an analytics service in a completely different region of a different cloud. These two things just need to be able to talk to each other, which means I need to bring elements of networking. I need to bring elements of security, secure access, app segmentation, all of those things. Very simple, I have an analytics service that needs to contact a data lake. That's what he starts with, but then before you know it, it actually brings up a whole stack underneath, so that's. >> VMware calls that cloud chaos. >> Ramesh: Yes, exactly. >> And then that's the halfway point between cloud smart. Cloud first, cloud chaos, cloud smart, and the next thing, you can skip that whole step. But again, again, it's pick your strategy right? Again, this comes back down to your earlier point. I want to ask you from a customer standpoint, you got the hyper scalers doing very, very well. >> Ramesh: Yep, absolutely. >> And I love what their Amazon's doing. I think Microsoft again though they had a little bit of downgrade are catching up fast, and they have their installed base. So you got the land of the installed bases. >> Correct. >> First and greater, better cloud. 
Install base getting better, almost as good, almost as good is a gift, but close. Now you have them specializing. Silicon, special silicon. So there's gaps for other services. >> Ramesh: Correct. >> And Amazon Web Services, Adam Selipsky's a open book saying, hey, we want our ecosystem to pick up these gaps and build on them. Go ahead, go to town. >> So this is where I think choices are tough, right? Because if you had one choice, you would work with it, and you would work around it, right? Now I have five different choices. Now what do I do? Our viewpoint is there are a bunch of things that say AWS does really, really well. Use that as a foundational layer, right? Like don't reinvent the wheel on those things. Transit gateways, global accelerators and whatnot, they exist for a reason. Billions of dollars have gone into building those things. Use that foundational layer, right? But what you want to build on top of that is actually driven by the application. The requirements of a lambda application that's serverless, it's very different than a packaged application that's responding for transactions, right? Like it's just completely very, very different. And so bring in the right set of capabilities required for those set of applications, and then you go based on that. This is also where I think whether something is a regional construct versus an overall global construct really, really matters, right? Because if you start with the assumption that everything is going to be built regionally, then it's someone else's job to make sure that all of these things are connected. But if you start with kind of the global purview, then the rest of them start to (cross talking). >> What are some of the things that the enterprises might want that are gaps that are going to be filled by the, by startups like you guys and the ecosystem because we're seeing the ecosystem form into two big camps. >> Ramesh: Yep. >> ISVs, which is an old school definition of independent software vendor, aka someone who writes software. >> Ramesh: Exactly. >> SaaS app. >> Ramesh: Correct. >> And then ecosystem software players that were once ISVs now have people building on top of them. >> Ramesh: Correct. >> They're building on top of the cloud. So you have that new hyper scale effect going on. >> Ramesh: Exactly. >> You got ISVs, which is software developers, software vendors. >> Ramesh: Correct. >> And ecosystems. >> Yep. >> What's that impact of that? Cause it's a new dynamic. >> Exactly, so if you take kind of enterprises, want to make sure that that their apps and the data center migrate to the cloud, new apps are developed the right way in the cloud, right? So that's kind of table stakes. So now what choices do they have? They listen to AWS and say, okay, I have all these cloud native services. I want to be able to instantiate all that. Now comes the interesting choice that they have to make. Do I go hire a whole bunch of people and do it myself or do I go there on the platform route, right? Because I made an architectural choice. Now I have to decide whether I want to do this myself or the platform choice. DIY works great for some, but you don't know what you're getting into, and it's people involved, right? People, process, all those fun things involved, right? So we show up there and say, you don't know what you don't know, right? Like because that's the nature of it. Why don't you invest in a platform like what what we provide, and then you actually build on top of it. 
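One concrete reading of "use the foundational layer" is sketched below: a short boto3 example that stands up an AWS Transit Gateway and attaches an existing VPC, the kind of building block a platform would then layer application-specific connectivity on top of. The VPC and subnet IDs are placeholders, credentials are assumed to be configured, and this is a generic illustration rather than anything Prosimo-specific.

```python
import time
import boto3

# A sketch, not production code: stand up the foundational AWS construct
# (a Transit Gateway) and attach an existing VPC to it.
ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(
    Description="foundational transit layer (illustrative)",
    Options={"DefaultRouteTableAssociation": "enable"},
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Wait for the gateway to become available before attaching anything.
while True:
    state = ec2.describe_transit_gateways(TransitGatewayIds=[tgw_id])[
        "TransitGateways"][0]["State"]
    if state == "available":
        break
    time.sleep(15)

# Placeholder IDs; in practice these come from your own VPC inventory.
attachment = ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
)
print(attachment["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"])
```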
We will, it's our job to make sure that we keep up with the innovation happening underneath the covers. And at the same time, this is not a closed-ended system. You can actually build on top of our platform, right? And so that actually gives you a good mix. Now the care-abouts are interesting. Some apps care about experience. Some apps care about latency. Some apps are extremely chatty and extremely data intensive, but nobody wants to pay for it, right? And so it's an interesting Jenga that you have to play between experience versus security versus cost, right? And that makes kind of head of infrastructure and cloud platform teams' life really, really, really interesting. >> And this is why I love your background, and Stu Miniman, when he was with theCUBE, and now he's at Red Hat, we used to riff about the network and how network folks are now, those concepts are now up the top of the stack because the cloud is one big network effect. >> Ramesh: Exactly, correct. >> It's a computer. >> Yep, absolutely. No, and case in point, right, like say we're in, let's say, San Jose here or Palo Alto here, and let's say my application is sitting in London, right? The cloud gives you different express lanes. I can go down to my closest PoP location provided by AWS and then I can go ride that all the way up to London. It's going to give me better performance, low latency, but I'm going to have to incur some costs associated with it. Or I can go all the wild internet all the way from Palo Alto up to kind of the ingress point into London and then go access, but I'm spending time on the wild internet, which means all kinds of fun things happen, right? But I'm not paying much, but my experience is not going to be so great. So, and there are various shades of gray in the middle, right? So how do you pick what? It all kind of is driven by the applications. >> Well, we certainly want you back for Supercloud3, our next version of this virtual/live event here in our Palo Alto studios. Really appreciate you coming on. >> Absolutely. >> While you're here, give a quick plug for the company. Next minute, we can take a minute to talk about the success of the company. >> Ramesh: Absolutely. >> I know you got a fresh financing this past year. Plenty of money in the bank, going to ride this new wave, Supercloud wave. Give us a quick plug. >> Absolutely, yeah. So three years going on to four this calendar year. So it's an interesting time for the company. We have proven that our technology, product and our initial customers are quite happy with it. Now comes essentially more of those and scale and so forth. That's kind of the interesting phase that we are in. Also heartened to see quite a few of kind of really large and dominant players in the market, partners, channels and so forth, invest in us to take this to the next set of customers. I would say there's been a dramatic shift in the conversation with our customers. The first couple of years or so of the company, we are about three years old right now, was really about us educating them. This is what you need. This is what you need. Now actually it's a lot of just pull, right? We've seen a good indication, as much as I hate RFIs, a good indication is the number of RFIs that show up at our door saying we want you to participate in this because we want to understand more, right? And so I think we are at an interesting point of that shift. >> RFIs always like do all this work and hope for the best. Pray for a deal. 
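A toy model of the express-lane trade-off described above: given an application's latency budget, score the backbone path against the public-internet path and take the cheapest one that qualifies. The latency and cost numbers are invented; a real system would measure them continuously.

```python
# Toy model of the "express lane vs. wild internet" choice: pick the path
# that satisfies the app's latency budget at the lowest cost.
PATHS = [
    {"name": "aws-backbone-via-nearest-pop", "latency_ms": 135, "cost_per_gb": 0.05},
    {"name": "public-internet",              "latency_ms": 210, "cost_per_gb": 0.01},
]

def choose_path(latency_budget_ms: float, paths=PATHS):
    candidates = [p for p in paths if p["latency_ms"] <= latency_budget_ms]
    if not candidates:
        # Nothing meets the budget; fall back to the fastest option.
        return min(paths, key=lambda p: p["latency_ms"])
    return min(candidates, key=lambda p: p["cost_per_gb"])

# A latency-sensitive app pays for the backbone; a chatty-but-cheap app does not.
print(choose_path(latency_budget_ms=150)["name"])   # backbone path
print(choose_path(latency_budget_ms=300)["name"])   # public internet
```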
You know, you guys on the right side of history. If a customer asks with respect to Supercloud, multicloud, is that your focus? Is that the direction you guys are going into? >> Yeah, so I would say we are kind of both, right? Supercloud and multicloud because we, our customers are hybrid, multiple clouds, all of the above, right? Our main pitch and kind of value back to the customers is go embrace cloud native because that's the right approach, right? It doesn't make sense to go reinvent the wheel on that one, but then make a really good choice about whether you want to do this yourself or invest in a platform to make your life easy. Because we have seen this story play out with many many enterprises, right? They pick the right technologies. They do a simple POC overnight, and they say, yeah, I can make this work for two apps, right? And then they say, yes, I can make this work for 100. You go down a certain path. You hit a wall. You hit a wall, and it's a hard wall. It's like, no, there isn't a thing that you can go around it. >> A lot of dead bodies laying around. >> Ramesh: Exactly. >> Dead wall. >> And then they have to unravel around that, and then they come talk to us, and they say, okay, now what? Like help me, help me through this journey. So I would say to the extent that you can do this diligence ahead of time, do that, and then, and then pick the right platform. >> You've got to have the talent. And you got to be geared up. You got to know what you're getting into. >> Ramesh: Exactly. >> You got to have the staff to do this. >> And cloud talent and skillset in particular, I mean there's lots available but it's in pockets right? And if you look at kind of web three companies, they've gone and kind of amassed all those guys, right? So enterprises are not left with the cream of the crop. >> John: They might be coming back. >> Exactly, exactly, so. >> With this downturn. Ramesh, great to see you and thanks for contributing to Supercloud2, and again, love your team. Very technical team, and you're in the right side of history in this one. Congratulations. >> Ramesh: No, and thank you, thank you very much. >> Okay, this is Supercloud2. I'm John Furrier with Dave Vellante. We'll be back right after this short break. (upbeat music)
AWS Startup Showcase S3E1
(upbeat electronic music) >> Hello everyone, welcome to this CUBE conversation here from the studios in the CUBE in Palo Alto, California. I'm John Furrier, your host. We're featuring a startup, Astronomer. Astronomer.io is the URL, check it out. And we're going to have a great conversation around one of the most important topics hitting the industry, and that is the future of machine learning and AI, and the data that powers it underneath it. There's a lot of things that need to get done, and we're excited to have some of the co-founders of Astronomer here. Viraj Parekh, who is co-founder of Astronomer, and Paola Peraza Calderon, another co-founder, both with Astronomer. Thanks for coming on. First of all, how many co-founders do you guys have? >> You know, I think the answer's around six or seven. I forget the exact, but there's really been a lot of people around the table who've worked very hard to get this company to the point that it's at. We have long ways to go, right? But there's been a lot of people involved that have been absolutely necessary for the path we've been on so far. >> Thanks for that, Viraj, appreciate that. The first question I want to get out on the table, and then we'll get into some of the details, is take a minute to explain what you guys are doing. How did you guys get here? Obviously, multiple co-founders, sounds like a great project. The timing couldn't have been better. ChatGPT has essentially done so much public relations for the AI industry to kind of highlight this shift that's happening. It's real, we've been chronicalizing, take a minute to explain what you guys do. >> Yeah, sure, we can get started. So, yeah, when Viraj and I joined Astronomer in 2017, we really wanted to build a business around data, and we were using an open source project called Apache Airflow that we were just using sort of as customers ourselves. And over time, we realized that there was actually a market for companies who use Apache Airflow, which is a data pipeline management tool, which we'll get into, and that running Airflow is actually quite challenging, and that there's a big opportunity for us to create a set of commercial products and an opportunity to grow that open source community and actually build a company around that. So the crux of what we do is help companies run data pipelines with Apache Airflow. And certainly we've grown in our ambitions beyond that, but that's sort of the crux of what we do for folks. >> You know, data orchestration, data management has always been a big item in the old classic data infrastructure. But with AI, you're seeing a lot more emphasis on scale, tuning, training. Data orchestration is the center of the value proposition, when you're looking at coordinating resources, it's one of the most important things. Can you guys explain what data orchestration entails? What does it mean? Take us through the definition of what data orchestration entails. >> Yeah, for sure. I can take this one, and Viraj, feel free to jump in. So if you google data orchestration, here's what you're going to get. You're going to get something that says, "Data orchestration is the automated process" "for organizing silo data from numerous" "data storage points, standardizing it," "and making it accessible and prepared for data analysis." And you say, "Okay, but what does that actually mean," right, and so let's give sort of an an example. So let's say you're a business and you have sort of the following basic asks of your data team, right? 
Okay, give me a dashboard in Sigma, for example, for the number of customers or monthly active users, and then make sure that that gets updated on an hourly basis. And then number two, a consistent list of active customers that I have in HubSpot so that I can send them a monthly product newsletter, right? Two very basic asks for all sorts of companies and organizations. And when that data team, which has data engineers, data scientists, ML engineers, data analysts get that request, they're looking at an ecosystem of data sources that can help them get there, right? And that includes application databases, for example, that actually have in product user behavior and third party APIs from tools that the company uses that also has different attributes and qualities of those customers or users. And that data team needs to use tools like Fivetran to ingest data, a data warehouse, like Snowflake or Databricks to actually store that data and do analysis on top of it, a tool like DBT to do transformations and make sure that data is standardized in the way that it needs to be, a tool like Hightouch for reverse ETL. I mean, we could go on and on. There's so many partners of ours in this industry that are doing really, really exciting and critical things for those data movements. And the whole point here is that data teams have this plethora of tooling that they use to both ingest the right data and come up with the right interfaces to transform and interact with that data. And data orchestration, in our view, is really the heartbeat of all of those processes, right? And tangibly the unit of data orchestration is a data pipeline, a set of tasks or jobs that each do something with data over time and eventually run that on a schedule to make sure that those things are happening continuously as time moves on and the company advances. And so, for us, we're building a business around Apache Airflow, which is a workflow management tool that allows you to author, run, and monitor data pipelines. And so when we talk about data orchestration, we talk about sort of two things. One is that crux of data pipelines that, like I said, connect that large ecosystem of data tooling in your company. But number two, it's not just that data pipeline that needs to run every day, right? And Viraj will probably touch on this as we talk more about Astronomer and our value prop on top of Airflow. But then it's all the things that you need to actually run data and production and make sure that it's trustworthy, right? So it's actually not just that you're running things on a schedule, but it's also things like CICD tooling, secure secrets management, user permissions, monitoring, data lineage, documentation, things that enable other personas in your data team to actually use those tools. So long-winded way of saying that it's the heartbeat, we think, of of the data ecosystem, and certainly goes beyond scheduling, but again, data pipelines are really at the center of it. >> One of the things that jumped out, Viraj, if you can get into this, I'd like to hear more about how you guys look at all those little tools that are out. You mentioned a variety of things. You look at the data infrastructure, it's not just one stack. You've got an analytic stack, you've got a realtime stack, you've got a data lake stack, you got an AI stack potentially. I mean you have these stacks now emerging in the data world that are fundamental, that were once served by either a full package, old school software, and then a bunch of point solution. 
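To make those two asks concrete, here is a minimal sketch of how they might look as Apache Airflow DAGs, one hourly pipeline feeding the dashboard and one monthly reverse-ETL sync. The task callables are hypothetical stand-ins; real pipelines would more likely use the Fivetran, dbt, and Hightouch provider operators rather than plain Python tasks.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical task bodies; in practice these would call Fivetran, dbt,
# Hightouch, etc., through their Airflow provider packages.
def ingest_app_events(): ...
def run_dbt_models(): ...
def refresh_sigma_dashboard(): ...
def sync_customers_to_hubspot(): ...

with DAG(
    dag_id="hourly_active_users",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as hourly_dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_app_events)
    transform = PythonOperator(task_id="transform", python_callable=run_dbt_models)
    dashboard = PythonOperator(task_id="refresh_dashboard",
                               python_callable=refresh_sigma_dashboard)
    ingest >> transform >> dashboard   # the dependency chain Airflow enforces

with DAG(
    dag_id="monthly_hubspot_sync",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@monthly",
    catchup=False,
) as monthly_dag:
    PythonOperator(task_id="reverse_etl",
                   python_callable=sync_customers_to_hubspot)
```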
You mentioned Fivetran there, I would say in the analytics stack. Then you got S3, they're on the data lake stack. So all these things are kind of munged together. >> Yeah. >> How do you guys fit into that world? You make it easier, or like, what's the deal? >> Great question, right? And you know, I think that one of the biggest things we've found in working with customers over the last however many years is that if a data team is using a bunch of tools to get what they need done, and the number of tools they're using is growing exponentially and they're kind of roping things together here and there, that's actually a sign of a productive team, not a bad thing, right? It's because that team is moving fast. They have needs that are very specific to them, and they're trying to make something that's exactly tailored to their business. So a lot of times what we find is that customers have some sort of base layer, right? That's kind of like, it might be they're running most of the things in AWS, right? And then on top of that, they'll be using some of the things AWS offers, things like SageMaker, Redshift, whatever, but they also might need things that their cloud can't provide. Something like Fivetran, or Hightouch, those are other tools. And where data orchestration really shines, and something that we've had the pleasure of helping our customers build, is how do you take all those requirements, all those different tools and whip them together into something that fulfills a business need? So that somebody can read a dashboard and trust the number that it says, or somebody can make sure that the right emails go out to their customers. And Airflow serves as this amazing kind of glue between that data stack, right? It's to make it so that for any use case, be it ELT pipelines, or machine learning, or whatever, you need different things to do them, and Airflow helps tie them together in a way that's really specific for a individual business' needs. >> Take a step back and share the journey of what you guys went through as a company startup. So you mentioned Apache, open source. I was just having an interview with a VC, we were talking about foundational models. You got a lot of proprietary and open source development going on. It's almost the iPhone/Android moment in this whole generative space and foundational side. This is kind of important, the open source piece of it. Can you share how you guys started? And I can imagine your customers probably have their hair on fire and are probably building stuff on their own. Are you guys helping them? Take us through, 'cause you guys are on the front end of a big, big wave, and that is to make sense of the chaos, rain it in. Take us through your journey and why this is important. >> Yeah, Paola, I can take a crack at this, then I'll kind of hand it over to you to fill in whatever I miss in details. But you know, like Paola is saying, the heart of our company is open source, because we started using Airflow as an end user and started to say like, "Hey wait a second," "more and more people need this." Airflow, for background, started at Airbnb, and they were actually using that as a foundation for their whole data stack. Kind of how they made it so that they could give you recommendations, and predictions, and all of the processes that needed orchestrated. Airbnb created Airflow, gave it away to the public, and then fast forward a couple years and we're building a company around it, and we're really excited about that. >> That's a beautiful thing. 
That's exactly why open source is so great. >> Yeah, yeah. And for us, it's really been about watching the community and our customers take these problems, find a solution to those problems, standardize those solutions, and then building on top of that, right? So we're reaching a point where a lot of our earlier customers who started out just using Airflow to get the base of their BI stack down and their reporting in their ELT infrastructure, they've solved that problem and now they're moving on to things like doing machine learning with their data, because now that they've built that foundation, all the connective tissue for their data arriving on time and being orchestrated correctly is happening, they can build a layer on top of that. And it's just been really, really exciting kind of watching what customers do once they're empowered to pick all the tools that they need, tie them together in the way they need to, and really deliver real value to their business. >> Can you share some of the use cases of these customers? Because I think that's where you're starting to see the innovation. What are some of the companies that you're working with, what are they doing? >> Viraj, I'll let you take that one too. (group laughs)
So I have to ask you guys, what is the state of the data orchestration area? Is it ready for disruption? Has it already been disrupted? Would you categorize it as a new first inning kind of opportunity, or what's the state of the data orchestration area right now? Both technically and from a business model standpoint. How would you guys describe that state of the market? >> Yeah, I mean, I think in a lot of ways, in some ways I think we're category creating. Schedulers have been around for a long time. I released a data presentation sort of on the evolution of going from something like Kron, which I think was built in like the 1970s out of Carnegie Mellon. And that's a long time ago, that's 50 years ago. So sort of like the basic need to schedule and do something with your data on a schedule is not a new concept. But to our point earlier, I think everything that you need around your ecosystem, first of all, the number of data tools and developer tooling that has come out industry has 5X'd over the last 10 years. And so obviously as that ecosystem grows, and grows, and grows, and grows, the need for orchestration only increases. And I think, as Astronomer, I think we... And we work with so many different types of companies, companies that have been around for 50 years, and companies that got started not even 12 months ago. And so I think for us it's trying to, in a ways, category create and adjust sort of what we sell and the value that we can provide for companies all across that journey. There are folks who are just getting started with orchestration, and then there's folks who have such advanced use case, 'cause they're hitting sort of a ceiling and only want to go up from there. And so I think we, as a company, care about both ends of that spectrum, and certainly want to build and continue building products for companies of all sorts, regardless of where they are on the maturity curve of data orchestration. >> That's a really good point, Paola. And I think the other thing to really take into account is it's the companies themselves, but also individuals who have to do their jobs. If you rewind the clock like 5 or 10 years ago, data engineers would be the ones responsible for orchestrating data through their org. But when we look at our customers today, it's not just data engineers anymore. There's data analysts who sit a lot closer to the business, and the data scientists who want to automate things around their models. So this idea that orchestration is this new category is right on the money. And what we're finding is the need for it is spreading to all parts of the data team, naturally where Airflow's emerged as an open source standard and we're hoping to take things to the next level. >> That's awesome. We've been up saying that the data market's kind of like the SRE with servers, right? You're going to need one person to deal with a lot of data, and that's data engineering, and then you're got to have the practitioners, the democratization. Clearly that's coming in what you're seeing. So I have to ask, how do you guys fit in from a value proposition standpoint? What's the pitch that you have to customers, or is it more inbound coming into you guys? Are you guys doing a lot of outreach, customer engagements? I'm sure they're getting a lot of great requirements from customers. What's the current value proposition? How do you guys engage? >> Yeah, I mean, there's so many... Sorry, Viraj, you can jump in. So there's so many companies using Airflow, right? 
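As a sketch of the "more than a scheduler" point: what orchestration adds over a bare cron entry is explicit dependencies, retries, and failure handling around each step, which is what the machine learning use cases above rely on. The task names and retry policy here are illustrative only.

```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

# Illustrative retraining steps; real tasks would pull from a warehouse,
# train with your framework of choice, and push to a model registry.
def extract_features(): ...
def train_model(): ...
def evaluate_model(): ...
def publish_model(): ...

default_args = {
    "retries": 2,                        # something a plain cron job doesn't give you
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="nightly_model_retrain",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    evaluate = PythonOperator(task_id="evaluate_model", python_callable=evaluate_model)
    publish = PythonOperator(task_id="publish_model", python_callable=publish_model)

    # Publish only runs if training and evaluation both succeed upstream.
    extract >> train >> evaluate >> publish
```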
So the baseline is that the open source project that is Airflow that came out of Airbnb, over five years ago at this point, has grown exponentially in users and continues to grow. And so the folks that we sell to primarily are folks who are already committed to using Apache Airflow, need data orchestration in their organization, and just want to do it better, want to do it more efficiently, want to do it without managing that infrastructure. And so our baseline proposition is for those organizations. Now to Viraj's point, obviously I think our ambitions go beyond that, both in terms of the personas that we addressed and going beyond that data engineer, but really it's to start at the baseline, as we continue to grow our our company, it's really making sure that we're adding value to folks using Airflow and help them do so in a better way, in a larger way, in a more efficient way, and that's really the crux of who we sell to. And so to answer your question on, we get a lot of inbound because they're... >> You have a built in audience. (laughs) >> The world that use it. Those are the folks who we talk to and come to our website and chat with us and get value from our content. I mean, the power of the opensource community is really just so, so big, and I think that's also one of the things that makes this job fun. >> And you guys are in a great position. Viraj, you can comment a little, get your reaction. There's been a big successful business model to starting a company around these big projects for a lot of reasons. One is open source is continuing to be great, but there's also supply chain challenges in there. There's also we want to continue more innovation and more code and keeping it free and and flowing. And then there's the commercialization of productizing it, operationalizing it. This is a huge new dynamic, I mean, in the past 5 or so years, 10 years, it's been happening all on CNCF from other areas like Apache, Linux Foundation, they're all implementing this. This is a huge opportunity for entrepreneurs to do this. >> Yeah, yeah. Open source is always going to be core to what we do, because we wouldn't exist without the open source community around us. They are huge in numbers. Oftentimes they're nameless people who are working on making something better in a way that everybody benefits from it. But open source is really hard, especially if you're a company whose core competency is running a business, right? Maybe you're running an e-commerce business, or maybe you're running, I don't know, some sort of like, any sort of business, especially if you're a company running a business, you don't really want to spend your time figuring out how to run open source software. You just want to use it, you want to use the best of it, you want to use the community around it, you want to be able to google something and get answers for it, you want the benefits of open source. You don't have the time or the resources to invest in becoming an expert in open source, right? And I think that dynamic is really what's given companies like us an ability to kind of form businesses around that in the sense that we'll make it so people get the best of both worlds. You'll get this vast open ecosystem that you can build on top of, that you can benefit from, that you can learn from. But you won't have to spend your time doing undifferentiated heavy lifting. You can do things that are just specific to your business. >> It's always been great to see that business model evolve. 
We used to debate 10 years ago, can there be another Red Hat? And we said, not really the same, but there'll be a lot of little ones that'll grow up to be big soon. Great stuff. Final question, can you guys share the history of the company? The milestones of Astronomer's journey in data orchestration? >> Yeah, we could. So yeah, I mean, I think, so Viraj and I have obviously been at Astronomer along with our other founding team and leadership folks for over five years now. And it's been such an incredible journey of learning, of hiring really amazing people, solving, again, mission critical problems for so many types of organizations. We've had some funding that has allowed us to invest in the team that we have and in the software that we have, and that's been really phenomenal. And so that investment, I think, keeps us confident, even despite these sort of macroeconomic conditions that we're finding ourselves in. And so honestly, the milestones for us are focusing on our product, focusing on our customers over the next year, focusing on that market for us that we know can get value out of what we do, and making developers' lives better, and growing the open source community and making sure that everything that we're doing makes it easier for folks to get started, to contribute to the project and to feel a part of the community that we're cultivating here.
It's a good time to kind of get some good interest on it, but still grow. Congratulations on all the work you guys do. We appreciate you and the open source community does, and good luck with the venture, continue to be successful, and we'll see you at the Startup Showcase. >> Thank you. >> Yeah, thanks so much, John. Appreciate it. >> Okay, that's the CUBE Conversation featuring astronomer.io, that's the website. Astronomer is doing well. Multiple rounds of funding, over 200 million in funding. Open source continues to lead the way in innovation. Great business model, good solution for the next gen cloud scale data operations, data stacks that are emerging. I'm John Furrier, your host, thanks for watching. (soft upbeat music)
Mobile World Congress Preview 2023 | Mobile World Congress 2023
(electronic music) (graphics whooshing) (graphics tinkling) >> Telecommunications is well north of a trillion-dollar business globally, that provides critical services on which virtually everyone on the planet relies. Dramatic changes are occurring in the sector, and one of the most important dimensions of this change is the underlying infrastructure that powers global telecommunications networks. Telcos have been thawing out, if you will, they're frozen infrastructure, modernizing. They're opening up, they're disaggregating their infrastructure, separating, for example, the control plane from the data plane, and adopting open standards. Telco infrastructure is becoming software-defined. And leading telcos are adopting cloud native microservices to help make developers more productive, so they can respond more quickly to market changes. They're embracing technology consumption models, and selectively leveraging the cloud where it makes sense. And these changes are being driven by market forces, the root of which stem from customer demand. So from a customer's perspective, they want services, and they want them fast. Meaning, not only at high speeds, but also they want them now. Customers want the latest, the greatest, and they want these services to be reliable and stable with high quality of service levels. And they want them to be highly cost-effective. Hello and welcome to this preview of Mobile World Congress 2023. My name is Dave Vellante, and at this year's event, theCUBE has a major presence at the show made possible by Dell Technologies, and with me to unpack the trends in telco, and look ahead to MWC23 are Dennis Hoffman, he's the Senior Vice President and General Manager of Dell's telecom business, and Aaron Chaisson, who is the Vice President of Telecom and Edge Solutions Marketing at Dell Technologies, gentlemen, welcome, thanks so much for spending some time with me. >> Thank you, Dave. >> Thanks, glad to be here. >> So, Dennis, let's start with you. Telcos in recent history have been slow to deliver and to monetize new services, and a large part because their purpose-built infrastructure could been somewhat of a barrier to responding to all these market forces. In many ways, this is what makes telecoms, really this market so exciting. So from your perspective, where is the action in this space? >> Yeah, the action Dave is kind of all over the place, partly because it's an ecosystem play. I think it's been, as you point out, the disaggregation trend has been going on for a while. The opportunity's been clear, but it has taken a few years to get all of the vendors, and all of the components that make up a solution, as well as the operators themselves, to a point where we can start putting this stuff together, and actually achieving some of the promise. >> So Aaron, for those who might not be as familiar with Dell's a activities in this area, here we are just ahead of Mobile World Congress, it's the largest event for telecoms, what should people know about Dell? And what's the key message to this industry? >> Sure, yeah, I think everybody knows that there's a lot of innovation that's been happening in the industry of late. One of the major trends that we're seeing is that shift from more of a vertically-integrated technology stack, to more of a disaggregated set of solutions, and that trend has actually created a ton of innovation that's happening across the industry, or along technology vendors and providers, the telecoms themselves. 
And so, one of the things that Dell's really looking to do is, as Dennis talked about, is build out a really strong ecosystem of partners and vendors that we're working closely together to be able to collaborate on new technologies, new capabilities that are solving challenges that the networks are seeing today. Be able to create new solutions built on those in order to be able to bring new value to the industry. And then finally, we want to help both partners, as well as our CSP providers activate those changes, so that they can bring new solutions to market, to be able to serve their customers. And so, the key areas that we're really focusing on with our customers is, technologies to help modernize the network, to be able to capitalize on the value of open architectures, and bring price performance to what they're expecting, and availability that they're expecting today. And then also, partner with the lines of business to be able to take these new capabilities, produce new solutions, and then deliver new value to their customers. >> Great, thank you, Aaron. So Dennis, you and I, known you for a number of years. I've watched you, you're are a trend spotter. You're a strategic thinker. I love now the fact that you're running a business that you had to go out and analyze, and now you got to make it happen. So, how would you describe Dell's strategy in this market? >> Well, it's really two things. And I appreciate the comment, I'm not sure how much of a trend spotter I am, but I certainly enjoy, and I think I'm fascinated by what's going on in this industry right now. Our two main thrusts, Dave, are first round, trying to catalyze that ecosystem, be a force for pulling together a group of folks, vendors that have been flying in fairly loose formation for a couple of years, to deliver the kinds of solutions that move the needle forward, and produce the outcomes that our network operator customers can actually buy and consume, and deploy, and have them be supported. The other thing is, there's a couple of very key technology areas that need to be advanced here. This ends up being a much anticipated year in telecom. Because of the delivery of some open infrastructure solutions that have being developed for years. With the Intel Sapphire Rapids program coming to market, we've of course got some purpose-built solutions on top of that for telecommunications networks. Some expanded partnerships in the area of multi-cloud infrastructure. And so, I would say the second main thrust is, we've got to bring some intellectual property to the party. It's not just about pulling the ecosystem together. But those two things together really form the twin thrusts of our strategy. >> Okay, so as you point out, you obviously not going to go alone in this market, it's way too broad, there's so many routes to market, partnerships, obviously very, very important. So, can you share a little bit more about the ecosystem and partners, maybe give some examples of some of the key partners that you'd be highlighting or working with, maybe at Mobile World Congress, or other activities this year? >> Yeah, absolutely. As Aaron touched on, I'm a visual thinker. The way I think about this thing is a very, very vertical architecture is tipping sideways. It's becoming horizontal. And all of the layers of that horizontal architecture are really where the partnerships are at. So, let's start at the bottom, silicon. The silicon ecosystem is very much focused on this market. 
And producing very specific products to enable open, high performance telecom networks. That's both in the form of host processors, as well as accelerators. One layer up, of course, is the stuff that we're known for, subsystems, compute storage, the hardware infrastructure that forms the foundation for telco clouds. A layer above that, all of the cloud software layer, the virtualization and containerization software, and all of the usual suspects there, all of whom are very good partners of ours, and we're looking to expand that pretty broadly this year. And then at the top of the layer cake, all of the network functions, all of the VNF's and CNF's that were once kind of the top of proprietary stacks, that are now opening up and being delivered, as well-formed containers that can run on these clouds. So, we're focusing on all of those, if you will, product partnerships, and there is a services wrapper around all of it. The systems integration necessary to make these systems part of a carrier's network, which of course, has been running for a long time, and needs to be integrated with in a very specific way. And so, all of that, together kind of forms the ecosystem, all of those are partners, and we're really excited about being at the heart of it. >> Interesting, it's not like we've never seen this movie before, which is, it's sort of repeating itself in telco. Aaron, you heard my little intro up front about the need to modernize infrastructure, I wonder if I could touch on another major trend, which we're seeing is the cloud, and I'm talkin' about not only public, but private and hybrid cloud. The public cloud is an opportunity, but it's also a threat for telcos. Telcom providers are lookin' to the public cloud for specific use cases, you think about like bursting for an iPhone launch or whatever. But at the same time, these cloud vendors, they're sort of competing with telcos. They're providing local zones, for example, sometimes trying to do an end run on the telco connectivity services, so telecom companies, they have to find the right balance between what they own and what they rent. And I wonder if you could add some color as to what you see in the market and what Dell specifically is doing to support these trends. >> Yeah, and I think the most important thing is what we're seeing, as you said, is these aren't things that we haven't seen before. And I think that telecom is really going through their own set of cloud transformations, and so, one of the hot topics in the industry now is, what is telco cloud? And what does that look like going forward? And it's going to be, as you said, a combination of services that they offer, services that they leverage. But at the end of the day, it's going to help them modernize how they deliver telecommunication services to their customers, and then provide value added services on top of that. From a Dell perspective, we're really providing the technologies to provide the underpinnings to lay a foundation on which that network can be built, whether that's best of breed servers that are built in design for the telecom environments. Recently, we announced our Infer block program, in partnering with virtualization providers, to be able to provide engineered systems that dramatically simplify how our customers can deploy, manage, and lifecycle manage throughout day two operations, an entire cloud environment. 
And whether they're using Red Hat, whether they're using Wind River, or VMware, or other virtualization layers, they can deploy the right virtualization layer at the right part of their network to support the applications they're looking to drive. And Dell is looking to solve how they simplify and manage all of that, both from a hardware, as well as on management software perspective. So, this is really what Dell's doing to, again, partner with the broader technology community, to help make that telco cloud a reality. >> Aaron, let's stay here for a second, I'm interested in some of the use cases that you're going after with customers. You've got Edge infrastructure, remote work, 5G, where's security fit, what are the focus areas for Dell, and can we double click on that a little bit? >> Yeah, I mean, I think there's two main areas of telecommunication industry that we're talking to. One, we've really been talking about the sort of the network buyer, how do they modernize the core, the network Edge, the RAN capabilities to deliver traditional telecommunication services, and modernize that as they move into 5G and beyond. I think the other side of the business is, telecoms are really looking from a line of business perspective to figure out how do they monetize that network, and be able to deliver value added services to their enterprise customers on top of these new networks. So, you were just touching on a couple of things that are really critical. In the enterprise space, AI and IoT is driving a tremendous amount of innovation out there, and there's a need for being able to support and manage Edge compute at scale, be able to provide connectivity, like private mobility, and 4G and 5G, being able to support things like mobile workforces and client capabilities, to be able to access these devices that are around all of these Edge environments of the enterprises. And telecoms are seeing as that, as an opportunity for them to not only provide connectivity, but how do they extend their cloud out into these enterprise environments with compute, with connectivity, with client and connectivity resources, and even also provide protection for those environments as well. So, these are areas that Dell is historically very strong at. Being able to provide compute, be able to provide connectivity, and being able to provide data protection and client services, we are looking to work closely with lines of businesses to be able to develop solutions that they can bring to market in combination with us, to be able to serve their end user customers and their enterprises. So, those are really the two key areas, not only network buyer, but being able to enable the lines of business to go and capitalize on the services they're developing for their customers. >> I think that line of business aspect is key, I mean, the telcos have had to sit back and provide the plumbing, cost per bit goes down, data consumption going through the roof, all the over at the top guys have had the field day with the data, and the customer relationships, and now it's almost like the revenge (chuckles) of the telcos. Dennis, I wonder if we could talk about the future. What can we expect in the years ahead from Dell, if you break out the binoculars a little bit. >> Yeah, I think you hit it earlier. We've seen the movie before. This has happened in the IT data center. We went from proprietary vertical solutions to horizontal open systems. We went from client server to software-defined open hardware cloud native. 
And the trend is likely to be exactly that, in the telecom industry because that's what the operators want. They're not naive to what's happened in the IT data center, they all run very large data centers. And they're trying to get some of the scale economies. Some of the agility, the cost of ownership benefits for the reasons Aaron just discussed. It's clear as you point out, this industry's been really defined by the inability to stop investing, and the difficulty to monetize that investment. And I think now, everybody's looking at this 5G, and frankly, 5G plus 6G, and beyond, as the opportunity to really go get a chunk of that revenue, and Enterprise Edge is the target. >> And 5G is touching so many industries, and that kind of brings me, Aaron into Mobile World Congress. I mean, you look at the floor layout, it's amazing. You got Industry 4.0, you've got our traditional industry and telco colliding. There's public policy. So, give us a teaser to Mobile World Congress 23, what's on deck at the show from Dell? >> Yeah, we're really excited about Mobile World Congress. This, as you know, is a massive event for the industry every year. And it's really the event that the whole industry uses to kick off this coming year. So, we're going to be using this obviously to talk to our customers and our partners about what Dell's looking to do, and what we're innovating on right now, and what we're looking to partner with them around. In the front of the house, we're going to be doin', we're going to be highlighting 13 different solutions and demonstrations to be able to show our customers what we're doing today, and show them the use cases, and put into action, so they get to actually look and feel, and touch, and experience what it is that we're working around. Obviously, meetings are important, everybody knows Mobile World Congress is the place to get those meetings and kickoff for the year. So, we're going to have, we're lookin' at several hundred meetings, hundreds of meetings that we're going to be lookin' to have across the industry with our customers and partners in the broader community. And of course, we've also got technology that's going to be in a variety of different partner spaces as well. So, you can come and see us in hall three, but we're also going to have technologies, kind of spread all over the floor. And of course, there's always theCUBE. You're going to be able to see us live all four days, all day, every day. You're going to be hearing our executives, our partners, our customers, talk about what Dell is doing to innovate in the industry, and how we're looking to leverage the broader, open ecosystem to be able to transform the network, and what we're lookin' to do. So, in that space, we're going to be focusing on what we're doing from an ecosystem perspective, our infrastructure focus. We'll be talking about what we're doing to support telco cloud transformation. And then finally, as we talked about earlier, how are we helping the lines of business within our telecoms monetize the opportunity? So, these are all different things we're really excited to be focusing on, and look forward to the event next month. >> Yeah, it's going to be awesome in Barcelona at the FITA, as you say, Dell's big presence in hall three, Orange is in there, Deutsche Telecom, Intel's in hall three. VMware's there, Nokia, Vodafone, you got some great things to see there. Check that out, and of course, theCUBE, we are super excited to be collaborating with you, we got a great setup. 
We're in the walkway right between halls four and five, right across from the government of Catalonia, who are the host partners for the event, so there's going to be a ton of action there. Guys, can't wait to see you there, really appreciate your time today. >> Great, thanks. >> Alright, Mobile World Congress, theCUBE's coverage starts on February 27th right after the keynotes. So, first thing in the morning, east coast time, we'll be broadcasting, as Aaron said, all week, Monday through Thursday on the show floor, check that out at thecube.net. siliconangle.com has all the written coverage, and go to dell.com, see what's happenin' there, for all the action from the event. Don't miss us, this is Dave Vellante, we'll see you there. (electronic music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Dave Vellante | PERSON | 0.99+ |
Dennis | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Aaron | PERSON | 0.99+ |
Vodafone | ORGANIZATION | 0.99+ |
Aaron Chaisson | PERSON | 0.99+ |
Dennis Hoffman | PERSON | 0.99+ |
February 27th | DATE | 0.99+ |
Dell | ORGANIZATION | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Orange | ORGANIZATION | 0.99+ |
Barcelona | LOCATION | 0.99+ |
Nokia | ORGANIZATION | 0.99+ |
Mobile World Congress | EVENT | 0.99+ |
hundreds | QUANTITY | 0.99+ |
Deutsche Telecom | ORGANIZATION | 0.99+ |
Monday | DATE | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
Dell Technologies | ORGANIZATION | 0.99+ |
first round | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
Thursday | DATE | 0.99+ |
next month | DATE | 0.99+ |
Telco | ORGANIZATION | 0.98+ |
13 different solutions | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
Telcos | ORGANIZATION | 0.98+ |
thecube.net. | OTHER | 0.98+ |
both | QUANTITY | 0.98+ |
Mobile World Congress 23 | EVENT | 0.98+ |
this year | DATE | 0.98+ |
One | QUANTITY | 0.98+ |
One layer | QUANTITY | 0.98+ |
VMware | ORGANIZATION | 0.98+ |
both partners | QUANTITY | 0.98+ |
Mobile World Congress 2023 | EVENT | 0.97+ |
one | QUANTITY | 0.97+ |
MWC23 | EVENT | 0.97+ |
twin thrusts | QUANTITY | 0.97+ |
two key areas | QUANTITY | 0.96+ |
telco | ORGANIZATION | 0.95+ |
two main thrusts | QUANTITY | 0.94+ |
five | QUANTITY | 0.93+ |
second main thrust | QUANTITY | 0.93+ |
2023 | DATE | 0.93+ |
Edge | TITLE | 0.92+ |
theCUBE | ORGANIZATION | 0.92+ |
a trillion-dollar | QUANTITY | 0.91+ |
Telcom | ORGANIZATION | 0.91+ |
first | QUANTITY | 0.91+ |
hall three | QUANTITY | 0.9+ |
dell.com | ORGANIZATION | 0.89+ |
Brian Stevens, Neural Magic | Cube Conversation
>> John: Hello and welcome to this cube conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We got a great conversation on making machine learning easier and more affordable in an era where everybody wants more machine learning and AI. We're featuring Neural Magic, whose CEO, Brian Stevens, is also a Cube alumni. Brian, great to see you. Thanks for coming on this cube conversation. Talk about machine learning. >> Brian: Hey John, happy to be here again. >> John: What a buzz that's going on right now. Machine learning, one of the hottest topics, AI front and center, kind of going mainstream. We're seeing the success of the kind of NextGen capabilities in the enterprise and in apps. It's a really exciting time. So perfect timing. Great, great to have this conversation. Let's start with taking a minute to explain what you guys are doing over there at Neural Magic. I know there's some history there, neural networks, MIT. But the convergence of what's going on, this big wave hitting, it's an exciting time for you guys. Take a minute to explain the company and your mission. >> Brian: Sure, sure, sure. So, as you said, the company's Neural Magic and spun out of MIT four plus years ago, along with some people and some intellectual property. And you summarized it better than I can 'cause you said, we're just trying to make, you know, AI that much easier. But another level of specificity around it is this: you know, in the world you have a lot of data scientists really focusing on making AI work for whatever their use case is. And then the next phase of that, they're looking at optimizing the models that they built. And then it's not good enough just to work on models. You got to put 'em into production. So, what we do is we make it easier to optimize the models that have been developed and trained, and then try to make it super simple when it comes time to deploying those in production and managing them.
And I think that's what I equate to where AI has gotten to with what you were talking about the foundational models that didn't really exist years ago. So you really were like putting the layers of your models together in the formulas and it was a lot of heavy lifting. And so there was so much time spent on development. With far too few success cases, you know, to get into production to solve like a business stereo technical need. But as these, what's happening is as these models are becoming foundational. It's meaning people don't have to start from scratch. They're actually able to, you know, the avant-garde now is start with existing model that almost does what you want, but then applying your data set to it. So it's, you know, it's really the industry moving forward. And then we, you know, and, and the best thing about it is open source plays a new dimension, but this time, you know, in the, in the realm of AI. And so to us though, like, you know, I've been like, I spent a career focusing on, I think on like the, not just the technical side, but the consumption of the technology and how it's still way too hard for somebody to actually like, operationalize technology that all those vendors throw at them. So I've always been like empathetic the user around like, you know what their job is once you give them great technology. And so it's still too difficult even with the foundational models because what happens is there's really this impedance mismatch between the development of the model and then where, where the model has to live and run and be deployed and the life cycle of the model, if you will. And so what we've done in our research is we've developed techniques to introduce what's known as sparsity into a machine learning model. It's already been developed and trained. And what that sparsity does is that unlocks by making that model so much smaller. So in many cases we can make a model 90 to 95% smaller, even smaller than that in research. So, and, and so by doing that, we do that in a way that preserves all the accuracy out of the foundational model as you talked about. So now all of a sudden you get this much smaller model just as accurate. And then the even more exciting part about it is we developed a software-based engine called Deep Source. And what that, what the Inference Runtime does is takes that now sparsified model and it runs it, but because you sparsified it, it only needs a fraction of the compute that it, that it would've needed otherwise. So what we've done is make these models much faster, much smaller, and then by pairing that with an inference runtime, you now can actually deploy that model anywhere you want on commodity hardware, right? So X 86 in the cloud, X 86 in the data center arm at the edge, it's like this massive unlock that happens because you get the, the state-of-the-art models, but you get 'em, you know, on the IT assets and the commodity infrastructure. That is where all the applications are running today. >> John: I want to get into the inference piece and the deep sparse you mentioned, but I first have to ask, you mentioned open source, Dave and I with some fellow cube alumnis. We're having a chat about, you know, the iPhone and Android moment where you got proprietary versus open source. You got a similar thing happening with some of these machine learning modules where there's a lot of proprietary things happening and there's open source movement is growing. So is there a balance there? Are they all trying to do the same thing? 
Is it more like a chip, you know, silicons involved, all kinds of things going on that are really fascinating from a science. What's your, what's your reaction to that? >> Brian: I think it's like anything that, you know, the way we talk about AI you think had been around for decades, but the reality is it's been some of the deep learning models. When we first, when we first started taking models that the brain team was working on at Google and billing APIs around them on Google Cloud where the first cloud to even have AI services was 2015, 2016. So when you think about it, it's really been what, 6 years since like this thing is even getting lift off. So I think with that, everybody's throwing everything at it. You know, there's tons of funded hardware thrown at specialty for training or inference new companies. There's legacy companies that are getting into like AI now and whether it's a, you know, a CPU company that's now building specialized ASEX for training. There's new tech stacks proprietary software and there's a ton of asset service. So it really is, you know, what's gone from nascent 8 years ago is the wild, wild west out there. So there's a, there's a little bit of everything right now and I think that makes sense because at the early part of any industry it really becomes really specialized. And that's the, you know, showing my age of like, you know, the early pilot of the two thousands, you know, red Hat people weren't running X 86 in enterprise back then and they thought it was a toy and they certainly weren't running open source, but you really, and it made sense that they weren't because it didn't deliver what they needed to at that time. So they needed specialty stacks, they needed expensive, they needed expensive hardware that did what an Oracle database needed to do. They needed proprietary software. But what happens is that commoditizes through both hardware and through open source and the same thing's really just starting with with AI. >> John: Yeah. And I think that's a great point before we to call that out because in any industry timing's everything, right? I mean I remember back in the 80s, late 80s and 90s, AI, you know, stuff was going on and it just wasn't, there wasn't enough horsepower, there wasn't enough tech. >> Brian: Yep. >> John: You mentioned some of the processing. So AI is this industry that has all these experts who have been itch scratching that itch for decades. And now with cloud and custom silicon. The tech fundamental at the lower end of the stack, if you will, on the performance side is significantly more performant. It's there you got more capabilities. >> Brian: Yeah. >> John: Now you're kicking into more software, faster software. So it just seems like we're at a tipping point where finally it's here, like that AI moment or machine learning and now data is, is involved. So this is where organizations I see really jumping in with the CEO mandate. Hey team, make ML work for us. Go figure it out. It's got to be an advantage for us. >> Brian: Yeah. >> John: So now they go, okay boss, we will. So what, what do they do? What's the steps does an enterprise take to get machine learning into their organizations? Cause you know, it's coming down from the boards, you know, how does this work for rob? >> Brian: Yeah. Like the, you know, the, what we're seeing is it's like anything, like it's, whether that was source adoption or whether that was cloud adoption, it always starts usually with one person. 
And increasingly it is the CEO, which realizes they're getting further behind the competition because they're not leaning in, you know, faster. But typically it really comes down to like a really strong practitioner that's inside the organization, right? And, that realizes that the number one goal isn't doing more and just training more models and and necessarily being proprietary about it. It's really around understanding the art of the possible. Something that's grounded in the art of the possible, what, what deep learning can do today and what business outcomes you can deliver, you know, if you can employ. And then there's well proven paths through that. It's just that because of where it's been, it's not that industrialized today. It's very much, you know, you see ML project by ML project is very snowflakey, right? And that was kind of the early days of open source as well. And so, we're just starting to get to the point where it's getting easier, it's getting more industrialized, there's less steps, there's less burdensome on developers, there's less burdensome on, on the deployment side. And we're trying to bring that, that whole last mile by saying, you know what? Deploying deep learning and AI models should be as easy as the as to deploy your application, right? You shouldn't have to take an extra step to deploy an AI model. It shouldn't have to require a new hardware, it shouldn't require a new process, a new DevOps model. It should be as simple as what you're already doing. >> John: What is the best practice for companies to effectively bring an acceptable level of machine learning and performance into their organizations? >> Brian: Yeah, I think like the, the number one start is like what you hinted at before is they, they have to know the use case. They have to, in most cases, you're going to find across every industry you know, that that problem's been tackled by some company, right? And then you have to have the best practice around fine-tuning the models already exist. So fine tuning that existing model. That foundational model on your unique dataset. You, you know, if you are in medical instruments, it's not good enough to identify that it's a medical instrument in the picture. You got to know what type of medical instrument. So there's always a fine tuning step. And so we've created open source tools that make it easy for you to do two things at once. You can fine tune that existing foundational model, whether that's in the language space or whether that's in the vision space. You can fine tune that on your dataset. And at the same time you get an optimized model that comes out the other end. So you get kind of both things. So you, you no longer have to worry about you're, we're freeing you from worrying about the complexity of that transfer learning, if you will. And we're freeing you from worrying about, well where am I going to deploy the model? Where does it need to be? Does it need to be on a device, an edge, a data center, a cloud edge? What kind of hardware is it? Is there enough hardware there? We're liberating you from all of that. Because what you want, what you can count on is there'll always be commodity capability, commodity CPUs where you want to deploy in abundance cause that's where your application is. And so all of a sudden we're just freeing you of that, of that whole step. >> John: Okay. Let's get into deep sparse because you mentioned that earlier. 
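Before the conversation turns to DeepSparse itself, here is a rough sketch of the fine-tuning step Brian just described: start from a pre-trained model and retrain only a small task-specific head on your own labeled data. This is generic PyTorch transfer learning, not Neural Magic's SparseML recipes; the dataset path, class count, and hyperparameters are placeholders.

```python
# Generic transfer-learning sketch (illustrative): fine-tune a pre-trained backbone
# on a custom dataset. Dataset path, class count, and hyperparameters are placeholders.
import torch
from torch import nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet50(weights="IMAGENET1K_V1")
for p in model.parameters():                     # freeze the pre-trained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 5)    # new head for 5 example classes
model.to(device)
model.train()

train_ds = datasets.ImageFolder(
    "data/train",                                # hypothetical dataset location
    transform=transforms.Compose([
        transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()]),
)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(3):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
```

Neural Magic's open source tooling layers sparsification recipes on top of a loop like this, so the pruning happens during the fine-tune rather than as a separate step; the recipe API itself is best taken from their documentation.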
What inspired the creation of deep sparse and how does it differ from any other solutions in the market that are out there? >> Brian: Sure. So, where is it unique? It starts by two things. One is what the industry's pretty good at from the optimization side is they're good at this thing called quantization, which turns, you know, big numbers into small numbers, lower precision. So a 32-bit representation of an AI weight into an 8-bit one. And they're good at cutting out layers, which also takes away accuracy. What we've figured out is to take those industry techniques that are best practice, but we combined it with unstructured sparsity. So by reducing that model by 90 to 95% in size, that's great because it's made it smaller. But we've paired that with the DeepSparse engine, which, when you deploy it, looks at that model and says, because it's so much smaller, I no longer have to run the part of the model that's been essentially sparsified. So what that's done is, it's meant that you no longer need a supercomputer to run models because there's not nearly as much math and processing as there was before the model was optimized. So now what happens is, every CPU platform out there has an enormous amount of compute because we've sparsified the rest of it away. So you can pick your laptop and you have enough compute to run state-of-the-art models. The second thing is you need a software engine to do that, 'cause it ignores the parts of the model it doesn't need to run, which is what specialized hardware can't do. The second part is it's then turned into a memory efficiency problem. So it's really around just getting the models loaded into the cache of the computer and keeping it there, never having to go back out to memory. So our techniques are both: we reduce the model size, and then we only run the part of the model that matters, and then we keep it all in cache. And what that does is it gets us to these low, low latencies, and we're able to increase, you know, the CPU processing by an order of magnitude.
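To make the two techniques Brian contrasts concrete, here is a small sketch that applies one-shot magnitude pruning and a bare-bones symmetric int8 quantization to a layer's weights, then tallies the size win. It is an illustration of the generic ideas, not Neural Magic's pipeline; real toolchains prune gradually during training and calibrate the quantization so accuracy is preserved, and the layer size and 90% target here are arbitrary.

```python
# Illustrative only: one-shot unstructured pruning plus symmetric int8 quantization
# of a single weight matrix, with a rough accounting of the size reduction.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((4096, 4096)).astype(np.float32)  # stand-in fp32 layer

# Unstructured sparsity: zero out the 90% of weights with the smallest magnitude.
threshold = np.quantile(np.abs(weights), 0.90)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# Quantization: map the remaining fp32 values onto int8 with a single scale factor.
scale = np.abs(pruned).max() / 127.0
quantized = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)

dense_fp32_bytes = weights.nbytes            # 4 bytes per weight
nonzero = int(np.count_nonzero(quantized))
sparse_int8_bytes = nonzero * 1              # int8 payload, ignoring index overhead

print(f"dense fp32 : {dense_fp32_bytes / 1e6:.1f} MB")
print(f"sparse int8: {sparse_int8_bytes / 1e6:.1f} MB (payload only)")
print(f"~{dense_fp32_bytes / sparse_int8_bytes:.0f}x smaller before index overhead")
```

That order-of-magnitude shrink is what lets the working set live in CPU cache instead of main memory, which is the second half of Brian's argument.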
Do we make them faster? A yes. But I think the most amazing power is that we've turned AI into a docker based microservice. And so like who in the industry wants to deploy their apps the old way on a os without virtualization, without docker, without Kubernetes, without microservices, without service mesh without serverless. You want all those tools for your apps by converting AI models. So they can be run inside a docker container with no apologies around latency and performance cause it's faster. You get the best of that whole world that you just talked about, which is, you know, what we're calling, you know, software delivered AI. So now the AI lives in the same world. Organizations that have gone through that digital cloud transformation with their app infrastructure. AI fits into that world. >> John: And this is where the abstraction concepts matter. When you have these inflection points, the convergence of compute data, machine learning that powers AI, it really becomes a developer opportunity. Because now applications and businesses, when they actually go through the digital transformation, their businesses are completely transformed. There is no IT. Developers are the application. They are the company, right? So AI will be part of whatever business or app will be out there. So there is a application developer angle here. Brian, can you explain >> Brian: Oh completely. >> John: how they're going to use this? Because you mentioned docker container microservice, I mean this really is an insane flipping of the script for developers. >> Brian: Yeah. >> John: So what's that look like? >> Brian: Well speak, it's because like AI's kind of, I mean, again, like it's come so fast. So you figure there's my app team and here's my AI team, right? And they're in different places and the AI team is dragging in specialized infrastructure in support of that as well. And that's not how app developers think. Like they've ran on fungible infrastructure that subtracted and virtualized forever, right? And so what we've done is we've, in addition to fitting into that world that they, that they like, we've also made it simple for them for they don't have to be a machine learning engineer to be able to experiment with these foundational models and transfer learning 'em. We've done that. So they can do that in a couple of commands and it has a simple API that they can either link to their application directly as a library to make difference calls or they can stand it up as a standalone, you know, scale up, scale out inference server. They get two choices. But it really fits into that, you know, you know that world that the modern developer, whether they're just using Python or C or otherwise, we made it just simple. So as opposed to like Go learn something else, they kind of don't have to. So in a way though, it's made it. It's almost made it hard because people expect when we talk to 'em for the first time to be the old way. Like, how do you look like a piece of hardware? Are you compatible with my existing hardware that runs ML? Like, no, we're, we're not. Because you don't need that stack anymore. All you need is a library called to make your prediction and that's it. That's it. >> John: Well, I mean, we were joking on Twitter the other day with someone saying, is AI a pet or a cattle? Right? Because they love their, their AI bots right now. So, so I'd say pet there. But you look at a lot of, there's going to be a lot of AI. 
So on a more serious note, you mentioned microservices, will deep sparse have an API for developers? And how does that look like? What do I do? >> Brian: Yeah. >> John: Tell me what my, as a developer, what's the roadmap look like? >> Brian: Yeah, it really can go in both modes. It can go in a standalone server mode where it handles, you know, a REST API, and it can scale out with K8s as the workload comes up and scale back, and like try to make hardware do that. Hardware may scale back, but it's just sitting there dormant, you know, so with this, it scales the same way your application needs to. And then for a developer, they basically just pip install deepsparse, you know, one command to do the install, and then they do two calls, really. The first call is a library call that the app makes to create the model. And the model's really already trained, but it's called a model create call. And the second command they do is they make a call to do a prediction. And it's as simple as that. So AI's as simple as using any other library that the developers are already using, which sounds hard to fathom because it is just so simplified. >> John: Software delivered AI. Okay, that's a cool thing. I believe in it personally. I think that's the way to go. I think there's going to be plenty of hardware options if you look at the advances of cloud players that got more silicon coming out. Yeah. More GPU. I mean, there's more instances, everything's out there right now. So the question is how does that evolve in your mind? Because that seems to be key. You have open source projects emerging. What path does this take? Is there a parallel mental model that you see, Brian, that is similar? You mentioned open source earlier. Is it more like a VMware virtualization thing or is it more of a cloud thing? Is it going to evolve in a trajectory that looks similar to what we might've seen in the past? >> Brian: Yeah, you know, when I got involved with the company, I thought about it and was reasoning about it, like we all do when you want to join something full-time. I thought about it and said, where will the industry eventually get to? Right? To fully realize the value of deep learning and what's plausible as it evolves. And to me, I know it's the old adage of, you know, software eats hardware, cloud eats software. But it truly was like, you know, we can solve these problems in software. Like there's nothing special that's happening at the hardware layer in processing AI. The reality is that it's just early in the industry. So the view that we had was, this is eventually the best place where the industry will be, is the liberation of being able to run AI anywhere. Like you're really not democratizing, you democratize the model. But if you can't run the model anywhere you want, because these models are getting bigger and bigger with these large language models, then you're kind of not democratizing if you've got to go and buy a cluster to run this thing on. So the democratization comes by if all of a sudden that model can be consumed anywhere, on demand, without planning, without provisioning, wherever infrastructure is. And so I think that's, with or without Neural Magic, where the industry will go and get to. I think we're the leaders in getting it there.
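For the developer flow Brian outlines (one install, a model-create call, a predict call), the sketch below is roughly what it looks like with the DeepSparse Python package; the task name, model identifier, and server invocation shown are assumptions to check against the current DeepSparse documentation.

```python
# Sketch of the two-call developer flow described above (names are illustrative;
# consult the DeepSparse docs for current task names and model identifiers).
#   pip install deepsparse
from deepsparse import Pipeline

# Call 1: create the model/pipeline from an already-trained, sparsified model.
pipeline = Pipeline.create(
    task="sentiment-analysis",                    # assumed task name
    model_path="zoo:nlp/sentiment_analysis/...",  # placeholder SparseZoo stub
)

# Call 2: make a prediction.
print(pipeline("The inference ran entirely on a commodity CPU."))

# The same model can instead be stood up as a scale-out HTTP inference server,
# e.g. something along the lines of:
#   deepsparse.server --task sentiment-analysis --model_path <path>
```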
It's right because we're more advanced on these techniques. >> John: Yeah. And your background too. You've seen OpenStack, pre-cloud, you saw open source grow and still exponentially growing. And so you have the same similar dynamic with machine learning models growing. And they're also segmenting into almost a, an ML stack or foundational model as we talk about. So you're starting to see the formation of tooling inference. So a lot of components coming. It's almost a stack, it's almost a, it literally is like an operating system problem space, you know? How do you run things, how do you link things? How do you bring things together? Is that what's going on here? Is this like a data modeling operating environment kind of red hat type thing going on? Like. >> Brian: Yeah. Yeah. Like I think there is, you know, I thought about that too. And I think there is the role of like distribution, because the industrialization not happening fast enough of this. Like, can I go back to like every customers, every, every user does it in their own kind of way. Like it's not, everyone's a little bit of a snowflake. And I think that's okay. There's definitely plenty of companies that want to come in and say, well, this is the way it's going to be and we industrialize it as long as you do it our way. The reality is technology doesn't get industrialized by one company just saying, do it our way. And so that's why like we've taken the approach through open source by saying like, Hey, you haven't really industrialized it if you said. We made it simple, but you always got to run AI here. Yeah, right. You only like really industrialize it if you break it down into components that are simple to use and they work integrated in the stack the way you want them to. And so to me, that first principles was getting thing into microservices and dockers that could be run on VMware, OpenShare on the cloud in the edge. And so that's the, that's the real part that we're happening with. The other part, like I do agree, like I think it's going to quickly move into less about the model. Less about the training of the model and the transfer learning, you know, the data set of the model. We're taking away the complexity of optimization. Giving liberating deployment to be anywhere. And I think the last mile, John is going to be around the ML ops around that. Because it's easy to think of like soft now that it's just a software problem, we've turned it into a software problem. So it's easy to think of software as like kind of a point release, but that's not the reality, right? It's a life cycle. And it's, and so I think ML very much brings in the what is the lifecycle of that deployment? And, you know, you get into more interesting conversations, to be honest than like, once you've deployed in a docking container is around like model drift and accuracy and the dataset changes and the user changes is how do you become from an ML perspective of where of that sending signal back retraining. And, and that's where I think a lot of the, in more of the innovation's going to start to move there. >> John: Yeah. And software also, the software problem, the software opportunity as well is developer focused. And if you look at the cloud native landscape now, similar stacks developing a lot of components. A lot of things to, to stitch together a lot of things that are automating under the hood. A lot of developer productivity conversations. I think this is going to go down that same road. 
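On the MLOps point about drift, one simple way to generate that "send a signal back and retrain" trigger is to compare the distribution of live prediction scores against a reference captured at deployment time. The sketch below uses a population-stability-index check; the threshold and the synthetic score distributions are arbitrary stand-ins.

```python
# Toy drift check (illustrative): compare today's prediction-confidence histogram
# against a reference captured at deployment time. Threshold is a rule of thumb.
import numpy as np

def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two samples of scores in [0, 1]."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + 1e-6
    live_pct = np.histogram(live, bins=edges)[0] / len(live) + 1e-6
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

reference_scores = np.random.beta(8, 2, size=5_000)  # scores captured at deploy time
todays_scores = np.random.beta(5, 3, size=5_000)     # scores observed in production

score = psi(reference_scores, todays_scores)
if score > 0.2:                                      # common rule-of-thumb cutoff
    print(f"PSI={score:.2f}: drift suspected, flag the model for retraining")
else:
    print(f"PSI={score:.2f}: distribution looks stable")
```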
I want to get your thoughts because developers will set the pace. And this is something that's clear in this next wave developer productivity. They're the defacto standards bodies. They will decide what microservices check, API check. Now, skill gap is going to be a problem because it's relatively new. So model sprawl, model sizes, proprietary versus open. There has to be a way to kind of crunch that down into a, like a DevOps, like just make it, get the developer out of the, the muck. So what's your view? Are we early days like that? Or what's the young kid in college studying CS or whatever degree who comes into this with, with both feet? What are they doing? >> Brian: I'll probably say like the, the non-popular answer to that. A little bit is it's happening so fast that it's going to get kind of boring fast. Meaning like, yeah, you could go to school and go to MIT, right? Sorry. Like, and you could get a hold through end like becoming a model architect, like inventing the next model, right? And the layers and combining 'em and et cetera, et cetera. And then what operators and, and building a model that's bigger than the last one and trains faster, right? And there will be those people, right? That actually, like they're building the engines the same way. You know, I grew up as an infrastructure software developer. There's not a lot of companies that hire those anymore because they're all sitting inside of three big clouds. Yeah. Right? So you better be a good app developer, but I think what you're going to see is before you had to be everything, you had to be the, if you were going to use infrastructure, you had to know how to build infrastructure. And I think the same thing's true around is quickly exiting ML is to be able to use ML in your company, you better be like, great at every aspect of ML, including every intricacy inside of the model and every operation's doing, that's quickly changing. Like, you're going to start with a starting point. You know, in the future you're not going to be like cracking open these GPT models, you're going to just be pulling them off the shelf, fine tuning 'em and go. You don't have to invent it. You don't have to understand it. And I think that's going to be a pivot point, you know, in the industry between, you know, what's the future? What's, what's the future of a, a data scientist? ML engineer researcher look like? >> John: I think that's, the outcome's going to be determined. I mean, you mentioned, you know, doing it yourself what an SRE is for a Google with the servers scale's huge. So yeah, it might have to, at the beginning get boring, you get obsolete quickly, but that means it's progressing. So, The scale becomes huge. And that's where I think it's going to be interesting when we see that scale. >> Brian: Yep. Yeah, I think that's right. I think that's right. And we always, and, and what I've always said, and much the, again, the distribute into my ML team is that I want every developer to be as adept at being able take advantage of ML as non ML engineer, right? It's got to be that simple. And I think, I think it's getting there. I really do. >> John: Well, Brian, great, great to have you on theCUBE here on this cube conversation. As part of the startup showcase that's coming up. You're going to be featured. Or your company would featured on the upcoming ABRA startup showcase on making machine learning easier and more affordable as more machine learning models come in. You guys got deep sparse and some great technology. 
We're going to dig into that next time. I'll give you the final word right now. What do you see for the company? What are you guys looking for? Give a plug for the company right now. >> Brian: Oh, give a plug that I haven't already doubled in as the plug. >> John: You're hiring engineers, I assume from MIT and other places. >> Brian: Yep. I think like the, the biggest thing is like, like we're on the developer side. We're here to make this easy. The majority of inference today is, is on CPUs already, believe it or not, as much as kind of, we like to talk about hardware and specialized hardware. The majority is already on CPUs. We're basically bringing 95% cost savings to CPUs through this acceleration. So, but we're trying to do it in a way that makes it community first. So I think the, the shout out would be come find the Neural Magic community and engage with us and you'll find, you know, a thousand other like-minded people in Slack that are willing to help you as well as our engineers. And, and let's, let's go take on some successful AI deployments. >> John: Exciting times. This is, I think one of the pivotal moments, NextGen data, machine learning, and now starting to see AI not be that chat bot, just, you know, customer support or some basic natural language processing thing. You're starting to see real innovation. Brian Stevens, CEO of Neural Magic, bringing the magic here. Thanks for the time. Great conversation. >> Brian: Thanks John. >> John: Thanks for joining me. >> Brian: Cheers. Thank you. >> John: Okay. I'm John Furrier, host of theCUBE here in Palo Alto, California for this cube conversation with Brian Stevens. Thanks for watching.
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
John | PERSON | 0.99+ |
Brian | PERSON | 0.99+ |
Brian Stevens | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
95% | QUANTITY | 0.99+ |
2015 | DATE | 0.99+ |
John Furrier | PERSON | 0.99+ |
90 | QUANTITY | 0.99+ |
2016 | DATE | 0.99+ |
32 bit | QUANTITY | 0.99+ |
Neural Magic | ORGANIZATION | 0.99+ |
Brian Steve | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
two calls | QUANTITY | 0.99+ |
both things | QUANTITY | 0.99+ |
Palo Alto, California | LOCATION | 0.99+ |
second thing | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
iPhone | COMMERCIAL_ITEM | 0.99+ |
Python | TITLE | 0.99+ |
MIT | ORGANIZATION | 0.99+ |
first call | QUANTITY | 0.99+ |
two things | QUANTITY | 0.99+ |
second part | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
both feet | QUANTITY | 0.98+ |
Oracle | ORGANIZATION | 0.98+ |
both modes | QUANTITY | 0.98+ |
today | DATE | 0.98+ |
80s | DATE | 0.98+ |
first | QUANTITY | 0.98+ |
second command | QUANTITY | 0.98+ |
Breaking Analysis: Google's Point of View on Confidential Computing
>> From theCUBE studios in Palo Alto in Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data and isolating data from apps in a fenced off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology in a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show, but before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing. I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year as shown here. And this data is pretty much across the board by industry, by region, by size of company. I mean we dug into it and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data and transit have long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. Arm, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images updates different services and the entire code flow aren't directly addressed by memory encryption, rather to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign for memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. There has been a lack of standardization and interoperability between different confidential computing approaches. But the confidential computing consortium was established in 2019 ostensibly to accelerate the market and influence standards. 
Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS because the base technology defined by the the consortium is seen as limiting by AWS. This is my guess, not AWS's words, and but I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with this Annapurna acquisition. This was way ahead with Arm integration and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the confidential computing consortium is Google, along with many high profile names including Arm, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic, Nelly Porter is head of product for GCP confidential computing and encryption, and Dr. Patricia Florissi is the technical director for the office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm owning a lot of interesting activities in Google and again security or infrastructure securities that I usually own. And we are talking about encryption and when encryption and confidential computing is a part of portfolio in additional areas that I contribute together with my team to Google and our customers is secure software supply chain. Because you need to trust your software. Is it operate in your confidential environment to have end-to-end story about if you believe that your software and your environment doing what you expect, it's my role. >> Got it. Okay. Patricia? >> Well, I am a technical director in the office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologists from large corporations, institutions and a lot of success, we're startups as well. And we have two main goals. First, we walk side by side with some of our largest, more strategic or most strategical customers and we help them solve complex engineering technical problems. And second, we are devise Google and Google Cloud engineering and product management and tech on there, on emerging trends and technologies to guide the trajectory of our business. We are unique group, I think, because we have created this collaborative culture with our customers. And within OCTO, I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that both of you. Let's get into it. So Nelly, what is confidential computing? From Google's perspective, how do you define it? >> Confidential computing is a tool and it's still one of the tools in our toolbox. And confidential computing is a way how we would help our customers to complete this very interesting end-to-end lifecycle of the data. And when customers bring in the data to cloud and want to protect it as they ingest it to the cloud, they protect it at rest when they store data in the cloud. But what was missing for many, many years is ability for us to continue protecting data and workloads of our customers when they running them. 
And again, because data is not brought to cloud to have huge graveyard, we need to ensure that this data is actually indexed. Again, there is some insights driven and drawn from this data. You have to process this data and confidential computing here to help. Now we have end to end protection of our customer's data when they bring the workloads and data to cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain, do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential commuting matters, because at the end of the day, it reduces more and more the customer's thresh boundaries and the attack surface. That's about reducing that periphery, the boundary in which the customer needs to mind about trust and safety. And in a way, is a natural progression that you're using encryption to secure and protect the data. In the same way that we are encrypting data in transit and at rest, now we are also encrypting data while in use. And among other beneficials, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry, even though it's highly focused on, I wouldn't say highly focused, but very beneficial for highly regulated industries. It applies to all of industries. And if you look at financing for example, where bankers are trying to detect fraud, and specifically double finance where you are, a customer is actually trying to get a finance on an asset, let's say a boat or a house, and then it goes to another bank and gets another finance on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting. And I want to understand that a little bit more but I'm going to push you a little bit on this, Nelly, if I can because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this upfront, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption and it doesn't address many other problems. It is over hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine, with this statement, but the most importantly is we mixing multiple concepts, I guess. And exactly as Patricia said, we need to look at the end-to-end story, not again the mechanism how confidential computing trying to again, execute and protect a customer's data and why it's so critically important because what confidential computing was able to do, it's in addition to isolate our tenants in multi-tenant environments the cloud covering to offer additional stronger isolation. They called it cryptographic isolation. It's why customers will have more trust to customers and to other customers, the tenant that's running on the same host but also us because they don't need to worry about against threats and more malicious attempts to penetrate the environment. 
So what confidential computing is helping us to offer our customers, stronger isolation between tenants in this multi-tenant environment, but also incredibly important, stronger isolation of our customers, so tenants from us. We also writing code, we also software providers will also make mistakes or have some zero days. Sometimes again us introduced, sometimes introduced by our adversaries. But what I'm trying to say by creating this cryptographic layer of isolation between us and our tenants and amongst those tenants, we're really providing meaningful security to our customers and eliminate some of the worries that they have running on multi-tenant spaces or even collaborating to gather this very sensitive data knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. Operator access, yeah, maybe I trust my clouds provider, but if I can fence off your access even better, I'll sleep better at night. Separating a code from the data, everybody's, Arm, Intel, AMD, Nvidia, others, they're all doing it. I wonder if, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally. We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google and now industry way of dealing with confidential computing is to ensure that three main property is actually preserved. Customers don't need to change the code. They can operate on those VMs exactly as they would with normal non-confidential VMs, but to give them this opportunity of lift and shift or no changing their apps and performing and having very, very, very low latency and scale as any cloud can, something that Google actually pioneer in confidential computing. I think we need to open and explain how this magic was actually done. And as I said, it's again the whole entire system have to change to be able to provide this magic. And I would start with we have this concept of root of trust and root of trust where we will ensure that this machine, when the whole entire post has integrity guarantee, means nobody changing my code on the most low level of system. And we introduce this in 2017 called Titan. It was our specific ASIC, specific, again, inch by inch system on every single motherboard that we have that ensures that your low level former, your actually system code, your kernel, the most powerful system is actually proper configured and not changed, not tampered. We do it for everybody, confidential computing included. But for confidential computing, what we have to change, we bring in AMD, or again, future silicon vendors and we have to trust their former, their way to deal with our confidential environments. And that's why we have obligation to validate integrity, not only our software and our former but also former and software of our vendors, silicon vendors. So we actually, when we booting this machine, as you can see, we validate that integrity of all of the system is in place. It means nobody touching, nobody changing, nobody modifying it. But then we have this concept of AMD secure processor, it's special ASICs, best specific things that generate a key for every single VM that our customers will run or every single node in Kubernetes or every single worker thread in our Hadoop or Spark capability. We offer all of that. 
And those keys are not available to us. It's the best keys ever in encryption space because when we are talking about encryption, the first question that I'm receiving all the time, where's the key, who will have access to the key? Because if you have access to the key then it doesn't matter if you encrypted or not. So, but the case in confidential computing provides so revolutionary technology, us cloud providers, who don't have access to the keys. They sitting in the hardware and they head to memory controller. And it means when hypervisors that also know about these wonderful things saying I need to get access to the memories that this particular VM trying to get access to, they do not decrypt the data, they don't have access to the key because those keys are random, ephemeral and per VM, but the most importantly, in hardware not exportable. And it means now you would be able to have this very interesting role that customers or cloud providers will not be able to get access to your memory. And what we do, again, as you can see our customers don't need to change their applications, their VMs are running exactly as it should run and what you're running in VM, you actually see your memory in clear, it's not encrypted, but God forbid is trying somebody to do it outside of my confidential box. No, no, no, no, no, they would not be able to do it. Now you'll see cyber and it's exactly what combination of these multiple hardware pieces and software pieces have to do. So OS is also modified. And OS is modified such way to provide integrity. It means even OS that you're running in your VM box is not modifiable and you, as customer, can verify. But the most interesting thing, I guess, how to ensure the super performance of this environment because you can imagine, Dave, that encrypting and it's additional performance, additional time, additional latency. So we were able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance and scales as they would expect from cloud providers like Google. >> Okay, thank you. Excellent. Appreciate that explanation. So, again, the narrative on this as well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance, key management as they say is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, in addition to, let's go pre confidential computing days, what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares and they want to know whether their systems are protected from outside or unauthorized access, and that recovered with Nelly, that it is. Confidential computing actually ensures that the applications and data internals remain secret, right? The code is actually looking at the data, the only the memory is decrypting the data with a key that is ephemeral and per VM and generated on demand. Then you have the second point where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with. 
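To ground Nelly's "no changes needed" point, turning on a Confidential VM in Google Cloud is essentially a flag at instance-creation time; the guest OS and workload run as-is. The sketch below shells out to gcloud from Python; the flag names, machine type, zone, and image reflect the AMD SEV-based offering as generally documented and should be verified against the current gcloud reference.

```python
# Sketch: create an AMD SEV-based Confidential VM on GCP by shelling out to gcloud.
# Flag names, zone, image, and machine type are assumptions to verify against
# current Google Cloud documentation.
import subprocess

cmd = [
    "gcloud", "compute", "instances", "create", "demo-confidential-vm",
    "--zone", "us-central1-a",
    "--machine-type", "n2d-standard-4",      # Confidential VMs run on AMD EPYC (N2D)
    "--confidential-compute",                # turns on SEV memory encryption
    "--maintenance-policy", "TERMINATE",     # required for confidential instances
    "--image-family", "ubuntu-2204-lts",
    "--image-project", "ubuntu-os-cloud",
]
subprocess.run(cmd, check=True)
# The guest OS and the workload inside it run unmodified; the per-VM memory
# encryption key is generated and held by the AMD secure processor, not by Google.
```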
So the application, the workload as we call it, that is processing the data, it's also, it has not been tampered and preserves integrity. I would also say that this is all verifiable. So you have attestation and these attestation actually generates a log trail and the log trail guarantees that, provides a proof that it was preserved. And I think that the offer's also a guarantee of what we call ceiling, this idea that the secrets have been preserved and not tampered with, confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say that the applications, it's transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this, the ecosystem, or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> And a fantastic question by the way. And it's very difficult and definitely complicated world because to be able to provide these guarantees, actually a lot of work was done by community. Google is very much operate in open, so again, our operating system, we working with operating system repository OSs, OS vendors to ensure that all capabilities that we need is part of the kernels, are part of the releases and it's available for customers to understand and even explore if they have fun to explore a lot of code. We have also modified together with our silicon vendors a kernel, host kernel to support this capability and it means working this community to ensure that all of those patches are there. We also worked with every single silicon vendor as you've seen, and that's what I probably feel that Google contributed quite a bit in this whole, we moved our industry, our community, our vendors to understand the value of easy to use confidential computing or removing barriers. And now I don't know if you noticed, Intel is pulling the lead and also announcing their trusted domain extension, very similar architecture. And no surprise, it's, again, a lot of work done with our partners to, again, convince, work with them and make this capability available. The same with Arm this year, actually last year, Arm announced their future design for confidential computing. It's called Confidential Computing Architecture. And it's also influenced very heavily with similar ideas by Google and industry overall. So it's a lot of work in confidential computing consortiums that we are doing, for example, simply to mention, to ensure interop, as you mentioned, between different confidential environments of cloud providers. They want to ensure that they can attest to each other because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data workloads or secret with them. So we coming as a community and we have this attestation sig, the, again, the community based systems that we want to build and influence and work with Arm and every other cloud providers to ensure that we can interrupt and it means it doesn't matter where confidential workloads will be hosted, but they can exchange the data in secure, verifiable and controlled by customers way. 
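Patricia's attestation point reduces to: before trusting a workload, or releasing data or keys to it, compare its measured identity against what you expect, and keep a verifiable record of the decision. The sketch below is purely conceptual; real confidential computing attestation verifies hardware-signed reports against the silicon vendor's certificate chain rather than a local hash compare, and every name here is illustrative.

```python
# Purely conceptual attestation check (illustrative, not the real GCP/AMD flow):
# verify that the measured workload matches an expected value before trusting it,
# and append the decision to a simple audit trail.
import hashlib
import json
import time

EXPECTED_MEASUREMENT = hashlib.sha256(b"approved-workload-image-v1").hexdigest()

def measure(workload_bytes: bytes) -> str:
    return hashlib.sha256(workload_bytes).hexdigest()

def attest_and_log(workload_bytes: bytes, log_path: str = "attestation_log.jsonl") -> bool:
    measurement = measure(workload_bytes)
    ok = measurement == EXPECTED_MEASUREMENT
    with open(log_path, "a") as log:         # verifiable trail of decisions
        log.write(json.dumps({
            "time": time.time(),
            "measurement": measurement,
            "verdict": "trusted" if ok else "rejected",
        }) + "\n")
    return ok

if attest_and_log(b"approved-workload-image-v1"):
    print("measurement matches: release the data or key to this workload")
else:
    print("measurement mismatch: refuse to process the data")
```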
And to do that, we need to continue what we are doing: working in the open, again, and contributing our ideas and the ideas of our partners toward this goal, so that confidential computing becomes what we believe it has to become. It has to become a utility. It doesn't need to be so special, but that's what we want it to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing, you think about data sharing across the ecosystem and different regions, and then of course data sovereignty comes up. Typically public policy lags the technology industry, and sometimes that's problematic. I know there's a lot of discussions about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment, maybe, with the pace of technology. One of the frequent examples is when you delete data, can you actually prove that data is deleted with a hundred percent certainty? You've got to prove that, and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty, and I don't want to give the impression that confidential computing addresses it all. That's why we want to step back and say, hey, digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption and access to your data; operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations, right? So if there are any updates on hardware, on the software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is sometimes referred to as survivability: that you can actually survive if you are untethered from the cloud, and that you can use open source. Now let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. And we typically focus on saying, hey, we need to care about data residency. We care where the data resides, because where the data is at rest or in processing, it typically abides by the jurisdiction, the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection. We want to ensure the confidentiality, integrity and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, it's about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting in firewall protections and login access controls. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data and the code. And that's similar, because with data sovereignty we care about where the data resides and who is operating on the data.
But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place for how my data is going to be used. And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia-X, there is a movement toward saying the two parties, the provider of the data and the receiver of the data, are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, the data will be used for the purposes that were intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity guarantees of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed. So I appreciate that, especially the verification of the enforcement. Last question. I met you two because, as part of my year-end prediction post, you guys sent in some predictions and I wasn't able to get to them in the predictions post. So I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23, and what does the maturity curve look like this decade, in your opinion? Maybe each of you could give us a brief answer. >> So my prediction is that in five to seven years, as I said at the start, it'll become a utility. It'll become like TLS: 10 years ago we couldn't believe that websites would have certificates and we would support encrypted traffic. Now we do, and it's become ubiquitous. That's exactly where confidential computing is heading; I don't know that we're there yet. It'll take a few years of maturity for us, but we will get there. >> Thank you. And Patricia, what's your prediction? >> I will double down on that and say, hey, in the future, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes ever more top of mind with sovereign states, and also for multinational organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It'll become the default, if I may say, mode of operation. I like to compare it to something that is inconceivable today: if we talk to young technologists, it's inconceivable to think that at some point in history, and I happen to have been alive then, we had data at rest that was not encrypted and data in transit that was not encrypted. And I think it will be just as inconceivable, at some point in the near future, to have unencrypted data while in use. >> And plus, I think the beauty of this industry is that because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover.
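One way to picture the attestation-plus-policy-enforcement combination Patricia describes above is a key broker that releases the data key only to a verified workload and only for a purpose named in the data-sharing contract. All names and fields in the sketch below are hypothetical; a real deployment would use a cloud key-management and attestation service rather than this toy class.

```python
# Toy sketch of "attestation + policy enforcement": a key broker releases the
# data-decryption key only to a workload that passed attestation and only for
# purposes listed in the contract. Everything here is a hypothetical model,
# not any particular product's API.
from dataclasses import dataclass, field


@dataclass
class DataContract:
    provider: str
    receiver: str
    allowed_purposes: set = field(default_factory=set)


@dataclass
class AttestationResult:
    verified: bool           # did the hardware-rooted report check out?
    workload_digest: str     # measurement of the code that will see the data


class KeyBroker:
    def __init__(self, contract: DataContract, approved_workload: str, data_key: bytes):
        self._contract = contract
        self._approved_workload = approved_workload
        self._data_key = data_key

    def release_key(self, attestation: AttestationResult, purpose: str) -> bytes:
        if not attestation.verified:
            raise PermissionError("environment failed attestation")
        if attestation.workload_digest != self._approved_workload:
            raise PermissionError("an unexpected workload is asking for the data")
        if purpose not in self._contract.allowed_purposes:
            raise PermissionError(f"purpose '{purpose}' is outside the contract")
        return self._data_key


contract = DataContract("bank-a", "bank-b", {"fraud-detection"})
broker = KeyBroker(contract, approved_workload="sha256:approved", data_key=b"k" * 32)

attested = AttestationResult(verified=True, workload_digest="sha256:approved")
print(broker.release_key(attested, "fraud-detection"))   # key released
# broker.release_key(attested, "marketing")  # would raise PermissionError
```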
I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look, as with any new tech, it's important to carefully evaluate the potential benefits, the drawbacks, and make informed decisions based on the specific requirements in the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing into their architectures. Competition, in our view, will moderate price hikes. And at the end of the day, this is under the covers technology that essentially will come for free. So we'll take it. I want to thank our guests today, Nelly and Patricia from Google, and thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio, Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at siliconangle.com. Does some great editing for us, thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or dm me @DVellante. And you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (upbeat music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nelly | PERSON | 0.99+ |
Patricia | PERSON | 0.99+ |
International Data Space Association | ORGANIZATION | 0.99+ |
Alex Myerson | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
IDSA | ORGANIZATION | 0.99+ |
Rodrigo Branco | PERSON | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Dave | PERSON | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
ORGANIZATION | 0.99+ | |
Nvidia | ORGANIZATION | 0.99+ |
2019 | DATE | 0.99+ |
2017 | DATE | 0.99+ |
Kristin Martin | PERSON | 0.99+ |
Nelly Porter | PERSON | 0.99+ |
Ken Schiffman | PERSON | 0.99+ |
Rob Hof | PERSON | 0.99+ |
Cheryl Knight | PERSON | 0.99+ |
last year | DATE | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two parties | QUANTITY | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
Patricia Florissi | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
one | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
second point | QUANTITY | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
Meta | ORGANIZATION | 0.99+ |
second | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
two | QUANTITY | 0.99+ |
Arm | ORGANIZATION | 0.99+ |
each | QUANTITY | 0.99+ |
two experts | QUANTITY | 0.99+ |
First | QUANTITY | 0.99+ |
first question | QUANTITY | 0.99+ |
Gaia-X | ORGANIZATION | 0.99+ |
two decades ago | DATE | 0.99+ |
both | QUANTITY | 0.99+ |
this year | DATE | 0.99+ |
seven years | QUANTITY | 0.99+ |
OCTO | ORGANIZATION | 0.99+ |
zero days | QUANTITY | 0.98+ |
10 years ago | DATE | 0.98+ |
each week | QUANTITY | 0.98+ |
today | DATE | 0.97+ |
Breaking Analysis: Google's PoV on Confidential Computing
>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data and isolating data and apps in a fenced-off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space, where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology and a marketing ploy by cloud providers aimed at calming customers who are cloud-phobic. Hello and welcome to this week's Wikibon Cube Insights, powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show. But before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing; I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year, as shown here. And this data is pretty much across the board by industry, by region, by size of company. I mean, we dug into it, and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit have long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. Arm, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now, the argument against confidential computing is that it narrowly focuses on memory encryption and doesn't solve the biggest problems in security. Multiple system images, updates, different services and the entire code flow aren't directly addressed by memory encryption. Rather, to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables, and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign from memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free.
There has been a lack of standardization and interoperability between different confidential computing approaches, but the Confidential Computing Consortium was established in 2019, ostensibly to accelerate the market and influence standards. Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS, because the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS' words. But I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with its Annapurna acquisition. It was way ahead with Arm integration, and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the Confidential Computing Consortium is Google, along with many high-profile names, including Arm, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic. Nelly Porter is Head of Product for GCP Confidential Computing and Encryption, and Dr. Patricia Florissi is the Technical Director for the Office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start, and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start. I own a lot of interesting activities in Google, and again, it's security, or infrastructure security, that I usually own. We are talking about encryption, end-to-end encryption, and confidential computing is part of that portfolio. An additional area that I contribute to, together with my team, for Google and our customers, is secure software supply chain, because you need to trust your software. Ensuring it operates in your confidential environment so you have end-to-end security, and that you can believe your software and your environment are doing what you expect, that's my role. >> Got it. Okay, Patricia? >> Well, I am a Technical Director in the Office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologists from large corporations, institutions and a lot of successful startups as well. And we have two main goals. First, we walk side by side with some of our largest, more strategic or most strategic customers, and we help them solve complex engineering technical problems. And second, we advise Google and Google Cloud Engineering and product management on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that, both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool, one of the tools in our toolbox. And confidential computing is a way for us to help our customers complete this very interesting end-to-end lifecycle of their data. When customers bring data to the cloud, they want to protect it as they ingest it into the cloud, and they protect it at rest when they store the data in the cloud.
But what was missing for many, many years was the ability for us to continue protecting our customers' data and workloads when they run them. And again, because data is not brought to the cloud to sit in a huge graveyard, we need to ensure that this data is actually used. Again, there are insights to be driven and drawn from this data. You have to process this data, and confidential computing is here to help. Now we have end-to-end protection of our customers' data when they bring their workloads and data to the cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain? Do you think it's transformative for customers, and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters, because at the end of the day, it reduces more and more the customer's trust boundary and the attack surface. It's about reducing that periphery, the boundary within which the customer needs to worry about trust and safety. And in a way it's a natural progression: you're using encryption to secure and protect the data, in the same way that we are encrypting data in transit and at rest. Now, we are also encrypting data while in use. And among other benefits, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industries; even though it's, I wouldn't say highly focused on, but very beneficial for highly regulated industries, it applies to all industries. If you look at financing, for example, where bankers are trying to detect fraud, and specifically double financing, where a customer is actually trying to get financing on an asset, let's say a boat or a house, and then goes to another bank and gets another loan on that same asset. Now bankers would be able to collaborate and detect that fraud while preserving the confidentiality and privacy of the data. >> Interesting, and I want to understand that a little bit more, but I've got to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this up front, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption, it doesn't address many other problems, and it is over-hyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine, Dave, with this statement. But most importantly, we are mixing multiple concepts, I guess. And exactly as Patricia said, we need to look at the end-to-end story, not just, again, the mechanism of how confidential computing executes and protects customers' data, but why it's so critically important. Because what confidential computing was able to do, in addition to isolating our tenants in multi-tenant environments, is to offer additional, stronger isolation; they call it cryptographic isolation. That's why customers will have more trust toward other customers, the tenants running on the same host, but also toward us, because they don't need to worry about rogue actors and malicious attempts to penetrate the environment.
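The double-financing scenario Patricia outlines can be pictured with a small sketch: two banks blind their asset identifiers, and only the overlap is revealed by the trusted environment. This is purely illustrative Python; a real deployment would run the matching inside an attested confidential VM and use a proper private-set-intersection protocol rather than a shared salt.

```python
# A rough sketch of the double-financing example: two banks learn only which
# asset identifiers they have in common, not each other's full loan books.
# The "enclave" here is just a function, for illustration.
import hashlib


def blind(asset_id: str, shared_salt: bytes) -> str:
    """Each bank blinds its asset IDs before sending them to the enclave."""
    return hashlib.sha256(shared_salt + asset_id.encode()).hexdigest()


def enclave_find_double_financing(bank_a_blinded: set, bank_b_blinded: set) -> set:
    """Runs inside the trusted environment; only the overlap ever leaves it."""
    return bank_a_blinded & bank_b_blinded


salt = b"per-collaboration-salt"
bank_a = {blind(a, salt) for a in ["boat-123", "house-456", "car-789"]}
bank_b = {blind(a, salt) for a in ["house-456", "truck-000"]}

suspicious = enclave_find_double_financing(bank_a, bank_b)
print(f"{len(suspicious)} asset(s) appear to be financed at both banks")
```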
So what confidential computing is helping us do is offer our customers stronger isolation between tenants in this multi-tenant environment, but also, incredibly important, stronger isolation of our customers' tenants from us. We also write code, we are also a software provider, we also make mistakes or have some zero days; sometimes, again, introduced by us, sometimes introduced by our adversaries. But what I'm trying to say is that by creating this cryptographic layer of isolation between us and our tenants, and among those tenants, we are really providing meaningful security to our customers and eliminating some of the worries they have about running in multi-tenant spaces, or even collaborating together on very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. You know, operator access. Yeah, maybe I trust my cloud provider, but if I can fence off your access, even better, I'll sleep better at night separating the code from the data. Everybody, Arm, Intel, AMD, Nvidia and others, they're all doing it. I wonder, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google, and now the industry's way of dealing with confidential computing, is to ensure that three main properties are actually preserved. Customers don't need to change their code. They can operate in those VMs exactly as they would with normal, non-confidential VMs. But giving them this opportunity of lift and shift, with no changes to the apps, while performing with very, very, very low latency and scaling as any cloud can, is something that Google actually pioneered in confidential computing. I think we need to open up and explain how this magic was actually done, and as I said, the whole entire system had to change to be able to provide this magic. I would start with the concept of root of trust, where we ensure that this machine, the whole entire host, has an integrity guarantee, meaning nobody has changed my code at the lowest level of the system. We introduced this in 2017; it's called Titan. It's our specific ASIC, a dedicated chip on every single motherboard that we have, which ensures that your low-level firmware, your actual system code, your kernel, the most privileged parts of the system, are properly configured and not changed, not tampered with. We do it for everybody, confidential computing included. But for confidential computing, what we had to change is that we bring in AMD, or future silicon vendors, and we have to trust their firmware, their way of dealing with our confidential environments. And that's why we have an obligation to validate the integrity not only of our software and our firmware, but also of the firmware and software of our vendors, the silicon vendors. So when we boot this machine, as you can see, we validate that the integrity of this whole system is in place. It means nobody has touched it, nobody has changed it, nobody has modified it. But then we have this concept of the AMD Secure Processor. It's a special ASIC, a specific component that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker thread in our Hadoop or Spark capability.
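Nelly's lift-and-shift point, that a confidential VM is launched like any other VM, can be sketched as follows. The gcloud flags reflect my reading of Google's Confidential VM documentation (AMD SEV on N2D machine types) and should be checked against current docs; the project ID and zone are placeholders.

```python
# A minimal sketch of the "lift and shift" point: launching a Confidential VM
# on GCP looks like launching any other VM, plus a confidential-computing flag.
# Flag names and values here are assumptions based on Google's public docs
# and should be verified before use; project and zone are placeholders.
import subprocess

cmd = [
    "gcloud", "compute", "instances", "create", "cvm-demo",
    "--project", "my-demo-project",          # assumption: your project ID
    "--zone", "us-central1-a",
    "--machine-type", "n2d-standard-2",      # SEV-capable AMD machine family
    "--confidential-compute",                # the confidential-specific flag
    "--maintenance-policy", "TERMINATE",     # required for confidential instances
    "--image-family", "ubuntu-2004-lts",
    "--image-project", "ubuntu-os-cloud",
]

# The guest image and application are unchanged; memory encryption is handled
# by the hardware underneath.
subprocess.run(cmd, check=True)
```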
We offer all of that, and those keys are not available to us. It's the best case ever in the encryption space, because when we are talking about encryption, the first question that I'm receiving all the time is, "Where's the key? Who will have access to the key?", because if you have access to the key, then it doesn't matter if you encrypted or not. But the reason confidential computing is such revolutionary technology is that we, the cloud providers, don't have access to the keys. They're sitting in the hardware and they're fed to the memory controller. And it means that when hypervisors, which also know about these wonderful things, say, "I need to get access to the memory that this particular VM is trying to access," they do not decrypt the data. They don't have access to the key, because those keys are random, ephemeral and per VM, but most importantly, held in hardware and not exportable. And it means now you're able to have this very interesting world where customers, or cloud providers, will not be able to get access to your memory. And what we do, again, as you can see, our customers don't need to change their applications. Their VMs run exactly as they should run. And what you're running inside the VM actually sees your memory in the clear; it's not encrypted. But God forbid somebody tries to do it outside of my confidential box. No, no, no, no, no, you will not be able to do it. You'll see only ciphertext, and that's exactly what this combination of multiple hardware pieces and software pieces has to do. So the OS is also modified, and the OS is modified in such a way as to provide integrity. It means even the OS that you're running in your VM box is not modifiable, and you as a customer can verify that. But the most interesting thing, I guess, is how to ensure the super performance of this environment, because you can imagine, Dave, that encryption adds additional processing, additional time, additional latency. So we were able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance, and scale as they would expect from a cloud provider like Google. >> Okay, thank you. Excellent, appreciate that explanation. So you know, again, the narrative on this is, well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance. Key management, as they say, is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, in addition to, let's go pre-confidential computing days, what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares and they want to know whether their systems are protected from outside or unauthorized access, and as we covered with Nelly, they are. Confidential computing actually ensures that the application and data internals remain secret. The code is actually looking at the data; only the memory is decrypting the data, with a key that is ephemeral, per VM, and generated on demand. Then you have the second point, where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with.
So the application, the workload as we call it, that is processing the data, has also not been tampered with and preserves its integrity. I would also say that this is all verifiable, so you have attestation, and this attestation actually generates a log trail, and the log trail provides a proof that integrity was preserved. And I think it also offers a guarantee of what we call sealing, this idea that the secrets have been preserved and not tampered with: confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say, that for the applications it's transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this, the ecosystem, or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> A fantastic question, by the way. And it's a very difficult and definitely complicated world, because to be able to provide these guarantees, a lot of work was actually done by the community. Google very much operates in the open. So again, for our operating system, we work with operating system repositories and OS vendors to ensure that all the capabilities we need are part of the kernels, are part of the releases, and are available for customers to understand and even explore if they want to have fun exploring a lot of code. We have also modified, together with our silicon vendors, the kernel, the host kernel, to support this capability, and it means working with this community to ensure that all of those patches are there. We also worked with every single silicon vendor, as you've seen, and that's where I feel that Google contributed quite a bit in this world. We moved our industry, our community, our vendors to understand the value of easy-to-use confidential computing and of removing barriers. And now, I don't know if you noticed, Intel is following the lead and also announcing their Trust Domain Extensions, a very similar architecture. And no surprise, it's a lot of work done with our partners to convince them, work with them and make this capability available. The same with Arm: this year, actually last year, Arm announced their future design for confidential computing. It's called the Confidential Computing Architecture, and it's also influenced very heavily by similar ideas from Google and the industry overall. So there's a lot of work in the Confidential Computing Consortium that we are doing, for example, simply to mention, to ensure interop, as you mentioned, between the different confidential environments of cloud providers. They want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you're sharing your sensitive data, workloads or secrets with them. So we are coming together as a community, and we have this attestation SIG, the community-based system that we want to build and influence and work on with Arm and every other cloud provider, to ensure that they can interoperate. And it means it doesn't matter where confidential workloads are hosted, they can exchange data in a secure, verifiable way that is controlled by customers.
And to do that, we need to continue what we are doing: working in the open and contributing our ideas and the ideas of our partners toward this goal, so that confidential computing becomes what we believe it has to become. It has to become a utility. It doesn't need to be so special, but that's what we want it to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing, you think about data sharing across the ecosystem in different regions, and then of course data sovereignty comes up. Typically public policy lags the technology industry, and sometimes that's problematic. I know there's a lot of discussions about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment, maybe, with the pace of technology. One of the frequent examples is when you delete data, can you actually prove the data is deleted with a hundred percent certainty? You've got to prove that, and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty, and I don't want to give the impression that confidential computing addresses it all. That's why we want to step back and say, hey, digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption and access to your data; operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations, right? So if there are any updates on hardware, on the software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is sometimes referred to as survivability: that you can actually survive if you are untethered from the cloud, and that you can use open source. Now, let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. And we typically focus on saying, hey, we need to care about data residency. We care where the data resides, because where the data is at rest or in processing, it typically abides by the jurisdiction, the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection. We want to ensure the confidentiality, integrity and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, it's about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting in firewall protections and login access controls. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data, and the code.
And that's similar, because with data sovereignty we care about where the data resides and who is operating on the data. But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place for how my data is going to be used. And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia-X, there is a movement toward saying the two parties, the provider of the data and the receiver of the data, are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, the data will be used for the purposes that were intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity guarantees of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed. So I appreciate that, especially the verification of the enforcement. Last question. I met you two because, as part of my year-end prediction post, you guys sent in some predictions and I wasn't able to get to them in the predictions post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23, and what does the maturity curve look like this decade, in your opinion? Maybe each of you could give us a brief answer. >> So my prediction is that in five to seven years, as I said at the start, it will become a utility. It will become like TLS: 10 years ago, we couldn't believe that websites would have certificates and we would support encrypted traffic. Now we do, and it's become ubiquitous. That's exactly where confidential computing is heading; I don't know that we're there yet. It'll take a few years of maturity for us, but we'll get there. >> Thank you. And Patricia, what's your prediction? >> I would double down on that and say, hey, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes ever more top of mind with sovereign states, and also for multinational organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It will become the default, if I may say, mode of operation. I like to compare it to something that is inconceivable today: if we talk to young technologists, it's inconceivable to think that at some point in history, and I happen to have been alive then, we had data at rest that was not encrypted and data in transit that was not encrypted. And I think it will be just as inconceivable, at some point in the near future, to have unencrypted data while in use. >> You know, and plus I think the beauty of this industry is that because there's so much competition, this essentially comes for free.
I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover. I hope you'll come back to share the progress that you're making in this area, and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much, yeah. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between, and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look, as with any new tech, it's important to carefully evaluate the potential benefits and the drawbacks, and make informed decisions based on the specific requirements of the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing in their architectures. Competition, in our view, will moderate price hikes, and at the end of the day, this is under-the-covers technology that essentially will come for free, so we'll take it. I want to thank our guests today, Nelly and Patricia from Google. And thanks to Alex Myerson, who's on production and manages the podcast, and Ken Schiffman as well, out of our Boston studio. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com and does some great editing for us. Thank you all. Remember, all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com, where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or DM me @DVellante, and you can also comment on my LinkedIn posts. Definitely check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching, and we'll see you next time on Breaking Analysis. (subtle music)
SENTIMENT ANALYSIS :
ENTITIES
Entity | Category | Confidence |
---|---|---|
Nelly | PERSON | 0.99+ |
Patricia | PERSON | 0.99+ |
Alex Myerson | PERSON | 0.99+ |
AWS | ORGANIZATION | 0.99+ |
International Data Space Association | ORGANIZATION | 0.99+ |
Dave | PERSON | 0.99+ |
AWS' | ORGANIZATION | 0.99+ |
Microsoft | ORGANIZATION | 0.99+ |
Dave Vellante | PERSON | 0.99+ |
Rob Hof | PERSON | 0.99+ |
Cheryl Knight | PERSON | 0.99+ |
Nelly Porter | PERSON | 0.99+ |
ORGANIZATION | 0.99+ | |
Nvidia | ORGANIZATION | 0.99+ |
IDSA | ORGANIZATION | 0.99+ |
Rodrigo Branco | PERSON | 0.99+ |
2019 | DATE | 0.99+ |
Ken Schiffman | PERSON | 0.99+ |
Intel | ORGANIZATION | 0.99+ |
AMD | ORGANIZATION | 0.99+ |
2017 | DATE | 0.99+ |
ARM | ORGANIZATION | 0.99+ |
Aem | ORGANIZATION | 0.99+ |
Nellie | PERSON | 0.99+ |
Kristin Martin | PERSON | 0.99+ |
Red Hat | ORGANIZATION | 0.99+ |
two parties | QUANTITY | 0.99+ |
Palo Alto | LOCATION | 0.99+ |
last year | DATE | 0.99+ |
Patricia Florissi | PERSON | 0.99+ |
one | QUANTITY | 0.99+ |
Meta | ORGANIZATION | 0.99+ |
two | QUANTITY | 0.99+ |
third | QUANTITY | 0.99+ |
Gaia-X | ORGANIZATION | 0.99+ |
second point | QUANTITY | 0.99+ |
two experts | QUANTITY | 0.99+ |
david.vellante@siliconangle.com | OTHER | 0.99+ |
second | QUANTITY | 0.99+ |
both | QUANTITY | 0.99+ |
first question | QUANTITY | 0.99+ |
five | QUANTITY | 0.99+ |
One | QUANTITY | 0.99+ |
theCUBE Studios | ORGANIZATION | 0.99+ |
two decades ago | DATE | 0.99+ |
'23 | DATE | 0.99+ |
each | QUANTITY | 0.99+ |
a decade ago | DATE | 0.99+ |
three | QUANTITY | 0.99+ |
zero days | QUANTITY | 0.98+ |
four | QUANTITY | 0.98+ |
OCTO | ORGANIZATION | 0.98+ |
today | DATE | 0.98+ |
theCUBE's New Analyst Talks Cloud & DevOps
(light music) >> Hi everybody. Welcome to this Cube Conversation. I'm really pleased to announce a collaboration with Rob Strechay. He's a guest cube analyst, and we'll be working together to extract the signal from the noise. Rob is a long-time product pro, working at a number of firms including AWS, HP, HPE, NetApp, Snowplow. I did a stint as an analyst at Enterprise Strategy Group. Rob, good to see you. Thanks for coming into our Marlboro Studios. >> Well, thank you for having me. It's always great to be here. >> I'm really excited about working with you. We've known each other for a long time. You've been in the Cube a bunch. You know, you're in between gigs, and I think we can have a lot of fun together. Covering events, covering trends. So. let's get into it. What's happening out there? We're sort of exited the isolation economy. Things were booming. Now, everybody's tapping the brakes. From your standpoint, what are you seeing out there? >> Yeah. I'm seeing that people are really looking how to get more out of their data. How they're bringing things together, how they're looking at the costs of Cloud, and understanding how are they building out their SaaS applications. And understanding that when they go in and actually start to use Cloud, it's not only just using the base services anymore. They're looking at, how do I use these platforms as a service? Some are easier than others, and they're trying to understand, how do I get more value out of that relationship with the Cloud? They're also consolidating the number of Clouds that they have, I would say to try to better optimize their spend, and getting better pricing for that matter. >> Are you seeing people unhook Clouds, or just reduce maybe certain Cloud activities and going maybe instead of 60/40 going 90/10? >> Correct. It's more like the 90/10 type of rule where they're starting to say, Hey I'm not going to get rid of Azure or AWS or Google. I'm going to move a portion of this over that I was using on this one service. Maybe I got a great two-year contract to start with on this platform as a service or a database as a service. I'm going to unhook from that and maybe go with an independent. Maybe with something like a Snowflake or a Databricks on top of another Cloud, so that I can consolidate down. But it also gives them more flexibility as well. >> In our last breaking analysis, Rob, we identified six factors that were reducing Cloud consumption. There were factors and customer tactics. And I want to get your take on this. So, some of the factors really, you got fewer mortgage originations. FinTech, obviously big Cloud user. Crypto, not as much activity there. Lower ad spending means less Cloud. And then one of 'em, which you kind of disagreed with was less, less analytics, you know, fewer... Less frequency of calculations. I'll come back to that. But then optimizing compute using Graviton or AMD instances moving to cheaper storage tiers. That of course makes sense. And then optimize pricing plans. Maybe going from On Demand, you know, to, you know, instead of pay by the drink, buy in volume. Okay. So, first of all, do those make sense to you with the exception? We'll come back and talk about the analytics piece. Is that what you're seeing from customers? >> Yeah, I think so. I think that was pretty much dead on with what I'm seeing from customers and the ones that I go out and talk to. A lot of times they're trying to really monetize their, you know, understand how their business utilizes these Clouds. 
And, where their spend is going in those Clouds. Can they use, you know, lower tiers of storage? Do they really need the best processors? Do they need to be using Intel or can they get away with AMD or Graviton 2 or 3? Or do they need to move in? And, I think when you look at all of these Clouds, they always have pricing curves that are arcs from the newest to the oldest stuff. And you can play games with that. And understanding how you can actually lower your costs by looking at maybe some of the older generation. Maybe your application was written 10 years ago. You don't necessarily have to be on the best, newest processor for that application per se. >> So last, I want to come back to this whole analytics piece. Last June, I think it was June, Dev Ittycheria, who's the-- I call him Dev. Spelled Dev, pronounced Dave. (chuckles softly) Same pronunciation, different spelling. Dev Ittycheria, CEO of Mongo, on the earnings call. He was getting, you know, hit. Things were starting to get a little less visible in terms of, you know, the outlook. And people were pushing him like... Because you're in the Cloud, is it easier to dial down? And he said, because we're the document database, we support transaction applications. We're less discretionary than say, analytics. Well on the Snowflake earnings call, that same month or the month after, they were all over Slootman and Scarpelli. Oh, the Mongo CEO said that they're less discretionary than analytics. And Snowflake was an interesting comment. They basically said, look, we're the Cloud. You can dial it up, you can dial it down, but the area under the curve over a period of time is going to be the same, because they get their customers to commit. What do you say? You disagreed with the notion that people are running their calculations less frequently. Is that because they're trying to do a better job of targeting customers in near real time? What are you seeing out there? >> Yeah, I think they're moving away from using people and more expensive marketing. Or, they're trying to figure out what's my Google ad spend, what's my Meta ad spend? And what they're trying to do is optimize that spend. So, what is the return on advertising, or the ROAS as they would say. And what they're looking to do is understand, okay, I have to collect these analytics that better understand where are these people coming from? How do they get to my site, to my store, to my whatever? And when they're using it, how do they they better move through that? What you're also seeing is that analytics is not only just for kind of the retail or financial services or things like that, but then they're also, you know, using that to make offers in those categories. When you move back to more, you know, take other companies that are building products and SaaS delivered products. They may actually go and use this analytics for making the product better. And one of the big reasons for that is maybe they're dialing back how many product managers they have. And they're looking to be more data driven about how they actually go and build the product out or enhance the product. So maybe they're, you know, an online video service and they want to understand why people are either using or not using the whiteboard inside the product. And they're collecting a lot of that product analytics in a big way so that they can go through that. And they're doing it in a constant manner. This first party type tracking within applications is growing rapidly by customers. 
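As a rough illustration of the ROAS arithmetic Rob describes, the snippet below uses invented spend and revenue figures, and the 3x threshold is an arbitrary placeholder rather than a benchmark; real pipelines would pull these numbers from ad platforms and attribution data.

```python
# Back-of-the-envelope ROAS (return on ad spend) calculation of the kind Rob
# describes. All spend and revenue figures are invented placeholders, purely
# to show the arithmetic a marketing-analytics pipeline automates.
channels = {
    # channel: (ad spend in $, revenue attributed to that channel in $)
    "google_ads": (50_000, 210_000),
    "meta_ads": (30_000, 96_000),
    "tiktok_ads": (12_000, 30_000),
}

for channel, (spend, attributed_revenue) in channels.items():
    roas = attributed_revenue / spend
    verdict = "keep scaling" if roas >= 3 else "optimize or cut"  # arbitrary 3x cutoff
    print(f"{channel:>12}: ROAS = {roas:.1f}x ({verdict})")
```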
>> So, let's talk about who wins in that. So, obviously the Cloud guys, AWS, Google and Azure. I want to come back and unpack that a little bit. Databricks and Snowflake, we reported on our last breaking analysis, they're kind of on a collision course. You know, a couple years ago we were thinking, okay, AWS, Snowflake and Databricks, like a perfect sandwich. And then of course they started to become more competitive. My sense is they still, you know, complement each other in the field, right? But, you know, publicly, they've got bigger aspirations, they've got big TAMs that they're going after. But it's interesting, the data shows that-- So, Snowflake was off the charts in terms of spending momentum in our ETR surveys. Our partner down in New York, they kind of came into line. They're both growing in terms of market presence. Databricks couldn't get to IPO. So, we don't have as much, you know, visibility on their financials. You know, Snowflake obviously highly transparent 'cause they're a public company. And then you got AWS, Google and Azure. And it seems like AWS appears to be more partner friendly. Microsoft, you know, depends on what market you're in. And Google wants to sell BigQuery. >> Yeah. >> So, what are you seeing in the public Cloud from a data platform perspective? >> Yeah. I think that was pretty astute, what you were talking about there, because I think of the three, Google is definitely I think a little bit behind in how they go to market with their partners. Azure's done a fantastic job of partnering with these companies, even though they may have Synapse as their go-to and where they want people to go to do AI and ML. What they're looking at is, hey, we're going to also be friendly with Snowflake. We're also going to be friendly with a Databricks. And I think that Amazon has always been there because that's where the market has been for these developers. So many, like the Databricks' and the Snowflakes, have gone there first because, you know, in Databricks' case, they built out on top of S3 first. And going and using somebody's object layer other than AWS was not as simple as you would think it would be. Moving between those. >> So, one of the financial meetups I said meetup, but the... It was either the CEO or the CFO. It was either Slootman or Scarpelli talking at, I don't know, Merrill Lynch or one of the other financial conferences said, I think it was probably their Q3 call. Snowflake said 80% of our business goes through Amazon. And he said to this audience, the next day we got a call from Microsoft. Hey, we got to do more. And, we know just from reading the financial statements that Snowflake is getting concessions from Amazon, they're buying in volume, they're renegotiating their contracts. Amazon gets it. You know, lower the price, people buy more. Long term, we're all going to make more money. Microsoft obviously wants to get into that game with Snowflake. They understand the momentum. They said Google, not so much. And I've had customers tell me that they wanted to use Google's AI with Snowflake, but they can't, they got to go to BigQuery. So, honestly, I haven't like vetted that so. But, I think it's true. But nonetheless, it seems like Google's a little less friendly with the data platform providers. What do you think? >> Yeah, I would say so. I think this is a place that Google looks at and wants to own. The question now is, are they doing the right things long term?
I mean again, you know, you look at Google Analytics being you know, basically outlawed in five countries in the EU because of GDPR concerns, and compliance and governance of data. And I think people are looking at Google and BigQuery in general and saying, is it the best place for me to go? Is it going to be in the right places where I need it? Still, it's still one of the largest used databases out there just because it underpins a number of the Google services. So you almost get, like you were saying, forced into BigQuery sometimes, if you want to use the tech on top. >> You do strategy. >> Yeah. >> Right? You do strategy, you do messaging. Is it the right call by Google? I mean, it's not a-- I criticize Google sometimes. But, I'm not sure it's the wrong call to say, Hey, this is our ace in the hole. >> Yeah. >> We got to get people into BigQuery. Cause, first of all, BigQuery is a solid product. I mean it's Cloud native and it's, you know, by all, it gets high marks. So, why give the competition an advantage? Let's try to force people essentially into what is we think a great product and it is a great product. The flip side of that is, they're giving up some potential partner TAM and not treating the ecosystem as well as one of their major competitors. What do you do if you're in that position? >> Yeah, I think that that's a fantastic question. And the question I pose back to the companies I've worked with and worked for is, are you really looking to have vendor lock-in as your key differentiator to your service? And I think when you start to look at these companies that are moving away from BigQuery, moving to even, Databricks on top of GCS in Google, they're looking to say, okay, I can go there if I have to evacuate from GCP and go to another Cloud, I can stay on Databricks as a platform, for instance. So I think it's, people are looking at what platform as a service, database as a service they go and use. Because from a strategic perspective, they don't want that vendor locking. >> That's where Supercloud becomes interesting, right? Because, if I can run on Snowflake or Databricks, you know, across Clouds. Even Oracle, you know, they're getting into business with Microsoft. Let's talk about some of the Cloud players. So, the big three have reported. >> Right. >> We saw AWSs Cloud growth decelerated down to 20%, which is I think the lowest growth rate since they started to disclose public numbers. And they said they exited, sorry, they said January they grew at 15%. >> Yeah. >> Year on year. Now, they had some pretty tough compares. But nonetheless, 15%, wow. Azure, kind of mid thirties, and then Google, we had kind of low thirties. But, well behind in terms of size. And Google's losing probably almost $3 billion annually. But, that's not necessarily a bad thing by advocating and investing. What's happening with the Cloud? Is AWS just running into the law, large numbers? Do you think we can actually see a re-acceleration like we have in the past with AWS Cloud? Azure, we predicted is going to be 75% of AWS IAS revenues. You know, we try to estimate IAS. >> Yeah. >> Even though they don't share that with us. That's a huge milestone. You'd think-- There's some people who have, I think, Bob Evans predicted a while ago that Microsoft would surpass AWS in terms of size. You know, what do you think? >> Yeah, I think that Azure's going to keep to-- Keep growing at a pretty good clip. 
I think that for Azure, they still have really great account control, even though people like to hate Microsoft. The Microsoft sellers that are out there making those companies successful day after day have really done a good job of being in those accounts and helping people. I was recently over in the UK. And the UK market between AWS and Azure is pretty amazing, how much Azure there is. And it's growing within Europe in general. In the States, it's, you know, I think it's growing well. I think it's still growing, probably not as fast as it is outside the U.S. But, you go down to someplace like Australia, it's also Azure. You hear about Azure all the time. >> Why? Is that just because of Microsoft's software estate? It's just so convenient. >> I think it has to do with, you know, and you can go with the reasoning they don't break out, you know, Office 365 and all of that out of their numbers is because they have-- They're in all of these accounts because the office suite is so pervasive in there. So, they always have reasons to go back in and, oh by the way, you're on these old SQL licenses. Let us move you up here and we'll be able to-- We'll support you on the old version, you know, with security and all of these things. And be able to move you forward. So, they have a lot of, I guess you could say, levers to stay in those accounts and be interesting. At least as part of the Cloud estate. I think Amazon, you know, is hitting, you know, the laws of large numbers. But I think that they're also going through, and I think this was seen in the layoffs that they were making, that they're looking to understand and have profitability in more of those services that they have. You know, over 350-odd services that they have. And you know, as somebody who went there and helped to start yet a new one while I was there, and finally, it went to beta back in September, you start to look at the fact that, that number of services, people, their own sellers don't even know all of their services. It's impossible to comprehend and sell that many things. So, I think what they're going through is really looking to rationalize a lot of what they're doing from a services perspective going forward. They're looking to focus on more profitable services and bringing those in. Because right now it's built like a layer cake where you have, you know, S3, EBS and EC2 on the bottom of the layer cake. And then maybe you have, you're using IAM, the authorization and authentication, in there and you have all these different services. And then they have, call it EMR, on top. And so, EMR has to pay for that entire layer cake just to go and compete against somebody like Mongo or something like that. So, you start to unwind the costs of that. Whereas Azure went and built basically ground-up services for the most part. And Google kind of falls somewhere in between in how they build their-- They're a sort of layer cake type effect, but not as many layers I guess you could say. >> I feel like, you know, Amazon's trying to be a platform for the ecosystem. Yes, they have their own products and they're going to sell. And that's going to drive their profitability 'cause they don't have to split the pie. But, they're taking a piece of-- They're spinning the meter, as Zeus Kerravala likes to say, every time Snowflake or Databricks or Mongo or Atlas is, you know, running on their system. They take a piece of the action. Now, Microsoft does that as well.
But, you look at Microsoft and security, head-to-head competitors, for example, with a CrowdStrike or an Okta in identity. Whereas, it seems like at least for now, AWS is a more friendly place for the ecosystem. At the same time, you do a lot of business with Microsoft. >> Yeah. And I think that a lot of companies have always feared that Amazon would just throw, you know, bodies at it. And I think that people have come to the realization that a two pizza team, as Amazon would call it, is eight people. I think that's, you know, two slices per person. I'm a little bit fat, so I don't know if that's enough. But, you start to look at it and go, okay, if they're going to start out with eight engineers, if I'm a startup and they're part of my ecosystem, do I really fear them or should I really embrace them and try to partner closer with them? And I think the smart people and the smart companies are partnering with them because they're realizing, Amazon, unless they can see, you know, a hundred million, $500 million market, they're not going to throw eight to 16 people at a problem. I think when, you know, you could say, you could look at Elastic with OpenSearch and what they did there. And the licensing terms and the battle they went through. But they knew that Elastic had a huge market. Also, you had a number of ecosystem companies building on top of now OpenSearch, that are now domains on top of Amazon as well. So, I think Amazon's being pretty strategic in how they're doing it. I think some of the-- It'll be interesting. I think this year is a payout year for the cuts that they're making to some of the services internally to kind of, you know, how do we take the fat off some of those services that-- You know, you look at Alexa. I don't know how much revenue Alexa really generates for them. But it's a means to an end for a number of different other services and partners. >> What do you make of this ChatGPT? I mean, Microsoft obviously is playing that card. You want to, you want ChatGPT in the Cloud, come to Azure. Seems like AWS has to respond. And we know Google is, you know, sharpening its knives to come up with its response. >> Yeah, I mean Google just went and talked about Bard for the first time this week and they're in private preview, or I guess they call it beta, but right at the moment it's to select, select AI users, which I have no idea what that means. But that's a very interesting way that they're marketing it out there. But, I think that Amazon will have to respond. I think they'll be more measured than say, what Google's doing with Bard and just throwing it out there to, hey, we're going into beta now. I think they'll look at it and see where do we go and how do we actually integrate this in? Because they do have a lot of components of AI and ML underneath the hood that other services use. And I think that, you know, they've learned from that. And I think that they've already done a good job. Especially for media and entertainment, when you start to look at some of the ways that they use it for helping do graphics and helping to do drones. I think part of their buy of iRobot was the fact that iRobot was a big user of RoboMaker, which is using different models to train those robots to go around objects and things like that, so. >> Quick touch on Kubernetes, the whole DevOps world we just covered. The Cloud Native Computing Foundation, CNCF, had its security conference up in Seattle last week.
First time they spun that out, kind of like re:Inforce, you know, AWS spins out re:Inforce from re:Invent. Amsterdam's coming up soon, KubeCon. What should we expect? What's hot in Kube-land? >> Yeah, I think, you know, Kubes, you're going to be looking at how OpenShift keeps growing and I think in that respect you get to see the momentum with people like Red Hat. You see others coming up and realizing how OpenShift has gone to market as being, like you were saying, partnering with those Clouds and really making it simple. I think the simplicity and the manageability of Kubernetes is going to be at the forefront. I think a lot of the investment is still going into, how do I bring observability and DevOps and AIOps and MLOps all together. And I think that's going to be a big place where people are going to be looking to see what comes out of KubeCon in Amsterdam. I think it's that manageability, ease of use. >> Well Rob, I look forward to working with you on behalf of the whole Cube team. We're going to do more of these and go out to some shows, extract the signal from the noise. Really appreciate you coming into our studio. >> Well, thank you for having me on. Really appreciate it. >> You're really welcome. All right, keep it right there, and thanks for watching. This is Dave Vellante for the Cube. And we'll see you next time. (light music)
Humphreys & Ferron-Jones | Trusted security by design, Compute Engineered for your Hybrid World
(upbeat music) >> Welcome back, everyone, to our Cube special programming on "Securing Compute, Engineered for the Hybrid World." We got Cole Humphreys who's with HPE, global server security product manager, and Mike Ferron-Jones with Intel. He's the product manager for data security technology. Gentlemen, thank you for coming on this special presentation. >> All right, thanks for having us. >> So, securing compute, I mean, compute, everyone wants more compute. You can't have enough compute as far as we're concerned. You know, more bits are flying around the internet. Hardware's mattering more than ever. Performance market's hot right now for next-gen solutions. When you're talking about security, it's at the center of every single conversation. And Gen11 for HPE has been a big-time focus here. So let's get into the story. What's the market for Gen11, Cole, on the security piece? What's going on? How do you see this impacting the marketplace? >> Hey, you know, thanks. I think this is, again, just a moment in time where we're all working towards solving a problem that doesn't stop. You know, because we are looking at data protection. You know, in compute, you're looking out there, there's international impacts, there's federal impacts, there's state-level impacts, and even regulation to protect the data. So, you know, how do we do this stuff in an environment that keeps changing? >> And on the Intel side, you guys are a Tier 1 combination partner, Better Together. HPE has a deep bench on security. Intel, we know what your history is. You guys have a real root of trust with your code, down to the silicon level, continuing to be, and you're on the 4th Gen Xeon here. Mike, take us through Intel's relationship with HPE. Super important. You guys have been working together for many, many years. Data security, chips, HPE, Gen11. Take us through the relationship. What's the update? >> Yeah, thanks and I mean, HPE and Intel have been partners in delivering technology and delivering security for decades. And when a customer invests in an HPE server, like one of the new Gen11s, they're getting the benefit of the combined investment that these two great companies are putting into product security. On the Intel side, for example, we invest heavily in the way that we develop our products for security from the ground up, and also continue to support them once they're in the market. You know, launching a product isn't the end of our security investment. You know, our Intel Red Teams continue to hammer on Intel products looking for any kind of security vulnerability for a platform that's in the field. As well as we invest heavily in the external research community through our bug bounty programs to harness the entire creativity of the security community to find those vulnerabilities, because that allows us to patch them and make sure our customers are staying safe throughout that platform's deployed lifecycle. You know, in 2021, between Intel's internal red teams and our investments in external research, we found 93% of our own vulnerabilities. Only a small percentage were found by unaffiliated external entities. >> Cole, HPE has a great track record and long history serving customers around security, actually, with the solutions you guys had. With Gen11, it's more important than ever.
Can you share your thoughts on why these breaches are happening, and what you guys are doing, and how you guys see this happening from a customer standpoint? What you guys fill in with Gen11 with solution? >> You bet, you know, because when you hear about the relentless pursuit of innovation from our partners, and we in our engineering organizations in India, and Taiwan, and the Americas all collaborating together years in advance, are about delivering solutions that help protect our customer's environments. But what you hear Mike talking about is it's also about keeping 'em safe. Because you look to the market, right? What you see in, at least from our data from 2021, we have that breaches are still happening, and lot of it has to do with the fact that there is just a lack of adequate security staff with the necessary skills to protect the customer's application and ultimately the workloads. And then that's how these breaches are happening. Because ultimately you need to see some sort of control and visibility of what's going on out there. And what we were talking about earlier is you see time. Time to seeing some incident happen, the blast radius can be tremendous in today's technical, advanced world. And so you have to identify it and then correct it quickly, and that's why this continued innovation and partnership is so important, to help work together to keep up. >> You guys have had a great track record with Intel-based platforms with HPE. Gen11's a really big part of the story. Where do you see that impacting customers? Can you explain the benefits of what's going on with Gen11? What's the key story? What's the most important thing we should be paying attention to here? >> I think there's probably three areas as we look into this generation. And again, this is a point in time, we will continue to evolve. But at this particular point it's about, you know, a fundamental approach to our security enablement, right? Partnering as a Tier 1 OEM with one of the best in the industry, right? We can deliver systems that help protect some of the most critical infrastructure on earth, right? I know of some things that are required to have a non-disclosure because it is some of the most important jobs that you would see out there. And working together with Intel to protect those specific compute workloads, that's a serious deal that protects not only state, and local, and federal interests, but, really, a global one. >> This is a really- >> And then there's another one- Oh sorry. >> No, go ahead. Finish your thought. >> And then there's another one that I would call our uncompromising focus. We work in the industry, we lead and partner with those in the, I would say, in the good side. And we want to focus on enablement through a specific capability set, let's call it our global operations, and that ability to protect our supply chain and deliver infrastructure that can be trusted and into an operating environment. You put all those together and you see very significant and meaningful solutions together. >> The operating benefits are significant. I just want to go back to something you just said before about the joint NDAs and kind of the relationship you kind of unpacked, that to me, you know, I heard you guys say from sand to server, I love that phrase, because, you know, silicone into the server. But this is a combination you guys have with HPE and Intel supply-chain security. I mean, it's not just like you're getting chips and sticking them into a machine. 
This is, like, there's an in-depth relationship on the supply chain that has a very intricate piece to it. Can you guys just double down on that and share that, how that works and why it's important? >> Sure, so why don't I go ahead and start on that one. So, you know, as you mentioned the, you know, the supply chain that ultimately results in an end user pulling, you know, a new Gen11 HPE server out of the box, you know, started, you know, way, way back in it. And Intel, for our part, you know, invests heavily in making sure that our entire supply chain to deliver all of the Intel components that are inside that HPE platform has been protected and monitored ever since, you know, their inception at any one of our 14,000, you know, Intel vendors that we monitor as part of our supply-chain assurance program. I mean we, you know, Intel, you know, invests heavily in compliance with guidelines from places like NIST and ISO, as well as, you know, doing best practices under things like the Transported Asset Protection Alliance, TAPA. You know, we have been intensely invested in making sure that when a customer gets an Intel processor, or any other Intel silicon product, that it has not been tampered with or altered during its trip through the supply chain. HPE then is able to pick up that, those components that we deliver, and add onto that their own supply-chain assurance when it comes down to delivering, you know, the final product to the customer. >> Cole, do you want to- >> That's exactly right. Yeah, I feel like that integration point is a really good segue into why we're talking today, right? Because that then comes into a global operations network that is pulling together these servers and able to deploy 'em all over the world. And as part of the Gen11 launch, we have security services that allow 'em to be hardened from our factories to that next stage into that trusted partner ecosystem for system integration, or directly to customers, right? So that ability to have that chain of trust. And it's not only about attestation and knowing what, you know, came from whom, because, obviously, you want to trust and make sure you're getting the parts from Intel to build your technical solutions. But it's also about some of the provisioning we're doing in our global operations where we're putting cryptographic identities and manifests of the server and its components and moving it through that supply chain. So you talked about this common challenge we have of assuring no tampering of that device through the supply chain, and that's why this partnering is so important. We deliver secure solutions, we move them, you're able to see and control that information to verify they've not been tampered with, and you move on to your next stage of this very complicated and necessary chain of trust to build, you know, what some people are calling zero-trust type ecosystems. >> Yeah, it's interesting. You know, a lot goes on under the covers. That's good though, right? You want to have greater security and platform integrity, if you can abstract away the complexity, that's key. Now one of the things I like about this conversation is that you mentioned this idea of a hardware-root-of-trust set of technologies. Can you guys just quickly touch on that, because that's one of the major benefits we see from this combination of the partnership, is that it's not just one, each party doing something, it's the combination.
But this notion of hardware-root-of-trust technologies, what is that? >> Yeah, well let me, why don't I go ahead and start on that, and then, you know, Cole can take it from there. Because we provide some of the foundational technologies that underlie a root of trust. Now the idea behind a root of trust, of course, is that you want your platform to, you know, from the moment that first electron hits it from the power supply, that it has a chain of trust that all of the software, firmware, BIOS is loading, to bring that platform up into an operational state is trusted. If you have a breach in one of those lower-level code bases, like in the BIOS or in the system firmware, that can be a huge problem. It can undermine every other software-based security protection that you may have implemented up the stack. So, you know, Intel and HPE work together to coordinate our trusted boot and root-of-trust technologies to make sure that when a customer, you know, boots that platform up, it boots up into a known good state so that it is ready for the customer's workload. So on the Intel side, we've got technologies like our trusted execution technology, or Intel Boot Guard, that then feed into the HPE iLO system to help, you know, create that chain of trust that's rooted in silicon to be able to deliver that known good state to the customer so it's ready for workloads. >> All right, Cole, I got to ask you, with Gen11 HPE platforms that has 4th Gen Intel Xeon, what are the customers really getting? >> So, you know, what a great setup. I'm smiling because it's, like, it has a good answer, because one, this, you know, to be clear, this isn't the first time we've worked on this root-of-trust problem. You know, we have a construct that we call the HPE Silicon Root of Trust. You know, there are, it's an industry standard construct, it's not a proprietary solution to HPE, but it does follow some differentiated steps that we like to say make a little difference in how it's best implemented. And where you see that is that tight, you know, Intel Trusted Execution exchange. The Intel Trusted Execution exchange is a very important step to assuring that route of trust in that HPE Silicon Root of Trust construct, right? So they're not different things, right? We just have an umbrella that we pull under our ProLiant, because there's ILO, our BIOS team, CPLDs, firmware, but I'll tell you this, Gen11, you know, while all that, keeping that moving forward would be good enough, we are not holding to that. We are moving forward. Our uncompromising focus, we want to drive more visibility into that Gen11 server, specifically into the PCIE lanes. And now you're going to be able to see, and measure, and make policies to have control and visibility of the PCI devices, like storage controllers, NICs, direct connect, NVME drives, et cetera. You know, if you follow the trends of where the industry would like to go, all the components in a server would be able to be seen and attested for full infrastructure integrity, right? So, but this is a meaningful step forward between not only the greatness we do together, but, I would say, a little uncompromising focus on this problem and doing a little bit more to make Gen11 Intel's server just a little better for the challenges of the future. >> Yeah, the Tier 1 partnership is really kind of highlighted there. Great, great point. I got to ask you, Mike, on the 4th Gen Xeon Scalable capabilities, what does it do for the customer with Gen11 now that they have these breaches? 
Does it eliminate stuff? What's in it for the customer? What are some of the new things coming out with the Xeon? You're at Gen4, Gen11 for HP, but you guys have new stuff. What does it do for the customer? Does it help eliminate breaches? Are there things that are inherent in the product that HP is jointly working with you on or you were contributing in to the relationship that we should know about? What's new? >> Yeah, well there's so much great new stuff in our new 4th Gen Xeon Scalable processor. This is the one that was codenamed Sapphire Rapids. I mean, you know, more cores, more performance, AI acceleration, crypto acceleration, it's all in there. But one of my favorite security features, and it is one that's called Intel Control-Flow Enforcement Technology, or Intel CET. And why I like CET is because I find the attack that it is designed to mitigate is just evil genius. This type of attack, which is called a return, a jump, or a call-oriented programming attack, is designed to not bring a whole bunch of new identifiable malware into the system, you know, which could be picked up by security software. What it is designed to do is to look for little bits of existing, little bits of existing code already on the server. So if you're running, say, a web server, it's looking for little bits of that web-server code that it can then execute in a particular order to achieve a malicious outcome, something like open a command prompt, or escalate its privileges. Now in order to get those little code bits to execute in an order, it has a control mechanism. And there are different, each of the different types of attacks uses a different control mechanism. But what CET does is it gets in there and it disrupts those control mechanisms, uses hardware to prevent those particular techniques from being able to dig in and take effect. So CET can, you know, disrupt it and make sure that software behaves safely and as the programmer intended, rather than picking off these little arbitrary bits in one of these return, or jump, or call-oriented programming attacks. Now it is a technology that is included in every single one of the new 4th Gen Xeon Scalable processors. And so it's going to be an inherent characteristic the customers can benefit from when they buy a new Gen11 HPE server. >> Cole, more goodness from Intel there impacting Gen11 on the HPE side. What's your reaction to that? >> I mean, I feel like this is exactly why you do business with the big Tier 1 partners, because you can put, you know, trust in from where it comes from, through the global operations, literally, having it hardened from the factory it's finished in, moving into your operating environment, and then now protecting against attacks in your web hosting services, right? I mean, this is great. I mean, you'll always have an attack on data, you know, as you're seeing in the data. But the more contained, the more information, and the more control and trust we can give to our customers, it's going to make their job a little easier in protecting whatever job they're trying to do. >> Yeah, and enterprise customers, as you know, they're always trying to keep up to date on the skills and battle the threats. Having that built in under the covers is a real good way to kind of help them free up their time, and also protect them is really killer. This is a big, big part of the Gen11 story here. Securing the data, securing compute, that's the topic here for this special cube conversation, engineering for a hybrid world. 
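To make the control-flow protection Mike describes a little more concrete, here is a minimal conceptual sketch in Python of a shadow stack, the kind of return-address check that CET-style hardware performs on native code. This is only a toy model of the idea; the addresses are hypothetical and real CET operates at the CPU level, not in application code.

```python
# Toy model of a shadow stack: every call pushes the return address to a
# protected copy, and every return is checked against that copy. A ROP/JOP
# style overwrite of the normal stack then trips the check.

class ControlFlowViolation(Exception):
    pass

class ShadowStack:
    def __init__(self):
        self._stack = []

    def on_call(self, return_address):
        # Record the legitimate return address at call time.
        self._stack.append(return_address)

    def on_return(self, return_address):
        # The address the program wants to return to must match the record.
        expected = self._stack.pop()
        if return_address != expected:
            raise ControlFlowViolation(
                f"return to {return_address:#x}, expected {expected:#x}")

shadow = ShadowStack()
shadow.on_call(0x401000)
shadow.on_return(0x401000)        # normal return, passes the check

shadow.on_call(0x401200)
try:
    shadow.on_return(0x7FFDEADB)  # attacker-controlled return address
except ControlFlowViolation as err:
    print("blocked:", err)
```

The design point is that the protected copy is not reachable through the same writes an attacker uses to corrupt the normal stack, which is why enforcing it in hardware matters.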
Cole, I'll give you the final word. What should people pay attention to, Gen11 from HPE, bottom line, what's the story? >> You know, it's, you know, it's not the first time, it's not the last time, but it's our fundamental security approach to just helping customers through their digital transformation, defend with an uncompromising focus to help protect our infrastructure in these technical solutions. >> Cole Humphreys is the global server security product manager at HPE. He's got his finger on the pulse, keeping everyone secure on the platform integrity side there. Mike Ferron-Jones is the Intel product manager for data security technology. Gentlemen, thank you for this great conversation, getting into the weeds a little bit with Gen11, which is great. Love the hardware root-of-trust technologies, Better Together. Congratulations on Gen11 and your 4th Gen Xeon Scalable. Thanks for coming on. >> All right, thanks, John. >> Thank you very much, guys, appreciate it. Okay, you're watching theCUBE's special presentation, "Securing Compute, Engineered for the Hybrid World." I'm John Furrier, your host. Thanks for watching. (upbeat music)
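As a coda to the chain-of-trust discussion in this interview, here is a minimal conceptual sketch in Python of a measured-boot style check: each stage is hashed before it runs and compared against a known-good measurement, and the boot halts on any mismatch. The stage names and image bytes are hypothetical placeholders; this is not how HPE iLO or Intel Boot Guard is actually implemented, just an illustration of the idea.

```python
import hashlib

# Hypothetical firmware stages and their known-good SHA-256 measurements.
boot_stages = {
    "initial_boot_block": b"...vendor-signed boot block bytes...",
    "uefi_firmware":      b"...UEFI firmware image bytes...",
    "os_bootloader":      b"...bootloader bytes...",
}
expected_measurements = {
    name: hashlib.sha256(image).hexdigest() for name, image in boot_stages.items()
}

def verify_chain(stages, expected):
    """Measure each stage in order; refuse to hand off on any mismatch."""
    for name, image in stages.items():
        digest = hashlib.sha256(image).hexdigest()
        if digest != expected[name]:
            raise RuntimeError(f"measurement mismatch at {name}, halting boot")
        print(f"{name}: measurement OK")
    print("chain of trust intact, handing off to the OS")

verify_chain(boot_stages, expected_measurements)

# Simulate a tampered stage: verification now fails before the OS loads.
tampered = dict(boot_stages, uefi_firmware=b"...modified firmware bytes...")
try:
    verify_chain(tampered, expected_measurements)
except RuntimeError as err:
    print("boot halted:", err)
```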
Breaking Analysis: Cloud players sound a cautious tone for 2023
>> From the Cube Studios in Palo Alto and Boston, bringing you data-driven insights from the Cube and ETR. This is Breaking Analysis with Dave Vellante. >> The unraveling of market enthusiasm continued in Q4 of 2022 with the earnings reports from the US hyperscalers, the big three now all in. As we said earlier this year, even the cloud isn't immune from the macro headwinds, and the cracks in the armor that we saw from the data that we shared last summer, they're playing out into 2023. For the most part, actuals are disappointing relative to expectations, including our own. It turns out that our estimates for the big three hyperscalers' revenue missed by 1.2 billion, or 2.7% lower than we had forecast, even from our most recent November estimates. And we expect continued decelerating growth rates for the hyperscalers through the summer of 2023 and we don't think that's going to abate until comparisons get easier. Hello and welcome to this week's Wikibon Cube Insights powered by ETR. In this Breaking Analysis, we share our view of what's happening in cloud markets, not just for the hyperscalers but other firms that have hitched a ride on the cloud. And we'll share new ETR data that shows why these trends are playing out, tactics that customers are employing to deal with their cost challenges, and how long the pain is likely to last. You know, riding the cloud wave, it's a two-edged sword. Let's look at the players that have gone all in on or are exposed to both the positive and negative trends of cloud. Look, the cloud has been a huge tailwind for so many companies like Snowflake and Databricks, Workday, Salesforce, Mongo's move with Atlas, Red Hat's Cloud strategy with OpenShift and so forth. And you know, the flip side is because cloud is elastic, what comes up can also go down very easily. Here's an XY graphic from ETR that shows spending momentum or net score on the vertical axis and market presence in the dataset on the horizontal axis, pervasion or so-called overlap. This is data from the January 2023 survey and the red dotted lines show the positions of several companies that we've highlighted going back to January 2021. So let's unpack this for a bit starting with the big three hyperscalers. The first point is AWS and Azure continue to solidify their moat relative to Google Cloud Platform. And we're going to get into this in a moment, but Azure and AWS revenues are five to six times that of GCP for IaaS. And at those deltas, Google should be gaining ground much faster than the big two. The second point on Google is notice the red line on GCP relative to its starting point. While it appears to be gaining ground on the horizontal axis, its net score is now below that of AWS and Azure in the survey. So despite its significantly smaller size it's just not keeping pace with the leaders in terms of market momentum. Now looking at AWS and Microsoft, what we see is basically AWS is holding serve. As we know, both Google and Microsoft benefit from including SaaS in their cloud numbers. So the fact that AWS hasn't seen a huge downward momentum relative to its January 2021 position is one positive in the data. And both companies are well above that magic 40% line on the Y-axis; anything above 40% we consider to be highly elevated. But the fact remains that they're down, as are most of the names on this chart. So let's take a closer look. I want to start with Snowflake and Databricks. Snowflake, as we reported several quarters back, came down to Earth, it was up in the 80% range in the Y-axis here.
And it's still highly elevated in the 60% range and it continues to move to the right, which is positive, but as we'll address in a moment, its customers can dial down consumption just as in any cloud. Now, Databricks is really interesting. It's not a public company, it never made it to IPO during the sort of tech bubble. So we don't have the same level of transparency that we do with other companies that did make it through. But look at how much more prominent it is on the X-axis relative to January 2021. And its net score has basically held up over that period of time. So that's a real positive for Databricks. Next, look at Workday and Salesforce. They've held up relatively well, both inching to the right and generally holding their net scores. Same for Mongo, which is the brown dot above its name, near where it says Elastic (it gets a little crowded there), while Elastic's actually the blue dot above it. But generally, SaaS is harder to dial down, Workday, Salesforce, Oracle's SaaS and others. So it's harder to dial down because commitments have been made in advance, they're kind of locked in. Now, one of the discussions from last summer was, is Mongo less discretionary than analytics, i.e. Snowflake? And it's an interesting debate, but maybe Snowflake customers, you know, they're also generally committed to a dollar amount. So over time the spending is going to be there. But in the short term, yeah, maybe Snowflake customers can dial down. Now that highlighted dotted red line, that bolded one, is Datadog and you can see it's made major strides on the X-axis but its net score has decelerated quite dramatically. OpenShift's momentum in the survey has dropped, although IBM just announced that OpenShift has a billion-dollar ARR and I suspect what's happening there is IBM Consulting is bundling OpenShift into its modernization projects. It's got that sort of captive base, if you will. And as such it's probably not as top of mind to the respondents, but I'll bet you the developers are certainly aware of it. Now the other really notable callout here is Cloudflare. We've reported on them earlier. Cloudflare's net score has held up really well since January of 2021. It really hasn't seen the downdraft of some of these others, but it's making major, major moves to the right, gaining market presence. We really like how Cloudflare is performing. And the last comment is on Oracle which, as you can see, despite its much, much lower net score, continues to gain ground in the market and thrive from a profitability standpoint. But the data pretty clearly shows that there's a downdraft in the market. Okay, so what's happening here? Let's dig deeper into this data. Here's a graphic from the most recent ETR drill down asking customers that said they were going to cut spending what technique they're using to do so. Now, as we've previously reported, consolidating redundant vendors is by far the most cited approach, but there's two key points we want to make here. One is reducing excess cloud resources, which, as you can see in the bars, is the second most cited technique and it's up from the previous polling period. The second we're not showing, you know, directly, but we've got some red callouts there. Reducing cloud costs jumps to 29% and 28% respectively in financial services and tech/telco. And it's much closer to second. It's basically neck and neck with consolidating redundant vendors in those two industries. So they're being really aggressive about optimizing cloud cost.
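For readers who want to picture the kind of XY view being described, here is a minimal sketch in Python using matplotlib: net score on the vertical axis, overlap or market presence on the horizontal axis, with the 40% elevated line marked. The vendor names and values are purely hypothetical placeholders, not ETR data.

```python
import matplotlib.pyplot as plt

# name: (overlap_pct, net_score_pct) -- illustrative numbers only
vendors = {
    "Vendor A": (55, 48),
    "Vendor B": (40, 44),
    "Vendor C": (18, 61),
    "Vendor D": (12, 35),
}

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter([v[0] for v in vendors.values()], [v[1] for v in vendors.values()])
for name, (x, y) in vendors.items():
    ax.annotate(name, (x, y), textcoords="offset points", xytext=(5, 5))
ax.axhline(40, linestyle="--", color="red", label="40% elevated line")
ax.set_xlabel("Overlap / market presence in the survey (%)")
ax.set_ylabel("Net score (%)")
ax.legend()
plt.tight_layout()
plt.show()
```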
Okay, so as we said, cloud is great 'cause you can dial it up but it's just as easy to dial down. We've identified six factors that customers tell us are affecting their cloud consumption and there are probably more, if you got more we'd love to hear them but these are the ones that are fairly prominent that have hit our radar. First, rising mortgage rates mean banks are processing fewer loans means less cloud. The crypto crash means less trading activity and that means less cloud resources. Third lower ad spend has led companies to reduce not only you know, their ad buying but also their frequency of running their analytics and their calculations. And they're also often using less data, maybe compressing the timeframe of the corpus down to a shorter time period. Also very prominent is down to the bottom left, using lower cost compute instances. For example, Graviton from AWS or AMD chips and tiering storage to cheaper S3 or deep archived tiers. And finally, optimizing based on better pricing plans. So customers are moving from, you know, smaller companies in particular moving maybe from on demand or other larger companies that are experimenting using on demand or they're moving to spot pricing or reserved instances or optimized savings plans. That all lowers cost and that means less cloud resource consumption and less cloud revenue. Now in the days when everything was on prem CFOs, what would they do? They would freeze CapEx and IT Pros would have to try to do more with less and often that meant a lot of manual tasks. With the cloud it's much easier to move things around. It still takes some thinking and some effort but it's dramatically simpler to do so. So you can get those savings a lot faster. Now of course the other huge factor is you can cut or you can freeze. And this graphic shows data from a recent ETR survey with 159 respondents and you can see the meaningful uptick in hiring freezes, freezing new IT deployments and layoffs. And as we've been reporting, this has been trending up since earlier last year. And note the call out, this is especially prominent in retail sectors, all three of these techniques jump up in retail and that's a bit of a concern because oftentimes consumer spending helps the economy make a softer landing out of a pullback. But this is a potential canary in the coal mine. If retail firms are pulling back it's because consumers aren't spending as much. And so we're keeping a close eye on that. So let's boil this down to the market data and what this all means. So in this graphic we show our estimates for Q4 IaaS revenues compared to the "actual" IaaS revenues. And we say quote because AWS is the only one that reports, you know clean revenue and IaaS, Azure and GCP don't report actuals. Why would they? Because it would make them look even, you know smaller relative to AWS. Rather, they bury the figures in overall cloud which includes their, you know G-Suite for Google and all the Microsoft SaaS. And then they give us little tidbits about in Microsoft's case, Azure, they give growth rates. Google gives kind of relative growth of GCP. So, and we use survey data and you know, other data to try to really pinpoint and we've been covering this for, I don't know, five or six years ever since the cloud really became a thing. But looking at the data, we had AWS growing at 25% this quarter and it came in at 20%. So a significant decline relative to our expectations. 
AWS announced that it exited December, actually, sorry, its January data showed about a 15%, mid-teens growth rate. So that's, you know, something we're watching. Azure was two points off our forecast, coming in at 38% growth. It said it exited December in the 35% growth range and it said that it's expecting five points of deceleration off of that. So think 30% for Azure. GCP came in three points off our expectation, coming in at 35%, and Alibaba has yet to report, but we've shaved a bit off that forecast based on some survey data, and you know what, maybe 9% is even still not enough. Now for the year, the big four hyperscalers generated almost 160 billion of revenue, but that was 7 billion lower than what we expected coming into 2022. For 2023, we're expecting 21% growth for a total of 193.3 billion. And while it's, you know, lower, you know, significantly lower than historical expectations, it's still four to five times the overall spending forecast that we just shared with you in our predictions post of between 4 and 5% for the overall market. We think AWS is going to come in at around 93 billion this year with Azure closing in at over 71 billion. This is, again, we're talking IaaS here. Now, despite Amazon focusing investors on the fact that AWS's absolute dollar growth is still larger than its competitors', by our estimates Azure will come in at more than 75% of AWS's forecasted revenue. That's a significant milestone. AWS's operating margins, by the way, declined significantly this past quarter, dropping from 30% of revenue a year earlier to 24%. Now that's still extremely healthy and we've seen wild fluctuations like this before, so I don't get too freaked out about that. But I'll say this, Microsoft has a marginal cost advantage relative to AWS because one, it has a captive cloud on which to run its massive software estate. So it can just throw software at its own cloud, and two, software marginal economics. Despite AWS's awesomeness and high degrees of automation, software is just a better business. Now the upshot for AWS is the ecosystem. AWS is essentially in our view positioning very smartly as a platform for data partners like Snowflake and Databricks, security partners like CrowdStrike and Okta and Palo Alto and many others, and SaaS companies. You know, Microsoft is more competitive, even though AWS does have competitive products. Now of course Amazon's competitive with retail companies, so that's another factor, but generally speaking for tech players, Amazon is a really thriving ecosystem that is a secret weapon in our view. AWS is happy to spin the meter with its partners even though it sells competitive products, you know, more so in our view than other cloud players. Microsoft, of course, don't forget, is hyping OpenAI and ChatGPT now, we're hearing a lot about it. We reported last week in our predictions post how OpenAI has shot up in terms of market sentiment in ETR's emerging technology company surveys, and people are moving to Azure to get OpenAI and get ChatGPT. That is an interesting lever. Amazon in our view has to have a response. They have lots of AI and they're going to have to make some moves there. Meanwhile, Google is emphasizing itself as an AI-first company. In fact, Google spent at least five minutes of continuous dialogue, nonstop, on its AI chops during its latest earnings call. So that's an area that we're watching very closely as the buzz around large language models continues. All right, let's wrap up with some assumptions for 2023.
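Before the wrap-up, a quick arithmetic check on the rounded figures quoted above; this is only a sketch using the spoken numbers, so small rounding differences are expected.

```python
# Implied 2022 base from the 2023 forecast of 193.3 billion at 21% growth.
implied_2022 = 193.3 / 1.21
print(f"implied 2022 big-four total: ~{implied_2022:.1f}B")    # ~159.8B, i.e. "almost 160 billion"

# Azure as a share of AWS, using the quoted 2023 IaaS estimates in billions.
aws_2023, azure_2023 = 93.0, 71.0
print(f"Azure as a share of AWS: {azure_2023 / aws_2023:.1%}")  # ~76.3%, i.e. more than 75%
```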
We think SaaS players are going to continue to be sticky. They're going to be somewhat insulated from all these downdrafts because they're so tied in, and customers, you know, they make the commitment up front, you've got the lock-in. Now having said that, we do expect some backlash over time on the onerous and generally customer-unfriendly pricing models of most large SaaS companies. But that's going to play out over a longer period of time. Now for cloud generally and the hyperscalers specifically, we do expect decelerating growth rates into Q3, but the amplitude of the demand swings from this rubber band economy, we expect to continue to compress and become more predictable throughout the year. Estimates are coming down, CEOs we think are going to be more cautious when the market snaps back, more cautious about hiring and spending, and as such, perhaps, we expect a more orderly return to growth, which we think will slightly accelerate in Q4 as comps get easier. Now of course the big risk to these scenarios is of course the economy, the Fed, consumer spending, inflation, supply chain, energy prices, wars, geopolitics, China relations, you know, all the usual stuff. But as always, with our partners at ETR and the Cube community, we're here for you. We have the data and we'll be the first to report when we see a change at the margin. Okay, that's a wrap for today. I want to thank Alex Morrison who's on production and manages the podcast, Ken Schiffman as well out of our Boston studio getting this up on LinkedIn Live. Thank you for that. Kristen Martin also and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our Editor-in-Chief over at siliconangle.com. He does some great editing for us. Thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can see all the data. And if you want to get in touch, all you can do is email me at david.vellante@siliconangle.com or DM me @dvellante if you got something interesting, I'll respond. If you don't, it's either 'cause I'm swamped or it's just not tickling me. You can comment on our LinkedIn post as well. And please check out ETR.ai for the best survey data in the enterprise tech business. This is Dave Vellante for the Cube Insights powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (gentle upbeat music)
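As a coda to the cost-optimization levers discussed in this episode, here is a minimal sketch of the storage-tiering idea using boto3: an S3 lifecycle rule that moves aging objects to cheaper and then archival tiers. The bucket name, prefix and day thresholds are hypothetical, and real policies depend on access patterns and retrieval costs.

```python
import boto3

s3 = boto3.client("s3")

# Tier down cold data: infrequent access after 30 days, Glacier after 90,
# deep archive after a year. The values here are placeholders, not a recommendation.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-analytics-bucket",   # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-cold-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "raw-events/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
                ],
            }
        ]
    },
)
```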