

Luis Ceze & Anna Connolly, OctoML | AWS Startup Showcase S3 E1


 

(soft music) >> Hello, everyone. Welcome to theCUBE's presentation of the AWS Startup Showcase. AI and Machine Learning: Top Startups Building Foundational Model Infrastructure. This is season 3, episode 1 of the ongoing series covering the exciting stuff from the AWS ecosystem, talking about machine learning and AI. I'm your host, John Furrier, and today we are excited to be joined by Luis Ceze, who's the CEO of OctoML, and Anna Connolly, VP of customer success and experience at OctoML. Great to have you on again, Luis. Anna, thanks for coming on. Appreciate it. >> Thank you, John. It's great to be here. >> Thanks for having us. >> I love the company. We had a CUBE conversation about this. You guys are really addressing how to run foundational models faster for less. And this is like the key theme. But before we get into it, this is a hot trend, but let's explain what you guys do. Can you set the narrative of what the company's about, why it was founded, what's your North Star and your mission? >> Yeah, so John, our mission is to make AI sustainable and accessible for everyone. And what we offer customers is, you know, a way of taking their models into production in the most efficient way possible by automating the process of getting a model and optimizing it for a variety of hardware and making it cost-effective. So better, faster, cheaper model deployment. >> You know, the big trend here is AI. Everyone's seeing ChatGPT, kind of the shot heard around the world, the Bing AI fiasco, and the ongoing experimentation. People are into it, and I think the business impact is clear. I haven't seen this in all of my career in the technology industry, this kind of inflection point. And every senior leader I talk to is rethinking how to rebuild their business with AI because now the large language models have come in, these foundational models are here, they can see value in their data. This is a 10 year journey in the big data world. Now it's impacting that, and everyone's rebuilding their company around this idea of being AI first 'cause they see ways to eliminate things and make things more efficient. And so now they're telling 'em to go do it. And they're like, what do we do? So what do you guys think? Can you explain what is this wave of AI and why is it happening, why now, and what should people pay attention to? What does it mean to them? >> Yeah, I mean, it's pretty clear by now that AI can do amazing things that capture people's imaginations. And also now can show things that are really impactful in businesses, right? So what people have the opportunity to do today is to either train their own model that adds value to their business or find open models out there that can do very valuable things for them. So the next step really is how do you take that model and put it into production in a cost-effective way so that the business can actually get value out of it, right? >> Anna, what's your take? Because customers are there, you're there to make 'em successful, you got the new secret weapon for their business. >> Yeah, I think we just see a lot of companies struggle to get from a trained model into a model that is deployed in a cost-effective way that actually makes sense for the application they're building. I think that's a huge challenge we see today, kind of across the board, across all of our customers. >> Well, I see this, everyone's asking the same question. I have data, I want to get value out of it. I got to get these big models, I got to train it. What's it going to cost?
So I think there's a reality of, okay, I got to do it. Then no one has any visibility on what it costs. When they get into it, this is going to break the bank. So I have to ask you guys, the cost of training these models is on everyone's mind. OctoML, your company's focused on the cost side of it as well as the efficiency side of running these models in production. Why are the production costs such a concern and where specifically are people looking at it and why did it get here? >> Yeah, so training costs get a lot of attention because it's normally a large number, but we shouldn't forget that it's a large, typically one-time, upfront cost that customers pay. But, you know, when the model is put into production, the cost grows directly with model usage and you actually want your model to be used because it's adding value, right? So, you know, the question that a customer faces is, you know, they have a model, they have a trained model and now what? So how much would it cost to run in production, right? And now with the big wave in generative AI, which rightfully is getting a lot of attention because of the amazing things that it can do, it's important for us to keep in mind that generative AI models like ChatGPT are huge, expensive energy hogs. They cost a lot to run, right? And given that model cost grows directly with usage, what you want to do is make sure that once you put a model into production, you have the best cost structure possible so that you're not surprised when it gets popular, right? So let me give you an example. So if you have a model that costs, say 1 to $2 million to train, but then it costs about one to two cents per session to use it, right? So if you have a million active users, even if they use it just once a day, it's 10 to $20,000 a day to operate that model in production. And that very, very quickly, you know, gets beyond what you paid to train it. >> Anna, these aren't small numbers, and it's cost to train and cost to operate, it kind of reminds me of when the cloud came around and the data center versus cloud options. Like, wait a minute, one, it costs a ton of cash to deploy, and then running it. This is kind of a similar dynamic. What are you seeing? >> Yeah, absolutely. I think we are going to see increasingly the cost in production outpacing the cost in training by a lot. I mean, people talk about training costs now because that's what they're confronting now, because people are so focused on getting models performant enough to even use in an application. And now that we have them and they're that capable, we're really going to start to see production costs go up a lot. >> Yeah, Luis, if you don't mind, I know this might be a little bit of a tangent, but, you know, training's super important. I get that. That's what people are doing now, but then there's the deployment side of production. Where do people get caught up and miss the boat or misconfigure? What's the gotcha? Where's the trip wire, so to speak? Where do people mess up on the cost side? What do they do? Is it that they don't think about it, they tie it to proprietary hardware? What's the issue? >> Yeah, several things, right? So without getting really technical, which, you know, I might get into, you know, you have to understand the relationship between performance, you know, both in terms of latency and throughput, and cost, right? So reducing latency is important because you improve responsiveness of the model.
But it's really important to keep in mind that it often leads to diminishing returns. Below a certain latency, making it faster won't make a measurable difference in experience, but it's going to cost a lot more. So understanding that is important. Now, if you care more about throughput, which is, you know, units processed per period of time, if you care about time to solution, you should think about throughput per dollar. And understand that what you want is the highest throughput per dollar, which may come at the cost of higher latency, which you're not going to care about, right? So, and the reality here, John, is that, you know, humans and especially folks in this space want to have the latest and greatest hardware. And often they commit a lot of money to get access to them and have to commit upfront before they understand the needs that their models have, right? So common mistakes here: one is not spending time to understand what you really need, and then two, over-committing and using more hardware than you actually need, and not giving yourself enough freedom to get your workload to move around to the more cost-effective choice, right? So this is just a metaphoric choice. And then another thing that's important here too is that making a model run faster on the hardware directly translates to lower cost, right? But it takes a lot of engineering; you need to think of ways of producing very efficient versions of your model for the target hardware that you're going to use. >> Anna, what's the customer angle here? Because price performance has been around for a long time, people get that, but now latency and throughput, that's key because we're starting to see this in apps. I mean, there's an end user piece. I'm even seeing it on the infrastructure side where they're taking heavy lifting away from operational costs. So you got, you know, application specific to the user and/or top of the stack, and then you got actually being used in operations where they want both. >> Yeah, absolutely. Maybe I can illustrate this with a quick story with a customer that we had recently been working with. So this customer is planning to run kind of a transformer-based model for text generation at super high scale on Nvidia T4 GPUs, so kind of a commodity GPU. And the scale was so high that they would've been paying hundreds of thousands of dollars in cloud costs per year just to serve this model alone. You know, one of many models in their application stack. So we worked with this team to optimize their model and then benchmark across several possible targets. So that's the hardware matching that Luis was just talking about, including the newer kind of Nvidia A10 GPUs. And what they found during this process was pretty interesting. First, the team was able to shave a quarter of their spend just by using better optimization techniques on the T4, the older hardware. But actually moving to a newer GPU would allow them to serve this model at sub-two-millisecond latency, so super fast, which was able to unlock an entirely new kind of user experience. So they were able to kind of change the value they're delivering in their application just because they were able to move to this new hardware easily. So they ultimately decided to plan their deployment on the more expensive A10 because of this, but because of the hardware-specific optimizations that we helped them with, they managed to even, you know, bring costs down from what they had originally planned.
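To make Luis's throughput-per-dollar framing above concrete, here is a minimal sketch; the instance names, prices, and rates are illustrative assumptions invented for the example, not real benchmarks or quotes.

```python
# Illustrative comparison of two hypothetical serving options.
# All numbers below are made-up assumptions for the sake of the example.
options = {
    "small_gpu": {"price_per_hour": 0.50, "requests_per_sec": 40, "p50_latency_ms": 60},
    "large_gpu": {"price_per_hour": 2.00, "requests_per_sec": 120, "p50_latency_ms": 15},
}

def requests_per_dollar(opt):
    # Throughput per dollar: requests served in an hour divided by the hourly price.
    return opt["requests_per_sec"] * 3600 / opt["price_per_hour"]

for name, opt in options.items():
    print(f"{name}: {requests_per_dollar(opt):,.0f} requests/$, "
          f"p50 latency {opt['p50_latency_ms']} ms")
# small_gpu: 288,000 requests/$, p50 latency 60 ms
# large_gpu: 216,000 requests/$, p50 latency 15 ms
```

Under these assumed numbers the slower, cheaper instance actually wins on throughput per dollar, which is exactly the trade-off Luis describes: if latency is already below what users can perceive, the cheaper option is the better buy.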
And so if you extend this kind of example to everything that's happening with generative AI, I think the story we just talked about was super relevant, but the scale can be even higher, you know, it can be tenfold that. We were recently conducting kind of this internal study using GPT-J as a proxy to illustrate the experience of just a company trying to use one of these large language models with an example scenario of creating a chatbot to help job seekers prepare for interviews. So if you imagine kind of a conservative usage scenario where the model generates just 3000 words per user per day, which is, you know, pretty conservative for how people are interacting with these models, it costs 5 cents a session. And if you're a company and your app goes viral, so from, you know, the beginning of the year there's nobody, and at the end of the year there's a million daily active users, in that year alone, going from zero to a million, you'll be spending about $6 million a year, which is pretty unmanageable. That's crazy, right? >> Yeah. >> For a company or a product that's just launching. So I think, you know, for us we see the real way to make these kind of advancements accessible and sustainable, as we said, is to bring down the cost to serve using these techniques. >> That's a great story and I think that illustrates this idea that deployment cost can vary from situation to situation, from model to model, and that the efficiency is so strong with this new wave, it eliminates heavy lifting, creates more efficiency, automates intellect. I mean, this is the trend, this is radical, this is going to increase. So the cost could go from nominal to millions, literally, potentially. So, this is what customers are doing. Yeah, that's a great story. What makes sense on a financial basis? Is there a cost of ownership? Is there a pattern for best practice for training? What do you guys advise? 'Cause this is a lot of time and money involved in all the potential, you know, good scenarios of upside. But you can get over your skis, as they say, and be successful but be out of business if you don't manage it. I mean, that's what people are talking about, right? >> Yeah, absolutely. I think, you know, we see kind of three main vectors to reduce cost. I think one is make your deployment process easier overall, so that your engineering effort to even get your app running goes down. Two would be get more from the compute you're already paying for; you're already paying, you know, for your instances in the cloud, but can you do more with that? And then three would be shop around for lower cost hardware to match your use case. So on the first one, I think making the deployment easier overall, there's a lot of manual work that goes into benchmarking, optimizing and packaging models for deployment. And because the performance of machine learning models can be really hardware-dependent, you have to go through this process for each target you want to consider running your model on. And this is hard, you know, we see that every day. But for teams who want to incorporate some of these large language models into their applications, it might be desirable because licensing a model from a large vendor like OpenAI can leave you, you know, over provisioned, kind of paying for capabilities you don't need in your application, or can lock you into them and you lose flexibility. So we have a customer whose team actually prepares models for deployment in a SaaS application that many of us use every day.
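To make the back-of-the-envelope math in these two examples concrete, here is a minimal sketch using the per-session figures quoted above; the usage assumptions are simplified, so treat it as an order-of-magnitude illustration rather than a forecast.

```python
# Back-of-the-envelope inference cost, using the figures quoted in the conversation.

def daily_serving_cost(daily_active_users, sessions_per_user, cost_per_session):
    """Cost to serve one day of traffic."""
    return daily_active_users * sessions_per_user * cost_per_session

# Luis's example: one to two cents per session, a million users, once a day.
low = daily_serving_cost(1_000_000, 1, 0.01)
high = daily_serving_cost(1_000_000, 1, 0.02)
print(f"${low:,.0f} to ${high:,.0f} per day")  # $10,000 to $20,000 per day

# Anna's GPT-J scenario: 5 cents a session at a million daily active users.
full_scale = daily_serving_cost(1_000_000, 1, 0.05)
print(f"${full_scale:,.0f} per day at full scale")  # $50,000 per day
# Over a launch year the total depends on how quickly usage ramps from zero
# to a million users; the study quoted above puts that first year at roughly
# $6 million, which quickly dwarfs a one-time training cost.
```

The exact annual figure depends on the growth curve, but the point stands: serving cost scales with usage, while training is paid once.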
And they told us recently that without kind of an automated benchmarking and experimentation platform, they were spending several days each to benchmark a single model on a single hardware type. So this is really, you know, manually intensive. And then, getting more from the compute you're already paying for: we do see customers who leave money on the table by running models that haven't been optimized specifically for the hardware target they're using, like Luis was mentioning. And for some teams they just don't have the time to go through an optimization process, and for others they might lack kind of specialized expertise, and this is something we can bring. And then on shopping around for different hardware types, we really see a huge variation in model performance across hardware, not just CPU vs. GPU, which is, you know, what people normally think of, but across CPU vendors themselves, high-memory instances, and across cloud providers even. So the best strategy here is for teams to really be able to, we say, look before you leap by running real-world benchmarking and not just simulations or predictions to find the best software, hardware combination for their workload. >> Yeah. You guys sound like you have a very impressive customer base deploying large language models. Where would you categorize your current customer base? And as you look out, as you guys are growing, you have new customers coming in, take me through the progression. Take me through the profile of some of your customers you have now, size, are they hyperscalers, are they big app folks, are they kicking the tires? And then as people are out there scratching their heads, I got to get in this game, what's their psychology like? Are they coming in with specific problems or do they have a specific orientation or point of view about what they want to do? Can you share some data around what you're seeing? >> Yeah, I think, you know, we have customers that kind of range across the spectrum of sophistication, from teams that basically don't have MLOps expertise in their company at all. And so they're really looking for us to kind of give full service: how should I do everything from, you know, optimization to finding the hardware to preparing for deployment. And then we have teams that, you know, maybe already have their serving and hosting infrastructure up and ready and they already have models in production, and they're really just looking to, you know, take the extra juice out of the hardware and focus really specifically on that optimization piece. I think one place where we're doing a lot more work now is kind of in the developer tooling, you know, model selection space. And that's kind of an area that we're creating more tools for, particularly within the PyTorch ecosystem, to bring kind of this power earlier in the development cycle so that as people are grabbing a model off the shelf, they can, you know, see how it might perform and use that to inform their development process. >> Luis, what's the big, I like this idea of picking the models, because isn't that like going to the market and picking the best model for your data? It's like, you know, aren't there certain approaches? What's your view on this? 'Cause this is where everyone, I think it's going to be a land rush for this and I want to get your thoughts. >> For sure, yeah.
So, you know, I guess I'll start with saying the one main takeaway that we got from the GPT-J study is that, you know, having a clear understanding of what your model's compute and memory requirements are, very quickly, early on, helps with much smarter AI model deployments, right? So, and in fact, you know, Anna just touched on this, but I want to, you know, make sure that it's clear that OctoML is putting that power into users' hands right now. So in partnership with AWS, we are launching this new PyTorch-native profiler that, with a single, you know, one-line code decorator, allows you to see how your code runs on a variety of different hardware after acceleration. So it gives you very clear, you know, data on how you should think about your model deployments. And this ties back to choices of models. So like, if you have a set of model choices that are equally good in terms of functionality and you want to understand, after acceleration, how you are going to deploy them, how much they're going to cost, or what the options are, using an automated process of making a decision is really, really useful. And in fact, folks watching this event can get early access to this by signing up for the Octopod, you know, this is an exclusive group for insiders here, so you can go to OctoML.ai/pods to sign up. >> So that Octopod, is that a program? What is that, is that access to code? Is that a beta, what is that? Explain, take a minute and explain Octopod. >> I think the Octopod would be a group of people who are interested in experiencing this functionality. So it is the friends and users of OctoML that would be the Octopod. And then yes, after you sign up, we would provide you essentially the tool in code form for you to try out on your own. I mean, part of the benefit of this is that it happens in your own local environment and you're in control of everything kind of within the workflow that developers are already using to create and begin putting these models into their applications. So it would all be within your control. >> Got it. I think the big question I have for you is when does one of your customers know they need to call you? What's their environment look like? What are they struggling with? What are the conversations they might be having on their side of the fence? If anyone's watching this, they're like, "Hey, you know what, I've got my team, we have a lot of data. Do we have our own language model or do I use someone else's?" There's a lot of this, I will say, discovery going on around what to do, what path to take, what does that customer look like. If someone's listening, when do they know to call you guys, OctoML? >> Well, I mean the most obvious one is if you have a significant spend on AI/ML, come and talk to us, you know, about putting AI/ML into production. So that's the clear one. In fact, just this morning I was talking to someone who is in the life sciences space and is spending, you know, 15 to $20 million a year on cloud related to AI/ML deployment; that's a pretty clear match right there, right? So that's on the cost side. But I also want to emphasize something that Anna said earlier, that, you know, the hardware and software complexity involved in putting a model into production is really high. So we've been able to abstract that away; offering a clean automation flow enables one to experiment early on, you know, with how models would run and get them to production.
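As a rough sketch of the decorator-based profiling workflow Luis describes, here is a hypothetical example. The `octoml_profile` module name, the `accelerate` decorator, the `remote_profile` context manager, and the behavior shown are placeholders assumed for illustration, not a confirmed OctoML API; check the actual documentation via the Octopod sign-up before relying on any of it.

```python
# Hypothetical sketch only: module, decorator, and context manager names below
# are assumed for illustration and may not match OctoML's real interface.
import torch
from octoml_profile import accelerate, remote_profile  # assumed names

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)

@accelerate  # the "one-line code decorator" idea: wrap existing inference code
def predict(batch):
    return model(batch)

# Run a few representative inputs; a profiler like this would report how the
# same code runs, and what it would cost, across a set of hardware targets.
with remote_profile():
    for _ in range(10):
        predict(torch.randn(8, 512))
```

The value of this kind of workflow is the one Luis points to: you see cost and performance per hardware target before you commit to an instance type, rather than after the bill arrives.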
And then two, once they are in production, it gives you an automated flow for continuously updating your model and taking advantage of all this acceleration and the ability to run the model on the right hardware. So anyways, let's say one then is cost, you know, you have significant cost, and then two, you have automation needs. And Anna, please complement that. >> Yeah, Anna you can please- >> Yeah, I think that's exactly right. Maybe the other time is when you are expecting a big scale-up in serving your application, right? You're launching a new feature, you expect to get a lot of usage, and you want to kind of anticipate that maybe your CTO, your CIO, whoever pays your cloud bills, is going to come after you, right? And so they want to know, you know, what's the return on putting this model essentially into my application stack? Is the usage going to match what I'm paying for it? And then you can understand that. >> So you guys have a lot of the early adopters, they got big data teams, they're pushing into production, they want to get a little QA, test the waters, understand, use your technology to figure it out. Are there any cases where people have gone into production and they have to pull it out? It's like the old lemon laws with your car, you buy a car and oh my god, it's not the way I wanted it. I mean, I can imagine the early people through the wall, so to speak, in the wave here are going to be bloody in the sense that they've gone in and tried stuff and got stuck with huge bills. Are you seeing that? Are people pulling stuff out of production and redeploying? Or I can imagine that if I had a bad deployment, I'd want to refactor that or actually replatform that. Do you see that too? >> Definitely after a sticker shock, yes, customers will come and make sure that, you know, the sticker shock won't happen again. >> Yeah. >> But then there's another, more thorough aspect here that I think we lightly touched on, and it'd be worth elaborating a bit more: just how are you going to scale in a way that's feasible depending on the allocation that you get, right? So as we mentioned several times here, you know, model deployment is so hardware dependent and so complex that you tend to get a model for a hardware choice and then you want to scale that specific type of instance. But what if, when you want to scale because it suddenly, luckily, got popular and, you know, you want to scale it up, you don't have that instance anymore? So how do you live with whatever you have at that moment is something that we see customers needing as well. You know, so in fact, ideally what we want is customers to not think about what kind of specific instances they want. What they want is to know what their models need. Say they know the SLA and then find a set of hardware targets and instances that hit the SLA; when they're scaling, they're going to scale with more freedom, right? Instead of having to wait for AWS to give them more specific allocation for a specific instance, what if you could live with other types of hardware and scale up in a more free way, right? So that's another thing that we see with customers, you know, they need more freedom to be able to scale with whatever is available. >> Anna, you touched on this with the business model impact of that $6 million cost. If that goes out of control, there's a business model aspect and there's a technical operation aspect to the cost side too. You want to be mindful of riding the wave in a good way, but not getting over your skis.
So that brings up the point around, you know, confidence, right? And teamwork. Because if you're in production, there's probably a team behind it. Talk about the team aspect of your customers. I mean, they're dedicated, they go put stuff into production, they're developers, there're data teams. What's in it for them? Are they getting better? Are they at the beach, you know, reading a book? Is it, you know, easy street for them? What's the customer benefit to the teams? >> Yeah, absolutely. With just a few clicks of a button, you're in production, right? That's the dream. So yeah, I mean I think that, you know, we illustrated it before a little bit. I think the automated kind of benchmarking and optimization process, like when you think about the effort it takes to get that data by hand, which is what people are doing today, they just don't do it. So they're making decisions without the best information because, you know, there just isn't the bandwidth to get the information that they need to make the best decision and then know exactly how to deploy it. So I think it's actually bringing kind of a new insight and capability to these teams that they didn't have before. And then maybe another aspect on the team side is that it's making the hand-off of the models from the data science teams to the model deployment teams more seamless. So we have, you know, we have seen in the past that this kind of transition point is the place where there are a lot of hiccups, right? The data science team will give a model to the production team and it'll be too slow for the application or it'll be too expensive to run and it has to go back and be changed, and kind of this loop. And so, you know, with the PyTorch profiler that Luis was talking about, and then also, you know, the other ways we do optimization, that kind of prevents that hand-off problem from happening. >> Luis and Anna, you guys have a great company. Final couple minutes left. Talk about the company, the people there, what's the culture like? You know, if Intel has Moore's law, which is, you know, doubling the performance every few years, what's the culture like there? Is it, you know, more throughput, better pricing? Explain what's going on with the company and put a plug in. Luis, we'll start with you. >> Yeah, absolutely. I'm extremely proud of the team that we built here. You know, we have a people-first culture, you know, very, very collaborative, and folks, we all have a shared mission here of making AI more accessible and sustainable. We have a very diverse team in terms of backgrounds and life stories, you know. To do what we do here, we need a team that has expertise in software engineering, in machine learning, in computer architecture. Even though we don't build chips, we need to understand how they work, right? So, and then, you know, the fact that we have this really, really varied set of backgrounds makes the environment, you know, let's say, very exciting to learn more about, you know, systems end-to-end. But it also makes for a very interesting, you know, work environment, right? So people have different backgrounds, different stories. Some of them went to grad school, others, you know, were in intelligence agencies and now are working here, you know. So we have a really interesting set of people and, you know, life is too short not to work with interesting humans. You know, that's something that I like to think about, you know.
>> I'm sure your off-site meetings are a lot of fun, people talking about computer architectures, silicon advances, the next GPU, the big data models coming in. Anna, what's your take? What's the culture like? What's the company vibe and what are you guys looking to do? What's the customer success pattern? What's up? >> Yeah, absolutely. I mean, I, you know, second all of the great things that Luis just said about the team. I think an additional one that I'd really like to underscore is kind of this customer obsession, to use a term you all know well. And focus on the end users and really making the experiences that we're bringing to our users, who are developers, really, you know, useful and valuable for them. And so I think, you know, all of these tools that we're trying to put in the hands of users, the industry and the market is changing so rapidly that our products across the board, you know, all of the companies that, you know, are part of the showcase today, we're all evolving them so quickly, and we can only do that kind of really hand in glove with our users. So that would be another thing I'd emphasize. >> I think the change dynamic, the power dynamics of this industry, is just the beginning. I'm very bullish that this is going to be probably one of the biggest inflection points in the history of the computer industry because of all the dynamics of the confluence of all the forces, and you mentioned some of them, I mean the PC, you know, interoperability with internetworking, and you got, you know, the web and then mobile. Now we have this, I mean, I wouldn't even put social media even close to this. Like, this is like, changes user experience, changes infrastructure. There's going to be massive accelerations in performance on the hardware side from the AWS's of the world and cloud, and you got the edge and more data. This is really what big data was going to look like. This is the beginning. Final question, what do you guys see going forward in the future? >> Well, it's undeniable that machine learning and AI models are becoming an integral part of any interesting application today, right? So, and the clear trends here are, you know, more and more computational needs for these models because they're only getting more and more powerful. And then two, you know, seeing the complexity of the infrastructure where they run, you know, just considering the cloud, there's like a wide variety of choices there, right? So being able to live with that and making the most out of it in a way that does not require, you know, an impossible-to-find team is something that's pretty clear. So the need for automation, abstracting away the complexity, is definitely here. And we are seeing this, you know, the trends are that you also see models starting to move to the edge as well. So it's clear that we're seeing, we are going to live in a world where there are large models living in the cloud and then, you know, edge models that talk to these models in the cloud to form, you know, an end-to-end truly intelligent application. >> Anna? >> Yeah, I think, you know, as Luis said at the beginning, our vision is to make AI sustainable and accessible. And I think as this technology just expands in every company and every team, that's going to happen kind of on its own. And we're here to help support that. And I think you can't do that without tools like OctoML's.
>> I think it's going to be an era of massive invention, creativity; a lot of the heavy lifting is going to be automated, which is going to allow the talented people to automate their intellect. I mean, this is really kind of what we see going on. And Luis, thank you so much. Anna, thanks for coming on this segment. Thanks for coming on theCUBE and being part of the AWS Startup Showcase. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Mar 9 2023


Robert Nishihara, Anyscale | AWS Startup Showcase S3 E1


 

(upbeat music) >> Hello everyone. Welcome to theCube's presentation of the "AWS Startup Showcase." The topic this episode is AI and machine learning, top startups building foundational model infrastructure. This is season three, episode one of the ongoing series covering exciting startups from the AWS ecosystem. And this time we're talking about AI and machine learning. I'm your host, John Furrier. I'm excited I'm joined today by Robert Nishihara, who's the co-founder and CEO of a hot startup called Anyscale. He's here to talk about Ray, the open source project, and Anyscale's infrastructure for foundation models as well. Robert, thank you for joining us today. >> Yeah, thanks so much as well. >> I've been following your company since the founding pre-pandemic and you guys really had a great vision, scaled up, and are in a perfect position for this big wave that we all see with ChatGPT and OpenAI that's gone mainstream. Finally, AI has broken out through the ropes and now gone mainstream, so I think you guys are really well positioned. I'm looking forward to talking with you today. But before we get into it, introduce the core mission for Anyscale. Why do you guys exist? What is the North Star for Anyscale? >> Yeah, like you mentioned, there's a tremendous amount of excitement about AI right now. You know, I think a lot of us believe that AI can transform just every different industry. So one of the things that was clear to us when we started this company was that the amount of compute needed to do AI was just exploding. Like to actually succeed with AI, companies like OpenAI or Google or, you know, these companies getting a lot of value from AI, were not just running these machine learning models on their laptops or on a single machine. They were scaling these applications across hundreds or thousands or more machines and GPUs and other resources in the Cloud. And so to actually succeed with AI, and this has been one of the biggest trends in computing, maybe the biggest trend in computing in, you know, in recent history, the amount of compute has been exploding. And so to actually succeed with that AI, to actually build these scalable applications and scale the AI applications, there's a tremendous software engineering lift to build the infrastructure to actually run these scalable applications. And that's very hard to do. So one of the reasons many AI projects and initiatives fail, or don't make it to production, is the need for this scale, the infrastructure lift, to actually make it happen. So our goal here with Anyscale and Ray is to make that easy, is to make scalable computing easy. So that as a developer or as a business, if you want to do AI, if you want to get value out of AI, all you need to know is how to program on your laptop. Like, all you need to know is how to program in Python. And if you can do that, then you're good to go. Then you can do what companies like OpenAI or Google do and get value out of machine learning. >> That programming example of how easy it is with Python reminds me of the early days of Cloud, when infrastructure as code was talked about: it was just code, the infrastructure made programmable. That's super important. That's what AI people wanted: first, program AI. That's the new trend. And I want to understand, if you don't mind explaining, the relationship that Anyscale has to these foundational models and in particular the large language models, also called LLMs, as seen with OpenAI and ChatGPT.
Before you get into the relationship that you have with them, can you explain why the hype around foundational models? Why are people going crazy over foundational models? What is it and why is it so important? >> Yeah, so foundational models and foundation models are incredibly important because they enable businesses and developers to get value out of machine learning, to use machine learning off the shelf with these large models that have been trained on tons of data and that are useful out of the box. And then, of course, you know, as a business or as a developer, you can take those foundational models and repurpose them or fine tune them or adapt them to your specific use case and what you want to achieve. But it's much easier to do that than to train them from scratch. And I think there are three, for people to actually use foundation models, there are three main types of workloads or problems that need to be solved. One is training these foundation models in the first place, like actually creating them. The second is fine tuning them and adapting them to your use case. And the third is serving them and actually deploying them. Okay, so Ray and Anyscale are used for all of these three different workloads. Companies like OpenAI or Cohere that train large language models. Or open source versions like GPTJ are done on top of Ray. There are many startups and other businesses that fine tune, that, you know, don't want to train the large underlying foundation models, but that do want to fine tune them, do want to adapt them to their purposes, and build products around them and serve them, those are also using Ray and Anyscale for that fine tuning and that serving. And so the reason that Ray and Anyscale are important here is that, you know, building and using foundation models requires a huge scale. It requires a lot of data. It requires a lot of compute, GPUs, TPUs, other resources. And to actually take advantage of that and actually build these scalable applications, there's a lot of infrastructure that needs to happen under the hood. And so you can either use Ray and Anyscale to take care of that and manage the infrastructure and solve those infrastructure problems. Or you can build the infrastructure and manage the infrastructure yourself, which you can do, but it's going to slow your team down. It's going to, you know, many of the businesses we work with simply don't want to be in the business of managing infrastructure and building infrastructure. They want to focus on product development and move faster. >> I know you got a keynote presentation we're going to go to in a second, but I think you hit on something I think is the real tipping point, doing it yourself, hard to do. These are things where opportunities are and the Cloud did that with data centers. Turned a data center and made it an API. The heavy lifting went away and went to the Cloud so people could be more creative and build their product. In this case, build their creativity. Is that kind of what's the big deal? Is that kind of a big deal happening that you guys are taking the learnings and making that available so people don't have to do that? >> That's exactly right. So today, if you want to succeed with AI, if you want to use AI in your business, infrastructure work is on the critical path for doing that. To do AI, you have to build infrastructure. You have to figure out how to scale your applications. That's going to change. 
We're going to get to the point, and you know, with Ray and Anyscale, we're going to remove the infrastructure from the critical path so that as a developer or as a business, all you need to focus on is your application logic, what you want the program to do, what you want your application to do, how you want the AI to actually interface with the rest of your product. Now the way that will happen is that Ray and Anyscale will still, the infrastructure work will still happen. It'll just be under the hood and taken care of by Ray and Anyscale. And so I think something like this is really necessary for AI to reach its potential, for AI to have the impact and the reach that we think it will; you have to make it easier to do. >> And just for clarification, to point out, if you don't mind explaining the relationship of Ray and Anyscale real quick just before we get into the presentation. >> So Ray is an open source project. We created it. We were at Berkeley doing machine learning. We started Ray in order to provide an easy, simple open source tool for building and running scalable applications. And Anyscale is the managed version of Ray; basically we will run Ray for you in the Cloud, provide a lot of tools around the developer experience and managing the infrastructure and providing more performance and superior infrastructure. >> Awesome. I know you got a presentation on Ray and Anyscale and you guys are positioning it as the infrastructure for foundational models. So I'll let you take it away and then when you're done presenting, we'll come back, I'll probably grill you with a few questions and then we'll close it out, so take it away. >> Robert: Sounds great. So I'll say a little bit about how companies are using Ray and Anyscale for foundation models. The first thing I want to mention is just why we're doing this in the first place. And the underlying observation, the underlying trend here, and this is a plot from OpenAI, is that the amount of compute needed to do machine learning has been exploding. It's been growing at something like 35 times every 18 months. This is absolutely enormous. And other people have written papers measuring this trend and you get different numbers. But the point is, no matter how you slice and dice it, it's an astronomical rate. Now if you compare that to something we're all familiar with, like Moore's Law, which says that, you know, the processor performance doubles every roughly 18 months, you can see that there's just a tremendous gap between the needs, the compute needs of machine learning applications, and what you can do with a single chip, right. So even if Moore's Law were continuing strong and, you know, doing what it used to be doing, even if that were the case, there would still be a tremendous gap between what you can do with the chip and what you need in order to do machine learning. And so given this graph, what we've seen, and what has been clear to us since we started this company, is that doing AI requires scaling. There's no way around it. It's not a nice to have, it's really a requirement. And so that led us to start Ray, which is the open source project that we started to make it easy to build these scalable Python applications and scalable machine learning applications. And since we started the project, it's been adopted by a tremendous number of companies.
Companies like OpenAI, which use Ray to train their large models like ChatGPT, companies like Uber, which run all of their deep learning and classical machine learning on top of Ray, companies like Shopify or Spotify or Instacart or Lyft or Netflix, ByteDance, which use Ray for their machine learning infrastructure. Companies like Ant Group, which makes Alipay, you know, they use Ray across the board for fraud detection, for online learning, for detecting money laundering, you know, for graph processing, stream processing. Companies like Amazon, you know, run Ray at a tremendous scale, processing just petabytes of data every single day. And so the project has seen just enormous adoption since then, over the past few years. And one of the most exciting use cases is really providing the infrastructure for building, training, fine tuning, and serving foundation models. So I'll say a little bit about, you know, here are some examples of companies using Ray for foundation models. Cohere trains large language models. OpenAI also trains large language models. The workloads required there are things like supervised pre-training and also reinforcement learning from human feedback. So this is not only the regular supervised learning, but actually more complex reinforcement learning workloads that take human input about which response to a particular question, you know, is better than a certain other response, and incorporate that into the learning. There are open source versions as well, like GPT-J, also built on top of Ray, as well as projects like Alpa coming out of UC Berkeley. So these are some examples of exciting projects and organizations training and creating these large language models and serving them using Ray. Okay, so what actually is Ray? Well, there are two layers to Ray. At the lowest level, there's the core Ray system. This is essentially low level primitives for building scalable Python applications. Things like taking a Python function or a Python class and executing them in the cluster setting. So Ray core is extremely flexible and you can build arbitrary scalable applications on top of Ray. So on top of Ray, on top of the core system, what really gives Ray a lot of its power is this ecosystem of scalable libraries. So on top of the core system you have libraries, scalable libraries for ingesting and pre-processing data, for training your models, for fine tuning those models, for hyperparameter tuning, for doing batch processing and batch inference, for doing model serving and deployment, right. And a lot of the Ray users, the reason they like Ray is that they want to run multiple workloads. They want to train and serve their models, right. They want to load their data and feed that into training. And Ray provides common infrastructure for all of these different workloads. So this is a little overview of what Ray is, the different components of Ray. So why do people choose to go with Ray? I think there are three main reasons. The first is the unified nature. The fact that it is common infrastructure for scaling arbitrary workloads, from data ingest to pre-processing to training to inference and serving, right. This also includes the fact that it's future proof. AI is incredibly fast moving. And so many people, many companies that have built their own machine learning infrastructure and standardized on particular workflows for doing machine learning have found that their workflows are too rigid to enable new capabilities.
If they want to do reinforcement learning, if they want to use graph neural networks, they don't have a way of doing that with their standard tooling. And so Ray, being future proof and being flexible and general, gives them that ability. Another reason people choose Ray and Anyscale is the scalability. This is really our bread and butter. This is the reason, the whole point of Ray, you know, making it easy to go from your laptop to running on thousands of GPUs, making it easy to scale your development workloads and run them in production, making it easy to scale, you know, training, to scale data ingest, pre-processing and so on. So scalability and performance, you know, are critical for doing machine learning and that is something that Ray provides out of the box. And lastly, Ray is an open ecosystem. You can run it anywhere. You can run it on any Cloud provider. Google, you know, Google Cloud, AWS, Azure. You can run it on your Kubernetes cluster. You can run it on your laptop. It's extremely portable. And not only that, it's framework agnostic. You can use Ray to scale arbitrary Python workloads. You can use it to scale, and it integrates with libraries like TensorFlow or PyTorch or JAX or XGBoost or Hugging Face or PyTorch Lightning, right, or Scikit-learn or just your own arbitrary Python code. It's open source. And in addition to integrating with the rest of the machine learning ecosystem and these machine learning frameworks, you can use Ray along with all of the other tooling in the machine learning ecosystem. That's things like Weights & Biases or MLflow, right. Or, you know, different data platforms like Databricks, you know, Delta Lake or Snowflake, or tools for model monitoring, for feature stores; all of these integrate with Ray. And that's, you know, Ray provides that kind of flexibility so that you can integrate it into the rest of your workflow. And then Anyscale is the scalable compute platform that's built on top, you know, that provides Ray. So Anyscale is a managed Ray service that runs in the Cloud. And what Anyscale does is it offers the best way to run Ray. And if you think about what you get with Anyscale, there are fundamentally two things. One is about moving faster, accelerating the time to market. And you get that by having the managed service so that as a developer you don't have to worry about managing infrastructure, you don't have to worry about configuring infrastructure. It also provides, you know, optimized developer workflows. Things like easily moving from development to production, things like having the observability tooling, the debuggability to actually easily diagnose what's going wrong in a distributed application. So things like the dashboards and the other kinds of tooling for collaboration, for monitoring and so on. And then on top of that, so that's the first bucket, developer productivity, moving faster, faster experimentation and iteration. The second reason that people choose Anyscale is superior infrastructure. So this is things like, you know, cost efficiency, being able to easily take advantage of spot instances, being able to get higher GPU utilization, things like faster cluster startup times and auto scaling. Things like just overall better performance and faster scheduling. And so these are the kinds of things that Anyscale provides on top of Ray. It's the managed infrastructure. It's fast, it's like the developer productivity and velocity as well as performance. So this is what I wanted to share about Ray and Anyscale.
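For readers who have not seen the core Ray primitives Robert mentions in the keynote above (taking an ordinary Python function or class and executing it on a cluster), here is a minimal sketch. It follows Ray's documented `@ray.remote` pattern; the toy workload itself is invented for illustration.

```python
import ray

ray.init()  # connect to an existing cluster, or start a local one

@ray.remote
def preprocess(shard):
    # An ordinary Python function, now runnable anywhere in the cluster as a task.
    return [x * 2 for x in shard]

@ray.remote
class Counter:
    # An ordinary Python class, now a stateful actor living in the cluster.
    def __init__(self):
        self.total = 0

    def add(self, n):
        self.total += n
        return self.total

# Launch four tasks in parallel and gather their results.
futures = [preprocess.remote(list(range(i, i + 3))) for i in range(4)]
print(ray.get(futures))

# Call a method on the remote actor.
counter = Counter.remote()
print(ray.get(counter.add.remote(5)))
```

The scalable libraries Robert lists (data ingest, training, tuning, serving) are built on top of exactly these task and actor primitives.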
>> John: Awesome. >> Provide that context. But John, I'm curious what you think. >> I love it. I love the, so first of all, it's a platform because that's the platform architecture right there. So just to clarify, this is an Anyscale platform, not- >> That's right. >> Tools. So you got tools in the platform. Okay, that's key. Love that managed service. Just curious, you mentioned Python multiple times, is that because of PyTorch and TensorFlow or Python's the most friendly with machine learning or it's because it's very common amongst all developers? >> That's a great question. Python is the language that people are using to do machine learning. So it's the natural starting point. Now, of course, Ray is actually designed in a language agnostic way and there are companies out there that use Ray to build scalable Java applications. But for the most part right now we're focused on Python and being the best way to build these scalable Python and machine learning applications. But, of course, down the road there always is that potential. >> So if you're slinging Python code out there and you're watching that, you're watching this video, get on Anyscale bus quickly. Also, I just, while you were giving the presentation, I couldn't help, since you mentioned OpenAI, which by the way, congratulations 'cause they've had great scale, I've noticed in their rapid growth 'cause they were the fastest company to the number of users than anyone in the history of the computer industry, so major successor, OpenAI and ChatGPT, huge fan. I'm not a skeptic at all. I think it's just the beginning, so congratulations. But I actually typed into ChatGPT, what are the top three benefits of Anyscale and came up with scalability, flexibility, and ease of use. Obviously, scalability is what you guys are called. >> That's pretty good. >> So that's what they came up with. So they nailed it. Did you have an inside prompt training, buy it there? Only kidding. (Robert laughs) >> Yeah, we hard coded that one. >> But that's the kind of thing that came up really, really quickly if I asked it to write a sales document, it probably will, but this is the future interface. This is why people are getting excited about the foundational models and the large language models because it's allowing the interface with the user, the consumer, to be more human, more natural. And this is clearly will be in every application in the future. >> Absolutely. This is how people are going to interface with software, how they're going to interface with products in the future. It's not just something, you know, not just a chat bot that you talk to. This is going to be how you get things done, right. How you use your web browser or how you use, you know, how you use Photoshop or how you use other products. Like you're not going to spend hours learning all the APIs and how to use them. You're going to talk to it and tell it what you want it to do. And of course, you know, if it doesn't understand it, it's going to ask clarifying questions. You're going to have a conversation and then it'll figure it out. >> This is going to be one of those things, we're going to look back at this time Robert and saying, "Yeah, from that company, that was the beginning of that wave." And just like AWS and Cloud Computing, the folks who got in early really were in position when say the pandemic came. 
So getting in early is a good thing and that's what everyone's talking about is getting in early and playing around, maybe replatforming or even picking one or few apps to refactor with some staff and managed services. So people are definitely jumping in. So I have to ask you the ROI cost question. You mentioned some of those, Moore's Law versus what's going on in the industry. When you look at that kind of scale, the first thing that jumps out at people is, "Okay, I love it. Let's go play around." But what's it going to cost me? Am I going to be tied to certain GPUs? What's the landscape look like from an operational standpoint, from the customer? Are they locked in and the benefit was flexibility, are you flexible to handle any Cloud? What is the customers, what are they looking at? Basically, that's my question. What's the customer looking at? >> Cost is super important here and many of the companies, I mean, companies are spending a huge amount on their Cloud computing, on AWS, and on doing AI, right. And I think a lot of the advantage of Anyscale, what we can provide here is not only better performance, but cost efficiency. Because if we can run something faster and more efficiently, it can also use less resources and you can lower your Cloud spending, right. We've seen companies go from, you know, 20% GPU utilization with their current setup and the current tools they're using to running on Anyscale and getting more like 95, you know, 100% GPU utilization. That's something like a five x improvement right there. So depending on the kind of application you're running, you know, it's a significant cost savings. We've seen companies that have, you know, processing petabytes of data every single day with Ray going from, you know, getting order of magnitude cost savings by switching from what they were previously doing to running their application on Ray. And when you have applications that are spending, you know, potentially $100 million a year and getting a 10 X cost savings is just absolutely enormous. So these are some of the kinds of- >> Data infrastructure is super important. Again, if the customer, if you're a prospect to this and thinking about going in here, just like the Cloud, you got infrastructure, you got the platform, you got SaaS, same kind of thing's going to go on in AI. So I want to get into that, you know, ROI discussion and some of the impact with your customers that are leveraging the platform. But first I hear you got a demo. >> Robert: Yeah, so let me show you, let me give you a quick run through here. So what I have open here is the Anyscale UI. I've started a little Anyscale Workspace. So Workspaces are the Anyscale concept for interactive developments, right. So here, imagine I'm just, you want to have a familiar experience like you're developing on your laptop. And here I have a terminal. It's not on my laptop. It's actually in the cloud running on Anyscale. And I'm just going to kick this off. This is going to train a large language model, so OPT. And it's doing this on 32 GPUs. We've got a cluster here with a bunch of CPU cores, bunch of memory. And as that's running, and by the way, if I wanted to run this on instead of 32 GPUs, 64, 128, this is just a one line change when I launch the Workspace. And what I can do is I can pull up VS code, right. Remember this is the interactive development experience. I can look at the actual code. Here it's using Ray train to train the torch model. 
We've got the training loop and we're saying that each worker gets access to one GPU and four CPU cores. And, of course, as I make the model larger, this is using deep speed, as I make the model larger, I could increase the number of GPUs that each worker gets access to, right. And how that is distributed across the cluster. And if I wanted to run on CPUs instead of GPUs or a different, you know, accelerator type, again, this is just a one line change. And here we're using Ray train to train the models, just taking my vanilla PyTorch model using Hugging Face and then scaling that across a bunch of GPUs. And, of course, if I want to look at the dashboard, I can go to the Ray dashboard. There are a bunch of different visualizations I can look at. I can look at the GPU utilization. I can look at, you know, the CPU utilization here where I think we're currently loading the model and running that actual application to start the training. And some of the things that are really convenient here about Anyscale, both I can get that interactive development experience with VS code. You know, I can look at the dashboards. I can monitor what's going on. It feels, I have a terminal, it feels like my laptop, but it's actually running on a large cluster. And I can, with however many GPUs or other resources that I want. And so it's really trying to combine the best of having the familiar experience of programming on your laptop, but with the benefits, you know, being able to take advantage of all the resources in the Cloud to scale. And it's like when, you know, you're talking about cost efficiency. One of the biggest reasons that people waste money, one of the silly reasons for wasting money is just forgetting to turn off your GPUs. And what you can do here is, of course, things will auto terminate if they're idle. But imagine you go to sleep, I have this big cluster. You can turn it off, shut off the cluster, come back tomorrow, restart the Workspace, and you know, your big cluster is back up and all of your code changes are still there. All of your local file edits. It's like you just closed your laptop and came back and opened it up again. And so this is the kind of experience we want to provide for our users. So that's what I wanted to share with you. >> Well, I think that whole, couple of things, lines of code change, single line of code change, that's game changing. And then the cost thing, I mean human error is a big deal. People pass out at their computer. They've been coding all night or they just forget about it. I mean, and then it's just like leaving the lights on or your water running in your house. It's just, at the scale that it is, the numbers will add up. That's a huge deal. So I think, you know, compute back in the old days, there's no compute. Okay, it's just compute sitting there idle. But you know, data cranking the models is doing, that's a big point. >> Another thing I want to add there about cost efficiency is that we make it really easy to use, if you're running on Anyscale, to use spot instances and these preemptable instances that can just be significantly cheaper than the on-demand instances. And so when we see our customers go from what they're doing before to using Anyscale and they go from not using these spot instances 'cause they don't have the infrastructure around it, the fault tolerance to handle the preemption and things like that, to being able to just check a box and use spot instances and save a bunch of money. 
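For readers who want to see the pattern Robert is describing in code, here is a minimal sketch of a Ray Train job: an ordinary Hugging Face/PyTorch training function wrapped in a TorchTrainer, with a ScalingConfig carrying the "32 workers, one GPU and four CPU cores each" layout he mentions. This is not the demo's actual code: the model name, hyperparameters, and the tiny synthetic batch are placeholders, the DeepSpeed configuration used in the demo is omitted, and exact APIs vary slightly across Ray 2.x releases.

    # A minimal sketch (not the demo's actual code) of training a small OPT model
    # with Ray Train: the per-worker function is plain PyTorch + Hugging Face, and
    # Ray fans it out across the cluster according to the ScalingConfig.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    import ray.train
    import ray.train.torch as ray_torch
    from ray.train import ScalingConfig
    from ray.train.torch import TorchTrainer


    def train_loop_per_worker(config):
        # Runs once on every worker; Ray sets up the torch.distributed process group.
        tokenizer = AutoTokenizer.from_pretrained(config["model_name"])
        model = AutoModelForCausalLM.from_pretrained(config["model_name"])
        model = ray_torch.prepare_model(model)  # wraps in DDP, moves to the right device
        optimizer = torch.optim.AdamW(model.parameters(), lr=config["lr"])

        # Tiny synthetic batch so the sketch stays self-contained; a real job would
        # use prepare_data_loader() on a proper dataset instead.
        batch = tokenizer(["hello world"] * 4, return_tensors="pt")
        batch["labels"] = batch["input_ids"].clone()
        batch = {k: v.to(ray_torch.get_device()) for k, v in batch.items()}

        for step in range(config["steps"]):
            loss = model(**batch).loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            ray.train.report({"step": step, "loss": float(loss)})


    trainer = TorchTrainer(
        train_loop_per_worker,
        train_loop_config={"model_name": "facebook/opt-125m", "lr": 5e-5, "steps": 10},
        # The "one line change" lives here: bump num_workers to 64 or 128, or flip
        # use_gpu to False, and the same training loop rescales accordingly.
        scaling_config=ScalingConfig(
            num_workers=32,
            use_gpu=True,
            resources_per_worker={"CPU": 4, "GPU": 1},
        ),
    )

    if __name__ == "__main__":
        result = trainer.fit()

The "one line change" Robert calls out is the ScalingConfig: the training loop itself never mentions how many workers or which accelerators it runs on.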
>> You know, this was my whole, my feature article at re:Invent last year when I met with Adam Selipsky, this next gen Cloud is here. I mean, it's not auto scale, it's infrastructure scale. It's agility. It's flexibility. I think this is where the world needs to go. Almost what DevOps did for Cloud and what you were showing me that demo had this whole SRE vibe. And remember Google had site reliability engineers to manage all those servers. This is kind of like an SRE vibe for data at scale. I mean, a similar kind of order of magnitude. I mean, I might be a little bit off base there, but how would you explain it? >> It's a nice analogy. I mean, what we are trying to do here is get to the point where developers don't think about infrastructure. Where developers only think about their application logic. And where businesses can do AI, can succeed with AI, and build these scalable applications, but they don't have to build, you know, an infrastructure team. They don't have to develop that expertise. They don't have to invest years in building their internal machine learning infrastructure. They can just focus on the Python code, on their application logic, and run the stuff out of the box. >> Awesome. Well, I appreciate the time. Before we wrap up here, give a plug for the company. I know you got a couple websites. Again, go, Ray's got its own website. You got Anyscale. You got an event coming up. Give a plug for the company looking to hire. Put a plug in for the company. >> Yeah, absolutely. Thank you. So first of all, you know, we think AI is really going to transform every industry and the opportunity is there, right. We can be the infrastructure that enables all of that to happen, that makes it easy for companies to succeed with AI, and get value out of AI. Now we have, if you're interested in learning more about Ray, Ray has been emerging as the standard way to build scalable applications. Our adoption has been exploding. I mentioned companies like OpenAI using Ray to train their models. But really across the board companies like Netflix and Cruise and Instacart and Lyft and Uber, you know, just among tech companies. It's across every industry. You know, gaming companies, agriculture, you know, farming, robotics, drug discovery, you know, FinTech, we see it across the board. And all of these companies can get value out of AI, can really use AI to improve their businesses. So if you're interested in learning more about Ray and Anyscale, we have our Ray Summit coming up in September. This is going to highlight a lot of the most impressive use cases and stories across the industry. And if your business, if you want to use LLMs, you want to train these LLMs, these large language models, you want to fine tune them with your data, you want to deploy them, serve them, and build applications and products around them, give us a call, talk to us. You know, we can really take the infrastructure piece, you know, off the critical path and make that easy for you. So that's what I would say. And, you know, like you mentioned, we're hiring across the board, you know, engineering, product, go-to-market, and it's an exciting time. >> Robert Nishihara, co-founder and CEO of Anyscale, congratulations on a great company you've built and continuing to iterate on and you got growth ahead of you, you got a tailwind. I mean, the AI wave is here. 
I think OpenAI and ChatGPT, a customer of yours, have really opened up the mainstream visibility into this new generation of applications, user interface, role of data, large scale, how to make that programmable so we're going to need that infrastructure. So thanks for coming on this season three, episode one of the ongoing series of the hot startups. In this case, this episode is the top startups building foundational model infrastructure for AI and ML. I'm John Furrier, your host. Thanks for watching. (upbeat music)

Published Date : Mar 9 2023

ENTITIES

Entity | Category | Confidence
Robert Nishihara | PERSON | 0.99+
John | PERSON | 0.99+
Robert | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Netflix | ORGANIZATION | 0.99+
35 times | QUANTITY | 0.99+
Amazon | ORGANIZATION | 0.99+
$100 million | QUANTITY | 0.99+
Uber | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
100% | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
Ant Group | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
Python | TITLE | 0.99+
20% | QUANTITY | 0.99+
32 GPUs | QUANTITY | 0.99+
Lyft | ORGANIZATION | 0.99+
hundreds | QUANTITY | 0.99+
tomorrow | DATE | 0.99+
Anyscale | ORGANIZATION | 0.99+
three | QUANTITY | 0.99+
128 | QUANTITY | 0.99+
September | DATE | 0.99+
today | DATE | 0.99+
Moore's Law | TITLE | 0.99+
Adam Selipsky | PERSON | 0.99+
PyTorch | TITLE | 0.99+
Ray | ORGANIZATION | 0.99+
second reason | QUANTITY | 0.99+
64 | QUANTITY | 0.99+
each worker | QUANTITY | 0.99+
each worker | QUANTITY | 0.99+
Photoshop | TITLE | 0.99+
UC Berkeley | ORGANIZATION | 0.99+
Java | TITLE | 0.99+
Shopify | ORGANIZATION | 0.99+
OpenAI | ORGANIZATION | 0.99+
Anyscale | PERSON | 0.99+
third | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
ByteDance | ORGANIZATION | 0.99+
Spotify | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
95 | QUANTITY | 0.99+
Asure | ORGANIZATION | 0.98+
one line | QUANTITY | 0.98+
one GPU | QUANTITY | 0.98+
ChatGPT | TITLE | 0.98+
TensorFlow | TITLE | 0.98+
last year | DATE | 0.98+
first bucket | QUANTITY | 0.98+
both | QUANTITY | 0.98+
two layers | QUANTITY | 0.98+
Cohere | ORGANIZATION | 0.98+
Alipay | ORGANIZATION | 0.98+
Ray | PERSON | 0.97+
one | QUANTITY | 0.97+
Instacart | ORGANIZATION | 0.97+

Vanesa Diaz, LuxQuanta & Dr Antonio Acin, ICFO | MWC Barcelona 2023


 

(upbeat music) >> Narrator: theCUBE's live coverage is made possible by funding from Dell Technologies: creating technologies that drive human progress. (upbeat music) >> Welcome back to the Fira in Barcelona. You're watching theCUBE's Coverage day two of MWC 23. Check out SiliconANGLE.com for all the news, John Furrier in our Palo Alto studio, breaking that down. But we're here live Dave Vellante, Dave Nicholson and Lisa Martin. We're really excited. We're going to talk qubits. Vanessa Diaz is here. She's CEO of LuxQuanta And Antonio Acin is a professor of ICFO. Folks, welcome to theCUBE. We're going to talk quantum. Really excited about that. >> Vanessa: Thank you guys. >> What does quantum have to do with the network? Tell us. >> Right, so we are actually leaving the second quantum revolution. So the first one actually happened quite a few years ago. It enabled very much the communications that we have today. So in this second quantum revolution, if in the first one we learn about some very basic properties of quantum physics now our scientific community is able to actually work with the systems and ask them to do things. So quantum technologies mean right now, three main pillars, no areas of exploration. The first one is quantum computing. Everybody knows about that. Antonio knows a lot about that too so he can explain further. And it's about computers that now can do wonder. So the ability of of these computers to compute is amazing. So they'll be able to do amazing things. The other pillar is quantum communications but in fact it's slightly older than quantum computer, nobody knows that. And we are the ones that are coming to actually counteract the superpowers of quantum computers. And last but not least quantum sensing, that's the the application of again, quantum physics to measure things that were impossible to measure in with such level of quality, of precision than before. So that's very much where we are right now. >> Okay, so I think I missed the first wave of quantum computing Because, okay, but my, our understanding is ones and zeros, they can be both and the qubits aren't that stable, et cetera. But where are we today, Antonio in terms of actually being able to apply quantum computing? I'm inferring from what Vanessa said that we've actually already applied it but has it been more educational or is there actual work going on with quantum? >> Well, at the moment, I mean, typical question is like whether we have a quantum computer or not. I think we do have some quantum computers, some machines that are able to deal with these quantum bits. But of course, this first generation of quantum computers, they have noise, they're imperfect, they don't have many qubits. So we have to understand what we can do with these quantum computers today. Okay, this is science, but also technology working together to solve relevant problems. So at this moment is not clear what we can do with present quantum computers but we also know what we can do with a perfect quantum computer without noise with many quantum bits, with many qubits. And for instance, then we can solve problems that are out of reach for our classical computers. So the typical example is the problem of factorization that is very connected to what Vanessa does in her company. So we have identified problems that can be solved more efficiently with a quantum computer, with a very good quantum computer. People are working to have this very good quantum computer. 
At the moment, we have some imperfect quantum computers, we have to understand what we can do with these imperfect machines. >> Okay. So for the first wave was, okay, we have it working for a little while so we see the potential. Okay, and we have enough evidence almost like a little experiment. And now it's apply it to actually do some real work. >> Yeah, so now there is interest by companies so because they see a potential there. So they are investing and they're working together with scientists. We have to identify use cases, problems of relevance for all of us. And then once you identify a problem where a quantum computer can help you, try to solve it with existing machines and see if you can get an advantage. So now the community is really obsessed with getting a quantum advantage. So we really hope that we will get a quantum advantage. This, we know we will get it. We eventually have a very good quantum computer. But we want to have it now. And we're working on that. We have some results, there were I would say a bit academic situation in which a quantum advantage was proven. But to be honest with you on a really practical problem, this has not happened yet. But I believe the day that this happens and I mean it will be really a game changing. >> So you mentioned the word efficiency and you talked about the quantum advantage. Is the quantum advantage a qualitative advantage in that it is fundamentally different? Or is it simply a question of greater efficiency, so therefore a quantitative advantage? The example in the world we're used to, think about a card system where you're writing information on a card and putting it into a filing cabinet and then you want to retrieve it. Well, the information's all there, you can retrieve it. Computer system accelerates that process. It's not doing something that is fundamentally different unless you accept that the speed with which these things can be done gives it a separate quality. So how would you characterize that quantum versus non quantum? Is it just so much horse power changes the game or is it fundamentally different? >> Okay, so from a fundamental perspective, quantum physics is qualitatively different from classical physics. I mean, this year the Nobel Prize was given to three experimentalists who made experiments that proved that quantum physics is qualitatively different from classical physics. This is established, I mean, there have been experiments proving that. Now when we discuss about quantum computation, it's more a quantitative difference. So we have problems that you can solve, in principle you can solve with the classical computers but maybe the amount of time you need to solve them is we are talking about centuries and not with your laptop even with a classic super computer, these machines that are huge, where you have a building full of computers there are some problems for which computers take centuries to solve them. So you can say that it's quantitative, but in practice you may even say that it's impossible in practice and it will remain impossible. And now these problems become feasible with a quantum computer. So it's quantitative but almost qualitative I would say. >> Before we get into the problems, 'cause I want to understand some of those examples, but Vanessa, so your role at LuxQuanta is you're applying quantum in the communication sector for security purposes, correct? >> Vanessa: Correct. 
>> Because everybody talks about how quantum's going to ruin our lives in terms of taking all our passwords and figuring everything out. But can quantum help us defend against quantum and is that what you do? >> That's what we do. So one of the things that Antonio's explaining so our quantum computer will be able to solve in a reasonable amount of time something that today is impossible to solve unless you leave a laptop or super computer working for years. So one of those things is cryptography. So at the end, when use send a message and you want to preserve its confidentiality what you do is you destroy it but following certain rules which means they're using some kind of key and therefore you can send it through a public network which is the case for every communication that we have, we go through the internet and then the receiver is going to be able to reassemble it because they have that private key and nobody else has. So that private key is actually made of computational problems or mathematical problems that are very, very hard. We're talking about 40 years time for a super computer today to be able to hack it. However, we do not have the guarantee that there is already very smart mind that already have potentially the capacity also of a quantum computer even with enough, no millions, but maybe just a few qubits, it's enough to actually hack this cryptography. And there is also the fear that somebody could actually waiting for quantum computing to finally reach out this amazing capacity we harvesting now which means capturing all this confidential information storage in it. So when we are ready to have the power to unlock it and hack it and see what's behind. So we are talking about information as delicate as governmental, citizens information related to health for example, you name it. So what we do is we build a key to encrypt the information but it's not relying on a mathematical problem it's relying on the laws of quantum physics. So I'm going to have a channel that I'm going to pump photons there, light particles of light. And that quantum channel, because of the laws of physics is going to allow to detect somebody trying to sneak in and seeing the key that I'm establishing. If that happens, I will not create a key if it's clean and nobody was there, I'll give you a super key that nobody today or in the future, regardless of their computational power, will be able to hack. >> So it's like super zero trust. >> Super zero trust. >> Okay so but quantum can solve really challenging mathematical problems. If you had a quantum computer could you be a Bitcoin billionaire? >> Not that I know. I think people are, okay, now you move me a bit of my comfort zone. Because I know people have working on that. I don't think there is a lot of progress at least not that I am aware of. Okay, but I mean, in principle you have to understand that our society is based on information and computation. Computers are a key element in our society. And if you have a machine that computes better but much better than our existing machines, this gives you an advantage for many things. I mean, progress is locked by many computational problems we cannot solve. We can want to have better materials better medicines, better drugs. I mean this, you have to solve hard computational problems. If you have machine that gives you machine learning, big data. I mean, if you have a machine that gives you an advantage there, this may be a really real change. 
I'm not saying that we know how to do these things with a quantum computer. But if we understand how this machine that has been proven more powerful in some context can be adapted to some other context. I mean having a much better computer machine is an advantage. >> When? When are we going to have, you said we don't really have it today, we want it today. Are we five years away, 10 years away? Who's working on this? >> There are already quantum computers are there. It's just that the capacity that they have of right now is the order of a few hundred qubits. So people are, there are already companies harvesting, they're actually the companies that make these computers they're already putting them. People can access to them through the cloud and they can actually run certain algorithms that have been tailor made or translated to the language of a quantum computer to see how that performs there. So some people are already working with them. There is billions of investment across the world being put on different flavors of technologies that can reach to that quantum supremacy that we are talking about. The question though that you're asking is Q day it sounds like doomsday, you know, Q day. So depending on who you talk to, they will give you a different estimation. So some people say, well, 2030 for example but perhaps we could even think that it could be a more aggressive date, maybe 2027. So it is yet to be the final, let's say not that hard deadline but I think that the risk, that it can actually bring is big enough for us to pay attention to this and start preparing for it. So the end times of cryptography that's what quantum is doing is we have a system here that can actually prevent all your communications from being hacked. So if you think also about Q day and you go all the way back. So whatever tools you need to protect yourself from it, you need to deploy them, you need to see how they fit in your organization, evaluate the benefits, learn about it. So that, how close in time does that bring us? Because I believe that the time to start thinking about this is now. >> And it's likely it'll be some type of hybrid that will get us there, hybrid between existing applications. 'Cause you have to rewrite or write new applications and that's going to take some time. But it sounds like you feel like this decade we will see Q day. What probability would you give that? Is it better than 50/50? By 2030 we'll see Q day. >> But I'm optimistic by nature. So yes, I think it's much higher than 50. >> Like how much higher? >> 80, I would say yes. I'm pretty confident. I mean, but what I want to say also usually when I think there is a message here so you have your laptop, okay, in the past I had a Spectrum This is very small computer, it was more or less the same size but this machine is much more powerful. Why? Because we put information on smaller scales. So we always put information in smaller and smaller scale. This is why here you have for the same size, you have much more information because you put on smaller scales. So if you go small and small and small, you'll find the quantum word. So this is unavoidable. So our information devices are going to meet the quantum world and they're going to exploit it. I'm fully convinced about this, maybe not for the quantum computer we're imagining now but they will find it and they will use quantum effects. And also for cryptography, for me, this is unavoidable. >> And you brought the point there are several companies working on that. 
I mean, I can get quantum computers on in the cloud and Amazon and other suppliers. IBM of course is. >> The underlying technology, there are competing versions of how you actually create these qubits. pins of electrons and all sorts of different things. Does it need to be super cooled or not? >> Vanessa: There we go. >> At a fundamental stage we'd be getting ground. But what is, what does ChatGPT look like when it can leverage the quantum realm? >> Well, okay. >> I Mean are we all out of jobs at that point? Should we all just be planning for? >> No. >> Not you. >> I think all of us real estate in Portugal, should we all be looking? >> No, actually, I mean in machine learning there are some hopes about quantum competition because usually you have to deal with lots of data. And we know that in quantum physics you have a concept that is called superposition. So we, there are some hopes not in concrete yet but we have some hopes that these superpositions may allow you to explore this big data in a more efficient way. One has to if this can be confirmed. But one of the hopes creating this lots of qubits in this superpositions that you will have better artificial intelligence machines but, okay, this is quite science fiction what I'm saying now. >> At this point and when you say superposition, that's in contrast to the ones and zeros that we're used to. So when someone says it could be a one or zero or a one and a zero, that's referencing the concept of superposition. And so if this is great for encryption, doesn't that necessarily mean that bad actors can leverage it in a way that is now unhackable? >> I mean our technologies, again it's impossible to hack because it is the laws of physics what are allowing me to detect an intruder. So that's the beauty of it. It's not something that you're going to have to replace in the future because there will be a triple quantum computer, it is not going to affect us in any way but definitely the more capacity, computational capacity that we see out there in quantum computers in particular but in any other technologies in general, I mean, when we were coming to talk to you guys, Antonio and I, he was the one saying we do not know whether somebody has reached some relevant computational power already with the technologies that we have. And they've been able to hack already current cryptography and then they're not telling us. So it's a bit of, the message is a little bit like a paranoid message, but if you think about security that the amount of millions that means for a private institution know when there is a data breach, we see it every day. And also the amount of information that is relevant for the wellbeing of a country. Can you really put a reasonable amount of paranoid to that? Because I believe that it's worth exploring whatever tool is going to prevent you from putting any of those piece of information at risk. >> Super interesting topic guys. I know you're got to run. Thanks for stopping by theCUBE, it was great to have you on. >> Thank you guys. >> All right, so this is the SiliconANGLE theCUBE's coverage of Mobile World Congress, MWC now 23. We're live at the Fira Check out silicon SiliconANGLE.com and theCUBE.net for all the videos. Be right back, right after this short break. (relaxing music)
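As an aside for readers: Vanessa's description earlier in this segment, pumping photons down a quantum channel, watching for the disturbance an intruder necessarily causes, and only keeping the key when the channel is clean, maps onto the classic BB84 quantum key distribution protocol. The toy simulation below is purely illustrative: a classical sketch of that idea, not LuxQuanta's actual system, with the photon physics reduced to the one rule that matters here (measuring in the wrong basis randomizes the bit).

    # Toy BB84-style sketch: Alice encodes random bits in random bases, Bob measures
    # in random bases, and they keep only the positions where their bases matched.
    # An intercept-and-resend eavesdropper shows up as a ~25% error rate in that
    # sifted key, which is the cue to throw the key away.
    import random


    def bb84(n_photons=2000, eavesdropper=False, seed=0):
        rng = random.Random(seed)
        alice_bits = [rng.randint(0, 1) for _ in range(n_photons)]
        alice_bases = [rng.choice("+x") for _ in range(n_photons)]

        # Channel: Eve measures each photon in a random basis; a wrong basis guess
        # destroys the original state, so the bit that travels on is random.
        sent_bits = []
        for bit, basis in zip(alice_bits, alice_bases):
            if eavesdropper and rng.choice("+x") != basis:
                bit = rng.randint(0, 1)
            sent_bits.append(bit)

        bob_bases = [rng.choice("+x") for _ in range(n_photons)]
        bob_bits = [
            bit if bob_basis == alice_basis else rng.randint(0, 1)
            for bit, alice_basis, bob_basis in zip(sent_bits, alice_bases, bob_bases)
        ]

        # Sifting: keep positions where Alice's and Bob's bases agree, then compare
        # to estimate the error rate (here we simply compare everything).
        sifted = [(a, b) for a, ab, b, bb in
                  zip(alice_bits, alice_bases, bob_bits, bob_bases) if ab == bb]
        errors = sum(a != b for a, b in sifted)
        return len(sifted), errors / len(sifted)


    print(bb84(eavesdropper=False))  # ~1000 sifted bits, error rate 0.0 -> keep the key
    print(bb84(eavesdropper=True))   # error rate around 0.25 -> abort, someone listened

Real QKD systems add error correction and privacy amplification on top of this, but the core property is the one Vanessa describes: the physics itself tells you whether someone looked.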

Published Date : Feb 28 2023

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Vanessa | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Vanessa Diaz | PERSON | 0.99+
Dave Nicholson | PERSON | 0.99+
John Furrier | PERSON | 0.99+
Antonio | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Portugal | LOCATION | 0.99+
five years | QUANTITY | 0.99+
LuxQuanta | ORGANIZATION | 0.99+
10 years | QUANTITY | 0.99+
Vanesa Diaz | PERSON | 0.99+
three experimentalists | QUANTITY | 0.99+
today | DATE | 0.99+
Antonio Acin | PERSON | 0.99+
Palo Alto | LOCATION | 0.99+
2027 | DATE | 0.99+
first one | QUANTITY | 0.99+
2030 | DATE | 0.99+
Barcelona | LOCATION | 0.99+
zero | QUANTITY | 0.98+
both | QUANTITY | 0.98+
three main pillars | QUANTITY | 0.98+
one | QUANTITY | 0.98+
Dell Technologies | ORGANIZATION | 0.97+
this year | DATE | 0.97+
Nobel Prize | TITLE | 0.97+
Mobile World Congress | EVENT | 0.97+
first generation | QUANTITY | 0.97+
MWC 23 | EVENT | 0.96+
millions | QUANTITY | 0.96+
SiliconANGLE | ORGANIZATION | 0.95+
second quantum revolution | QUANTITY | 0.95+
few years ago | DATE | 0.95+
80 | QUANTITY | 0.94+
billions of investment | QUANTITY | 0.92+
theCUBE | ORGANIZATION | 0.92+
centuries | QUANTITY | 0.91+
SiliconANGLE.com | OTHER | 0.9+
about 40 years | QUANTITY | 0.89+
Dr | PERSON | 0.88+
super zero | OTHER | 0.86+
50/50 | QUANTITY | 0.84+
first wave | EVENT | 0.84+
day two | QUANTITY | 0.83+
zeros | QUANTITY | 0.82+
years | QUANTITY | 0.81+
ICFO | ORGANIZATION | 0.8+
this decade | DATE | 0.77+
few hundred qubits | QUANTITY | 0.72+
Fira | LOCATION | 0.69+
23 | DATE | 0.64+
MWC | EVENT | 0.62+
higher | QUANTITY | 0.62+
50 | QUANTITY | 0.61+
Fira | EVENT | 0.55+
triple | QUANTITY | 0.55+
zero | OTHER | 0.54+
One | QUANTITY | 0.53+
theCUBE.net | OTHER | 0.53+
qubits | QUANTITY | 0.51+

Keynote Analysis with Sarbjeet Johal & Chris Lewis | MWC Barcelona 2023


 

(upbeat instrumental music) >> TheCUBE's live coverage is made possible by funding from Dell Technologies, creating technologies that drive human progress. (uplifting instrumental music) >> Hey everyone. Welcome to Barcelona, Spain. It's theCUBE Live at MWC '23. I'm Lisa Martin, Dave Vellante, our co-founder, our co-CEO of theCUBE, you know him, you love him. He's here as my co-host. Dave, we have a great couple of guests here to break down day one keynote. Lots of meat. I can't wait to be part of this conversation. Chris Lewis joins us, the founder and MD of Lewis Insight. And Sarbjeet Johal, one of you know him as well. He's a Cube contributor, cloud architect. Guys, welcome to the program. Thank you so much for joining Dave and me today. >> Lovely to be here. >> Thank you. >> Chris, I want to start with you. You have covered all aspects of global telecoms industries over 30 years working as an analyst. Talk about the evolution of the telecom industry that you've witnessed, and what were some of the things you heard in the keynote that excite you about the direction it's going? >> Well, as ever, MWC, there's no lack of glitz and glamour, but it's the underlying issues of the industry that are really at stake here. There's not a lot of new revenue coming into the telecom providers, but there's a lot of adjustment, readjustment of the underlying operational environment. And also, really importantly, what came out of the keynotes is the willingness and the necessity to really engage with the API community, with the developer community, people who traditionally, telecoms would never have even touched. So they're sorting out their own house, they're cleaning their own stables, getting the cost base down, but they're also now realizing they've got to engage with all the other parties. There's a lot of cloud providers here, there's a lot of other people from outside so they're realizing they cannot do it all themselves. It's quite a tough lesson for a very conservative, inward looking industry, right? So should we be spending all this money and all this glitz and glamour of MWC and all be here, or should would be out there really building for the future and making sure the services are right for yours and my needs in a business and personal lives? So a lot of new changes, a lot of realization of what's going on outside, but underlying it, we've just got to get this right this time. >> And it feels like that monetization is front and center. You mentioned developers, we've got to work with developers, but I'm hearing the latest keynote from the Ericsson CEOs, we're going to monetize through those APIs, we're going to charge the developers. I mean, first of all, Chris, am I getting that right? And Sarbjeet, as somebody who's close to the developer community, is that the right way to build bridges? But Chris, are we getting that right? >> Well, let's take the first steps first. So, Ericsson, of course, acquired Vonage, which is a massive API business so they want to make money. They expect to make money by bringing that into the mainstream telecom community. Now, whether it's the developers who pay for it, or let's face it, we are moving into a situation as the telco moves into a techco model where the techco means they're going to be selling bits of the technology to developer guys and to other application developers. 
So when he says he needs to charge other people for it, it's the way in which people reach in and will take going through those open APIs like the open gateway announced today, but also the way they'll reach in and take things like network slicing. So we're opening up the telecom community, the treasure chest, if you like, where developers' applications and other third parties can come in and take those chunks of technology and build them into their services. This is a complete change from the old telecom industry where everybody used to come and you say, "all right, this is my product, you've got to buy it and you're going to pay me a lot of money for it." So we are looking at a more flexible environment where the other parties can take those chunks. And we know we want collectivity built into our financial applications, into our government applications, everything, into the future of the metaverse, whatever it may be. But it requires that change in attitude of the telcos. And they do need more money 'cause they've said, the baseline of revenue is pretty static, there's not a lot of growth in there so they're looking for new revenues. It's in a B2B2X time model. And it's probably the middle man's going to pay for it rather than the customer. >> But the techco model, Sarbjeet, it looks like the telcos are getting their money on their way in. The techco company model's to get them on their way out like the app store. Go build something of value, build some kind of app or data product, and then when it takes off, we'll take a piece of the action. What are your thoughts from a developer perspective about how the telcos are approaching it? >> Yeah, I think before we came here, like I said, I did some tweets on this, that we talk about all kind of developers, like there's game developers and front end, back end, and they're all talking about like what they're building on top of cloud, but nowhere you will hear the term "telco developer," there's no API from telcos given to the developers to build IoT solutions on top of it because telco as an IoT, I think is a good sort of hand in hand there. And edge computing as well. The glimmer of hope, if you will, for telcos is the edge computing, I believe. And even in edge, I predicted, I said that many times that cloud players will dominate that market with the private 5G. You know that story, right? >> We're going to talk about that. (laughs) >> The key is this, that if you see in general where the population lives, in metros, right? That's where the world population is like flocking to and we have cloud providers covering the local zones with local like heavy duty presence from the big cloud providers and then these telcos are getting sidetracked by that. Even the V2X in cars moving the autonomous cars and all that, even in that space, telcos are getting sidetracked in many ways. What telcos have to do is to join the forces, build some standards, if not standards, some consortium sort of. They're trying to do that with the open gateway here, they have only eight APIs. And it's 2023, eight APIs is nothing, right? (laughs) So they should have started this 10 years back, I think. So, yeah, I think to entice the developers, developers need the employability, we need to train them, we need to show them some light that hey, you can build a lot on top of it. If you tell developers they can develop two things or five things, nobody will come. >> So, Chris, the cloud will dominate the edge. So A, do you buy it? 
B, the telcos obviously are acting like that might happen. >> Do you know I love people when they've got their heads in the clouds. (all laugh) And you're right in so many ways, but if you flip it around and think about how the customers think about this, business customers and consumers, they don't care about all this background shenanigans going on, do they? >> Lisa: No. >> So I think one of the problems we have is that this is a new territory and whether you call it the edge or whatever you call it, what we need there is we need connectivity, we need security, we need storage, we need compute, we need analytics, and we need applications. And are any of those more important than the others? It's the collective that actually drives the real value there. So we need all those things together. And of course, the people who represented at this show, whether it's the cloud guys, the telcos, the Nokia, the Ericssons of this world, they all own little bits of that. So that's why they're all talking partnerships because they need the combination, they cannot do it on their own. The cloud guys can't do it on their own. >> Well, the cloud guys own all of those things that you just talked about though. (all laugh) >> Well, they don't own the last bit of connectivity, do they? They don't own the access. >> Right, exactly. That's the one thing they don't own. So, okay, we're back to pipes, right? We're back to charging for connectivity- >> Pipes are very valuable things, right? >> Yeah, for sure. >> Never underestimate pipes. I don't know about where you live, plumbers make a lot of money where I live- >> I don't underestimate them but I'm saying can the telcos charge for more than that or are the cloud guys going to mop up the storage, the analytics, the compute, and the apps? >> They may mop it up, but I think what the telcos are doing and we've seen a lot of it here already, is they are working with all those major cloud guys already. So is it an unequal relationship? The cloud guys are global, massive global scale, the telcos are fundamentally national operators. >> Yep. >> Some have a little bit of regional, nobody has global scale. So who stitches it all together? >> Dave: Keep your friends close and your enemies closer. >> Absolutely. >> I know that saying never gets old. It's true. Well, Sarbjeet, one of the things that you tweeted about, I didn't get to see the keynote but I was looking at your tweets. 46% of telcos think they won't make it to the next decade. That's a big number. Did that surprise you? >> No, actually it didn't surprise me because the competition is like closing in on them and the telcos are competing with telcos as well and the telcos are competing with cloud providers on the other side, right? So the smaller ones are getting squeezed. It's the bigger players, they can hook up the newer platforms, I think they will survive. It's like that part is like any other industry, if you will. But the key is here, I think why the pain points were sort of described on the main stage is that they're crying out loud to tell the big tech cloud providers that "hey, you pay your fair share," like we talked, right? You are not paying, you're generating so much content which reverses our networks and you are not paying for it. So they are not able to recoup the cost of laying down their networks. By the way, one thing actually I want to mention is that they said the cloud needs earth. The cloud and earth, it's like there's no physical need to cloud, you know that, right? 
So like, I think it's the other way around. I think the earth needs the cloud because I'm a cloud guy. (Sarbjeet and Lisa laugh) >> I think you need each other, right? >> I think so too. >> They need each other. When they said cloud needs earth, right? I think they're still in denial that the cloud is a big force. They have to partner. When you can't compete with somebody, what do you do? Partner with them. >> Chris, this is your world. Are they in denial? >> No, I think they're waking up to the pragmatism of the situation. >> Yeah. >> They're building... As we said, most of the telcos, you find have relationships with the cloud guys, I think you're right about the industry. I mean, do you think what's happened since US was '96, the big telecom act when we started breaking up all the big telcos and we had lots of competition came in, we're seeing the signs that we might start to aggregate them back up together again. So it's been an interesting experiment for like 30 years, hasn't it too? >> It made the US less competitive, I would argue, but carry on. >> Yes, I think it's true. And Europe is maybe too competitive and therefore, it's not driven the investment needed. And by the way, it's not just mobile, it's fixed as well. You saw the Orange CEO was talking about the her investment and the massive fiber investments way ahead of many other countries, way ahead of the UK or Germany. We need that fiber in the ground to carry all your cloud traffic to do this. So there is a scale issue, there is a competition issue, but the telcos are very much aware of it. They need the cloud, by the way, to improve their operational environments as well, to change that whole old IT environment to deliver you and I better service. So no, it absolutely is changing. And they're getting scale, but they're fundamentally offering the basic product, you call it pipes, I'll just say they're offering broadband to you and I and the business community. But they're stepping on dangerous ground, I think, when saying they want to charge the over the top guys for all the traffic they use. Those over the top guys now build a lot of the global networks, the backbone submarine network. They're putting a lot of money into it, and by giving us endless data for our individual usage, that cat is out the bag, I think to a large extent. >> Yeah. And Orange CEO basically said that, that they're not paying their fair share. I'm for net neutrality but the governments are going to have to fund this unless you let us charge the OTT. >> Well, I mean, we could of course renationalize. Where would that take us? (Dave laughs) That would make MWC very interesting next year, wouldn't it? To renationalize it. So, no, I think you've got to be careful what we wish for here. Creating the absolute clear product that is required to underpin all of these activities, whether it's IoT or whether it's cloud delivery or whether it's just our own communication stuff, delivering that absolutely ubiquitously high quality for business and for consumer is what we have to do. And telcos have been too conservative in the past. >> I think they need to get together and create standards around... I think they have a big opportunity. We know that the clouds are being built in silos, right? So there's Azure stack, there's AWS and there's Google. And those are three main ones and a few others, right? So that we are fighting... On the cloud side, what we are fighting is the multicloud. How do we consume that multicloud without having standards? 
So if these people get together and create some standards around IoT and edge computing sort of area, people will flock to them to say, "we will use you guys, your API, we don't care behind the scenes if you use AWS or Google Cloud or Azure, we will come to you." So market, actually is looking for that solution. I think it's an opportunity for these guys, for telcos. But the problem with telcos is they're nationalized, as you said Chris versus the cloud guys are still kind of national in a way, but they're global corporations. And some of the telcos are global corporations as well, BT covers so many countries and TD covers so many... DT is in US as well, so they're all over the place. >> But you know what's interesting is that the TM forum, which is one of the industry associations, they've had an open digital architecture framework for quite some years now. Google had joined that some years ago, Azure in there, AWS just joined it a couple of weeks ago. So when people said this morning, why isn't AWS on the keynote? They don't like sharing the limelight, do they? But they're getting very much in bed with the telco. So I think you'll see the marriage. And in fact, there's a really interesting statement, if you look at the IoT you mentioned, Bosch and Nokia have been working together 'cause they said, the problem we've got, you've got a connectivity network on one hand, you've got the sensor network on the other hand, you're trying to merge them together, it's a nightmare. So we are finally seeing those sort of groups talking to each other. So I think the standards are coming, the cooperation is coming, partnerships are coming, but it means that the telco can't dominate the sector like it used to. It's got to play ball with everybody else. >> I think they have to work with the regulators as well to loosen the regulation. Or you said before we started this segment, you used Chris, the analogy of sports, right? In sports, when you're playing fiercely, you commit the fouls and then ask for ref to blow the whistle. You're now looking at the ref all the time. The telcos are looking at the ref all the time. >> Dave: Yeah, can I do this? Can I do that? Is this a fair move? >> They should be looking for the space in front of the opposition. >> Yeah, they should be just on attack mode and commit these fouls, if you will, and then ask for forgiveness then- >> What do you make of that AWS not you there- >> Well, Chris just made a great point that they don't like to share the limelight 'cause I thought it was very obvious that we had Google Cloud, we had Microsoft there on day one of this 80,000 person event. A lot of people back from COVID and they weren't there. But Chris, you brought up a great point that kind of made me think, maybe you're right. Maybe they're in the afternoon keynote, they want their own time- >> You think GSMA invited them? >> I imagine so. You'd have to ask GSMA. >> I would think so. >> Get Max on here and ask that. >> I'm going to ask them, I will. >> But no, and they don't like it because I think the misconception, by the way, is that everyone says, "oh, it's AWS, it's Google Cloud and it's Azure." They're not all the same business by any stretch of the imagination. AWS has been doing loads of great work, they've been launching private network stuff over the last couple of weeks. Really interesting. Google's been playing catch up. We know that they came in readily late to the market. And Azure, they've all got slightly different angles on it. 
So perhaps it just wasn't right for AWS and the way they wanted to pitch things so they don't have to be there, do they? >> That's a good point. >> But the industry needs them there, that's the number one cloud. >> Dave, they're there working with the industry. >> Yeah, of course. >> They don't have to be on the keynote stage. And in fact, you think about this show and you mentioned the 80,000 people, the activity going on around in all these massive areas they're in, it's fantastic. That's where the business is done. The business isn't done up on the keynote stage. >> That's why there's the glitz and the glamour, Chris. (all laugh) >> Yeah. It's not glitz, it's espresso. It's not glamour anymore, it's just espresso. >> We need the espresso. >> Yeah. >> I think another thing is that it's interesting how an average European sees the tech market and an average North American, especially you from US, you have to see the market. Here, people are more like process oriented and they want the rules of the road already established before they can take a step- >> Chris: That's because it's your pension in the North American- >> Exactly. So unions are there and the more employee rights and everything, you can't fire people easily here or in Germany or most of the Europe is like that with the exception of UK. >> Well, but it's like I said, that Silicone Valley gets their money on the way out, you know? And that's how they do it, that's how they think it. And they don't... They ask for forgiveness. I think the east coast is more close to Europe, but in the EU, highly regulated, really focused on lifetime employment, things like that. >> But Dave, the issue is the telecom industry is brilliant, right? We keep paying every month whatever we do with it. >> It's a great business, to your point- >> It's a brilliant business model. >> Dave: It's fantastic. >> So it's about then getting the structure right behind it. And you know, we've seen a lot of stratification where people are selling off towers, Orange haven't sold their towers off, they made a big point about that. Others are selling their towers off. Some people are selling off their underlying network, Telecom Italia talking about KKR buying the whole underlying network. It's like what do you want to be in control of? It's a great business. >> But that's why they complain so much is that they're having to sell their assets because of the onerous CapEx requirements, right? >> Yeah, they've had it good, right? And dare I say, perhaps they've not planned well enough for the future. >> They're trying to protect their past from the future. I mean, that's... >> Actually, look at the... Every "n" number of years, there's a new faster network. They have to dig the ground, they have to put the fiber, they have to put this. Now, there are so many booths showing 6G now, we are not even done with 5G yet, now the next 6G you know, like then- >> 10G's coming- >> 10G, that's a different market. (Dave laughs) >> Actually, they're bogged down by the innovation, I think. >> And the generational thing is really important because we're planning for 6G in all sorts of good ways but actually what we use in our daily lives, we've gone through the barrier, we've got enough to do that. So 4G gives us enough, the fiber in the ground or even old copper gives us enough. So the question is, what are we willing to pay for more than that basic connectivity? And the answer to your point, Dave, is not a lot, right? 
So therefore, that's why the emphasis is on the business market on that B2B and B2B2X. >> But we'll pay for Netflix all day long. >> All day long. (all laugh) >> The one thing Chris, I don't know, I want to know your viewpoints and we have talked in the past as well, there's absence of think tanks in tech, right? So we have think tanks on the foreign policy and economic policy in every country, and we have global think tanks, but tech is becoming a huge part of the economy, global economy as well as national economies, right? But we don't have think tanks on like policy around tech. For example, this 4G is good for a lot of use cases. Then 5G is good for smaller number of use cases. And then 6G will be like, fewer people need 6G for example. Why can't we have sort of those kind of entities dictating those kind of like, okay, is this a wiser way to go about it? >> Lina Khan wants to. She wants to break up big tech- >> You're too young to remember but the IT used to have a show every four years in Geneva, there were standards around there. So I think there are bodies. I think the balance of power obviously has gone from the telecom to the west coast to the IT markets. And it's changing the balance about, it moves more quickly, right? Telecoms has never moved quickly enough. I think there is hope by the way, that telecoms now that we are moving to more softwarized environment, and God forbid, we're moving into CICD in the telecom world, right? Which is a massive change, but I think there's hopes for it to change. The mentality is changing, the culture is changing, but to change those old structured organizations from the British telecom or the France telecom into the modern world, it's a hell of a long journey. It's not an overnight journey at all. >> Well, of course the theme of the event is velocity. >> Yeah, I know that. >> And it's been interesting sitting here with the three of you talking about from a historic perspective, how slow and molasseslike telecom has been. They don't have a choice anymore. As consumers, we have this expectation we're going to get anything we want on our mobile device, 24 by seven. We don't care about how the sausage is made, we just want the end result. So do you really think, and we're only on day one guys... And Chris we'll start with you. Is the theme really velocity? Is it disruption? Are they able to move faster? >> Actually, I think invisibility is the real answer. (Lisa laughs) We want communication to be invisible, right? >> Absolutely. >> We want it to work. When we switch our phones on, we want it to work and we want to... Well, they're not even phones anymore, are they really? I mean that's the... So no, velocity, we've got... There is momentum in the industry, there's no doubt about that. The cloud guys coming in, making telecoms think about the way they run their own business, where they meet, that collision point on the edges you talked about Sarbjeet. We do have velocity, we've got momentum. There's so many interested parties. The way I think of this is that the telecom industry used to be inward looking, just design its own technology and then expect everyone else to dance to our tune. We're now flipping that 180 degrees and we are now having to work with all the different outside forces shaping us. Whether it's devices, whether it's smart cities, governments, the hosting guys, the Equinoxis, all these things. So everyone wants a piece of this telecom world so we've got to make ourselves more open. That's why you get in a more open environment. 
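To make the "open gateway" idea raised earlier in this segment, and Chris's point about telcos having to make themselves more open, a little more concrete, here is a rough sketch of what a developer-facing network API call could look like, in the spirit of the GSMA Open Gateway / CAMARA quality-on-demand concept. To be clear, the base URL, paths, field names, and token handling below are illustrative assumptions of mine, not the actual published specification, and they are not tied to any particular operator.

    # Hypothetical sketch of a "network as code" call: an application asks the
    # operator's API to give one device a low-latency profile for ten minutes,
    # without knowing (or caring) which cloud or which core network sits behind it.
    # Endpoint, payload fields, and auth flow are illustrative stand-ins only.
    import requests

    BASE_URL = "https://api.example-operator.com/quality-on-demand/v1"  # assumed
    ACCESS_TOKEN = "REPLACE_ME"  # OAuth2 token from the operator's developer portal


    def request_priority_session(device_ip: str,
                                 qos_profile: str = "LOW_LATENCY",
                                 duration_s: int = 600) -> dict:
        """Ask the network to prioritize one device's traffic for a while."""
        payload = {
            "device": {"ipv4Address": device_ip},
            "qosProfile": qos_profile,
            "duration": duration_s,
        }
        resp = requests.post(
            f"{BASE_URL}/sessions",
            json=payload,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. {"sessionId": "...", "qosStatus": "REQUESTED"}


    if __name__ == "__main__":
        # Would request a short-lived priority session for a video call or game.
        print(request_priority_session("203.0.113.7"))

The pitch to developers is exactly what Sarbjeet described: one standardized call that looks the same across operators, with the radio and cloud plumbing abstracted away, so the operator gets a metered, monetizable API instead of an invisible pipe.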
>> But you did... I just want to bring back a point you made during COVID, which was when everybody switched to work from home, started using their landlines again, telcos had to respond and nothing broke. I mean, it was pretty amazing. >> Chris: It did a good job. >> It was kind of invisible. So, props to the telcos for making that happen. >> They did a great job. >> So it really did. Now, okay, what have you done for me lately? So now they've got to deal with the future and they're talking monetization. But to me, monetization is all about data and not necessarily just the network data. Yeah, they can sell that 'cause they own that but what kind of incremental value are they going to create for the consumers that... >> Yeah, actually that's a problem. I think the problem is that they have been strangled by the regulation for a long time and they cannot look at their data. It's a lot more similar to the FinTech world, right? I used to work at Visa. And then Visa, we did trillion dollars in transactions in '96. Like we moved so much money around, but we couldn't look at these things, right? So yeah, I think regulation is a problem that holds you back, it's the antithesis of velocity, it slows you down. >> But data means everything, doesn't it? I mean, it means everything and nothing. So I think the challenge here is what data do the telcos have that is useful, valuable to me, right? So in the home environment, the fact that my broadband provider says, oh, by the way, you've got 20 gadgets on that network and 20 on that one... That's great, tell me what's on there. I probably don't know what's taking all my valuable bandwidth up. So I think there's security wrapped around that, telling me the way I'm using it if I'm getting the best out of my service. >> You pay for that? >> No, I'm saying they don't do it yet. I think- >> But would you pay for that? >> I think I would, yeah. >> Would you pay a lot for that? I would expect it to be there as part of my dashboard for my monthly fee. They're already charging me enough. >> Well, that's fine, but you pay a lot more in North America than I do in Europe, right? >> Yeah, no, that's true. >> You're really overpaying over there, right? >> Way overpaying. >> So, actually everybody's looking at these devices, right? So this is a radio operated device basically, right? And then why couldn't they benefit from this? This is like we need to like double click on this like 10 times to find out why telcos failed to leverage this device, right? But I think the problem is their reliance on regulations and their being close to the national sort of governments and local bodies and authorities, right? And in some countries, these telcos are totally controlled in very authoritarian ways, right? It's not like open, like in the west, most of the west. Like the world is bigger than five, six countries and we know that, right? But we end up talking about the major economies most of the time. >> Dave: Always. >> Chris: We have a topic we want to hit on. >> We do have a topic. Our last topic, Chris, it's for you. You guys have done an amazing job for the last 25 minutes talking about the industry, where it's going, the evolution. But Chris, you're registered blind throughout your career. You're a leading user of assertive technologies. Talk about diversity, equity, inclusion, accessibility, some of the things you're doing there. >> Well, we should have had 25 minutes on that and five minutes on- (all laugh) >> Lisa: You'll have to come back. >> Really interesting. 
So I've been looking at it. You're quite right, I've been using accessible technology on my iPhone and on my laptop for 10, 20 years now. It's amazing. And what I'm trying to get across to the industry is to think about inclusive design from day one. When you're designing an app or you're designing a service, make sure you... And telecom's a great example. In fact, there's quite a lot of sign language around here this week. If you look at all the events written, good to see that coming in. Obviously, no use to me whatsoever, but good for the hearing impaired, which by the way is the biggest category of disability in the world. Biggest chunk is hearing impaired, then vision impaired, and then cognitive and then physical. And therefore, whenever you're designing any service, my call to arms to people is think about how that's going to be used and how a blind person might use it or how a deaf person or someone with physical issues or any cognitive issues might use it. And a great example, the GSMA and I have been talking about the app they use for getting into the venue here. I downloaded it. I got the app downloaded and I'm calling my guys going, where's my badge? And he said, "it's top left." And because I work with a screen reader, they hadn't tagged it properly so I couldn't actually open my badge on my own. Now, they changed it overnight so it worked this morning, which is fantastic work by Trevor and the team. But it's those things that if you don't build it in from scratch, you really frustrate a whole group of users. And if you think about it, people with disabilities are excluded from so many services if they can't see the screen or they can't hear it. But it's also the elderly community who don't find it easy to get access to things. Smart speakers have been a real blessing in that respect 'cause you can now talk to that thing and it starts talking back to you. And then there's the people who can't afford it so we need to come down market. This event is about launching these thousand dollars plus devices. Come on, we need below a hundred dollars devices to get to the real mass market and get the next billion people in and then to educate people how to use it. And I think to go back to your previous point, I think governments are starting to realize how important this is about building the community within the countries. You've got some massive projects like NEOM in Saudi Arabia. If you have a look at that, if you get a chance, a fantastic development in the desert where they're building a new city from scratch and they're building it so anyone and everyone can get access to it. So in the past, it was all done very much by individual disability. So I used to use some very expensive, clunky blind tech stuff. I'm now using mostly mainstream. But my call to answer to say is, make sure when you develop an app, it's accessible, anyone can use it, you can talk to it, you can get whatever access you need and it will make all of our lives better. So as we age and hearing starts to go and sight starts to go and dexterity starts to go, then those things become very useful for everybody. >> That's a great point and what a great champion they have in you. Chris, Sarbjeet, Dave, thank you so much for kicking things off, analyzing day one keynote, the ecosystem day, talking about what velocity actually means, where we really are. We're going to have to have you guys back 'cause as you know, we can keep going, but we are out of time. But thank you. >> Pleasure. 
>> We had a very spirited, lively conversation. >> Thanks, Dave. >> Thank you very much. >> For our guests and for Dave Vellante, I'm Lisa Martin, you're watching theCUBE live in Barcelona, Spain at MWC '23. We'll be back after a short break. See you soon. (uplifting instrumental music)

Published Date : Feb 27 2023

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Nokia | ORGANIZATION | 0.99+
Chris | PERSON | 0.99+
Lisa Martin | PERSON | 0.99+
Chris Lewis | PERSON | 0.99+
Dave | PERSON | 0.99+
Europe | LOCATION | 0.99+
Dave Vellante | PERSON | 0.99+
Lina Khan | PERSON | 0.99+
Lisa | PERSON | 0.99+
Bosch | ORGANIZATION | 0.99+
Germany | LOCATION | 0.99+
Ericsson | ORGANIZATION | 0.99+
Telecom Italia | ORGANIZATION | 0.99+
Sarbjeet | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
KKR | ORGANIZATION | 0.99+
20 gadgets | QUANTITY | 0.99+
Geneva | LOCATION | 0.99+
25 minutes | QUANTITY | 0.99+
10 times | QUANTITY | 0.99+
Saudi Arabia | LOCATION | 0.99+
US | LOCATION | 0.99+
Google | ORGANIZATION | 0.99+
Sarbjeet Johal | PERSON | 0.99+
Trevor | PERSON | 0.99+
Orange | ORGANIZATION | 0.99+
180 degrees | QUANTITY | 0.99+
30 years | QUANTITY | 0.99+
five minutes | QUANTITY | 0.99+
iPhone | COMMERCIAL_ITEM | 0.99+
Ericssons | ORGANIZATION | 0.99+
North America | LOCATION | 0.99+
telco | ORGANIZATION | 0.99+
20 | QUANTITY | 0.99+
46% | QUANTITY | 0.99+
three | QUANTITY | 0.99+
Dell Technologies | ORGANIZATION | 0.99+
next year | DATE | 0.99+
Barcelona, Spain | LOCATION | 0.99+
'96 | DATE | 0.99+
GSMA | ORGANIZATION | 0.99+
telcos | ORGANIZATION | 0.99+
Visa | ORGANIZATION | 0.99+
trillion dollars | QUANTITY | 0.99+
thousand dollars | QUANTITY | 0.99+

Robert Nishihara, Anyscale | CUBE Conversation


 

(upbeat instrumental) >> Hello and welcome to this CUBE conversation. I'm John Furrier, host of theCUBE, here in Palo Alto, California. Got a great conversation with Robert Nishihara who's the co-founder and CEO of Anyscale. Robert, great to have you on this CUBE conversation. It's great to see you. We did your first Ray Summit a couple years ago and congratulations on your venture. Great to have you on. >> Thank you. Thanks for inviting me. >> So you're first time CEO out of Berkeley in Data. You got the Databricks is coming out of there. You got a bunch of activity coming from Berkeley. It's like a, it really is kind of like where a lot of innovations going on data. Anyscale has been one of those startups that has risen out of that scene. Right? You look at the success of what the Data lakes are now. Now you've got the generative AI. This has been a really interesting innovation market. This new wave is coming. Tell us what's going on with Anyscale right now, as you guys are gearing up and getting some growth. What's happening with the company? >> Yeah, well one of the most exciting things that's been happening in computing recently, is the rise of AI and the excitement about AI, and the potential for AI to really transform every industry. Now of course, one of the of the biggest challenges to actually making that happen is that doing AI, that AI is incredibly computationally intensive, right? To actually succeed with AI to actually get value out of AI. You're typically not just running it on your laptop, you're often running it and scaling it across thousands of machines, or hundreds of machines or GPUs, and to, so organizations and companies and businesses that do AI often end up building a large infrastructure team to manage the distributed systems, the computing to actually scale these applications. And that's a, that's a, a huge software engineering lift, right? And so, one of the goals for Anyscale is really to make that easy. To get to the point where, developers and teams and companies can succeed with AI. Can build these scalable AI applications, without really you know, without a huge investment in infrastructure with a lot of, without a lot of expertise in infrastructure, where really all they need to know is how to program on their laptop, how to program in Python. And if you have that, then that's really all you need to succeed with AI. So that's what we've been focused on. We're building Ray, which is an open source project that's been starting to get adopted by tons of companies, to actually train these models, to deploy these models, to do inference with these models, you know, to ingest and pre-process their data. And our goals, you know, here with the company are really to make Ray successful. To grow the Ray community, and then to build a great product around it and simplify the development and deployment, and productionization of machine learning for, for all these businesses. >> It's a great trend. Everyone wants developer productivity seeing that, clearly right now. And plus, developers are voting literally on what standards become. As you look at how the market is open source driven, a lot of that I love the model, love the Ray project love the, love the Anyscale value proposition. How big are you guys now, and how is that value proposition of Ray and Anyscale and foundational models coming together? 
Because it seems like you guys are in a perfect storm situation where you guys could get a real tailwind and draft off the the mega trend that everyone's getting excited. The new toy is ChatGPT. So you got to look at that and say, hey, I mean, come on, you guys did all the heavy lifting. >> Absolutely. >> You know how many people you are, and what's the what's the proposition for you guys these days? >> You know our company's about a hundred people, that a bit larger than that. Ray's been going really quickly. It's been, you know, companies using, like OpenAI uses Ray to train their models, like ChatGPT. Companies like Uber run all their deep learning you know, and classical machine learning on top of Ray. Companies like Shopify, Spotify, Netflix, Cruise, Lyft, Instacart, you know, Bike Dance. A lot of these companies are investing heavily in Ray for their machine learning infrastructure. And I think it's gotten to the point where, if you're one of these, you know type of businesses, and you're looking to revamp your machine learning infrastructure. If you're looking to enable new capabilities, you know make your teams more productive, increase, speed up the experimentation cycle, you know make it more performance, like build, you know, run applications that are more scalable, run them faster, run them in a more cost efficient way. All of these types of companies are at least evaluating Ray and Ray is an increasingly common choice there. I think if they're not using Ray, if many of these companies that end up not using Ray, they often end up building their own infrastructure. So Ray has been, the growth there has been incredibly exciting over the, you know we had our first in-person Ray Summit just back in August, and planning the next one for, for coming September. And so when you asked about the value proposition, I think there's there's really two main things, when people choose to go with Ray and Anyscale. One reason is about moving faster, right? It's about developer productivity, it's about speeding up the experimentation cycle, easily getting their models in production. You know, we hear many companies say that they, you know they, once they prototype a model, once they develop a model, it's another eight weeks, or 12 weeks to actually get that model in production. And that's a reason they talk to us. We hear companies say that, you know they've been training their models and, and doing inference on a single machine, and they've been sort of scaling vertically, like using bigger and bigger machines. But they, you know, you can only do that for so long, and at some point you need to go beyond a single machine and that's when they start talking to us. Right? So one of the main value propositions is around moving faster. I think probably the phrase I hear the most is, companies saying that they don't want their machine learning people to have to spend all their time configuring infrastructure. All this is about productivity. >> Yeah. >> The other. >> It's the big brains in the company. That are being used to do remedial tasks that should be automated right? I mean that's. >> Yeah, and I mean, it's hard stuff, right? It's also not these people's area of expertise, and or where they're adding the most value. So all of this is around developer productivity, moving faster, getting to market faster. The other big value prop and the reason people choose Ray and choose Anyscale, is around just providing superior infrastructure. This is really, can we scale more? 
You know, can we run it faster, right? Can we run it in a more cost effective way? We hear people saying that they're not getting good GPU utilization with the existing tools they're using, or they can't scale beyond a certain point, or you know they don't have a way to efficiently use spot instances to save costs, right? Or their clusters, you know can't auto scale up and down fast enough, right? These are all the kinds of things that Ray and Anyscale, where Ray and Anyscale add value and solve these kinds of problems. >> You know, you bring up great points. Auto scaling concept, early days, it was easy getting more compute. Now it's complicated. They're built into more integrated apps in the cloud. And you mentioned those companies that you're working with, that's impressive. Those are like the big hardcore, I call them hardcore. They have a good technical teams. And as the wave starts to move from these companies that were hyper scaling up all the time, the mainstream are just developers, right? So you need an interface in, so I see the dots connecting with you guys and I want to get your reaction. Is that how you see it? That you got the alphas out there kind of kicking butt, building their own stuff, alpha developers and infrastructure. But mainstream just wants programmability. They want that heavy lifting taken care of for them. Is that kind of how you guys see it? I mean, take us through that. Because to get crossover to be democratized, the automation's got to be there. And for developer productivity to be in, it's got to be coding and programmability. >> That's right. Ultimately for AI to really be successful, and really you know, transform every industry in the way we think it has the potential to. It has to be easier to use, right? And that is, and being easier to use, there's many dimensions to that. But an important one is that as a developer to do AI, you shouldn't have to be an expert in distributed systems. You shouldn't have to be an expert in infrastructure. If you do have to be, that's going to really limit the number of people who can do this, right? And I think there are so many, all of the companies we talk to, they don't want to be in the business of building and managing infrastructure. It's not that they can't do it. But it's going to slow them down, right? They want to allocate their time and their energy toward building their product, right? To building a better product, getting their product to market faster. And if we can take the infrastructure work off of the critical path for them, that's going to speed them up, it's going to simplify their lives. And I think that is critical for really enabling all of these companies to succeed with AI. >> Talk about the customers you guys are talking to right now, and how that translates over. Because I think you hit a good thread there. Data infrastructure is critical. Managed services are coming online, open sources continuing to grow. You have these people building their own, and then if they abandon it or don't scale it properly, there's kind of consequences. 'Cause it's a system you mentioned, it's a distributed system architecture. It's not as easy as standing up a monolithic app these days. So when you guys go to the marketplace and talk to customers, put the customers in buckets. So you got the ones that are kind of leaning in, that are pretty peaked, probably working with you now, open source. And then what's the customer profile look like as you go mainstream? 
Are they looking to manage service, looking for more architectural system, architecture approach? What's the, Anyscale progression? How do you engage with your customers? What are they telling you? >> Yeah, so many of these companies, yes, they're looking for managed infrastructure 'cause they want to move faster, right? Now the kind of these profiles of these different customers, they're three main workloads that companies run on Anyscale, run with Ray. It's training related workloads, and it is serving and deployment related workloads, like actually deploying your models, and it's batch processing, batch inference related workloads. Like imagine you want to do computer vision on tons and tons of, of images or videos, or you want to do natural language processing on millions of documents or audio, or speech or things like that, right? So the, I would say the, there's a pretty large variety of use cases, but the most common you know, we see tons of people working with computer vision data, you know, computer vision problems, natural language processing problems. And it's across many different industries. We work with companies doing drug discovery, companies doing you know, gaming or e-commerce, right? Companies doing robotics or agriculture. So there's a huge variety of the types of industries that can benefit from AI, and can really get a lot of value out of AI. And, but the, but the problems are the same problems that they all want to solve. It's like how do you make your team move faster, you know succeed with AI, be more productive, speed up the experimentation, and also how do you do this in a more performant way, in a faster, cheaper, in a more cost efficient, more scalable way. >> It's almost like the cloud game is coming back to AI and these foundational models, because I was just on a podcast, we recorded our weekly podcast, and I was just riffing with Dave Vellante, my co-host on this, were like, hey, in the early days of Amazon, if you want to build an app, you just, you have to build a data center, and then you go to now you go to the cloud, cloud's easier, pay a little money, penny's on the dollar, you get your app up and running. Cloud computing is born. With foundation models in generative AI. The old model was hard, heavy lifting, expensive, build out, before you get to do anything, as you mentioned time. So I got to think that you're pretty much in a good position with this foundational model trend in generative AI because I just looked at the foundation map, foundation models, map of the ecosystem. You're starting to see layers of, you got the tooling, you got platform, you got cloud. It's filling out really quickly. So why is Anyscale important to this new trend? How do you talk to people when they ask you, you know what does ChatGPT mean for Anyscale? And how does the financial foundational model growth, fit into your plan? >> Well, foundational models are hugely important for the industry broadly. Because you're going to have these really powerful models that are trained that you know, have been trained on tremendous amounts of data. tremendous amounts of computes, and that are useful out of the box, right? That people can start to use, and query, and get value out of, without necessarily training these huge models themselves. Now Ray fits in and Anyscale fit in, in a number of places. First of all, they're useful for creating these foundation models. Companies like OpenAI, you know, use Ray for this purpose. Companies like Cohere use Ray for these purposes. 
You know, IBM. If you look at, there's of course also open source versions like GPTJ, you know, created using Ray. So a lot of these large language models, large foundation models benefit from training on top of Ray. And, but of course for every company training and creating these huge foundation models, you're going to have many more that are fine tuning these models with their own data. That are deploying and serving these models for their own applications, that are building other application and business logic around these models. And that's where Ray also really shines, because Ray you know, is, can provide common infrastructure for all of these workloads. The training, the fine tuning, the serving, the data ingest and pre-processing, right? The hyper parameter tuning, the and and so on. And so where the reason Ray and Anyscale are important here, is that, again, foundation models are large, foundation models are compute intensive, doing you know, using both creating and using these foundation models requires tremendous amounts of compute. And there there's a big infrastructure lift to make that happen. So either you are using Ray and Anyscale to do this, or you are building the infrastructure and managing the infrastructure yourself. Which you can do, but it's, it's hard. >> Good luck with that. I always say good luck with that. I mean, I think if you really need to do, build that hardened foundation, you got to go all the way. And I think this, this idea of composability is interesting. How is Ray working with OpenAI for instance? Take, take us through that. Because I think you're going to see a lot of people talking about, okay I got trained models, but I'm going to have not one, I'm going to have many. There's big debate that OpenAI is going to be the mother of all LLMs, but now, but really people are also saying that to be many more, either purpose-built or specific. The fusion and these things come together there's like a blending of data, and that seems to be a value proposition. How does Ray help these guys get their models up? Can you take, take us through what Ray's doing for say OpenAI and others, and how do you see the models interacting with each other? >> Yeah, great question. So where, where OpenAI uses Ray right now, is for the training workloads. Training both to create ChatGPT and models like that. There's both a supervised learning component, where you're pre-training this model on doing supervised pre-training with example data. There's also a reinforcement learning component, where you are fine-tuning the model and continuing to train the model, but based on human feedback, based on input from humans saying that, you know this response to this question is better than this other response to this question, right? And so Ray provides the infrastructure for scaling the training across many, many GPUs, many many machines, and really running that in an efficient you know, performance fault tolerant way, right? And so, you know, open, this is not the first version of OpenAI's infrastructure, right? They've gone through iterations where they did start with building the infrastructure themselves. They were using tools like MPI. But at some point, you know, given the complexity, given the scale of what they're trying to do, you hit a wall with MPI and that's going to happen with a lot of other companies in this space. And at that point you don't have many other options other than to use Ray or to build your own infrastructure. >> That's awesome. 
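
To make the scale-out pattern Robert describes more concrete, here is a minimal sketch using Ray's core task API. It is illustrative only, not OpenAI's or Anyscale's actual training code; the toy loss, the data shards, and the learning rate are stand-ins invented for the example.

```python
# Minimal illustration of fanning work out with Ray's core task API.
# The "gradient" computation is a toy stand-in, not a real model.
import numpy as np
import ray

ray.init()  # starts a local cluster; a real deployment would attach to a multi-node cluster

@ray.remote
def grad_on_shard(weights: np.ndarray, shard: np.ndarray) -> np.ndarray:
    # Gradient of a toy squared-error loss of the weights against the shard's rows.
    return np.mean(weights - shard, axis=0)

weights = np.zeros(8)
shards = [np.random.randn(1000, 8) for _ in range(4)]  # four data shards

for step in range(10):
    # Launch one task per shard; Ray schedules them across available workers.
    futures = [grad_on_shard.remote(weights, s) for s in shards]
    grads = ray.get(futures)                 # gather results from the cluster
    weights -= 0.1 * np.mean(grads, axis=0)  # simple averaged update

print("final weights:", weights)
```
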
And then your vision on this data interaction, because the old days monolithic models were very rigid. You couldn't really interface with them. But we're kind of seeing this future of data fusion, data interaction, data blending at large scale. What's your vision? How do you, what's your vision of where this goes? Because if this goes the way people think. You can have this data chemistry kind of thing going on where people are integrating all kinds of data with each other at large scale. So you need infrastructure, intelligence, reasoning, a lot of code. Is this something that you see? What's your vision in all this? Take us through. >> AI is going to be used everywhere right? It's, we see this as a technology that's going to be ubiquitous, and is going to transform every business. I mean, imagine you make a product, maybe you were making a tool like Photoshop or, or whatever the, you know, tool is. The way that people are going to use your tool, is not by investing, you know, hundreds of hours into learning all of the different, you know specific buttons they need to press and workflows they need to go through it. They're going to talk to it, right? They're going to say, ask it to do the thing they want it to do right? And it's going to do it. And if it, if it doesn't know what it's want, what it's, what's being asked of it. It's going to ask clarifying questions, right? And then you're going to clarify, and you're going to have a conversation. And this is going to make many many many kinds of tools and technology and products easier to use, and lower the barrier to entry. And so, and this, you know, many companies fit into this category of trying to build products that, and trying to make them easier to use, this is just one kind of way it can, one kind of way that AI will will be used. But I think it's, it's something that's pretty ubiquitous. >> Yeah. It'll be efficient, it'll be efficiency up and down the stack, and will change the productivity equation completely. You just highlighted one, I don't want to fill out forms, just stand up my environment for me. And then start coding away. Okay well this is great stuff. Final word for the folks out there watching, obviously new kind of skill set for hiring. You guys got engineers, give a plug for the company, for Anyscale. What are you looking for? What are you guys working on? Give a, take the last minute to put a plug in for the company. >> Yeah well if you're interested in AI and if you think AI is really going to be transformative, and really be useful for all these different industries. We are trying to provide the infrastructure to enable that to happen, right? So I think there's the potential here, to really solve an important problem, to get to the point where developers don't need to think about infrastructure, don't need to think about distributed systems. All they think about is their application logic, and what they want their application to do. And I think if we can achieve that, you know we can be the foundation or the platform that enables all of these other companies to succeed with AI. So that's where we're going. I think something like this has to happen if AI is going to achieve its potential, we're looking for, we're hiring across the board, you know, great engineers, on the go-to-market side, product managers, you know people who want to really, you know, make this happen. >> Awesome well congratulations. I know you got some good funding behind you. You're in a good spot. I think this is happening. 
I think generative AI and foundation models is going to be the next big inflection point, as big as the pc inter-networking, internet and smartphones. This is a whole nother application framework, a whole nother set of things. So this is the ground floor. Robert, you're, you and your team are right there. Well done. >> Thank you so much. >> All right. Thanks for coming on this CUBE conversation. I'm John Furrier with theCUBE. Breaking down a conversation around AI and scaling up in this new next major inflection point. This next wave is foundational models, generative AI. And thanks to ChatGPT, the whole world's now knowing about it. So it really is changing the game and Anyscale is right there, one of the hot startups, that is in good position to ride this next wave. Thanks for watching. (upbeat instrumental)
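
For the batch-processing workloads Robert mentions, such as scoring millions of documents, one common Ray pattern is to load a model once inside an actor and then stream batches through it. The sketch below is a simplified illustration under that assumption; the keyword "model", the document set, and the worker count are made up for the example and do not represent anyone's production pipeline.

```python
# Illustrative batch inference with Ray actors: each actor loads a "model"
# once, then scores many document batches without reloading it.
import ray

ray.init()

@ray.remote
class DocScorer:
    def __init__(self):
        # Stand-in for loading a heavyweight NLP or vision model.
        self.keywords = {"ai", "ray", "cloud"}

    def score_batch(self, docs):
        # Toy score: fraction of known keywords appearing in each document.
        return [sum(k in d.lower() for k in self.keywords) / len(self.keywords)
                for d in docs]

docs = [f"Document {i} about AI and the cloud" for i in range(10_000)]
batches = [docs[i:i + 1_000] for i in range(0, len(docs), 1_000)]

scorers = [DocScorer.remote() for _ in range(4)]           # 4 parallel workers
futures = [scorers[i % 4].score_batch.remote(b)            # round-robin batches
           for i, b in enumerate(batches)]
scores = [s for batch in ray.get(futures) for s in batch]  # flatten results
print(len(scores), "documents scored")
```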

Published Date : Feb 24 2023

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
IBM | ORGANIZATION | 0.99+
Robert Nishihara | PERSON | 0.99+
John Furrier | PERSON | 0.99+
12 weeks | QUANTITY | 0.99+
Robert | PERSON | 0.99+
Uber | ORGANIZATION | 0.99+
Lyft | ORGANIZATION | 0.99+
Shopify | ORGANIZATION | 0.99+
eight weeks | QUANTITY | 0.99+
Spotify | ORGANIZATION | 0.99+
Netflix | ORGANIZATION | 0.99+
August | DATE | 0.99+
September | DATE | 0.99+
Palo Alto, California | LOCATION | 0.99+
Cruise | ORGANIZATION | 0.99+
Amazon | ORGANIZATION | 0.99+
Instacart | ORGANIZATION | 0.99+
Anyscale | ORGANIZATION | 0.99+
first | QUANTITY | 0.99+
Photoshop | TITLE | 0.99+
One reason | QUANTITY | 0.99+
Bike Dance | ORGANIZATION | 0.99+
Ray | ORGANIZATION | 0.99+
Python | TITLE | 0.99+
thousands of machines | QUANTITY | 0.99+
Berkeley | LOCATION | 0.99+
two main things | QUANTITY | 0.98+
single machine | QUANTITY | 0.98+
Cohere | ORGANIZATION | 0.98+
Ray and Anyscale | ORGANIZATION | 0.98+
millions of documents | QUANTITY | 0.98+
both | QUANTITY | 0.98+
one kind | QUANTITY | 0.96+
first version | QUANTITY | 0.95+
CUBE | ORGANIZATION | 0.95+
about a hundred people | QUANTITY | 0.95+
hundreds of machines | QUANTITY | 0.95+
one | QUANTITY | 0.95+
OpenAI | ORGANIZATION | 0.94+
First | QUANTITY | 0.94+
hundreds of hours | QUANTITY | 0.93+
first time | QUANTITY | 0.93+
Databricks | ORGANIZATION | 0.91+
Ray and Anyscale | ORGANIZATION | 0.9+
tons | QUANTITY | 0.89+
couple years ago | DATE | 0.88+
Ray and | ORGANIZATION | 0.86+
ChatGPT | TITLE | 0.81+
tons of people | QUANTITY | 0.8+

Applying Smart Data Fabrics Across Industries


 

(upbeat music) >> Today more than ever before, organizations are striving to gain a competitive advantage, deliver more value to customers, reduce risk, and respond more quickly to the needs of businesses. Now, to achieve these goals, organizations need easy access to a single view of accurate, consistent and very importantly, trusted data. If it's not trusted, nobody's going to use it and all in near real time. However, the growing volumes and complexities of data make this difficult to achieve in practice. Not to mention the organizational challenges that have evolved as data becomes increasingly important to winning in the marketplace. Specifically as data grows, so does the prevalence of data silos, making, integrating and leveraging data from internal and external sources a real challenge. Now, in this final segment, we'll hear from Joe Lichtenberg who's the global head of product and industry marketing, and he's going to discuss how smart data fabrics can be applied to different industries. And by way of these use cases, we'll probe Joe's vast knowledge base and ask him to highlight how InterSystems, which touts a next gen approach to Customer 360, how the company leverages a smart data fabric to provide organizations of varying sizes and sectors in financial services, supply chain, logistics and healthcare with a better, faster and easier way to deliver value to the business. Joe welcome, great to have you here. >> Thank you, it's great to be here. That was some intro. I could not have said it better myself, so thank you for that. >> Thank you. Well, we're happy to have you on this show now. I understand- >> It's great to be here. >> You you've made a career helping large businesses with technology solutions, small businesses, and then scale those solutions to meet whatever needs they had. And of course, you're a vocal advocate as is your company of data fabrics. We talked to Scott earlier about data fabrics, how it relates to data mesh big discussions in the industry. So tell us more about your perspective. >> Sure, so first I would say that I have been in this industry for a very long time so I've been like you, I'm sure, for decades working with customers and with technology, really to solve these same kinds of challenges. So for decades, companies have been working with lots and lots of data and trying to get business value to solve all sorts of different challenges. And I will tell you that I've seen many different approaches and different technologies over the years. So, early on, point to point connections with custom coding, and I've worked with integration platforms 20 years ago with the advent of web services and service-oriented architectures and exposing endpoints with wisdom and getting access to disparate data from across the organization. And more recently, obviously with data warehouses and data lakes and now moving workloads to the cloud with cloud-based data marts and data warehouses. Lots of approaches that I've seen over the years but yet still challenges remain in terms of getting access to a single trusted real-time view of data. And so, recently, we ran a survey of more than 500 different business users across different industries and 86% told us that they still lack confidence in using their data to make decisions. That's a huge number, right? And if you think about all of the work and all of the technology and approaches over the years, that is a surprising number and drilling into why that is, there were three main reasons. One is latency. 
So the amount of time that it takes to access the data and process the data and make it fit for purpose by the time the business has access to the data and the information that they need, the opportunity has passed. >> Elapsed time, not speed a light, right? But that too maybe. >> But it takes a long time if you think about these processes and you have to take the data and copy it and run ETL processes and prepare it. So that's one, one is just the amount of data that's disparate in data silos. So still struggling with data that is dispersed across different systems in different formats. And the third, is data democratization. So the business really wants to have access to the data so that they can drill into the data and ask ad hoc questions and the next question and drill into the information and see where it leads them rather than having sort of pre-structured data and pre-structured queries and having to go back to IT and put the request back on the queue again and waiting. >> So it takes too long, the data's too hard to get to 'cause it's in silos and the data lacks context because it's technical people that are serving up the data to the business people. >> Exactly. >> And there's a mismatch. >> Exactly right. So they call that data democratization or giving the business access to the data and the tools that they need to get the answers that they need in the moment. >> So the skeptic in me, 'cause you're right I have seen this story before and the problems seem like they keep coming up, year after year, decade after decade. But I'm an optimist and so. >> As am I. >> And so I sometimes say, okay, same wine new bottle, but it feels like it's different this time around with data fabrics. You guys talk about smart data fabrics from your perspective, what's different? >> Yeah, it's very exciting and it's a fundamentally different approach. So if you think about all of these prior approaches, and by the way, all of these prior approaches have added value, right? It's not like they were bad, but there's still limitations and the business still isn't getting access to all the data that they need in the moment, right? So data warehouses are terrific if you know the questions that you want answered and you take the data and you structure the data in advance. And so now you're serving the business with sort of pre-planned answers to pre-planned queries, right? The data fabric, what we call a smart data fabric is fundamentally different. It's a fundamentally different approach in that rather than sort of in batch mode, taking the data and making it fit for purpose with all the complexity and delays associated with it, with a data fabric where accessing the data on demand as it's needed, as it's requested, either by the business or by applications or by the data scientists directly from the source systems. >> So you're not copying it necessarily to that to make that you're not FTPing it, for instance. I've got it, you take it, you're basically using the same source. >> You're pulling the data on demand as it's being requested by the consumers. And then all of the data management processes that need to be applied for integration and transformation to get the data into a consistent format and business rules and analytic queries. 
And with Jess showed with machine learning, predictive prescriptive analytics all sorts of powerful capabilities are built into the fabric so that as you're pulling the data on demand, right, all of these processes are being applied and the net result is you're addressing these limitations around latency and silos that we've seen in the past. >> Okay, so you've talked about you have a lot of customers, InterSystems does in different industries supply chain, financial services, manufacturing. We heard from just healthcare. What are you seeing in terms of applications of smart data fabrics in the real world? >> Yeah, so we see it in every industry. So InterSystems, as you know, has been around now for 43 years, and we have tens of thousands of customers in every industry. And this architectural pattern now is providing value for really critical use cases in every industry. So I'm happy to talk to you about some that we're seeing. I could actually spend like three hours here and there but I'm very passionate about working with customers and there's all sorts of exciting. >> What are some of your favorites? >> So, obviously supply chain right now is going through a very challenging time. So the combination of what's happening with the pandemic and disruptions and now I understand eggs are difficult to come by I just heard on NPR. >> Yeah and it's in part a data problem and a big part of data problem, is that fair? >> Yeah and so, in supply chain, first there's supply chain visibility. So organizations want a real time or near real time expansive view of what's happening across the entire supply chain from a supply all the way through distribution, right? So that's only part of the issue but that's a huge sort of real-time data silos problem. So if you think about your extended supply chain, it's complicated enough with all the systems and silos inside your firewall, before all of your suppliers even just thinking about your tier one suppliers let alone tier two and tier three. And then building on top of real-time visibility is what the industry calls a control tower, what we call the ultimate control tower. And so it's built in analytics to be able to sense disruptions and exceptions as they occur and predict the likelihood of these disruptions occurring. And then having data driven and analytics driven guidance in terms of the best way to deal with these disruptions. So for example, an order is missing line items or a cargo ship is stuck off port somewhere. What do you do about it? Do you reroute a different cargo ship, right? Do you take an order that's en route to a different client and reroute that? What's the cost associated? What's the impact associated with it? So that's a huge issue right now around control towers for supply chain. So that's one. >> Can I ask you a question about that? Because you and I have both seen a lot but we've never seen, at least I haven't the economy completely shut down like it was in March of 2020, and now we're seeing this sort of slingshot effect almost like you're driving on the highway sometimes you don't know why, but all of a sudden you slow down and then you speed up, you think it's okay then you slow down again. Do you feel like you guys can help get a handle on that product because it goes on both sides. Sometimes you can't get the product, sometimes there's too much of a product as well and that's not good for business. >> Yeah, absolutely. You want to smooth out the peaks and valleys. >> Yeah. 
>> And that's a big business goal, business challenge for supply chain executives, right? So you want to make sure that you can respond to demand but you don't want to overstock because there's cost associated with that as well. So how do you optimize the supply chains and it's very much a data silo and a real time challenge. So it's a perfect fit for this new architectural pattern. >> All right, what else? >> So if we look at financial services, we have many, many customers in financial services and that's another industry where they have many different sources of data that all have information that organizations can use to really move the needle if they could just get to that single source of truth in real time. So we sort of bucket many different implementations and use cases that we do around what we call Business 360 and Customer 360. So Business 360, there's all sorts of ways to add business value in terms of having a real-time operational view across all of the different GOs and parts of the business, especially in these very large global financial services institutions like capital markets and investment firms and so forth. So around Business 360, having a realtime view of risk, operational performance regulatory compliance, things like that. Customer 360, there's a whole set of use cases around Customer 360 around hyper-personalization of customers and in realtime next best action looking to see how you can sell more increase share of wallet, cross-sell, upsell to customers. We also do a lot in terms of predicting customer churn. So if you have all the historical data and what's the likelihood of customers churning to be able to proactively intercede, right? It's much more cost effective to keep assets under management and keep clients rather than going and getting new clients to come to the firm. A very interesting use case from one of our customers in Latin America, so Banco do Brasil largest bank in all of Latin America and they have a very innovative CTO who's always looking for new ways to move the needle for the bank. And so one of their ideas and we're working with them to do this is how can they generate net new revenue streams by bringing in new business to the bank? And so they identified a large percentage of the population in Latin America that does no banking. So they have no banking history not only with Banco do Brasil, but with any bank. So there's a fair amount of risk associated with offering services to this segment of the population that's not associated with any banks or financial institutions. >> There is no historical data on them, there's no. >> So it's a data challenge. And so, they're bringing in data from a variety of different sources, social media, open source data that they find online and so forth. And with us running risk models to identify which are the citizens that there's acceptable risk to offer their services. >> It's going to be huge market of unbanked people in vision Latin America. >> Wow, that's interesting. >> Yeah, yeah, totally vision. >> And if you can lower the risk and you could tap that market and be first >> And they are, yeah. >> Yeah. >> So very exciting. 
Manufacturing, we know industry 4.0 which is about taking the OT data, so the data from the MES systems and the streaming data, real-time streaming data from the machine controllers and integrating it with the IT data, so your data warehouses and your ERP systems and so forth to have not only a real-time view of manufacturing from supply and source all the way through demand but also predictive maintenance and things like that. So that's very big right now in manufacturing. >> Kind of cool to hear these use cases beyond your healthcare, which is obviously, your wheelhouse, Scott defined this term of smart data fabrics, different than data fabrics, I guess. So when we think about these use cases what's the value add of so-called smart data fabrics? >> Yeah, it's a great question. So we did not define the term data fabric or enterprise data fabric. The analysts now are all over it. They're all saying it's the future of data management. It's a fundamentally different approach this architectural approach to be able to access the data on demand. The canonical definition of a data fabric is to access the data where it lies and apply a set of data management processes, but it does not include analytics, interestingly. And so we firmly believe that most of these use cases gain value from having analytics built directly into the fabric. So whether that's business rules or predictive analytics to predict the likelihood of a customer churn or a machine on the shop floor failing or prescriptive analytics. So if there's a problem in the supply chain, what's the guidance for the supply chain managers to take the best action, right? Prescriptive analytics based on data. So rather than taking the data and the data fabric and moving it to another environment to run those analytics where you have complexity and latency, having tall of those analytics capabilities built directly into the fabric, which is why we call it a smart data fabric, brings a lot of value to our customers. >> So simplifies the whole data lifecycle, data pipelining, the hyper-specialized roles that you have to have, you can really just focus on one platform, is that? >> Exactly, basically, yeah. And it's a simplicity of architecture and faster speed to production. So a big differentiator for our technology, for InterSystems, Iris, is most if not all of the capabilities that are needed are built into one engine, right? So you don't need to stitch together 10 or 15 or 20 different data management services for relational database in a non-relational database and a caching layer and a data warehouse and security and so forth. And so you can do that. There's many ways to build this data fabric architecture, right? InterSystems is not the only way. >> Right? >> But if you can speed and simplify the implementation of the fabric by having most of what you need in one engine, one product that gets you to where you need to go much, much faster. >> Joe, how can people learn more about smart data Fabric some of the use cases that you've presented here? >> Yeah, come to our website, intersystems.com. If you go to intersystems.com/smartdatafabric that'll take you there. >> I know that you have like probably dozens more examples but it would be cool- >> I do. >> If people reach out to you, how can they get in touch? >> Oh, I would love that. So feel free to reach out to me on LinkedIn. It's Joe Lichtenberg I think it's linkedin.com/joeLichtenberg and I'd love to connect. >> Awesome. Joe, thanks so much for your time. Really appreciate it. 
>> It was great to be here. Thank you, Dave. >> All right, I hope you've enjoyed our program today. You know, we heard Scott now he helped us understand this notion of data fabrics and smart data fabrics and how they can address the data challenges faced by the vast majority of organizations today. Jess Jody's demo was awesome. It was really a highlight of the program where she showed the smart data fabrics inaction and Joe Lichtenberg, we just heard from him dug in to some of the prominent use cases and proof points. We hope this content was educational and inspires you to action. Now, don't forget all these videos are available on Demand to watch, rewatch and share. Go to theCUBE.net, check out siliconangle.com for all the news and analysis and we'll summarize the highlights of this program and go to intersystems.com because there are a ton of resources there. In particular, there's a knowledge hub where you'll find some excellent educational content and online learning courses. There's a resource library with analyst reports, technical documentation videos, some great freebies. So check it out. This is Dave Vellante. On behalf of theCUBE and our supporter, InterSystems, thanks for watching and we'll see you next time. (upbeat music)
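
To ground the idea of pulling data on demand and applying analytics inside the fabric, here is a small, purely illustrative Python sketch. The in-memory "source systems" and the rule-based churn score are stand-ins; this is not InterSystems IRIS code or any customer's actual pipeline, just the general shape of federating sources at request time and scoring in the same pass rather than batch-copying into a warehouse first.

```python
# Purely illustrative: federate two "source systems" on demand and apply a
# toy churn score in the same request path, with no batch copy step.
from datetime import date

# Stand-ins for live source systems (e.g., a CRM and a transaction history).
crm = {"c1": {"name": "Acme", "plan": "gold"},
       "c2": {"name": "Globex", "plan": "basic"}}
transactions = {"c1": [date(2023, 1, 5), date(2023, 2, 1)],
                "c2": [date(2022, 9, 12)]}

def fetch_customer(customer_id):
    # "Access the data where it lies": read both sources at request time.
    profile = crm[customer_id]
    txns = transactions.get(customer_id, [])
    return {**profile, "last_txn": max(txns) if txns else None}

def churn_score(record, today=date(2023, 3, 1)):
    # Toy analytic rule applied inside the same request path.
    days_idle = (today - record["last_txn"]).days if record["last_txn"] else 999
    base = 0.2 if record["plan"] == "gold" else 0.5
    return min(1.0, base + days_idle / 365)

for cid in crm:
    rec = fetch_customer(cid)
    print(rec["name"], "churn risk:", round(churn_score(rec), 2))
```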

Published Date : Feb 15 2023

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Dave Vellante | PERSON | 0.99+
Joe | PERSON | 0.99+
Joe Lichtenberg | PERSON | 0.99+
Dave | PERSON | 0.99+
Banco do Brasil | ORGANIZATION | 0.99+
Scott | PERSON | 0.99+
March of 2020 | DATE | 0.99+
Jess Jody | PERSON | 0.99+
Latin America | LOCATION | 0.99+
InterSystems | ORGANIZATION | 0.99+
Latin America | LOCATION | 0.99+
Banco do Brasil | ORGANIZATION | 0.99+
10 | QUANTITY | 0.99+
43 years | QUANTITY | 0.99+
three hours | QUANTITY | 0.99+
15 | QUANTITY | 0.99+
86% | QUANTITY | 0.99+
Jess | PERSON | 0.99+
one product | QUANTITY | 0.99+
linkedin.com/joeLichtenberg | OTHER | 0.99+
theCUBE.net | OTHER | 0.99+
LinkedIn | ORGANIZATION | 0.99+
both sides | QUANTITY | 0.99+
intersystems.com/smartdatafabric | OTHER | 0.99+
One | QUANTITY | 0.99+
one engine | QUANTITY | 0.99+
one | QUANTITY | 0.99+
third | QUANTITY | 0.98+
Today | DATE | 0.98+
both | QUANTITY | 0.98+
intersystems.com | OTHER | 0.98+
more than 500 different business users | QUANTITY | 0.98+
first | QUANTITY | 0.98+
one platform | QUANTITY | 0.98+
siliconangle.com | OTHER | 0.98+
single | QUANTITY | 0.96+
theCUBE | ORGANIZATION | 0.95+
tens of thousands of customers | QUANTITY | 0.95+
three main reasons | QUANTITY | 0.94+
20 years ago | DATE | 0.92+
dozens more examples | QUANTITY | 0.9+
today | DATE | 0.9+
NPR | ORGANIZATION | 0.9+
tier one | QUANTITY | 0.9+
single view | QUANTITY | 0.89+
single source | QUANTITY | 0.88+
Business 360 | TITLE | 0.82+
pandemic | EVENT | 0.81+
one of | QUANTITY | 0.77+
20 different data management services | QUANTITY | 0.76+
tier | QUANTITY | 0.74+
resources | QUANTITY | 0.73+
Customer 360 | ORGANIZATION | 0.72+
tier three | OTHER | 0.72+
Business 360 | ORGANIZATION | 0.72+
decade | QUANTITY | 0.68+
Business | ORGANIZATION | 0.68+
decades | QUANTITY | 0.68+
Iris | ORGANIZATION | 0.63+
360 | TITLE | 0.63+
two | OTHER | 0.61+
Customer 360 | TITLE | 0.47+
ton | QUANTITY | 0.43+
360 | OTHER | 0.24+

Mobile World Congress Preview 2023 | Mobile World Congress 2023


 

(electronic music) (graphics whooshing) (graphics tinkling) >> Telecommunications is well north of a trillion-dollar business globally, that provides critical services on which virtually everyone on the planet relies. Dramatic changes are occurring in the sector, and one of the most important dimensions of this change is the underlying infrastructure that powers global telecommunications networks. Telcos have been thawing out, if you will, they're frozen infrastructure, modernizing. They're opening up, they're disaggregating their infrastructure, separating, for example, the control plane from the data plane, and adopting open standards. Telco infrastructure is becoming software-defined. And leading telcos are adopting cloud native microservices to help make developers more productive, so they can respond more quickly to market changes. They're embracing technology consumption models, and selectively leveraging the cloud where it makes sense. And these changes are being driven by market forces, the root of which stem from customer demand. So from a customer's perspective, they want services, and they want them fast. Meaning, not only at high speeds, but also they want them now. Customers want the latest, the greatest, and they want these services to be reliable and stable with high quality of service levels. And they want them to be highly cost-effective. Hello and welcome to this preview of Mobile World Congress 2023. My name is Dave Vellante, and at this year's event, theCUBE has a major presence at the show made possible by Dell Technologies, and with me to unpack the trends in telco, and look ahead to MWC23 are Dennis Hoffman, he's the Senior Vice President and General Manager of Dell's telecom business, and Aaron Chaisson, who is the Vice President of Telecom and Edge Solutions Marketing at Dell Technologies, gentlemen, welcome, thanks so much for spending some time with me. >> Thank you, Dave. >> Thanks, glad to be here. >> So, Dennis, let's start with you. Telcos in recent history have been slow to deliver and to monetize new services, and a large part because their purpose-built infrastructure could been somewhat of a barrier to responding to all these market forces. In many ways, this is what makes telecoms, really this market so exciting. So from your perspective, where is the action in this space? >> Yeah, the action Dave is kind of all over the place, partly because it's an ecosystem play. I think it's been, as you point out, the disaggregation trend has been going on for a while. The opportunity's been clear, but it has taken a few years to get all of the vendors, and all of the components that make up a solution, as well as the operators themselves, to a point where we can start putting this stuff together, and actually achieving some of the promise. >> So Aaron, for those who might not be as familiar with Dell's a activities in this area, here we are just ahead of Mobile World Congress, it's the largest event for telecoms, what should people know about Dell? And what's the key message to this industry? >> Sure, yeah, I think everybody knows that there's a lot of innovation that's been happening in the industry of late. One of the major trends that we're seeing is that shift from more of a vertically-integrated technology stack, to more of a disaggregated set of solutions, and that trend has actually created a ton of innovation that's happening across the industry, or along technology vendors and providers, the telecoms themselves. 
And so, one of the things that Dell's really looking to do is, as Dennis talked about, is build out a really strong ecosystem of partners and vendors that we're working closely together to be able to collaborate on new technologies, new capabilities that are solving challenges that the networks are seeing today. Be able to create new solutions built on those in order to be able to bring new value to the industry. And then finally, we want to help both partners, as well as our CSP providers activate those changes, so that they can bring new solutions to market, to be able to serve their customers. And so, the key areas that we're really focusing on with our customers is, technologies to help modernize the network, to be able to capitalize on the value of open architectures, and bring price performance to what they're expecting, and availability that they're expecting today. And then also, partner with the lines of business to be able to take these new capabilities, produce new solutions, and then deliver new value to their customers. >> Great, thank you, Aaron. So Dennis, you and I, known you for a number of years. I've watched you, you're are a trend spotter. You're a strategic thinker. I love now the fact that you're running a business that you had to go out and analyze, and now you got to make it happen. So, how would you describe Dell's strategy in this market? >> Well, it's really two things. And I appreciate the comment, I'm not sure how much of a trend spotter I am, but I certainly enjoy, and I think I'm fascinated by what's going on in this industry right now. Our two main thrusts, Dave, are first round, trying to catalyze that ecosystem, be a force for pulling together a group of folks, vendors that have been flying in fairly loose formation for a couple of years, to deliver the kinds of solutions that move the needle forward, and produce the outcomes that our network operator customers can actually buy and consume, and deploy, and have them be supported. The other thing is, there's a couple of very key technology areas that need to be advanced here. This ends up being a much anticipated year in telecom. Because of the delivery of some open infrastructure solutions that have being developed for years. With the Intel Sapphire Rapids program coming to market, we've of course got some purpose-built solutions on top of that for telecommunications networks. Some expanded partnerships in the area of multi-cloud infrastructure. And so, I would say the second main thrust is, we've got to bring some intellectual property to the party. It's not just about pulling the ecosystem together. But those two things together really form the twin thrusts of our strategy. >> Okay, so as you point out, you obviously not going to go alone in this market, it's way too broad, there's so many routes to market, partnerships, obviously very, very important. So, can you share a little bit more about the ecosystem and partners, maybe give some examples of some of the key partners that you'd be highlighting or working with, maybe at Mobile World Congress, or other activities this year? >> Yeah, absolutely. As Aaron touched on, I'm a visual thinker. The way I think about this thing is a very, very vertical architecture is tipping sideways. It's becoming horizontal. And all of the layers of that horizontal architecture are really where the partnerships are at. So, let's start at the bottom, silicon. The silicon ecosystem is very much focused on this market. 
And producing very specific products to enable open, high performance telecom networks. That's both in the form of host processors, as well as accelerators. One layer up, of course, is the stuff that we're known for, subsystems, compute, storage, the hardware infrastructure that forms the foundation for telco clouds. A layer above that, all of the cloud software layer, the virtualization and containerization software, and all of the usual suspects there, all of whom are very good partners of ours, and we're looking to expand that pretty broadly this year. And then at the top of the layer cake, all of the network functions, all of the VNFs and CNFs that were once kind of the top of proprietary stacks, that are now opening up and being delivered as well-formed containers that can run on these clouds. So, we're focusing on all of those, if you will, product partnerships, and there is a services wrapper around all of it. The systems integration necessary to make these systems part of a carrier's network, which of course, has been running for a long time, and needs to be integrated with in a very specific way. And so, all of that together kind of forms the ecosystem, all of those are partners, and we're really excited about being at the heart of it. >> Interesting, it's not like we've never seen this movie before, which is, it's sort of repeating itself in telco. Aaron, you heard my little intro up front about the need to modernize infrastructure, I wonder if I could touch on another major trend, which we're seeing is the cloud, and I'm talkin' about not only public, but private and hybrid cloud. The public cloud is an opportunity, but it's also a threat for telcos. Telecom providers are lookin' to the public cloud for specific use cases, you think about like bursting for an iPhone launch or whatever. But at the same time, these cloud vendors, they're sort of competing with telcos. They're providing local zones, for example, sometimes trying to do an end run on the telco connectivity services, so telecom companies, they have to find the right balance between what they own and what they rent. And I wonder if you could add some color as to what you see in the market and what Dell specifically is doing to support these trends. >> Yeah, and I think the most important thing is what we're seeing, as you said, is these aren't things that we haven't seen before. And I think that telecom is really going through their own set of cloud transformations, and so, one of the hot topics in the industry now is, what is telco cloud? And what does that look like going forward? And it's going to be, as you said, a combination of services that they offer and services that they leverage. But at the end of the day, it's going to help them modernize how they deliver telecommunication services to their customers, and then provide value added services on top of that. From a Dell perspective, we're really providing the technologies to provide the underpinnings to lay a foundation on which that network can be built, whether that's best-of-breed servers that are built and designed for telecom environments. Recently, we announced our Infrastructure Blocks program, partnering with virtualization providers, to be able to provide engineered systems that dramatically simplify how our customers can deploy, manage, and lifecycle manage, throughout day two operations, an entire cloud environment.
And whether they're using Red Hat, whether they're using Wind River, or VMware, or other virtualization layers, they can deploy the right virtualization layer at the right part of their network to support the applications they're looking to drive. And Dell is looking to solve how they simplify and manage all of that, both from a hardware as well as a management software perspective. So, this is really what Dell's doing to, again, partner with the broader technology community, to help make that telco cloud a reality. >> Aaron, let's stay here for a second, I'm interested in some of the use cases that you're going after with customers. You've got Edge infrastructure, remote work, 5G, where's security fit, what are the focus areas for Dell, and can we double click on that a little bit? >> Yeah, I mean, I think there are two main areas of the telecommunication industry that we're talking to. One, we've really been talking about the sort of the network buyer, how do they modernize the core, the network Edge, the RAN capabilities to deliver traditional telecommunication services, and modernize that as they move into 5G and beyond. I think the other side of the business is, telecoms are really looking from a line of business perspective to figure out how do they monetize that network, and be able to deliver value added services to their enterprise customers on top of these new networks. So, you were just touching on a couple of things that are really critical. In the enterprise space, AI and IoT are driving a tremendous amount of innovation out there, and there's a need for being able to support and manage Edge compute at scale, be able to provide connectivity, like private mobility, and 4G and 5G, being able to support things like mobile workforces and client capabilities, to be able to access these devices that are around all of these Edge environments of the enterprises. And telecoms are seeing that as an opportunity for them to not only provide connectivity, but how do they extend their cloud out into these enterprise environments with compute, with connectivity, with client and connectivity resources, and even also provide protection for those environments as well. So, these are areas that Dell is historically very strong at. Being able to provide compute, be able to provide connectivity, and being able to provide data protection and client services, we are looking to work closely with lines of businesses to be able to develop solutions that they can bring to market in combination with us, to be able to serve their end user customers and their enterprises. So, those are really the two key areas, not only the network buyer, but being able to enable the lines of business to go and capitalize on the services they're developing for their customers. >> I think that line of business aspect is key, I mean, the telcos have had to sit back and provide the plumbing, cost per bit goes down, data consumption going through the roof, all the over-the-top guys have had a field day with the data, and the customer relationships, and now it's almost like the revenge (chuckles) of the telcos. Dennis, I wonder if we could talk about the future. What can we expect in the years ahead from Dell, if you break out the binoculars a little bit? >> Yeah, I think you hit it earlier. We've seen the movie before. This has happened in the IT data center. We went from proprietary vertical solutions to horizontal open systems. We went from client-server to software-defined, open hardware, cloud native.
And the trend is likely to be exactly that in the telecom industry, because that's what the operators want. They're not naive to what's happened in the IT data center, they all run very large data centers. And they're trying to get some of the scale economies, some of the agility, the cost of ownership benefits, for the reasons Aaron just discussed. It's clear, as you point out, this industry's been really defined by the inability to stop investing, and the difficulty of monetizing that investment. And I think now, everybody's looking at this 5G, and frankly, 5G plus 6G and beyond, as the opportunity to really go get a chunk of that revenue, and Enterprise Edge is the target. >> And 5G is touching so many industries, and that kind of brings me, Aaron, into Mobile World Congress. I mean, you look at the floor layout, it's amazing. You got Industry 4.0, you've got our traditional industry and telco colliding. There's public policy. So, give us a teaser to Mobile World Congress 23, what's on deck at the show from Dell? >> Yeah, we're really excited about Mobile World Congress. This, as you know, is a massive event for the industry every year. And it's really the event that the whole industry uses to kick off this coming year. So, we're going to be using this obviously to talk to our customers and our partners about what Dell's looking to do, and what we're innovating on right now, and what we're looking to partner with them around. In the front of the house, we're going to be doin', we're going to be highlighting 13 different solutions and demonstrations to be able to show our customers what we're doing today, and show them the use cases, and put them into action, so they get to actually look and feel, and touch, and experience what it is that we're working around. Obviously, meetings are important, everybody knows Mobile World Congress is the place to get those meetings and kick off the year. So, we're going to have, we're lookin' at several hundred meetings, hundreds of meetings that we're going to be lookin' to have across the industry with our customers and partners in the broader community. And of course, we've also got technology that's going to be in a variety of different partner spaces as well. So, you can come and see us in hall three, but we're also going to have technologies kind of spread all over the floor. And of course, there's always theCUBE. You're going to be able to see us live all four days, all day, every day. You're going to be hearing our executives, our partners, our customers, talk about what Dell is doing to innovate in the industry, and how we're looking to leverage the broader, open ecosystem to be able to transform the network, and what we're lookin' to do. So, in that space, we're going to be focusing on what we're doing from an ecosystem perspective, our infrastructure focus. We'll be talking about what we're doing to support telco cloud transformation. And then finally, as we talked about earlier, how are we helping the lines of business within our telecoms monetize the opportunity? So, these are all different things we're really excited to be focusing on, and look forward to the event next month. >> Yeah, it's going to be awesome in Barcelona at the Fira, as you say, Dell's big presence in hall three, Orange is in there, Deutsche Telekom, Intel's in hall three. VMware's there, Nokia, Vodafone, you got some great things to see there. Check that out, and of course, theCUBE, we are super excited to be collaborating with you, we got a great setup.
We're in the walkway right between halls four and five, right across from the government of Catalonia, who are the host partners for the event, so there's going to be a ton of action there. Guys, can't wait to see you there, really appreciate your time today. >> Great, thanks. >> Alright, Mobile World Congress, theCUBE's coverage starts on February 27th right after the keynotes. So, first thing in the morning, east coast time, we'll be broadcasting, as Aaron said, all week, Monday through Thursday, on the show floor, check that out at thecube.net. siliconangle.com has all the written coverage, and go to dell.com to see what's happenin' there and get all the action from the event. Don't miss us, this is Dave Vellante, we'll see you there. (electronic music)

Published Date : Feb 13 2023

Breaking Analysis: Google's Point of View on Confidential Computing


 

>> From theCUBE studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data and isolating data from apps in a fenced-off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology and a marketing ploy by cloud providers aimed at calming customers who are cloud phobic. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show, but before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing. I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year as shown here. And this data is pretty much across the board by industry, by region, by size of company. I mean we dug into it and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit has long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. Arm, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now the argument against confidential computing is that it narrowly focuses on memory encryption and it doesn't solve the biggest problems in security. Multiple system images, updates, different services, and the entire code flow aren't directly addressed by memory encryption; rather, to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign for memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free. There has been a lack of standardization and interoperability between different confidential computing approaches, but the Confidential Computing Consortium was established in 2019 ostensibly to accelerate the market and influence standards.
Notably, AWS is not part of the consortium, likely because the politics of the consortium were probably a conundrum for AWS, because the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS's words, but I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it's got a lead with its Annapurna acquisition. It was way ahead with Arm integration, and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the Confidential Computing Consortium is Google, along with many high profile names including Arm, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic, Nelly Porter is head of product for GCP confidential computing and encryption, and Dr. Patricia Florissi is the technical director for the office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm owning a lot of interesting activities in Google, and again, security or infrastructure security is what I usually own. And we are talking about encryption, and encryption and confidential computing is a part of that portfolio. An additional area that I contribute, together with my team, to Google and our customers is secure software supply chain, because you need to trust your software. Is it operating in your confidential environment? To have an end-to-end story about whether you believe that your software and your environment are doing what you expect, it's my role. >> Got it. Okay. Patricia? >> Well, I am a technical director in the office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologists from large corporations, institutions, and a lot of successful startups as well. And we have two main goals. First, we walk side by side with some of our largest, more strategic or most strategic customers, and we help them solve complex engineering technical problems. And second, we advise Google and Google Cloud engineering and product management on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO, I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general.
And again, because data is not brought to cloud to have a huge graveyard, we need to ensure that this data is actually indexed. Again, there are some insights driven and drawn from this data. You have to process this data, and confidential computing is here to help. Now we have end-to-end protection of our customer's data when they bring the workloads and data to cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain, do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters, because at the end of the day, it reduces more and more the customer's trust boundaries and the attack surface. That's about reducing that periphery, the boundary in which the customer needs to mind about trust and safety. And in a way, it is a natural progression that you're using encryption to secure and protect the data. In the same way that we are encrypting data in transit and at rest, now we are also encrypting data while in use. And among other benefits, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry, even though it's highly focused on, I wouldn't say highly focused, but very beneficial for highly regulated industries. It applies to all industries. And if you look at financing for example, where bankers are trying to detect fraud, and specifically double financing, where a customer is actually trying to get financing on an asset, let's say a boat or a house, and then goes to another bank and gets another financing on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the data. >> Interesting. And I want to understand that a little bit more, but I'm going to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy, I talked about this upfront, by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption and it doesn't address many other problems. It is overhyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine, with this statement, but most importantly, we are mixing multiple concepts, I guess. And exactly as Patricia said, we need to look at the end-to-end story, not, again, the mechanism of how confidential computing is trying to, again, execute and protect a customer's data, and why it's so critically important. Because what confidential computing was able to do, in addition to isolating our tenants in the multi-tenant environments the cloud offers, is to offer additional, stronger isolation. We call it cryptographic isolation. It's why customers will have more trust, to other customers, the tenants that are running on the same host, but also to us, because they don't need to worry about, again, threats and more malicious attempts to penetrate the environment.
So what confidential computing is helping us to offer our customers is stronger isolation between tenants in this multi-tenant environment, but also, incredibly important, stronger isolation of our customers, so tenants, from us. We are also writing code, and we as software providers will also make mistakes or have some zero days, sometimes, again, introduced by us, sometimes introduced by our adversaries. But what I'm trying to say is, by creating this cryptographic layer of isolation between us and our tenants, and amongst those tenants, we're really providing meaningful security to our customers and eliminating some of the worries that they have running on multi-tenant spaces, or even collaborating together on this very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. Operator access, yeah, maybe I trust my cloud provider, but if I can fence off your access even better, I'll sleep better at night. Separating code from the data, everybody, Arm, Intel, AMD, Nvidia, others, they're all doing it. I wonder if, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google, and now the industry's way of dealing with confidential computing, is to ensure that three main properties are actually preserved. Customers don't need to change the code. They can operate on those VMs exactly as they would with normal non-confidential VMs, but to give them this opportunity of lift and shift, or not changing their apps, and performing, and having very, very, very low latency and scale as any cloud can, something that Google actually pioneered in confidential computing. I think we need to open up and explain how this magic was actually done. And as I said, it's, again, the whole entire system that has to change to be able to provide this magic. And I would start with, we have this concept of root of trust, and root of trust is where we will ensure that this machine, the whole entire host, has an integrity guarantee, meaning nobody is changing my code at the lowest level of the system. And we introduced this in 2017, called Titan. It was our specific ASIC, a specific, again, system on every single motherboard that we have, that ensures that your low level firmware, your actual system code, your kernel, the most powerful system, is actually properly configured and not changed, not tampered with. We do it for everybody, confidential computing included. But for confidential computing, what we have to change is, we bring in AMD, or, again, future silicon vendors, and we have to trust their firmware, their way to deal with our confidential environments. And that's why we have an obligation to validate integrity, not only of our software and our firmware, but also of the firmware and software of our vendors, the silicon vendors. So when we are booting this machine, as you can see, we validate that the integrity of all of this system is in place. It means nobody is touching, nobody is changing, nobody is modifying it. But then we have this concept of the AMD secure processor, it's a special ASIC, a specific thing that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker thread in our Hadoop or Spark capability. We offer all of that.
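To make the mechanism Nelly is describing a little more concrete, here is a deliberately simplified Python sketch of the two ideas in play: a root of trust that compares boot-time measurements against known-good values, and a secure processor that holds a fresh, non-exportable key for each VM. Everything here, the names, the digests, the XOR stand-in for memory encryption, is hypothetical and illustrative only; it is a mental model, not Google's or AMD's actual implementation.

```python
import hashlib
import os

def measure(component: bytes) -> str:
    """Hash a boot component; stands in for a hardware measurement."""
    return hashlib.sha256(component).hexdigest()

def verify_boot(measured: dict[str, str], expected: dict[str, str]) -> bool:
    """Root-of-trust check: every measured component must match its expected digest."""
    return measured == expected

class ToySecureProcessor:
    """Stands in for a per-VM key generator; keys never leave this object."""

    def __init__(self) -> None:
        self._vm_keys: dict[str, bytes] = {}  # private, i.e. "not exportable"

    def provision_vm(self, vm_id: str) -> None:
        # A fresh, random, ephemeral key per VM; host software never sees it.
        self._vm_keys[vm_id] = os.urandom(32)

    def encrypt_page(self, vm_id: str, page: bytes) -> bytes:
        # Real hardware does AES in the memory controller; XOR keeps the toy short.
        key = self._vm_keys[vm_id]
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(page))

# Usage: the platform only hands out VM keys after the boot integrity check passes.
golden = {"firmware": measure(b"known-good firmware"), "kernel": measure(b"known-good kernel")}
booted = {"firmware": measure(b"known-good firmware"), "kernel": measure(b"known-good kernel")}

if verify_boot(booted, golden):
    asic = ToySecureProcessor()
    asic.provision_vm("vm-1234")
    ciphertext = asic.encrypt_page("vm-1234", b"sensitive tenant data")
```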
And those keys are not available to us. It's the best keys ever in encryption space because when we are talking about encryption, the first question that I'm receiving all the time, where's the key, who will have access to the key? Because if you have access to the key then it doesn't matter if you encrypted or not. So, but the case in confidential computing provides so revolutionary technology, us cloud providers, who don't have access to the keys. They sitting in the hardware and they head to memory controller. And it means when hypervisors that also know about these wonderful things saying I need to get access to the memories that this particular VM trying to get access to, they do not decrypt the data, they don't have access to the key because those keys are random, ephemeral and per VM, but the most importantly, in hardware not exportable. And it means now you would be able to have this very interesting role that customers or cloud providers will not be able to get access to your memory. And what we do, again, as you can see our customers don't need to change their applications, their VMs are running exactly as it should run and what you're running in VM, you actually see your memory in clear, it's not encrypted, but God forbid is trying somebody to do it outside of my confidential box. No, no, no, no, no, they would not be able to do it. Now you'll see cyber and it's exactly what combination of these multiple hardware pieces and software pieces have to do. So OS is also modified. And OS is modified such way to provide integrity. It means even OS that you're running in your VM box is not modifiable and you, as customer, can verify. But the most interesting thing, I guess, how to ensure the super performance of this environment because you can imagine, Dave, that encrypting and it's additional performance, additional time, additional latency. So we were able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance and scales as they would expect from cloud providers like Google. >> Okay, thank you. Excellent. Appreciate that explanation. So, again, the narrative on this as well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance, key management as they say is key. Now humans aren't managing the keys, the machines are managing them. So Patricia, my question to you is, in addition to, let's go pre confidential computing days, what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, the customer cares and they want to know whether their systems are protected from outside or unauthorized access, and that recovered with Nelly, that it is. Confidential computing actually ensures that the applications and data internals remain secret, right? The code is actually looking at the data, the only the memory is decrypting the data with a key that is ephemeral and per VM and generated on demand. Then you have the second point where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with or impacted by outside actors. And what confidential computing ensures is that application internals are not tampered with. 
So the application, the workload as we call it, that is processing the data, also has not been tampered with and preserves integrity. I would also say that this is all verifiable. So you have attestation, and this attestation actually generates a log trail, and the log trail guarantees that, provides a proof that it was preserved. And I think that it offers also a guarantee of what we call sealing, this idea that the secrets have been preserved and not tampered with, confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned, I think I heard you say that the applications, it's transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value add? How do partners participate in this, the ecosystem, or maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> And a fantastic question, by the way. And it's a very difficult and definitely complicated world, because to be able to provide these guarantees, actually a lot of work was done by the community. Google very much operates in the open, so, again, for our operating system, we are working with operating system repositories and OS vendors to ensure that all the capabilities that we need are part of the kernels, are part of the releases, and it's available for customers to understand and even explore, if they have fun exploring a lot of code. We have also modified, together with our silicon vendors, the kernel, the host kernel, to support this capability, and it means working with this community to ensure that all of those patches are there. We also worked with every single silicon vendor as you've seen, and that's where I probably feel that Google contributed quite a bit to this whole effort. We moved our industry, our community, our vendors to understand the value of easy to use confidential computing, of removing barriers. And now, I don't know if you noticed, Intel is pulling the lead and also announcing their Trust Domain Extensions, a very similar architecture. And no surprise, it's, again, a lot of work done with our partners to, again, convince them, work with them, and make this capability available. The same with Arm: this year, actually last year, Arm announced their future design for confidential computing. It's called Confidential Computing Architecture. And it's also influenced very heavily by similar ideas from Google and the industry overall. So there's a lot of work in the Confidential Computing Consortium that we are doing, for example, simply to mention, to ensure interop, as you mentioned, between different confidential environments of cloud providers. They want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data workloads or secrets with them. So we are coming together as a community, and we have this attestation SIG, the, again, community based systems that we want to build and influence, and we work with Arm and every other cloud provider to ensure that we can interop, and it means it doesn't matter where confidential workloads will be hosted, but they can exchange the data in a secure, verifiable, and controlled by customers way.
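The attest-before-you-share idea Nelly describes can be sketched in a few lines of Python. In this hypothetical example, a data owner challenges a remote confidential environment, checks the returned report against an expected workload measurement and a freshness nonce, and only then releases key material. The report fields, the HMAC stand-in for a hardware signature, and the verification policy are all invented for illustration; real attestation formats and verification services are far richer.

```python
import hashlib
import hmac
import os
from dataclasses import dataclass

@dataclass
class AttestationReport:
    """A stripped-down stand-in for a hardware-signed attestation quote."""
    workload_digest: str   # measurement of the code that will touch the data
    environment: str       # e.g. "confidential-vm"
    nonce: str             # echoes the verifier's challenge to prove freshness
    mac: str               # toy stand-in for the silicon vendor's signature

def sign(report_fields: str, vendor_key: bytes) -> str:
    # Real reports are signed by keys rooted in the silicon vendor; HMAC is a stand-in.
    return hmac.new(vendor_key, report_fields.encode(), hashlib.sha256).hexdigest()

def verify_report(report: AttestationReport, vendor_key: bytes,
                  expected_digest: str, expected_nonce: str) -> bool:
    fields = f"{report.workload_digest}|{report.environment}|{report.nonce}"
    return (
        hmac.compare_digest(report.mac, sign(fields, vendor_key))
        and report.workload_digest == expected_digest
        and report.environment == "confidential-vm"
        and report.nonce == expected_nonce
    )

# Usage: the data owner issues a challenge, the remote environment answers with a
# report, and the data key is released only if verification succeeds.
vendor_key = os.urandom(32)                      # hypothetical vendor signing key
challenge = os.urandom(16).hex()
trusted_workload = hashlib.sha256(b"approved analytics container").hexdigest()

fields = f"{trusted_workload}|confidential-vm|{challenge}"
report = AttestationReport(trusted_workload, "confidential-vm", challenge,
                           sign(fields, vendor_key))

if verify_report(report, vendor_key, trusted_workload, challenge):
    data_key = os.urandom(32)   # in practice fetched from a key-management service
    # ...send data_key to the attested environment over a secure channel...
```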
And to do it, we need to continue what we are doing, working open, again, and contribute with our ideas and ideas of our partners to this role to become what we see confidential computing has to become, it has to become utility. It doesn't need to be so special, but it's what we want it to become. >> Let's talk about, thank you for that explanation. Let's talk about data sovereignty because when you think about data sharing, you think about data sharing across the ecosystem and different regions and then of course data sovereignty comes up. Typically public policy lags, the technology industry and sometimes is problematic. I know there's a lot of discussions about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment maybe with the pace of technology. One of the frequent examples is when you delete data, can you actually prove that data is deleted with a hundred percent certainty? You got to prove that and a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty. And I don't want to give the impression that confidential computing addresses it all. That's why we want to step back and say, hey, digital sovereignty includes data sovereignty where we are giving you full control and ownership of the location, encryption and access to your data. Operational sovereignty where the goal is to give our Google Cloud customers full visibility and control over the provider operations, right? So if there are any updates on hardware, software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty where the customer wants to ensure that they can run their workloads without dependency on the provider's software. So they have sometimes is often referred as survivability, that you can actually survive if you are untethered to the cloud and that you can use open source. Now let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. And we typically focus on saying, hey, we need to care about data residency. We care where the data resides because where the data is at rest or in processing, it typically abides to the jurisdiction, the regulations of the jurisdiction where the data resides. And others say, hey, let's focus on data protection. We want to ensure the confidentiality and integrity and availability of the data, which confidential computing is at the heart of that data protection. But it is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, is about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting firewall protections and login accesses. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data and the code. And that's similar because with data sovereignty we care about whether it resides, where, who is operating on the data. 
But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place of how my data is going to be used. And if you look at a lot of the regulation today and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia-X, there is a movement of saying the two parties, the provider of the data and the receiver of the data are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, that the data will be used for the purposes that it was intended and specified in the contract. And if you actually bring together, and this is the exciting part, confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment, that the workload is cryptographically verified that there is the workload that was meant to process the data and that the data will be only used when abiding to the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean it was a deep dive, I mean brief, but really detailed. So I appreciate that, especially the verification of the enforcement. Last question, I met you two because as part of my year end prediction post, you guys sent in some predictions and I wasn't able to get to them in the predictions post. So I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in 23 and what's the maturity curve look like, this decade in your opinion? Maybe each of you could give us a brief answer. >> So my prediction in five, seven years, as I started, it'll become utility. It'll become TLS as of, again, 10 years ago we couldn't believe that websites will have certificates and we will support encrypted traffic. Now we do and it's become ubiquity. It's exactly where confidential computing is getting and heading, I don't know we deserve yet. It'll take a few years of maturity for us, but we will be there. >> Thank you. And Patricia, what's your prediction? >> I will double that and say, hey, in the future, in the very near future, you will not be able to afford not having it. I believe as digital sovereignty becomes evermore top of mind with sovereign states and also for multi national organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It'll become the default, if I say, mode of operation. I like to compare that today is inconceivable. If we talk to the young technologists, it's inconceivable to think that at some point in history, and I happen to be alive that we had data at rest that was not encrypted, data in transit that was not encrypted, and I think that will be inconceivable at some point in the near future that to have unencrypted data while in use. >> And plus I think the beauty of the this industry is because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover. 
I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those, as we said, who remain skeptical. The truth probably lies somewhere in between and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look, as with any new tech, it's important to carefully evaluate the potential benefits, the drawbacks, and make informed decisions based on the specific requirements in the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing into their architectures. Competition, in our view, will moderate price hikes. And at the end of the day, this is under the covers technology that essentially will come for free. So we'll take it. I want to thank our guests today, Nelly and Patricia from Google, and thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio, Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters. And Rob Hof is our editor-in-chief over at siliconangle.com. Does some great editing for us, thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or dm me @DVellante. And you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights, powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (upbeat music)
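As a closing illustration of the segment's two recurring themes, attestation and user control, here is one more hypothetical Python sketch: a data-use contract that is enforced before data is released to a receiving workload, so data is only handed over when the workload is attested, runs approved code, and declares a purpose the contract allows. The contract fields and checks are invented for this example; real policy frameworks of the kind IDSA and Gaia-X discuss are considerably more elaborate.

```python
from dataclasses import dataclass

@dataclass
class DataContract:
    """What the data owner agreed the receiver may do with the data."""
    allowed_purposes: set[str]
    allowed_workloads: set[str]   # digests of code the owner has approved
    requires_confidential_env: bool = True

@dataclass
class WorkloadClaim:
    """What the receiving side claims (and has attested) about its workload."""
    purpose: str
    workload_digest: str
    attested_confidential: bool

def may_release(contract: DataContract, claim: WorkloadClaim) -> bool:
    """Enforce the contract: purpose, approved code, and a confidential environment."""
    return (
        claim.purpose in contract.allowed_purposes
        and claim.workload_digest in contract.allowed_workloads
        and (claim.attested_confidential or not contract.requires_confidential_env)
    )

# Usage: fraud-detection collaboration between two banks, as in Patricia's example.
contract = DataContract(
    allowed_purposes={"fraud-detection"},
    allowed_workloads={"sha256:approved-fraud-model"},
)
claim = WorkloadClaim(
    purpose="fraud-detection",
    workload_digest="sha256:approved-fraud-model",
    attested_confidential=True,
)

assert may_release(contract, claim)          # released
claim.purpose = "ad-targeting"
assert not may_release(contract, claim)      # blocked: purpose not in the contract
```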

Published Date : Feb 11 2023

Google's PoV on Confidential Computing NO PUB


 

>> Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start, and then Patricia you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start, I'm honing a lot of interesting activities in Google and again, security or infrastructure securities that I usually hone, and we're talking about encryption, Antware encryption, and confidential computing is a part of portfolio. In additional areas that I contribute to get with my team to Google and our customers is secure software supply chain. Because you need to trust your software. Is it operating your confidential environment to have end to end story about if you believe that your software and your environment doing what you expect, it's my role. >> Got it, okay. Patricia? >> Well I am a technical director in the office of the CTO, OCTO for short, in Google Cloud. And we are a global team. We include former CTOs like myself and senior technologies from large corporations, institutions, and a lot of success for startups as well. And we have two main goals. First, we work side by side with some of our largest, more strategic or most strategic customers and we help them solve complex engineering technical problems. And second, we are device Google and Google Cloud engineering and product management on emerging trends in technologies to guide the trajectory of our business. We are unique group, I think, because we have created this collaborative culture with our customers. And within OCTO I spend a lot of time collaborating with customers in the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent, thank you for that both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool. And it's one of the tools in our toolbox. And confidential computing is a way how would help our customers to complete this very interesting end to end lifecycle of their data. And when customers bring in the data to Cloud and want to protect it, as they ingest it to the Cloud, they protect it address when they store data in the Cloud. But what was missing for many, many years is ability for us to continue protecting data and workloads of our customers when they running them. And again, because data is not brought to Cloud to have huge graveyard, we need to ensure that this data is actually indexed. Again there is some insights driven and drawn from this data. You have to process this data and confidential computing here to help. Now we have end to end protection of our customer's data when they bring the workloads and data to Cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit but before we do Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain, do you think it's transformative for customers and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential matters. Because at the end of the day it reduces more and more the customers thrush boundaries and the attack surface, that's about reducing that periphery, the boundary, in which the customer needs to mind about trust and safety. 
And in a way is a natural progression that you're using encryption to secure and protect data in the same way that we are encrypting data in transit and at rest. Now we are also encrypting data while in use. And among other beneficial I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industry. Even though it's highly focused on, I wouldn't say highly focused, but very beneficial for highly regulated industries. It applies to all of industries. And if you look at financing for example, where bankers are trying to detect fraud and specifically double finance where you are a customer is actually trying to get a finance on an asset, let's say a boat or a house and then it goes to another bank and gets another finance on that asset. Now bankers would be able to collaborate and detect fraud while preserving confidentiality and privacy of the of the data. >> Interesting, and I want to understand that a little bit more but I'm going to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy. I talked about this upfront, by Cloud providers that are just trying to placate people that are scared of the Cloud. And I'm presuming you don't agree with that but I'd like you to weigh in here. The argument is confidential computing is just memory encryption, it doesn't address many other problems, it is overhyped by Cloud providers. What do you say to that line of thinking? >> I absolutely disagree as you can imagine, it's a crazy statement. But the most importantly is we mixing multiple concepts I guess. And exactly as Patricia said, we need to look at the end-to-end story not again the mechanism of how confidential computing trying to again execute and protect customer's data, and why it's so critically important. Because what confidential computing was able to do it's in addition to isolate our tenants in multi-tenant environments the Cloud over. To offer additional stronger isolation, we called it cryptographic isolation. It's why customers will have more trust to customers and to other customers, the tenants that's running on the same host but also us, because they don't need to worry about against threats and more malicious attempts to penetrate the environment. So what confidential computing is helping us to offer our customers, stronger isolation between tenants in this multi-tenant environment but also incredibly important, stronger isolation of our customers. So tenants from us, we also writing code, we also software providers will also make mistakes or have some zero days sometimes again us introduced, sometimes introduced by our adversaries. But what I'm trying to say by creating this cryptographic layer of isolation between us and our tenants, and amongst those tenants, they're really providing meaningful security to our customers and eliminate some of the worries that they have running on multi-tenant spaces or even collaborating together this very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you, appreciate that. And I, you know, I think malicious code is often a threat model missed in these narratives. You know, operator access, yeah, could maybe I trust my Clouds provider, but if I can fence off your access even better I'll sleep better at night. 
Separating a code from the data, everybody's arm Intel, AM, Invidia, others, they're all doing it. I wonder if Nell, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely, and Dave, the whole idea for Google and industry way of dealing with confidential computing is to ensure as it's three main property is actually preserved. Customers don't need to change the code. They can operate in those VMs exactly as they would with normal non-confidential VMs. But to give them this opportunity of lift and shift or no changing their apps and performing and having very, very, very low latency and scale as any Cloud can, something that Google actually pioneered in confidential computing. I think we need to open and explain how this magic was actually done. And as I said, it's again the whole entire system have to change to be able to provide this magic. And I would start with we have this concept of root of trust and root of trust where we will ensure that this machine, the whole entire post has integrity guarantee, means nobody changing my code on the most low level of system. And we introduce this in 2017 code Titan. Those our specific ASIC specific, again inch by inch system on every single motherboard that we have, that ensures that your low level former, your actually system code, your kernel, the most powerful system, is actually proper configured and not changed, not tempered. We do it for everybody, confidential computing concluded. But for confidential computing what we have to change we bring in a MD again, future silicon vendors, and we have to trust their former, their way to deal with our confidential environments. And that's why we have obligation to validate integrity not only our software and our firmware but also firmware and software of our vendors, silicon vendors. So we actually, when we booting this machine as you can see, we validate that integrity of all of this system is in place. It means nobody touching, nobody changing, nobody modifying it. But then we have this concept of the secure processor. It's special Asics best, specific things that generate a key for every single VM that our customers will run or every single node in Kubernetes, or every single worker thread in our Spark capability. We offer all of that, and those keys are not available to us. It's the best keys ever in encryption space. Because when we are talking about encryption the first question that I'm receiving all the time, where's the key, who will have access to the key? Because if you have access to the key then it doesn't matter if you encrypt it enough. But the case in confidential computing quite so revolutionary technology, ask Cloud providers who don't have access to the keys. They're sitting in the hardware and they fed to memory controller. And it means when Hypervisors that also know about these wonderful things, saying I need to get access to the memories that this particular VM I'm trying to get access to. They do not encrypt the data, they don't have access to the key. Because those keys are random, ephemeral and VM, but the most importantly in hardware not exportable. And it means now you will be able to have this very interesting role that customers all Cloud providers, will not be able to get access to your memory. 
And what we do, again, as you can see our customers don't need to change their applications. Their VMs are running exactly as it should run. And what you're running in VM you actually see your memory in clear, it's not encrypted. But God forbid is trying somebody to do it outside of my confidential box. No, no, no, no, no, you will not be able to do it. Now you'll see cybernet. And it's exactly what combination of these multiple hardware pieces and software pieces have to do. So OS is also modified, and OS is modified such way to provide integrity. It means even OS that you're running in UVM bucks is not modifiable and you as customer can verify. But the most interesting thing I guess how to ensure the super performance of this environment because you can imagine, Dave, that's increasing it's additional performance, additional time, additional latency. So we're able to mitigate all of that by providing incredibly interesting capability in the OS itself. So our customers will get no changes needed, fantastic performance, and scales as they would expect from Cloud providers like Google. >> Okay, thank you. Excellent, appreciate that explanation. So you know again, the narrative on this is, well you know you've already given me guarantees as a Cloud provider that you don't have access to my data but this gives another level of assurance. Key management as they say is key. Now you're not, humans aren't managing the keys the machines are managing them. So Patricia, my question to you is in addition to, you know, let's go pre-confidential computing days what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality the customer cares then they want to know whether their systems are protected from outside or unauthorized access. And that we covered with Nelly that it is. Confidential computing actually ensures that the applications and data antennas remain secret, right? The code is actually looking at the data only the memory is decrypting the data with a key that is ephemeral, and per VM, and generated on demand. Then you have the second point where you have code and data integrity and now customers want to know whether their data was corrupted, tempered, with or impacted by outside actors. And what confidential computing insures is that application internals are not tampered with. So the application, the workload as we call it, that is processing the data it's also it has not been tempered and preserves integrity. I would also say that this is all verifiable. So you have attestation, and this attestation actually generates a log trail and the log trail guarantees that provides a proof that it was preserved. And I think that the offers also a guarantee of what we call ceiling, this idea that the secrets have been preserved and not tempered with. Confidentiality and integrity of code and data. >> Got it, okay, thank you. You know, Nelly, you mentioned, I think I heard you say that the applications, it's transparent,you don't have to change the application it just comes for free essentially. And I'm, we showed some various parts of the stack before. I'm curious as to what's affected but really more importantly what is specifically Google's value add? You know, how do partners, you know, participate in this? 
The ecosystem — or, maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> A fantastic question, by the way. And it's a very difficult and definitely complicated world, because to be able to provide these guarantees, a lot of work was done by the community. Google very much operates in the open. So again, for our operating system, we work in the operating system repositories with OS vendors to ensure that all the capabilities that we need are part of their kernels, are part of their releases, and are available for customers to understand and even explore, if they have fun exploring a lot of code. We have also modified, together with our silicon vendors, the kernel — the host kernel — to support this capability, and that means working with this community to ensure that all of those patches are there. We also worked with every single silicon vendor, as you've seen, and that's where I feel Google contributed quite a bit in this world. We moved our industry, our community, our vendors to understand the value of easy-to-use confidential computing, of removing barriers. And now, I don't know if you noticed, Intel is following the lead and also announcing their Trust Domain Extensions — a very similar architecture — and no surprise, it's again a lot of work done with our partners to convince them, work with them, and make this capability available. The same with ARM: this year — actually last year — ARM announced their future design for confidential computing. It's called the Confidential Compute Architecture, and it's also influenced very heavily by similar ideas from Google and the industry overall. So there's a lot of work in the Confidential Computing Consortium that we are doing — for example, simply to mention one, to ensure interop, as you mentioned, between the different confidential environments of cloud providers. We want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data, workloads, or secrets with them. So we are coming together as a community, and we have this attestation SIG, the community-based system that we want to build, influence, and work on with ARM and every other cloud provider to ensure that they can interop. And it means it doesn't matter where confidential workloads are hosted, they can exchange data in a way that is secure, verifiable, and controlled by customers. And to do that, we need to continue what we are doing: working in the open and contributing our ideas and the ideas of our partners to this role, so confidential computing becomes what we see it has to become — it has to become a utility. It doesn't need to be so special, but that's what we want it to become. >> Thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing, you think about data sharing across the ecosystem and different regions, and then of course data sovereignty comes up. Typically public policy lags the technology industry, and sometimes that's problematic. I know there's a lot of discussion about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment, maybe, with the pace of technology.
One of the frequent examples is when you delete data — can you actually prove the data is deleted with a hundred percent certainty? You've got to prove that, and there are a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty, and I don't want to give the impression that confidential computing addresses it all. That's why I want to step back and say, hey, digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption, and access to your data. Operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations, right? So if there are any updates to the hardware or software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is sometimes referred to as survivability: that you can actually survive if you are untethered from the cloud, and that you can use open source. Now let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. We typically focus on saying, hey, we need to care about data residency. We care where the data resides, because where the data is at rest or in processing, it typically abides by the jurisdiction, the regulations of the jurisdiction, where the data resides. And others say, hey, let's focus on data protection: we want to ensure the confidentiality, integrity, and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, it's about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting in firewall protections and login access. But once you were in, you were able to do everything you wanted with the data; an insider had access to all the infrastructure, the data, and the code. And that's similar, because with data sovereignty we care about where the data resides and who is operating on the data. But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place for how my data is going to be used. And if you look at a lot of the regulation today, and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia-X, there is a movement toward saying the two parties — the provider of the data and the receiver of the data — are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, the data will be used for the purposes that were intended and specified in the contract. And if you actually bring together — and this is the exciting part — confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment.
That the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean, it was a deep dive — brief, but really detailed — so I appreciate that, especially the verification of the enforcement. Last question. I met you two because, as part of my year-end prediction post, you guys sent in some predictions, and I wasn't able to get to them in the predictions post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23, and what does the maturity curve look like, you know, this decade, in your opinion? Maybe each of you could give us a brief answer. >> So my prediction: in five to seven years, as I said, it'll become a utility. It'll become like TLS. Even 10 years ago we couldn't believe that websites would have certificates and we would support encrypted traffic — now we do, and it's become ubiquitous. That's exactly where confidential computing is heading. I don't know if we are there yet; it'll take a few years of maturity for us, but we'll do that. >> Thank you, and Patricia, what's your prediction? >> I would double down on that and say, hey, in the future — the very near future — you will not be able to afford not having it. I believe that as digital sovereignty becomes ever more top of mind with sovereign states, and also for multinational organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It'll become the default, if I may say, mode of operation. I like to compare it this way: today it is inconceivable, if we talk to young technologists, to think that at some point in history — and I happen to have been alive then — we had data at rest that was not encrypted, data in transit that was not encrypted. And I think it will be just as inconceivable, at some point in the near future, to have unencrypted data while in use. >> You know, and plus, I think the beauty of this industry is that because there's so much competition, this essentially comes for free. I want to thank you both for spending some time on Breaking Analysis. There's so much more we could cover. I hope you'll come back to share the progress that you're making in this area and we can double-click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much.
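(Editor's note: the exchange above combines attestation with policy enforcement to give data owners control over how shared data is used. The sketch below is a purely conceptual Python illustration of that idea — a data provider checks a workload's attested measurement and a contract-style usage policy before releasing a data decryption key. All class names and the attestation format are hypothetical; real flows use signed hardware evidence and a verification service rather than plain Python objects.)

```python
# Conceptual sketch only: release a data key to a workload only if (a) its
# attested measurement matches the workload agreed in the contract and (b) the
# declared purpose is allowed by the data-sharing policy. All structures here
# are hypothetical stand-ins for signed attestation evidence.
from dataclasses import dataclass

@dataclass
class AttestationClaims:
    workload_measurement: str    # hash of the approved workload image
    confidential_platform: bool  # True if running in a confidential environment

@dataclass
class DataContract:
    approved_measurement: str
    allowed_purposes: set

def release_key(claims: AttestationClaims, contract: DataContract,
                purpose: str, data_key: bytes) -> bytes:
    if not claims.confidential_platform:
        raise PermissionError("workload is not running in a confidential environment")
    if claims.workload_measurement != contract.approved_measurement:
        raise PermissionError("workload is not the one agreed in the contract")
    if purpose not in contract.allowed_purposes:
        raise PermissionError(f"purpose '{purpose}' not permitted by the contract")
    return data_key  # only now does the provider hand over the decryption key

# Example: the provider allows fraud detection but nothing else.
contract = DataContract(approved_measurement="sha256:abc123...",
                        allowed_purposes={"fraud-detection"})
claims = AttestationClaims(workload_measurement="sha256:abc123...",
                           confidential_platform=True)
key = release_key(claims, contract, "fraud-detection", data_key=b"\x00" * 32)
```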

Published Date : Feb 10 2023

SUMMARY :

Dave Vellante talks with Google's Nelly Porter and Patricia Florissi about confidential computing: how a hardware root of trust, per-VM encryption keys, and attestation protect data while it is in use, without requiring customers to change their applications; how Google works with silicon vendors, OS communities, and the Confidential Computing Consortium on interoperability; and how confidential computing combined with policy enforcement supports data sovereignty and user control. Both guests predict it will become a default, utility-like capability within the next several years.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Nelly | PERSON | 0.99+
Patricia | PERSON | 0.99+
International Data Space Association | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
IDSA | ORGANIZATION | 0.99+
last year | DATE | 0.99+
2017 | DATE | 0.99+
two parties | QUANTITY | 0.99+
one | QUANTITY | 0.99+
two | QUANTITY | 0.99+
second point | QUANTITY | 0.99+
First | QUANTITY | 0.99+
ARM | ORGANIZATION | 0.99+
first question | QUANTITY | 0.99+
five | QUANTITY | 0.99+
both | QUANTITY | 0.99+
Intel | ORGANIZATION | 0.99+
two decades ago | DATE | 0.99+
Asics | ORGANIZATION | 0.99+
second | QUANTITY | 0.99+
Gaia X | ORGANIZATION | 0.99+
One | QUANTITY | 0.99+
each | QUANTITY | 0.98+
seven years | QUANTITY | 0.98+
OCTO | ORGANIZATION | 0.98+
one thought | QUANTITY | 0.98+
a decade ago | DATE | 0.98+
this year | DATE | 0.98+
10 years ago | DATE | 0.98+
Invidia | ORGANIZATION | 0.98+
'23 | DATE | 0.98+
today | DATE | 0.98+
Cloud | TITLE | 0.98+
three pillars | QUANTITY | 0.97+
one way | QUANTITY | 0.97+
hundred percent | QUANTITY | 0.97+
zero days | QUANTITY | 0.97+
three main property | QUANTITY | 0.95+
third pillar | QUANTITY | 0.95+
two main goals | QUANTITY | 0.95+
CTO | ORGANIZATION | 0.93+
Nell | PERSON | 0.9+
Kubernetes | TITLE | 0.89+
every single VM | QUANTITY | 0.86+
Nelly | ORGANIZATION | 0.83+
Google Cloud | TITLE | 0.82+
every single worker | QUANTITY | 0.77+
every single node | QUANTITY | 0.74+
AM | ORGANIZATION | 0.73+
double | QUANTITY | 0.71+
single motherboard | QUANTITY | 0.68+
single silicon | QUANTITY | 0.57+
Spark | TITLE | 0.53+
kernel | TITLE | 0.53+
inch | QUANTITY | 0.48+

Breaking Analysis: Google's PoV on Confidential Computing


 

>> From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. >> Confidential computing is a technology that aims to enhance data privacy and security by providing encrypted computation on sensitive data, isolating data and apps in a fenced-off enclave during processing. The concept of confidential computing is gaining popularity, especially in the cloud computing space, where sensitive data is often stored and of course processed. However, there are some who view confidential computing as an unnecessary technology and a marketing ploy by cloud providers aimed at calming customers who are cloud-phobic. Hello and welcome to this week's Wikibon CUBE Insights powered by ETR. In this Breaking Analysis, we revisit the notion of confidential computing, and to do so, we'll invite two Google experts to the show. But before we get there, let's summarize briefly. There's not a ton of ETR data on the topic of confidential computing; I mean, it's a technology that's deeply embedded into silicon and computing architectures. But at the highest level, security remains the number one priority being addressed by IT decision makers in the coming year, as shown here. And this data is pretty much across the board — by industry, by region, by size of company. I mean, we dug into it, and the only slight deviation from the mean is in financial services. The second and third most cited priorities, cloud migration and analytics, are noticeably closer to cybersecurity in financial services than in other sectors, likely because financial services has always been hyper security conscious, but security is still a clear number one priority in that sector. The idea behind confidential computing is to better address threat models for data in execution. Protecting data at rest and data in transit has long been a focus of security approaches, but more recently, silicon manufacturers have introduced architectures that separate data and applications from the host system. ARM, Intel, AMD, Nvidia and other suppliers are all on board, as are the big cloud players. Now, the argument against confidential computing is that it narrowly focuses on memory encryption and doesn't solve the biggest problems in security. Multiple system images, updates, different services and the entire code flow aren't directly addressed by memory encryption. Rather, to truly attack these problems, many believe that OSs need to be re-engineered with the attacker and hacker in mind. There are so many variables, and at the end of the day, critics say the emphasis on confidential computing made by cloud providers is overstated and largely hype. This tweet from security researcher Rodrigo Branco sums up the sentiment of many skeptics. He says, "Confidential computing is mostly a marketing campaign for memory encryption. It's not driving the industry towards the hard open problems. It is selling an illusion." Okay. Nonetheless, encrypting data in use and fencing off key components of the system isn't a bad thing, especially if it comes with the package essentially for free.
There has been a lack of standardization and interoperability between different confidential computing approaches, but the Confidential Computing Consortium was established in 2019, ostensibly to accelerate the market and influence standards. Notably, AWS is not part of the consortium, likely because the politics of the consortium were a conundrum for AWS, because the base technology defined by the consortium is seen as limiting by AWS. This is my guess, not AWS' words, but I think joining the consortium would validate a definition which AWS isn't aligned with. And two, it wants to lead with its Annapurna acquisition. It was way ahead with ARM integration, and so it probably doesn't feel the need to validate its competitors. Anyway, one of the premier members of the Confidential Computing Consortium is Google, along with many high-profile names, including Arm, Intel, Meta, Red Hat, Microsoft, and others. And we're pleased to welcome two experts on confidential computing from Google to unpack the topic. Nelly Porter is Head of Product for GCP Confidential Computing and Encryption, and Dr. Patricia Florissi is a Technical Director for the Office of the CTO at Google Cloud. Welcome Nelly and Patricia, great to have you. >> Great to be here. >> Thank you so much for having us. >> You're very welcome. Nelly, why don't you start, and then Patricia, you can weigh in. Just tell the audience a little bit about each of your roles at Google Cloud. >> So I'll start. I own a lot of interesting activities in Google, and security — infrastructure security — is what I usually own. We are talking about encryption, end-to-end encryption, and confidential computing is a part of that portfolio. An additional area that I contribute to, together with my team, for Google and our customers, is secure software supply chain — because you need to trust the software that operates in your confidential environment to have end-to-end security, to believe that your software and your environment are doing what you expect. That's my role. >> Got it. Okay, Patricia? >> Well, I am a Technical Director in the Office of the CTO, OCTO for short, in Google Cloud. We are a global team; we include former CTOs like myself and senior technologists from large corporations, institutions, and a lot of successful startups as well. And we have two main goals. First, we walk side by side with some of our largest, most strategic customers and help them solve complex engineering and technical problems. And second, we advise Google and Google Cloud engineering and product management on emerging trends and technologies to guide the trajectory of our business. We are a unique group, I think, because we have created this collaborative culture with our customers. And within OCTO, I spend a lot of time collaborating with customers and the industry at large on technologies that can address privacy, security, and sovereignty of data in general. >> Excellent. Thank you for that, both of you. Let's get into it. So Nelly, what is confidential computing from Google's perspective? How do you define it? >> Confidential computing is a tool, one of the tools in our toolbox. And confidential computing is a way we help our customers complete this very interesting end-to-end lifecycle of their data. When customers bring their data to the cloud, they want to protect it as they ingest it into the cloud, and they protect it at rest when they store the data in the cloud.
But what was missing for many, many years was the ability for us to continue protecting the data and workloads of our customers when they run them. And again, because data is not brought to the cloud to sit in a huge graveyard, we need to ensure that this data is actually indexed, that insights are driven and drawn from this data. You have to process this data, and confidential computing is here to help. Now we have end-to-end protection of our customers' data when they bring their workloads and data to the cloud, thanks to confidential computing. >> Thank you for that. Okay, we're going to get into the architecture a bit, but before we do, Patricia, why do you think this topic of confidential computing is such an important technology? Can you explain? Do you think it's transformative for customers, and if so, why? >> Yeah, I would maybe like to use one thought, one way, one intuition behind why confidential computing matters: because at the end of the day, it reduces more and more the customer's trust boundaries and the attack surface. It's about reducing that periphery, the boundary within which the customer needs to worry about trust and safety. And in a way it's a natural progression of using encryption to secure and protect data: in the same way that we are encrypting data in transit and at rest, now we are also encrypting data while in use. And among other benefits, I would say one of the most transformative ones is that organizations will be able to collaborate with each other and retain the confidentiality of the data. And that is across industries — even though it's very beneficial for highly regulated industries, it applies to all industries. If you look at finance, for example, where bankers are trying to detect fraud — specifically double financing, where a customer is actually trying to get financing on an asset, let's say a boat or a house, and then goes to another bank and gets another loan on that same asset — now bankers would be able to collaborate and detect fraud while preserving the confidentiality and privacy of the data. A conceptual sketch of that idea follows below. >> Interesting, and I want to understand that a little bit more, but I've got to push you a little bit on this, Nelly, if I can, because there's a narrative out there that says confidential computing is a marketing ploy — I talked about this up front — by cloud providers that are just trying to placate people that are scared of the cloud. And I'm presuming you don't agree with that, but I'd like you to weigh in here. The argument is confidential computing is just memory encryption, it doesn't address many other problems, and it is overhyped by cloud providers. What do you say to that line of thinking? >> I absolutely disagree, as you can imagine, Dave, with this statement. But most importantly, I guess we are mixing multiple concepts. Exactly as Patricia said, we need to look at the end-to-end story, not just the mechanism — how confidential computing tries to execute and protect customers' data, and why it's so critically important. Because what confidential computing was able to do, in addition to isolating our tenants in multi-tenant environments, is allow the cloud offering to provide additional, stronger isolation; they call it cryptographic isolation. That's why customers will have more trust toward other customers, the tenants running on the same host, but also toward us, because they don't need to worry as much about rogue actors and other malicious attempts to penetrate the environment.
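(Editor's note: a purely conceptual sketch of the double-financing example Patricia gives above. Two lenders compare only salted hashes of asset identifiers, flagging assets financed twice without exposing their raw loan books to each other. Everything here — the data shapes, the salt handling — is hypothetical and simplified; in practice the comparison would run inside an attested confidential workload.)

```python
# Conceptual sketch: detect assets financed by both banks without either bank
# revealing its raw loan book. Both parties hash asset IDs with a shared salt
# that is only provisioned inside the trusted (confidential) environment.
import hashlib

def blind(asset_ids, shared_salt: bytes):
    return {hashlib.sha256(shared_salt + a.encode()).hexdigest() for a in asset_ids}

def detect_double_financing(bank_a_ids, bank_b_ids, shared_salt: bytes):
    # In a real deployment this intersection would be computed inside an
    # attested confidential VM, so neither party sees the other's inputs.
    return blind(bank_a_ids, shared_salt) & blind(bank_b_ids, shared_salt)

salt = b"provisioned-only-inside-the-enclave"   # hypothetical
bank_a = ["BOAT-123", "HOUSE-77", "HOUSE-90"]
bank_b = ["HOUSE-77", "CAR-55"]
print(detect_double_financing(bank_a, bank_b, salt))  # one matching hash: HOUSE-77 was financed twice
```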
So what confidential computing helps us do is offer our customers stronger isolation between tenants in this multi-tenant environment, but also — incredibly important — stronger isolation of our customers, our tenants, from us. We also write code, we are also software providers, we also make mistakes or have some zero-days — sometimes introduced by us, sometimes introduced by our adversaries. What I'm trying to say is that by creating this cryptographic layer of isolation between us and our tenants, and among those tenants, we are really providing meaningful security to our customers and eliminating some of the worries that they have running in multi-tenant spaces, or even collaborating together with very sensitive data, knowing that this particular protection is available to them. >> Okay, thank you. Appreciate that. And I think malicious code is often a threat model missed in these narratives. You know, operator access. Yeah, maybe I trust my cloud provider, but if I can fence off your access even better, I'll sleep better at night. Separating the code from the data — everybody, ARM, Intel, AMD, Nvidia and others, they're all doing it. I wonder, Nelly, if we could stay with you and bring up the slide on the architecture. What's architecturally different with confidential computing versus how operating systems and VMs have worked traditionally? We're showing a slide here with some VMs, maybe you could take us through that. >> Absolutely. And Dave, the whole idea for Google, and now the industry's way of dealing with confidential computing, is to ensure that its three main properties are actually preserved. Customers don't need to change their code. They can operate in those VMs exactly as they would with normal, non-confidential VMs. But to give them this opportunity of lift and shift, of not changing their apps, while performing with very, very low latency and scaling as any cloud can — something that Google actually pioneered in confidential computing — I think we need to open up and explain how this magic was actually done. And as I said, the whole entire system has to change to be able to provide this magic. I would start with this concept of root of trust, where we ensure that this machine, the whole entire host, has an integrity guarantee, meaning nobody is changing my code at the lowest level of the system. We introduced this in 2017; it's called Titan. It's our own specific ASIC, a dedicated chip on every single motherboard that we have, that ensures that your low-level firmware, your actual system code, your kernel — the most privileged parts of the system — are properly configured and not changed, not tampered with. We do it for everybody, confidential computing included. But for confidential computing, what we had to change is that we bring in AMD, or future silicon vendors, and we have to trust their firmware, their way of dealing with our confidential environments. And that's why we have an obligation to validate the integrity not only of our software and our firmware, but also of the firmware and software of our vendors, the silicon vendors. So when we boot this machine, as you can see, we validate that the integrity of all of this system is in place. It means nobody is touching it, nobody is changing it, nobody is modifying it. But then we have this concept of the AMD Secure Processor. It's a special ASIC-based component that generates a key for every single VM that our customers will run, or every single node in Kubernetes, or every single worker thread in our Hadoop or Spark capability.
We offer all of that, and those keys are not available to us. It's the best case ever in the encryption space, because when we are talking about encryption, the first question that I receive all the time is, "Where's the key? Who will have access to the key?" — because if you have access to the key, then it doesn't matter whether you encrypted the data or not. But the reason confidential computing is such a revolutionary technology is that we, the cloud providers, don't have access to the keys. They're sitting in the hardware and they're fed to the memory controller. And it means that when a hypervisor — which also knows about these wonderful things — says, I need to get access to the memory of this particular VM, it cannot decrypt the data. It doesn't have access to the key, because those keys are random, ephemeral and per VM, but most importantly, held in hardware and not exportable. And it means you now have this very interesting world where we, the cloud providers, will not be able to get access to your memory. And what we do, again, as you can see, our customers don't need to change their applications. Their VMs run exactly as they should run. And from inside the VM, you actually see your memory in the clear; it's not encrypted. But God forbid somebody tries to do that from outside of my confidential box — no, no, you will not be able to do it. You will only see ciphertext. And that's exactly what the combination of these multiple hardware pieces and software pieces has to do. So the OS is also modified, and it's modified in such a way as to provide integrity. It means even the OS that you're running in your VM box is not modifiable, and you as the customer can verify that. But the most interesting thing, I guess, is how to ensure the performance of this environment, because you can imagine, Dave, that encryption adds overhead — additional time, additional latency. We're able to mitigate all of that by providing an incredibly interesting capability in the OS itself. So our customers get no changes needed, fantastic performance, and scale, as they would expect from cloud providers like Google. >> Okay, thank you. Excellent, appreciate that explanation. So, you know, again, the narrative on this is, well, you've already given me guarantees as a cloud provider that you don't have access to my data, but this gives another level of assurance. Key management, as they say, is key. Now humans aren't managing the keys; the machines are managing them. So Patricia, my question to you is, in addition to — let's go back to pre-confidential-computing days — what are the sort of new guarantees that these hardware-based technologies are going to provide to customers? >> So if I am a customer, I am saying I now have a full guarantee of confidentiality and integrity of the data and of the code. So if you look at code and data confidentiality, what the customer cares about is that they want to know whether their systems are protected from outside or unauthorized access, and we covered with Nelly that it is. Confidential computing actually ensures that the applications and the data in them remain secret. The code is actually looking at the data; only in memory is the data decrypted, with a key that is ephemeral, per VM, and generated on demand. Then you have the second point, where you have code and data integrity, and now customers want to know whether their data was corrupted, tampered with, or impacted by outside actors. And what confidential computing ensures is that the application internals are not tampered with.
So the application — the workload, as we call it — that is processing the data has also not been tampered with and preserves integrity. I would also say that this is all verifiable, so you have attestation, and this attestation actually generates a log trail, and the log trail provides a proof that integrity was preserved. And I think it also offers a guarantee of what we call sealing — this idea that the secrets have been preserved and not tampered with. Confidentiality and integrity of code and data. >> Got it. Okay, thank you. Nelly, you mentioned — I think I heard you say — that for applications it's transparent, you don't have to change the application, it just comes for free essentially. And we showed some various parts of the stack before. I'm curious as to what's affected, but really more importantly, what is specifically Google's value-add? How do partners participate in this, the ecosystem — or, maybe said another way, how does Google ensure the compatibility of confidential computing with existing systems and applications? >> A fantastic question, by the way. And it's a very difficult and definitely complicated world, because to be able to provide these guarantees, a lot of work was done by the community. Google very much operates in the open. So again, for our operating system, we work in the operating system repositories with OS vendors to ensure that all the capabilities that we need are part of their kernels, are part of their releases, and are available for customers to understand and even explore, if they have fun exploring a lot of code. We have also modified, together with our silicon vendors, the kernel — the host kernel — to support this capability, and that means working with this community to ensure that all of those patches are there. We also worked with every single silicon vendor, as you've seen, and that's where I feel Google contributed quite a bit in this world. We moved our industry, our community, our vendors to understand the value of easy-to-use confidential computing, of removing barriers. And now, I don't know if you noticed, Intel is following the lead and also announcing their Trust Domain Extensions — a very similar architecture — and no surprise, it's a lot of work done with our partners to convince them, work with them, and make this capability available. The same with ARM: this year — actually last year — ARM announced their future design for confidential computing. It's called the Confidential Compute Architecture, and it's also influenced very heavily by similar ideas from Google and the industry overall. So there's a lot of work in the Confidential Computing Consortium that we are doing — for example, simply to mention one, to ensure interop, as you mentioned, between the different confidential environments of cloud providers. We want to ensure that they can attest to each other, because when you're communicating with different environments, you need to trust them. And if it's running on different cloud providers, you need to ensure that you can trust your receiver when you are sharing your sensitive data, workloads, or secrets with them. So we are coming together as a community, and we have this attestation SIG, the community-based system that we want to build, influence, and work on with ARM and every other cloud provider to ensure that they can interop. And it means it doesn't matter where confidential workloads are hosted, they can exchange data in a way that is secure, verifiable, and controlled by customers.
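(Editor's note: Nelly and Patricia both stress that these properties are verifiable by the customer. As a small, hedged illustration, the sketch below checks whether a running Google Cloud instance reports confidential computing as enabled by querying its configuration through the gcloud CLI. The field and flag names are assumptions based on the public Compute Engine API and may differ across versions; full remote attestation involves verifying signed launch evidence, which is beyond this sketch.)

```python
# Hedged sketch: check whether an instance reports confidential computing as
# enabled. Assumes the Compute Engine instance resource exposes a
# confidentialInstanceConfig.enableConfidentialCompute field via gcloud.
import subprocess

def is_confidential(instance: str, zone: str) -> bool:
    out = subprocess.run(
        ["gcloud", "compute", "instances", "describe", instance,
         f"--zone={zone}",
         "--format=value(confidentialInstanceConfig.enableConfidentialCompute)"],
        check=True, capture_output=True, text=True,
    )
    return out.stdout.strip().lower() == "true"

if __name__ == "__main__":
    print(is_confidential("demo-confidential-vm", "us-central1-a"))
```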
And to do that, we need to continue what we are doing: working in the open and contributing our ideas and the ideas of our partners to this role, so confidential computing becomes what we see it has to become — it has to become a utility. It doesn't need to be so special, but that's what we want it to become. >> Thank you for that explanation. Let's talk about data sovereignty, because when you think about data sharing, you think about data sharing across the ecosystem and different regions, and then of course data sovereignty comes up. Typically public policy lags the technology industry, and sometimes that's problematic. I know there's a lot of discussion about exceptions, but Patricia, we have a graphic on data sovereignty. I'm interested in how confidential computing ensures that data sovereignty and privacy edicts are adhered to, even if they're out of alignment, maybe, with the pace of technology. One of the frequent examples is when you delete data — can you actually prove the data is deleted with a hundred percent certainty? You've got to prove that, and there are a lot of other issues. So looking at this slide, maybe you could take us through your thinking on data sovereignty. >> Perfect. So for us, data sovereignty is only one of the three pillars of digital sovereignty, and I don't want to give the impression that confidential computing addresses it all. That's why I want to step back and say, hey, digital sovereignty includes data sovereignty, where we are giving you full control and ownership of the location, encryption, and access to your data. Operational sovereignty, where the goal is to give our Google Cloud customers full visibility and control over the provider's operations, right? So if there are any updates to the hardware or software stack, any operations, there is full transparency, full visibility. And then the third pillar is around software sovereignty, where the customer wants to ensure that they can run their workloads without dependency on the provider's software. This is sometimes referred to as survivability: that you can actually survive if you are untethered from the cloud, and that you can use open source. Now, let's take a deep dive on data sovereignty, which by the way is one of my favorite topics. We typically focus on saying, hey, we need to care about data residency. We care where the data resides, because where the data is at rest or in processing, it typically abides by the jurisdiction, the regulations of the jurisdiction, where the data resides. And others say, hey, let's focus on data protection: we want to ensure the confidentiality, integrity, and availability of the data, and confidential computing is at the heart of that data protection. But there is yet another element that people typically don't talk about when talking about data sovereignty, which is the element of user control. And here, Dave, it's about what happens to the data when I give you access to my data. And this reminds me of security two decades ago, even a decade ago, where we started the security movement by putting in firewall protections and logging access. But once you were in, you were able to do everything you wanted with the data. An insider had access to all the infrastructure, the data, and the code.
And that's similar, because with data sovereignty we care about where the data resides and who is operating on the data. But the moment that the data is being processed, I need to trust that the processing of the data will abide by user control, by the policies that I put in place for how my data is going to be used. And if you look at a lot of the regulation today, and a lot of the initiatives around the International Data Space Association, IDSA, and Gaia-X, there is a movement toward saying the two parties — the provider of the data and the receiver of the data — are going to agree on a contract that describes what my data can be used for. The challenge is to ensure that once the data crosses boundaries, the data will be used for the purposes that were intended and specified in the contract. And if you actually bring together — and this is the exciting part — confidential computing together with policy enforcement, now the policy enforcement can guarantee that the data is only processed within the confines of a confidential computing environment: that the workload is cryptographically verified to be the workload that was meant to process the data, and that the data will only be used while abiding by the confidentiality and integrity safety of the confidential computing environment. And that's why we believe confidential computing is one necessary and essential technology that will allow us to ensure data sovereignty, especially when it comes to user control. >> Thank you for that. I mean, it was a deep dive — brief, but really detailed — so I appreciate that, especially the verification of the enforcement. Last question. I met you two because, as part of my year-end prediction post, you guys sent in some predictions, and I wasn't able to get to them in the predictions post, so I'm thrilled that you were able to make the time to come on the program. How widespread do you think the adoption of confidential computing will be in '23, and what does the maturity curve look like this decade, in your opinion? Maybe each of you could give us a brief answer. >> So my prediction: in five to seven years, as I said, it will become a utility, it will become like TLS. Even 10 years ago we couldn't believe that websites would have certificates and we would support encrypted traffic — now we do, and it's become ubiquitous. That's exactly where confidential computing is heading. I don't know if we are there yet; it'll take a few years of maturity for us, but we'll do that. >> Thank you. And Patricia, what's your prediction? >> I would double down on that and say, hey, in the very near future, you will not be able to afford not having it. I believe that as digital sovereignty becomes ever more top of mind with sovereign states, and also for multinational organizations and for organizations that want to collaborate with each other, confidential computing will become the norm. It will become the default, if I may say, mode of operation. I like to compare it this way: today it is inconceivable, if we talk to young technologists, to think that at some point in history — and I happen to have been alive then — we had data at rest that was not encrypted, data in transit that was not encrypted. And I think it will be just as inconceivable, at some point in the near future, to have unencrypted data while in use. >> You know, and plus, I think the beauty of this industry is that because there's so much competition, this essentially comes for free.
I want to thank you both for spending some time on Breaking Analysis, there's so much more we could cover. I hope you'll come back to share the progress that you're making in this area and we can double click on some of these topics. Really appreciate your time. >> Anytime. >> Thank you so much, yeah. >> In summary, while confidential computing is being touted by the cloud players as a promising technology for enhancing data privacy and security, there are also those as we said, who remain skeptical. The truth probably lies somewhere in between and it will depend on the specific implementation and the use case as to how effective confidential computing will be. Look as with any new tech, it's important to carefully evaluate the potential benefits, the drawbacks, and make informed decisions based on the specific requirements in the situation and the constraints of each individual customer. But the bottom line is silicon manufacturers are working with cloud providers and other system companies to include confidential computing into their architectures. Competition in our view will moderate price hikes and at the end of the day, this is under-the-covers technology that essentially will come for free, so we'll take it. I want to thank our guests today, Nelly and Patricia from Google. And thanks to Alex Myerson who's on production and manages the podcast. Ken Schiffman as well out of our Boston studio. Kristin Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hoof is our editor-in-chief over at siliconangle.com, does some great editing for us. Thank you all. Remember all these episodes are available as podcasts. Wherever you listen, just search Breaking Analysis podcast. I publish each week on wikibon.com and siliconangle.com where you can get all the news. If you want to get in touch, you can email me at david.vellante@siliconangle.com or DM me at D Vellante, and you can also comment on my LinkedIn post. Definitely you want to check out etr.ai for the best survey data in the enterprise tech business. I know we didn't hit on a lot today, but there's some amazing data and it's always being updated, so check that out. This is Dave Vellante for theCUBE Insights powered by ETR. Thanks for watching and we'll see you next time on Breaking Analysis. (subtle music)

Published Date : Feb 10 2023

SUMMARY :

Dave Vellante revisits the debate over confidential computing with Google's Nelly Porter and Patricia Florissi. They address the skeptics' view that it is little more than marketed memory encryption, walk through Google's architecture of hardware root of trust, per-VM keys, and attestation that protects data in use without application changes, discuss the role of the Confidential Computing Consortium and silicon vendors, and explain how confidential computing combined with policy enforcement supports data sovereignty and user control. Both guests predict it will become a default, utility-like capability.

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Nelly | PERSON | 0.99+
Patricia | PERSON | 0.99+
Alex Myerson | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
International Data Space Association | ORGANIZATION | 0.99+
Dave | PERSON | 0.99+
AWS' | ORGANIZATION | 0.99+
Microsoft | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Rob Hoof | PERSON | 0.99+
Cheryl Knight | PERSON | 0.99+
Nelly Porter | PERSON | 0.99+
Google | ORGANIZATION | 0.99+
Nvidia | ORGANIZATION | 0.99+
IDSA | ORGANIZATION | 0.99+
Rodrigo Bronco | PERSON | 0.99+
2019 | DATE | 0.99+
Ken Schiffman | PERSON | 0.99+
Intel | ORGANIZATION | 0.99+
AMD | ORGANIZATION | 0.99+
2017 | DATE | 0.99+
ARM | ORGANIZATION | 0.99+
Aem | ORGANIZATION | 0.99+
Nellie | PERSON | 0.99+
Kristin Martin | PERSON | 0.99+
Red Hat | ORGANIZATION | 0.99+
two parties | QUANTITY | 0.99+
Palo Alto | LOCATION | 0.99+
last year | DATE | 0.99+
Patricia Florissi | PERSON | 0.99+
one | QUANTITY | 0.99+
Meta | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
third | QUANTITY | 0.99+
Gaia-X | ORGANIZATION | 0.99+
second point | QUANTITY | 0.99+
two experts | QUANTITY | 0.99+
david.vellante@siliconangle.com | OTHER | 0.99+
second | QUANTITY | 0.99+
both | QUANTITY | 0.99+
first question | QUANTITY | 0.99+
five | QUANTITY | 0.99+
One | QUANTITY | 0.99+
theCUBE Studios | ORGANIZATION | 0.99+
two decades ago | DATE | 0.99+
'23 | DATE | 0.99+
each | QUANTITY | 0.99+
a decade ago | DATE | 0.99+
three | QUANTITY | 0.99+
zero days | QUANTITY | 0.98+
four | QUANTITY | 0.98+
OCTO | ORGANIZATION | 0.98+
today | DATE | 0.98+

Liz Rice, Isovalent | CloudNativeSecurityCon 23


 

(upbeat music) >> Hello, everyone, from Palo Alto, Lisa Martin here. This is The Cube's coverage of CloudNativeSecurityCon, the inaugural event. I'm here with John Furrier in studio. In Boston, Dave Vellante joins us, and our guest, Liz Rice, one of our alumni, is joining us from Seattle. Great to have everyone here. Liz is the Chief Open Source officer at Isovalent. She's also the Emeritus Chair Technical Oversight Committee at CNCF, and a co-chair of this new event. Everyone, welcome Liz. Great to have you back on theCUBE. Thanks so much for joining us today. >> Thanks so much for having me, pleasure. >> So CloudNativeSecurityCon. This is the inaugural event, Liz, this used to be part of KubeCon, it's now its own event in its first year. Talk to us about the importance of having it as its own event from a security perspective, what's going on? Give us your opinions there. >> Yeah, I think security was becoming so- at such an important part of the conversation at KubeCon, CloudNativeCon, and the TAG security, who were organizing the co-located Cloud Native Security Day which then turned into a two day event. They were doing this amazing job, and there was so much content and so much activity and so much interest that it made sense to say "Actually this could stand alone as a dedicated event and really dedicate, you know, all the time and resources of running a full conference, just thinking about cloud native security." And I think that's proven to be true. There's plenty of really interesting talks that we're going to see. Things like a capture the flag. There's all sorts of really good things going on this week. >> Liz, great to see you, and Dave, great to see you in Boston Lisa, great intro. Liz, you've been a CUBE alumni. You've been a great contributor to our program, and being part of our team, kind of extracting that signal from the CNCF cloud native world KubeCon. This event really kind of to me is a watershed moment, because it highlights not only security as a standalone discussion event, but it's also synergistic with KubeCon. And, as co-chair, take us through the thought process on the sessions, the experts, it's got a practitioner vibe there. So we heard from Priyanka early on, bottoms up, developer first. You know KubeCon's shift left was big momentum. This seems to be a breakout of very focused security. Can you share the rationale and the thoughts behind how this is emerging, and how you see this developing? I know it's kind of a small event, kind of testing the waters it seems, but this is really a directional shift. Can you share your thoughts? >> Yeah I'm just, there's just so many different angles that you can consider security. You know, we are seeing a lot of conversations about supply chain security, but there's also runtime security. I'm really excited about eBPF tooling. There's also this opportunity to talk about how do we educate people about security, and how do security practitioners get involved in cloud native, and how do cloud native folks learn about the security concepts that they need to keep their deployments secure. So there's lots of different groups of people who I think maybe at a KubeCon, KubeCon is so wide, it's such a diverse range of topics. If you really just want to focus in, drill down on what do I need to do to run Kubernetes and cloud native applications securely, let's have a really focused event, and just drill down into all the different aspects of that. And I think that's great. 
It brings the right people together — the practitioners, the experts, the vendors. You know, everyone can be here, and we can find each other at a smaller event. We are not spread out amongst the thousands of people that would attend a KubeCon. >> It's interesting, Dave, you know, when we were talking — we're going to bring you in real quick — because AWS, which I think is the bellwether for cloud computing, has now two main shows, AWS re:Invent and re:Inforce. Security, again, broken out there. You see the classic security events, RSA, Black Hat — those are the industry's kind of mainstream security, very wide. But you're starting to see the cloud native, developer-first side, with both security and cloud native, really growing so fast. This is a major trend for a lot of the ecosystem. >> You know, when you mention those other conferences, John, you hear a lot about shift left. There's a little bit of lip service there, and we heard today way more than lip service. I mean, deep practitioner-level conversations, and of course the runtime as well. Liz, you spent a lot of time obviously in your keynote on eBPF, and I wonder if you could share with the audience why you're so excited about that. What makes it a more effective tool compared to other traditional methods? I mean, it sounds like it simplifies things. You talked about instrumenting nodes versus workloads. Can you explain that in a little bit more detail? >> Yeah, so with eBPF, we can load programs dynamically into the kernel, and we can attach them to all kinds of different events that could be happening anywhere on that virtual machine. And if you have the right knowledge about where to hook in, you can observe network events, you can observe file access events, you can observe pretty much anything that's interesting from a security perspective. And because eBPF programs are living in the kernel, and there's only one kernel shared amongst all of the applications that are running on that particular machine, you no longer have to instrument each individual application, or each individual pod. There's no more need to inject sidecars. We can apply eBPF-based tooling on a per-node basis, which just makes things operationally more straightforward, but it's also extremely performant. We can hook these programs into events, and they're typically very lightweight — small programs, kind of, emitting an event, making a decision about whether to drop a packet, making a decision about whether to allow file access, things of that nature. They're super fast; there's no need to transition between kernel space and user space, which is usually quite a costly operation from a performance perspective. So eBPF is taking the security tooling, and other forms of tooling — networking and observability — and we can take these tools into the kernel, where it's really efficient. >> So Liz- >> So, if I may, just one quick follow-up. You gave kind of a space-age example (laughs) in your keynote. Do you think a year from now we'll be able to see, sort of, real-world examples in action? How far away are we?
And I showed some visualizations this morning of network policy, but again, network policy has been around, pretty much since the early days of Kubernetes. It can be quite fiddly to get it right, but there are plenty of people who are using it at scale today. And then we were also looking at some runtime security detections, seeing things like, in my example, exfiltrating the plans to the Death Star, you know, looking for suspicious executables. And again, that's a little bit, it's a bit newer, but we do have people running that in production today, proving that it really does work, and that eBPF is a scalable technology. It's, I've been fascinated by eBPF for years, and it's really amazing to see it being used in the real world now. >> So Liz, you're a maintainer on the Cilium project. Talk about the use of eBPF in the Cilium project. How is it contributing to cloud native security, and really helping to change the dials on that from an efficiency, from a performance perspective, as well as a, what's in it for me as a business perspective? >> So Cilium is probably best known as a networking plugin for Kubernetes. It, when you are running Kubernetes, you have to make a decision about some networking plugin that you're going to use. And Cilium is, it's an incubating project in the CNCF. It's the most mature of the different CNIs that's in the CNCF at the moment. As I say, very widely deployed. And right from day one, it was based on eBPF. And in fact some of the people who contribute to the eBPF platform within the kernel, are also working on the Cilium project. They've been kind of developed hand in hand for the last six, seven years. So really being able to bring some of that networking capability, it required changes in the kernel that have been put in place several years ago, so that now we can build these amazing tools for Kubernetes operators. So we are using eBPF to make the networking stack for Kubernetes and cloud native really efficient. We can bypass some of the parts of the network stack that aren't necessarily required in a cloud native deployment. We can use it to make these incredibly fast decisions about network policy. And we also have a sub-project called Tetragon, which is a newer part of the Cilium family which uses eBPF to observe these runtime events. The things like people opening a file, or changing the permissions on a file, or making a socket connection. All of these things that as a security engineer you are interested in. Who is running executables who is making network connections, who's accessing files, all of these operations are things that we can observe with Cilium Tetragon. >> I mean it's exciting. We've chatted in the past about that eBPF extended Berkeley Packet Filter, which is about the Linux kernel. And I bring that up Liz, because I think this is the trend I'm trying to understand with this event. It's, I hear bottoms up developer, developer first. It feels like it's an under the hood, infrastructure, security geek fest for practitioners, because Brian, in his keynote, mentioned BIND in reference the late Dan Kaminsky, who was, obviously found that error in BIND at the, in DNS. He mentioned DNS. There's a lot of things that's evolving at the silicone, kernel, kind of root levels of our infrastructure. This seems to be a major shift in focus and rightfully so. 
Is that something that you guys talk about, or is that coincidence, or am I just overthinking this point in terms of how nerdy it's getting, in terms of the importance of getting down to the low-level aspects of protecting everything? And as we heard also, the quote was "no software is secure." (Liz chuckles) So that's up and down the stack of, kind of, the old model. What are your thoughts and reaction to that? >> Yeah, I mean, I think a lot of folks who get into security really are interested in these kinds of details. You know, you see write-ups of exploits, and they're quite often really involved and really require understanding these very deep, detailed technical levels. So a lot of us can really geek out about the details of that. The flip side of that is that as an application developer — if you are working for a bank, working for a media company, you're writing applications — you shouldn't have to be worried about what's happening at the kernel level. This might be kind of geeky, interesting stuff, but really, operationally, it should be taken care of for you. You've got your work cut out building business value in applications. So I think there's this interesting, kind of dual track going on, if you like, of the people who really want to get involved in those nitty-gritty details and understand how the underlying, you know, kernel-level exploits may be working. But then, how do we make that really easy for people who are running clusters? I mean, like you said, nothing is ever secure, but trying to make things as secure as they can be, easily, and make things visual, make things accessible, make it easy to check whether or not you are compliant with whatever regulations you need to be compliant with. That kind of focus on making things usable for the platform team, for the application developers who deliver apps on the platform, that's the important (indistinct)- >> I noticed that the word expert was mentioned; I mentioned it earlier with Priyanka. Was there a rationale on the 72 sessions, was there thinking around it, or was it kind of like, these are urgent areas, they're obvious low-hanging fruit? Take us through the selection process — or was it just, let's get 72 sessions going to get this (Liz laughs) thing moving? >> No, we did think quite carefully about what the different focus areas we wanted to include were. So we wanted to make sure that we were including things like governance and compliance, and that we talk about not just supply chain, which is clearly a very hot topic at the moment, but also, you know, threat detection, runtime security. And also, really importantly, we wanted to have space to talk about education, to talk about how people can get involved. Because maybe when we talk about all these details and we get really technical, maybe that's a bit scary for people who are new to the cloud native security space. We want to make sure that there are tracks and content that are accessible for newcomers to get involved, 'cause, you know, given time, they'll be just as excited about diving into those kinds of kernel-level details. But everybody needs a place to start, and we wanted to make sure there were conversations about how to get started in security, how to educate other members of your team and your organization about security. So hopefully there's something for everyone. >> That education piece- >> Liz, what's the- >> Oh sorry, Dave. >> What's the buzz on AI?
We heard Dan talk about, you know, ChatGPT, using it to automate spear phishing. There's always been this tension between security and speed to market, but CISOs are saying, "Hey we're going to a zero trust architecture and that's helping us move faster." What, in your view, is the talk on the floor? Is AI going to slow us down a little bit until we figure it out? Or is it actually going to be used as an offensive or defensive tool if I can use that angle? >> Yeah, I think all of the above. I actually had an interesting chat this morning. I was talking with Andy Martin from Control Plane, and we were talking about the risk of AI generated code that attempts to replicate what open source libraries already do. So rather than using an existing open source package, an organization might think, "Well, I'll just have my own version, and I'll have an AI write it for me." And I don't, you know, I'm not a lawyer so I dunno what the intellectual property implications of this will be, but imagine companies are just going, "Well you know, write me an SSL library." And that seems terrifying from a security perspective, 'cause there could be all sorts of very slightly different AI generated libraries that pick up the same vulnerabilities that exist in open source code. So, I think we're going to go through a pretty interesting period of vulnerabilities being found in AI generated code that look familiar, and we'll be thinking "Haven't we seen these vulnerabilities before? Yeah, we did, but they were previously in handcrafted code and now we'll see the same things being generated by AI." I mean, in the same way that if you look at an AI generated picture and it's got I don't know, extra fingers, or, you know, extra ears or something that, (Dave laughs) AI does make mistakes. >> So Liz, you talked about the education, the enablement, the 72 sessions, the importance of CloudNativeSecurityCon being its own event this year. What are your hopes and dreams for the practitioners to be able to learn from this event? How do you see the event as really supporting the growth, the development of the cloud native security community as a whole? >> Yeah, I think it's really important that we think of it as a Cloud Native Security community. You know, there are lots of interesting sort of hacker community security related community. Cloud native has been very community focused for a long time, and we really saw, particularly through the TAG, the security TAG, that there was this growing group of people who were, really wanted to work at that intersection between security and cloud native. And yeah, I think things are going really well this week so far, so I hope this is, you know, the first of many editions of this conference. I think it will also be interesting to see how the balance plays out between a smaller, more focused event, compared to the giant KubeCon and CloudNativeCons. I, you know, I think there's space for both things, but whether or not there will be other smaller focus areas that want to stand alone and justify being able to stand alone as their own separate conferences, it speaks to the growth of cloud native in general that this is worthwhile doing. >> Yeah. >> It is, and what also speaks to, it reminds me of our tagline here at theCUBE, being able to extract the signal from the noise.
Having this event as a standalone, being able to extract the value in it from a security perspective, that those practitioners and the community at large is going to be able to glean from these conversations is something that will be important, that we'll be keeping our eyes on. >> Absolutely. Makes sense for me, yes. >> Yeah, and I think, you know, one of the things, Lisa, that I want to get in, and if you don't mind asking Dave his thoughts, because he just did a breaking analysis on the security landscape. And Dave, you know, as Liz talking about some of these root level things, we talk about silicon advances, powering machine learning, we've been covering a lot of that. You've been covering the general security industry. We got RSA coming up reinforced with AWS, and as you see the cloud native developer first, really driving the standards of the super cloud, the multicloud, you're starting to see a lot more application focus around latency and kind of controlling that, These abstraction layer's starting to see a lot more growth. What's your take, Dave, on what Liz and- is talking about because, you know, you're analyzing the horses on the track, and there's sometimes the old guard security folks, and you got open source continuing to kick butt. And even on the ML side, we've been covering some of these foundation models, you're seeing a real technical growth in open source at all levels and, you know, you still got some proprietary machine learning stuff going on, but security's integrating all that. What's your take and your- what's your breaking analysis on the security piece here? >> I mean, to me the two biggest problems in cyber are just the lack of talent. I mean, it's just really hard to find super, you know, deep expertise and get it quickly. And I think the second is it's just, it's so many tools to deal with. And so the architecture of security is just this mosaic and a mess. That's why I'm excited about initiatives like eBPF because it does simplify things, and developers are being asked to do a lot. And I think one of the other things that's emerging is when you- when we talk about Industry 4.0, and IIoT, you- I'm seeing a lot of tools that are dedicated just to that, you know, slice of the world. And I don't think that's the right approach. I think that there needs to be a more comprehensive view. We're seeing, you know, zero trust architectures come together, and it's going to take some time, but I think that you're going to definitely see, you know, some rethinking of how to architect security. It's a game of whack-a-mole, but I think the industry is just- the technology industry is doing a really really good job of, you know, working hard to solve these problems. And I think the answer is not just another bespoke tool, it's a broader thinking around architectures and consolidating some of those tools, you know, with an end game of really addressing the problem in a more comprehensive fashion. >> Liz, in the last minute or so we have your thoughts on how automation and scale are driving some of these forcing functions around, you know, taking away the toil and the muck around developers, who just want stuff to be code, right? So infrastructure as code. Is that the dynamic here? Is this kind of like new, or is it kind of the same game, different kind of thing? (chuckles) 'Cause you're seeing a lot more machine learning, a lot more automation going on. What's, is that having an impact? What's your thoughts? 
>> Automation is one of the kind of fundamental underpinnings of cloud native. You know, we're expecting infrastructure to be written as code, We're expecting the platform to be defined in yaml essentially. You know, we are expecting the Kubernetes and surrounding tools to self-heal and to automatically scale and to do things like automated security. If we think about supply chain, you know, automated dependency scanning, think about runtime. Network policy is automated firewalling, if you like, for a cloud native era. So, I think it's all about making that platform predictable. Automation gives us some level of predictability, even if the underlying hardware changes or the scale changes, so that the application developers have something consistent and standardized that they can write to. And you know, at the end of the day, it's all about the business applications that run on top of this infrastructure >> Business applications and the business outcomes. Liz, we so appreciate your time talking to us about this inaugural event, CloudNativeSecurityCon 23. The value in it for those practitioners, all of the content that's going to be discussed and learned, and the growth of the community. Thank you so much, Liz, for sharing your insights with us today. >> Thanks for having me. >> For Liz Rice, John Furrier and Dave Vellante, I'm Lisa Martin. You're watching the Cube's coverage of CloudNativeSecurityCon 23. (electronic music)
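To make the "network policy as automated firewalling" idea from this conversation concrete, here is a minimal sketch using the official Kubernetes Python client. It is an illustration under assumptions, not anything shown at the event: the namespace, labels, and port are made up, and any CNI that implements the NetworkPolicy API (Cilium among them) would enforce the resulting rules.

```python
# Minimal sketch: create a Kubernetes NetworkPolicy programmatically.
# The namespace, labels, and port below are illustrative assumptions.
from kubernetes import client, config


def apply_frontend_policy(namespace: str = "demo") -> None:
    config.load_kube_config()  # uses the current kubeconfig context

    policy = client.V1NetworkPolicy(
        metadata=client.V1ObjectMeta(name="allow-frontend-to-api"),
        spec=client.V1NetworkPolicySpec(
            # Select the pods this policy applies to.
            pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    # '_from' maps to the 'from' field in the YAML schema.
                    _from=[
                        client.V1NetworkPolicyPeer(
                            pod_selector=client.V1LabelSelector(
                                match_labels={"app": "frontend"}
                            )
                        )
                    ],
                    ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
                )
            ],
        ),
    )

    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace=namespace, body=policy
    )


if __name__ == "__main__":
    apply_frontend_policy()
```

The same policy is usually written as YAML and applied with kubectl; the point of the sketch is simply that the firewalling rules live in declarative, automatable resources rather than in hand-managed appliances, which is the automation angle discussed above.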

Published Date : Feb 2 2023


Mobile World Congress Preview 2023 | Mobile World Congress 2023


 

(upbeat music) >> Telecommunications is well north of a trillion-dollar business globally that provides critical services on which virtually everyone on the planet relies. Dramatic changes are occurring in the sector, and one of the most important dimensions of this change is the underlying infrastructure that powers global telecommunications networks. Telcos have been thawing out, if you will, their frozen infrastructure, modernizing. They're opening up. They're disaggregating their infrastructure, separating, for example, the control plane from the data plane and adopting open standards. Telco infrastructure is becoming software-defined, and leading telcos are adopting cloud-native microservices to help make developers more productive, so they can respond more quickly to market changes. They're embracing technology consumption models and selectively leveraging the cloud where it makes sense, and these changes are being driven by market forces, the root of which stem from customer demand. So from a customer's perspective, they want services, and they want them fast, meaning not only at high speeds, but also they want them now. Customers want the latest, the greatest, and they want these services to be reliable and stable with high quality of service levels, and they want them to be highly cost effective. Hello and welcome to this preview of Mobile World Congress 2023. My name is Dave Vellante and at this year's event, theCUBE has a major presence at the show, made possible by Dell Technologies, and with me, to unpack the trends in Telco and look ahead to MWC 23, is Dennis Hoffman. He's the senior vice-president and general manager of Dell's telecom business and Aaron Chaisson, who is the vice-president of telecom and edge solutions marketing at Dell Technologies. Gentlemen, welcome. Thanks so much for spending some time with me. >> Thank you, Dave. >> Thanks, glad to be here. >> So, Dennis, let's start with you. Telcos in recent history have been slow to deliver and to monetize new services, in a large part, because their purpose-built infrastructure has been somewhat of a barrier to responding to these market forces. In many ways, this is what makes telecoms, really, this market, so exciting. So from your perspective, where is the action in this space? >> Yeah, the action, Dave, is kind of all over the place, partly because it's an ecosystem play. You know, I think it's been, as you point out, the disaggregation trend has been going on for a while. The opportunity's been clear, but it has taken a few years to get all of the vendors and all of the components that make up a solution, as well as the operators themselves, to a point where we can start putting this stuff together and actually achieving some of the promise. >> So, Aaron, for those who might not be as familiar with Dell's activities in this area, you know, here we are just ahead of Mobile World Congress. It's the largest event for telecoms. What should people know about Dell, and what's the key message to this industry? >> Sure, yeah, I think everybody knows that there's a lot of innovation that's been happening in the industry of late.
One of the major trends that we're seeing is that shift from more of a vertically-integrated technology stack to more of a disaggregated set of solutions, and that trend has actually created a ton of innovation that's happening across the industry, well, along technology vendors and providers, the telecoms themselves, and so one of the things that Dell's really looking to do is, as Dennis talked about, is build out a really strong ecosystem of partners and vendors that we're working closely together to be able to collaborate on new technologies, new capabilities, that are solving challenges that the networks are seeing today, be able to create new solutions built on those in order to be able to bring new value to the industry and then finally, we want to help both partners as well as our CSP providers activate those changes so that they can bring new solutions to market to be able to serve their customers, and so the key areas that we're really focusing on, with our customers, is technologies to help modernize the network to be able to capitalize on the value of open architectures and bring price performance to what they're expecting and availability that they're expecting today and then also partner with the lines of business to be able to take these new capabilities, produce new solutions and then deliver new value to their customers. >> Great, thank you, Aaron. So, Dennis, I have known you for a number of years. I've watched you. You are a trend spotter, and you're a strategic thinker, and I love now the fact that you're running a business that you had to go out and analyze, and now you got got to make it happen. So how would you describe Dell's strategy in this market? >> Well, it's really two things, and I appreciate the comment. I'm not sure how much of a trend spotter I am, but I certainly enjoy, and I think I'm fascinated by what's going on in this industry right now. Our two main thrusts, Dave, are, first round, trying to catalyze that ecosystem, you know, be a force for pulling together a group of folks, vendors, that have been flying in fairly loose formation for a couple of years to deliver the kinds of solutions that move the needle forward and produce the outcomes that our network-operator customers can actually buy, and consume, and deploy, and have them be supported. The other thing is there's a couple of very key technology areas that need to be advanced here. This ends up being a much anticipated year, in telecom, because of the delivery of some open infrastructure solutions that have been being developed for years, with the Intel Sapphire Rapids program coming to market. We've of course got some purpose-built solutions on top of that for telecommunications networks, some expanded partnerships in the area of multi-cloud infrastructure, and so I would say the second main thrust is we've got to bring some intellectual property to the party. It's not just about pulling the ecosystem together, but those two things together really form the twin thrusts of our strategy. >> Okay, so as you point out, you're obviously not going to go alone in this market. It's way too broad. There's so many routes to market, partnerships, obviously, very, very important. So can you share a little bit more about the ecosystem and partners, maybe give some examples of some of the key partners that you'd be highlighting or working with, maybe at Mobile World Congress or other activities this year? >> Yeah, absolutely. You know, as Aaron touched on. I'm a visual thinker. 
The way I think about this thing is a very, very vertical architecture is tipping sideways. It's becoming horizontal, and all of the layers of that horizontal architecture are really where the partnerships are at. So let's start at the bottom, silicon. The silicon ecosystem is very much focused on this market and producing very specific products to enable open, high-performance telecom networks. That's both in the form of host processors as well as accelerators. One layer up, of course, is the stuff that we're known for, subsystems, compute, storage, the hardware infrastructure that forms the foundation for telco clouds. A layer above that, all of the cloud software layer, the virtualization and containerization software and all of the usual suspects there, all of whom are very good partners of ours, and we're looking to expand that pretty broadly this year, and then at the top of the layer cake, all of the network functions, all of the VNFs and CNFs that were once kind of the top of proprietary stacks that are now opening up and being delivered as well-formed containers that can run on these clouds. So, you know, we're focusing on all of those, if you will, product partnerships, and there is a services wrapper around all of it, the systems integration necessary to make these systems part of a carrier's network, which, of course, has been running for a long time and needs to be integrated with in a very specific way, and so all of that together kind of forms the ecosystem. All of those are partners, and we're really excited about being at the heart of it. >> Interesting, it's not like we've never seen this movie before, which is sort of repeating itself in telco. Aaron, you heard my little intro up front about the need to modernize infrastructure. I wonder if I could touch on, you know, another major trend which we're seeing, is the cloud, and I'm talking about, not only public, but private and hybrid cloud. The public cloud is an opportunity, but it's also a threat for telcos. You know, telecom providers are looking to the public cloud for specific use cases. You think about, like, bursting for an iPhone launch or whatever but at the same time, these cloud vendors, they're sort of competing with telcos. They're providing, you know, local zones, for example, sometimes trying to do an end run on the telco connectivity services. So telecom companies, they have to find the right balance between what they own and what they rent, and I wonder if you could add some color as to what you see in the market and what Dell, specifically, is doing to support these trends. >> Yeah, I think the most important thing is what we're seeing, as you said, is these aren't things that we haven't seen before, and I think that telecom is really going through their own set of cloud transformations, and so one of the hot topics in the industry now is what is telco cloud and what does that look like going forward? And it's going to be a, as you said, a combination of services that they offer, services that they leverage, but at the end of the day, it's going to help them modernize how they deliver telecommunication services to their customers and then provide value-added services on top of that. From a Dell perspective, you know, we're really providing the technologies to provide the underpinnings to lay a foundation on which that network can be built, whether that's best-of-breed servers that are built and designed for the telecom environments. 
Recently we announced our, our Infra Block program in partnering with virtualization providers to be able to provide engineered systems that dramatically simplify how our customers can deploy, manage and lifecycle-manage throughout day-two operations, an entire cloud environment, and whether they're using Red Hat, whether they're using Wind River or VMware or other virtualization layers, they can deploy the right virtualization layer at the right part of their network to support the applications they're looking to drive, and Dell is looking to solve how they simplify and manage all of that, both from a hardware as well as a management software perspective. So this is really what Dell's doing to, again, partner with the broader technology community to help make that telco cloud a reality. >> Aaron, let's stay here for a second. I'm interested in some of the use cases that you're going after with customers. You've got edge infrastructure, remote work, 5G. Where's security fit? What are the focus areas for Dell, and can we double-click on that a little bit? >> Yeah, I mean, I think there's two main areas of telecommunication industry that we're talking to. One, we've really been talking about sort of the network buyer, how do they modernize the core, the network edge, the RAN capabilities, to deliver traditional telecommunication services and modernize that as they move into 5G and beyond. I think the other side of the business is telecoms are really looking, from a line of business perspective, to figure out how do they monetize that network and be able to deliver value-added services to their enterprise customers on top of these new networks. So you were just touching on a couple of things that are really critical. You know, in the enterprise space, AI and IoT is driving a tremendous amount of innovation out there, and there's a need for being able to support and manage edge compute at scale, be able to provide connectivity, like private mobility and 4G and 5G, being able to support things like mobile workforces and client capabilities to be able to access these devices that are around all of these edge environments of the enterprises, and telecoms are seen as that, as an opportunity for them to not only provide connectivity, but how do they extend their cloud out into these enterprise environments with compute, with connectivity, with client and connectivity resources, and even also provide protection for those environments as well. So these are areas that Dell's historically very strong at, being able to provide compute, being able to provide connectivity and being able to provide data protection and client services. We are looking to work closely with lines of businesses to be able to develop solutions that they can bring to market in combination with us to be able to serve their end user customers and their enterprises. So those are really the two key areas, not only network buyer, but being able to enable the lines of business to go and capitalize on the services they're developing for their customers. >> I think that line of business aspect is key. I mean, the telcos have had to sit back and provide the plumbing. Cost per bit goes down. Data consumption going through the roof. All the way over to the top guys, you know, had the field day with the data and the customer relationships, and now it's almost like the revenge of the telcos. (chuckles) Dennis, I wonder if we could talk about the future. 
What can we expect in the years ahead from Dell, if you, you know, break out the binoculars a little bit? >> Yeah, I think you hit it earlier. We've seen the movie before. This has happened in the IT data center. We went from proprietary vertical solutions to horizontal open systems. We went from client server to software-defined, open-hardware, cloud-native and you know, the trend is likely to be exactly that, in the telecom industry, because that's what the operators want. They're not naive to what's happened in the IT data center. They all run very large data centers, and they're trying to get some of the scale economies, some of the agility, the cost of ownership benefits for the reasons Aaron just discussed. You know, it's clear, as you point out, this industry's been really defined by the inability to stop investing and the difficulty to monetize that investment, and I think now everybody's looking at this 5G, and, frankly, 5G plus, 6G and beyond, as the opportunity to really go get a chunk of that revenue, and enterprise edge is the target. >> And 5G is touching so many industries, and that kind of brings me here into Mobile World Congress. I mean, you look at the floor layout, it's amazing. You got industry 4.0. You've got, you know, our traditional industry and telco colliding. There's public policy. So give us a teaser to Mobile World Congress '23. What's on deck at the show for from Dell? >> Yeah, we're really excited about Mobile World Congress. This, as you know, is a massive event for the industry every year, and it's really the event that the whole industry uses to kick off this coming year. So we're going to be using this, obviously, to talk to our customers and our partners about what Dell's looking to do and what we're innovating on right now, and what we're looking to partner with them around. In the front of the house, we're going to be highlighting 13 different solutions and demonstrations to be able to show our customers what we're doing today and show them the use cases and put it into action, so they get to actually look and feel and touch and experience what it is that we're working around. Obviously, meetings are important. Everybody knows Mobile World Congress is the place to get those meetings and kick off for the year. You know, we're looking at several hundred meetings, hundreds of meetings that we're going to be looking to have across the industry with our customers and partners and the broader community, and, of course, we've also got technology that's going to be in a variety of different partner spaces as well. So you can come and see us in hall three, but we're also going to have technologies kind of spread all over the floor, and, of course, there's always theCUBE. You're going to be able to see us live all four days, all day, every day. You're going to be hearing our executives, our partners, our customers, talk about, you know, what Dell is doing to innovate in the industry and how we're looking to leverage the broader open ecosystem to be able to transform, you know, the network and what we're looking to do. So in that space, we're going to be focusing on what we're doing from an ecosystem perspective, our infrastructure focus. We'll be talking about what we're doing to support telco cloud transformation and then finally, as we talked about earlier, how are we helping the lines of business within our telecoms monetize the opportunity. So these are all different things we're really excited to be focusing on and look forward to the event next month. 
>> Yeah, it's going to be awesome In Barcelona at the Fira. As you say, Dell's big presence in Hall three. Orange is in there, Deutsche Telekom. Intel's in Hall three. VMware's there, Nokia, Vodafone. You got great things to see there. Check that out and of course, theCUBE, we are super excited to be collaborating with you. We got a great setup. We're in the walkway, right between halls four and five, right across from the Government of Catalonia, who are the host partners for the event. So there's going to be a ton of action there. Guys, can't wait to see you there. Really appreciate your time today. >> Great, thanks. >> All right, Mobile World Congress, theCUBE's coverage starts on February 27th, right after the keynotes. So first thing in the morning, East coast time, we'll be broadcasting, as Aaron said, all week, Monday through Thursday, on the show floor. Check that out at thecube.net. Siliconangle.com has all the written coverage, and go to dell.com, see what's happening there. Have all the action from the event. Don't miss us. This is Dave Vellante. We'll see you there. (upbeat music)

Published Date : Jan 30 2023


Oracle Aspires to be the Netflix of AI | Cube Conversation


 

(gentle music playing) >> For centuries, we've been captivated by the concept of machines doing the job of humans. And over the past decade or so, we've really focused on AI and the possibility of intelligent machines that can perform cognitive tasks. Now in the past few years, with the popularity of machine learning models ranging from the recent ChatGPT to BERT, we're starting to see how AI is changing the way we interact with the world. How is AI transforming the way we do business? And what does the future hold for us there? At theCube, we've covered Oracle's AI and ML strategy for years, which has really been used to drive automation into Oracle's autonomous database. We've talked a lot about MySQL HeatWave, in-database machine learning, and AI pushed into Oracle's business apps. Oracle, it tends to lead in AI, but not competing as a direct AI player per se, but rather embedding AI and machine learning into its portfolio to enhance its existing products, and bring new services and offerings to the market. Now, last October at Cloud World in Las Vegas, Oracle partnered with Nvidia, which is the go-to AI silicon provider for vendors. And they announced an investment, a pretty significant investment to deploy tens of thousands more Nvidia GPUs to OCI, the Oracle Cloud Infrastructure and build out Oracle's infrastructure for enterprise scale AI. Now, Oracle CEO, Safra Catz said something to the effect of this alliance is going to help customers across industries from healthcare, manufacturing, telecoms, and financial services to overcome the multitude of challenges they face. Presumably she was talking about just driving more automation and more productivity. Now, to learn more about Oracle's plans for AI, we'd like to welcome in Elad Ziklik, who's the vice president of AI services at Oracle. Elad, great to see you. Welcome to the show. >> Thank you. Thanks for having me. >> You're very welcome. So first let's talk about Oracle's path to AI. I mean, it's the hottest topic going. For years you've been incorporating machine learning into your products and services, you know, could you tell us what you've been working on, how you got here? >> So great question. So as you mentioned, I think most of the original foray into AI was on embedding AI and using AI to make our applications, and databases better. So inside MySQL HeatWave, inside our autonomous database, we've been driving AI, and of course all our SaaS apps. So Fusion, our large enterprise business suite for HR applications and CRM and ERP, and whatnot, has built-in AI inside it. Most recently, NetSuite, our small and medium business SaaS suite, started using AI for things like automated invoice processing and whatnot. And most recently, over the last, I would say two years, we've started exposing and bringing these capabilities into the broader OCI, Oracle Cloud Infrastructure. So the developers, and ISVs and customers can start using our AI capabilities to make their apps better and their experiences and business workflow better, and not just consume these as embedded inside Oracle. And this recent partnership that you mentioned with Nvidia is another step in bringing the best AI infrastructure capabilities into this platform so you can actually build any type of machine learning workflow or AI model that you want on Oracle Cloud. >> So when I look at the market, I see companies out there like DataRobot or C3 AI, there's maybe a half dozen that sort of pop up on my radar anyway.
And my premise has always been that most customers, they don't want to become AI experts, they want to buy applications and have AI embedded or they want AI to manage their infrastructure. So my question to you is, how does Oracle help its OCI customers support their business with AI? >> So it's a great question. So I think what most customers want is business AI. They want AI that works for the business. They want AI that works for the enterprise. I call it the last mile of AI. And they want this thing to work. The majority of them don't want to hire large and expensive data science teams to go and build everything from scratch. They just want the business problem solved by applying AI to it. My best analogy is Lego. So if you think of Lego, Lego has these millions of Lego blocks that you can use to build anything that you want. But the majority of people like me or like my kids, they want the Lego Death Star kit or the Lego Eiffel Tower thing. They want a thing that just works, and it's very easy to use. And still Lego blocks, you still need to build some things together, which just works for the scenario that you're looking for. So that's our focus. Our focus is making it easy for customers to apply AI where they need to, in the right business context. So whether it's embedding it inside the business applications, like adding forecasting capabilities to your supply chain management or financial planning software, whether it's adding chat bots into the line of business applications, integrating these things into your analytics dashboard, even all the way to, we have a new platform piece we call ML applications that allows you to take a machine learning model, and scale it for the thousands of tenants that you would be serving. 'Cause this is a big problem for most of the ML use cases. It's very easy to build something for a proof of concept or a pilot or a demo. But then if you need to take this and then deploy it across your thousands of customers or your thousands of regions or facilities, then it becomes messy. So this is where we spend our time making it easy to take these things into production in the context of your business application or your business use case that you're interested in right now. >> So you mentioned chat bots, and I want to talk about ChatGPT, but my question here is different, we'll talk about that in a minute. So when you think about these chat bots, the ones that are conversational, my experience anyway is they're just meh, they're not that great. But the ones that actually work pretty well, they have a conditioned response. Now they're limited, but they say, which of the following is your problem? And then if one of the following is your problem, you can maybe solve your problem. But this is clearly a trend and it helps the line of business. How does Oracle think about these use cases for your customers? >> Yeah, so I think the key here is exactly what you said. It's about task completion. The general purpose bots are interesting, but as you said, like, are still limited. They're getting much better, I'm sure we'll talk about ChatGPT. But I think what most enterprises want is around task completion. I want to automate my expense report processing. So today inside Oracle we have a chat bot where I submit my expenses, the bot asks a couple of questions, I answer them, and then I'm done. Like I don't need to go to our fancy application, and manually submit an expense report. I do this via Slack.
And the key is around managing the right expectations of what this thing is capable of doing. Like, I have a story from I think five, six years ago when technology was much inferior to what it is today. Well, one of the telco providers I was working with wanted to roll out a chat bot that does realtime translation. So it was for a support center, for one of the call centers. And what they wanted to do is, Hey, we have English speaking employees, whatever, 24/7, if somebody's calling, and the native tongue is different like Hebrew in my case, or Chinese or whatnot, then we'll give them a chat bot that they will interact with and will translate this on the fly and everything would work. And when they rolled it out, the feedback from customers was horrendous. Customers said, the technology sucks. It's not good. I hate it, I hate your company, I hate your support. And what they've done is they've changed the narrative. Instead of, you go to a support center, and you assume you're going to talk to a human, and instead you get a crappy chat bot, they're like, Hey, if you want to talk to a Hebrew speaking person, there's a four hour wait, please leave your phone and we'll call you back. Or you can try a new amazing Hebrew speaking AI powered bot and it may help your use case. Do you want to try it out? And some people said, yeah, let's try it out. Plus one to try it out. And the feedback, even though it was the exact same technology was amazing. People were like, oh my God, this is so innovative, this is great. Even though it was the exact same experience that they hated a few weeks earlier on. So I think the key lesson that I picked from this experience is it's all about setting the right expectations, and working around the right use case. If you are replacing a human, the level is different than if you are just helping or augmenting something that otherwise would take a lot of time. And I think this is the focus that we are doing, picking up the tasks that people want to accomplish or that enterprise want to accomplish for the customers, for the employees. And using chat bots to make those specific ones better rather than, hey, this is going to replace all humans everywhere, and just be better than that. >> Yeah, I mean, to the point you mentioned expense reports. I'm in a Twitter thread and one guy says, my favorite part of business travel is filling out expense reports. It's an hour of excitement to figure out which receipts won't scan. We can all relate to that. It's just the worst. When you think about companies that are building custom AI driven apps, what can they do on OCI? What are the best options for them? Do they need to hire an army of machine intelligence experts and AI specialists? Help us understand your point of view there. >> So over the last, I would say the two or three years we've developed a full suite of machine learning and AI services for, I would say pretty much every use case that you would expect right now from applying natural language processing to understanding customer support tickets or social media, or whatnot to computer vision platforms or computer vision services that can understand and detect objects, and count objects on shelves or detect cracks in the pipe or defective parts, all the way to speech services. It can actually transcribe human speech. And most recently we've launched a new document AI service.
That can actually look at unstructured documents like receipts or invoices or government IDs or even proprietary documents, loan application, student application forms, patient intake and whatnot and completely automate them using AI. So if you want to do one of the things that are, I would say common bread and butter for any industry, whether it's financial services or healthcare or manufacturing, we have a suite of services that any developer can go, and use easily customized with their own data. You don't need to be an expert in deep learning or large language models. You could just use our AutoML capabilities, and build your own version of the models. Just go ahead and use them. And if you do have proprietary complex scenarios that you need to build custom from scratch, we actually have the most cost effective platform for that. So we have the OCI data science as well as built-in machine learning platform inside the databases inside the Oracle database, and MySQL HeatWave that allow data scientists, Python-wielding people that actually like to build and tweak and control and improve, have everything that they need to go and build the machine learning models from scratch, deploy them, monitor and manage them at scale in a production environment. And most of it is brand new. So we did not have these technologies four or five years ago and we've started building them and they're now at enterprise scale over the last couple of years. >> So what are some of the state-of-the-art tools, that AI specialists and data scientists need if they're going to go out and develop these new models? >> So I think it's on three layers. I think there's an infrastructure layer where the Nvidias of the world come into play. For some of these things, you want massively efficient, massively scaled infrastructure in place. So we are the most cost effective and performant large scale GPU training environment today. We're going to be first to onboard the new Nvidia H100s. These are the new super powerful GPUs for large language model training. So we have that covered for you in case you need this 'cause you want to build these ginormous things. You need a data science platform, a platform where you can open a Python notebook, and just use all these fancy open source frameworks and create the models that you want, and then click on a button and deploy it. And it infinitely scales wherever you need it. And in many cases you just need the, what I call the applied AI services. You need the Lego sets, the Lego Death Star, Lego Eiffel Tower. So we have a suite of these sets for typical scenarios, whether it's cognitive services of like, again, understanding images, or documents all the way to solving particular business problems. So an anomaly detection service, demand forecasting service that will be the equivalent of these Lego sets. So if this is the business problem that you're looking to solve, we have services out there where we can bring your data, call an API, train a model, get the model and use it in your production environment. So wherever you want to play, all the way into embedding this thing, inside these applications, obviously, wherever you want to play, we have the tools for you to go and engage from infrastructure to SaaS at the top, and everything in the middle. >> So when you think about the data pipeline, and the data life cycle, and the specialized roles that came out of kind of the (indistinct) era if you will. I want to focus on two: developers and data scientists.
So the developers, they hate dealing with infrastructure and they got to deal with infrastructure. Now they're being asked to secure the infrastructure, they just want to write code. And a data scientist, they're spending all their time trying to figure out, okay, what's the data quality? And they're wrangling data and they don't spend enough time doing what they want to do. So there's been a lack of collaboration. Have you seen that change, are these approaches allowing collaboration between data scientists and developers on a single platform? Can you talk about that a little bit? >> Yeah, that is a great question. One of the biggest set of scars that I have on my back from building these platforms in other companies is exactly that. Every persona had a set of tools, and these tools didn't talk to each other and the handoff was painful. And most of the machine learning things evaporate or die on the floor because of this problem. It's very rare that they are unsuccessful because the algorithm wasn't good enough. In most cases it's somebody builds something, and then you can't take it to production, you can't integrate it into your business application. You can't take the data out, train, create an endpoint and integrate it back, like it's too painful. So the way we are approaching this is focused on this problem exactly. We have a single set of tools that if you publish a model as a data scientist, then developers, and even business analysts that are sitting inside of a business application, could be able to consume it. We have a single model store, a single feature store, a single management experience across the various personas that need to play in this. And we spend a lot of time building, and borrowing a word that the Cerner folks used, and I really liked it, building insight highways to make it easier to bring these insights into where you need them inside applications, both inside our applications, inside our SaaS applications, but also inside custom third party and even first party applications. And this is where a lot of our focus goes to just because we have dealt with so much pain doing this inside our own SaaS that we now have built the tools, and we're making them available for others to make this process of building a machine learning outcome driven insight in your app easier. And it's not just the model development, and it's not just the deployment, it's the entire journey of taking the data, building the model, training it, deploying it, looking at the real data that comes from the app, and creating this feedback loop in a more efficient way. And that's our focus area. Exactly this problem. >> Well thank you for that. So, last week we had our Supercloud 2 event, and I had Juan Loza on and he spent a lot of time talking about how open Oracle is in its philosophy, and I got a lot of feedback. They were like, Oracle, open? I don't really think so, but the truth is if you think about the Oracle database, it never met a hardware platform that it didn't like. So in that sense it's open. So, but my point is, a big part of machine learning and AI is driven by open source tools, frameworks, what's your open source strategy? What do you support from an open source standpoint? >> So I'm a strong believer that you don't actually know, nobody knows where the next leapfrog or the next industry shifting innovation in AI is going to come from.
If you look six months ago, nobody foresaw DALL-E, the magical text-to-image generation and the explosion it brought into art and design type of experiences. If you look six weeks ago, I don't think anybody foresaw ChatGPT, and what it can do for a whole bunch of industries. So to me, assuming that a customer or partner or developer would want to lock themselves into only the tools that a specific vendor can produce is ridiculous. 'Cause nobody knows, if anybody claims that they know where the innovation is going to come from in a year or two, let alone in five or 10, they're just wrong or lying. So our strategy for Oracle is to, I call this the Netflix of AI. So if you think about Netflix, they produced a bunch of high quality shows on their own. A few years ago it was House of Cards. Last month my wife and I binge watched Ginny & Georgia, but they also curated a lot of shows that they found around the world and brought them to their customers. So it started with things like Seinfeld or Friends and most recently it was Squid Game, and there's a famous Israeli TV series called Fauda that Netflix bought in, and they bought it as is and they gave it the Netflix value. So you have captioning and you have the ability to speed the movie and you have it inside your app, and you can download it and watch it offline and everything, but nobody from Netflix was involved in the production of these first seasons. Now if these things hunt and they're great, then the third season or the fourth season will get the full Netflix production value, high value budget, high value location shooting or whatever. But you as a customer, you don't care whether the producer and director, and screenplay writer is a Netflix employee or is somebody else's employee. It is fulfilled by Netflix. I believe that we will become, or we are looking to become the Netflix of AI. We are building a bunch of AI in a bunch of places where we think it's important and we have some competitive advantage, like healthcare with the Cerner partnership or whatnot. But I want to bring the best AI software and hardware to OCI and do a fulfillment by Oracle on that. So you'll get the Oracle security and identity and single bill and everything you'd expect from a company like Oracle. But we don't have to be building the data science, and the models for everything. So this means both open source, we recently announced a partnership with Anaconda, the leading provider of Python distribution in the data science ecosystem, where we are doing a joint strategic partnership of bringing all the goodness into Oracle customers as well as in the process of doing the same with Nvidia, and all those software libraries, not just the Hubble, both for other stuff like Triton, but also for healthcare specific stuff as well as other ISVs, other AI leading ISVs that we are in the process of partnering with to get their stuff into OCI and into Oracle so that you can truly consume the best AI hardware, and the best AI software in the world on Oracle. 'Cause that is what I believe our customers would want: the ability to choose from any open source engine, and honestly from any ISV type of solution that is AI powered and they want to use it in their experiences. >> So you mentioned ChatGPT, I want to talk about some of the innovations that are coming. As an AI expert, you see ChatGPT on the one hand, I'm sure you weren't surprised. On the other hand, maybe the reaction in the market, and the hype is somewhat surprising.
You know, they say that we tend to under or over-hype things in the early stages and under hype them long term, you kind of use the internet as example. What's your take on that premise? >> So, I think that this type of technology is going to be an inflection point in how software is being developed. I truly believe this. I think this is an internet style moment, and the way software interfaces, software applications are being developed will dramatically change over the next year, two or three, because of this type of technologies. I think there will be industries that will be shifted. I think education is a good example. I saw this thing opened on my son's laptop. So I think education is going to be transformed. Design industry like images or whatever, it's already been transformed. But I think that for mass adoption, like beyond the hype, beyond the peak of inflated expectations, if I'm using Gartner terminology, I think certain things need to go and happen. One is this thing needs to become more reliable. So right now it is a complete black box that sometimes produces magic, and sometimes produces just nonsense. And it needs to have better explainability and better lineage to, how did you get to this answer? 'Cause I think enterprises are going to really care about the things that they surface with the customers or use internally. So I think that is one thing that's going to come out. And the other thing that's going to come out is I think there will come industry specific large language models or industry specific ChatGPTs. Something like how OpenAI did co-pilot for writing code. I think we will start seeing these types of apps solving for specific business problems, understanding contracts, understanding healthcare, writing doctor's notes on behalf of doctors so they don't have to spend time manually recording and analyzing conversations. And I think that would become the sweet spot of this thing. There will be companies, whether it's OpenAI or Microsoft or Google or hopefully Oracle that will use this type of technology to solve for specific very high value business needs. And I think this will change how interfaces happen. So going back to your expense report, the world of, I'm going to go into an app, and I'm going to click on seven buttons in order to get some job done, like this world is gone. Like I'm going to say, hey, please do this and that. And I expect an answer to come out. I've seen a recent demo about marketing and sales. So a customer sends an email that is interested in something and then a ChatGPT powered thing just produces the answer. I think this is how the world is going to evolve. Like yes, there's a ton of hype, yes, it looks like magic and right now it is magic, but it's not yet productive for most enterprise scenarios. But in the next 6, 12, 24 months, this will start getting more dependable, and it's going to change how these industries are being managed. Like I think it's an internet level revolution. That's my take. >> It's very interesting. And it's going to change the way in which we have. Instead of accessing the data center through APIs, we're going to access it through natural language processing and that opens up technology to a huge audience. Last question is a two part question. And the first part is what you guys are working on from the futures, but the second part of the question is, we got data scientists and developers in our audience. They love the new shiny toy.
So give us a little glimpse of what you're working on in the future, and what would you say to them to persuade them to check out Oracle's AI services? >> Yep. So I think there's two main things that we're doing, one is around healthcare. With a new recent acquisition, we are spending a significant effort around revolutionizing healthcare with AI. Of course many scenarios from patient care using computer vision and cameras through automating, and making better insurance claims to research and pharma. We are making the best models from leading organizations, and internal ones, available for hospitals and researchers, and insurance providers everywhere. And we truly are looking to become the leader in AI for healthcare. So I think that's a huge focus area. And the second part is, again, going back to the enterprise AI angle. Like we want to, if you have a business problem that you want to apply AI to solve, we want to be your platform. Like you could use others if you want to build everything complicated and whatnot. We have a platform for that as well. But like, if you want to apply AI to solve a business problem, we want to be your platform. We want to be the, again, the Netflix of AI kind of a thing where we are the place for the greatest AI innovations accessible to any developer, any business analyst, any user, any data scientist on Oracle Cloud. And we're making a significant effort on these two fronts as well as developing a lot of the missing pieces, and building blocks that we see are needed in this space to make truly like a great experience for developers and data scientists. And what would I recommend? Get started, try it out. We actually have a shameless sales plug here. We have a free deal for all of our AI services. So it typically costs you nothing. I would highly recommend to just go, and try these things out. Go play with it. If you are a Python-wielding developer, and you want to try a little bit of AutoML, go down that path. If you're not even there and you're just like, hey, I have these customer feedback things and I want to try out, if I can understand them and apply AI and visualize, and do some cool stuff, we have services for that. My recommendation is, and I think ChatGPT got us 'cause I see people that have nothing to do with AI, and can't even spell AI going and trying it out. I think this is the time. Go play with these things, go play with these technologies and find what AI can do to you or for you. And I think Oracle is a great place to start playing with these things. >> Elad, thank you. Appreciate you sharing your vision of making Oracle the Netflix of AI. Love that and really appreciate your time. >> Awesome. Thank you. Thank you for having me. >> Okay. Thanks for watching this Cube conversation. This is Dave Vellante. We'll see you next time. (gentle music playing)
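To make the "bring your data, call an API, get a model's answer" pattern described in this conversation a little more concrete, here is a minimal sketch in Python. The endpoint URL, header, feature name, and response shape are hypothetical placeholders, not the real OCI Document Understanding interface; in practice you would use the vendor's SDK and its actual signatures.

```python
# Minimal sketch of the "call an API, get a prediction" pattern for a
# document-extraction service. The URL, auth header, and payload shape
# are hypothetical placeholders, not a real OCI API.
import base64
import pathlib

import requests

SERVICE_URL = "https://example.invalid/document-ai/v1/analyze"  # placeholder
API_TOKEN = "replace-with-a-real-credential"                    # placeholder


def extract_receipt_fields(image_path: str) -> dict:
    """Send a receipt image to a document-AI endpoint and return parsed fields."""
    payload = {
        "document": base64.b64encode(pathlib.Path(image_path).read_bytes()).decode(),
        "features": ["KEY_VALUE_EXTRACTION"],  # assumed feature name
    }
    response = requests.post(
        SERVICE_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


if __name__ == "__main__":
    fields = extract_receipt_fields("receipt.jpg")
    print(fields)
```

The design point is that training, scaling, and serving stay behind the service; the application only ships its document and consumes the extracted fields.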

Published Date : Jan 24 2023

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Netflix | ORGANIZATION | 0.99+
Oracle | ORGANIZATION | 0.99+
Nvidia | ORGANIZATION | 0.99+
Dave Vellante | PERSON | 0.99+
Elad Ziklik | PERSON | 0.99+
Microsoft | ORGANIZATION | 0.99+
two | QUANTITY | 0.99+
Safra Catz | PERSON | 0.99+
Elad | PERSON | 0.99+
thousands | QUANTITY | 0.99+
Anaconda | ORGANIZATION | 0.99+
two part | QUANTITY | 0.99+
fourth season | QUANTITY | 0.99+
House of Cards | TITLE | 0.99+
Lego | ORGANIZATION | 0.99+
second part | QUANTITY | 0.99+
Google | ORGANIZATION | 0.99+
first seasons | QUANTITY | 0.99+
Seinfeld | TITLE | 0.99+
Last month | DATE | 0.99+
third season | QUANTITY | 0.99+
four hour | QUANTITY | 0.99+
last week | DATE | 0.99+
Hebrew | OTHER | 0.99+
Las Vegas | LOCATION | 0.99+
last October | DATE | 0.99+
OCI | ORGANIZATION | 0.99+
three years | QUANTITY | 0.99+
both | QUANTITY | 0.99+
two fronts | QUANTITY | 0.99+
first part | QUANTITY | 0.99+
Juan Loza | PERSON | 0.99+
Founder | TITLE | 0.99+
four | DATE | 0.99+
six weeks ago | DATE | 0.99+
today | DATE | 0.99+
two years | QUANTITY | 0.99+
python | TITLE | 0.99+
five | QUANTITY | 0.99+
a year | QUANTITY | 0.99+
six months ago | DATE | 0.99+
two developers | QUANTITY | 0.99+
first | QUANTITY | 0.98+
Python | TITLE | 0.98+
H100s | COMMERCIAL_ITEM | 0.98+
five years ago | DATE | 0.98+
one | QUANTITY | 0.98+
Friends | TITLE | 0.98+
one guy | QUANTITY | 0.98+
10 | QUANTITY | 0.97+

Lee Klarich, Palo Alto Networks | Palo Alto Networks Ignite22


 

>>theCUBE presents Ignite 22, brought to you by Palo Alto Networks. >>Good morning. Live from the MGM Grand, it's theCUBE at Palo Alto Networks Ignite 2022. Lisa Martin here with Dave Vellante. Day two, Dave, of our coverage, our last live day of the year, which I can't believe. Lots of good news coming out from Palo Alto Networks. We're gonna sit down with its chief product officer next and dissect all of that. >>Yeah. You know, oftentimes in events like this, day two is product day. And look, it's all about products and sales. I mean, that's the golden rule. Get the product right, get the sales right, and everything else will take care of itself. So let's talk product. >>Yeah, let's talk product. Lee Klarich joins us, the chief product officer at Palo Alto Networks. Welcome, Lee. Great to have you. >>Thank you so much. >>So we didn't get to see your keynote yesterday, but we heard some of it. You know, we've been talking about the threat landscape, the challenges. We had Wendi from Unit 42 on yesterday. We had Nikesh on, and Nir, talking about the massive challenges in the threat landscape. But we understand that, despite that, you are optimistic. >>I am. >>Talk about your optimism given the massive challenges that every organization is facing today. >>Look, cybersecurity's hard, and often in the cybersecurity industry a lot of people get really focused on what the threat actors are doing and why they're successful. We investigate breaches, and it just starts to feel somewhat overwhelming for a lot of folks. And I just happen to think a little bit differently. I look at it and I think it's actually a solvable problem. >>Talk about cyber resilience. How does Palo Alto Networks define that, and how does it help customers achieve it? 'Cause that's the holy grail these days. >>Yes. Look, the way I think about cyber resilience is basically in two pieces. One, it's all about how do we prevent the threat actors from actually being successful in the first place. Second, we also have to be prepared for what happens if they do find a way to get through, and how do we make sure that, if that happens, the blast radius is as narrowly contained as possible. And so the way that we approach this, I kind of think in terms of three core principles. Number one, we have to have amazing technology, and we have to constantly keep up with, and ideally stay ahead of, what attackers are doing. That's a big part of my job as the chief product officer, right? Second, one of the big transformations that's happened is the advent of AI, and the opportunity, as long as we do a great job of collecting great data, to drive AI and machine learning models that can start to be used to our advantage as defenders, and then further use that to drive automation. >>So we take the human out of the response as much as possible. What that allows us to do is actually start using AI and automation to disrupt attackers as it's happening. The third piece then becomes natively integrating these capabilities into a platform. And when we do that, it allows us to make sure that we are consistently delivering cybersecurity everywhere it needs to happen, that we don't have gaps. So great tech, AI and automation, delivered natively integrated through platforms. This is how we achieve cyber resilience. >>So I like the positivity.
In fact, Steven Schmidt, who's now the CSO of, of Amazon, you know, Steven, and it was the CSO at AWS at the time, the first reinforced, he stood up on stage and said, listen, this narrative that's all gloom and doom is not the right approach. We actually are doing a good job and we have the capability. So I was like, yeah, you know, okay. I'm, I'm down with that. Now when I, my question is around the, the portfolio. I, I was looking at, you know, some of your alternatives and options and the website. I mean, you got network security, cloud security, you got sassy, you got capp, you got endpoint, pretty much everything. You got cider security, which you just recently acquired for, you know, this whole shift left stuff, you know, nothing in there on identity yet. That's good. You partner for that, but, so could you describe sort of how you think about the portfolio from a product standpoint? How you continue to evolve it and what's the direction? Yes. >>So the, the, the cybersecurity industry has long had this, I'm gonna call it a major flaw. And the major flaw of the cybersecurity industry has been that every time there is a problem to be solved, there's another 10 or 20 startups that get funded to solve that problem. And so pretty soon what you have is you're, if you're a customer of this is you have 50, a hundred, the, the record is over 400 different cybersecurity products that as a customer you're trying to operationalize. >>It's not a good record to have. >>No, it's not a good record. No. This is, this is the opposite of Yes. Not a good personal best. So the, so the reason I start there in answering your question is the, the way that, so that's one end of the extreme, the other end of the extreme view to say, is there such a thing as a single platform that does everything? No, there's not. That would be nice. That was, that sounds nice. But the reality is that cybersecurity has to be much broader than any one single thing can do. And so the, the way that we approach this is, is three fundamental areas that, that we, Palo Alto Networks are going to be the best at. One is network security within network security. This includes hardware, NextGen, firewalls, software NextGen, firewalls, sassy, all the different security services that tie into that. All of that makes up our network security platforms. >>So everything to do with network security is integrated in that one place. Second is around cloud security. The shift to the cloud is happening is very real. That's where Prisma Cloud takes center stage. C a P is the industry acronym. If if five letters thrown together can be called an acronym. The, so cloud native application protection platform, right? So this is where we bring all of the different cloud security capabilities integrated together, delivered through one platform. And then security, security operations is the third for us. This is Cortex. And this is where we bring together endpoint security, edr, ndr, attack, surface management automation, all of this. And what we had, what we announced earlier this year is x Im, which is a Cortex product for actually integrating all of that together into one SOC transformation platform. So those are the three platforms, and that's how we deliver much, much, much greater levels of native integration of capabilities, but in a logical way where we're not trying to overdo it. >>And cider will fit into two or three >>Into Prisma cloud into the second cloud to two. Yeah. 
As part of the shift left strategy of how we secure makes sense applications in the cloud >>When you're in customer conversations. You mentioned the record of 400 different product. That's crazy. Nash was saying yesterday between 30 and 50 and we talked with him and near about what's realistic in terms of getting organizations to, to be able to consolidate. I'd love to understand what does cybersecurity transformation look like for the average organization that's running 30 to 50 point >>Solutions? Yeah, look, 30 to 50 is probably, maybe normal. A hundred is not unusual. Obviously 400 is the extreme example. But all of those are, those numbers are too big right now. I think, I think realistic is high. Single digits, low double digits is probably somewhat realistic for most organizations, the most complex organizations that might go a bit above that if we're really doing a good job. That's, that's what I think. Now second, I do really want to point out on, on the product guy. So, so maybe this is just my way of thinking, consolidation is an outcome of having more tightly and natively integrated capabilities. Got you. And the reason I flip that around is if I just went to you and say, Hey, would you like to consolidate? That just means maybe fewer vendors that that helps the procurement person. Yes. You know, have to negotiate with fewer companies. Yeah. Integration is actually a technology statement. It's delivering better outcomes because we've designed multiple capabilities to work together natively ourselves as the developers so that the customer doesn't have to figure out how to do it. It just happens that by, by doing that, the customer gets all this wonderful technical benefit. And then there's this outcome sitting there called, you've just consolidated your complexity. How >>Specialized is the customer? I think a data pipelines, and I think I have a data engineer, have a data scientists, a data analyst, but hyper specialized roles. If, if, let's say I have, you know, 30 or 40, and one of 'em is an SD wan, you know, security product. Yeah. I'm best of breed an SD wan. Okay, great. Palo Alto comes in as you, you pointed out, I'm gonna help you with your procurement side. Are there hyper specialized individuals that are aligned to that? And how that's kind of part A and B, how, assuming that's the case, how does that integration, you know, carry through to the business case? So >>Obviously there are specializations, this is the, and, and cybersecurity is really important. And so there, this is why there had, there's this tendency in the past to head toward, well I have this problem, so who's the best at solving this one problem? And if you only had one problem to solve, you would go find the specialist. The, the, the, the challenge becomes, well, what do you have a hundred problems to solve? I is the right answer, a hundred specialized solutions for your a hundred problems. And what what I think is missing in this approach is, is understanding that almost every problem that needs to be solved is interconnected with other problems to be solved. It's that interconnectedness of the problems where all of a sudden, so, so you mentioned SD wan. Okay, great. I have Estee wan, I need it. Well what are you connecting SD WAN to? >>Well, ideally our view is you would connect SD WAN and branch to the cloud. Well, would you run in the cloud? 
Well, in our case, we can take our SD wan, connect it to Prisma access, which is our cloud security solution, and we can natively integrate those two things together such that when you use 'em together, way easier. Right? All of a sudden we took what seemed like two separate problems. We said, no, actually these problems are related and we can deliver a solution where those, those things are actually brought together. And that's just one simple example, but you could, you could extend that across a lot of these other areas. And so that's the difference. And that's how the, the, the mindset shift that is happening. And, and I I was gonna say needs to happen, but it's starting to happen. I'm talking to customers where they're telling me this as opposed to me telling them. >>So when you walk around the floor here, there's a visual, it's called a day in the life of a fuel member. And basically what it has, it's got like, I dunno, six or seven different roles or personas, you know, one is management, one is a network engineer, one's a coder, and it gives you an X and an O. And it says, okay, put the X on things that you spend your time doing, put the o on things that you wanna spend your time doing a across all different sort of activities that a SecOps pro would do. There's Xs and O's in every one of 'em. You know, to your point, there's so much overlap going on. This was really difficult to discern, you know, any kind of consistent pattern because it, it, it, unlike the hyper specialization and data pipelines that I just described, it, it's, it's not, it, it, there's way more overlap between those, those specialization roles. >>And there's a, there's a second challenge that, that I've observed and that we are, we've, we've been trying to solve this and now I'd say we've become, started to become a lot more purposeful in, in, in trying to solve this, which is, I believe cybersecurity, in order for cyber security vendors to become partners, we actually have to start to become more opinionated. We actually have to start, guys >>Are pretty opinionated. >>Well, yes, but, but the industry large. So yes, we're opinionated. We build these products, but that have, that have our, I'll call our opinions built into it, and then we, we sell the, the product and then, and then what happens? Customer says, great, thank you for the product. I'm going to deploy it however I want to, which is fine. Obviously it's their choice at the end of the day, but we actually should start to exert an opinion to say, well, here's what we would recommend, here's why we would recommend that. Here's how we envisioned it providing the most value to you. And actually starting to build that into the products themselves so that they start to guide the customer toward these outcomes as opposed to just saying, here's a product, good luck. >>What's, what's the customer lifecycle, not lifecycle, but really kind of that, that collaboration, like it's one thing to, to have products that you're saying that have opinions to be able to inform customers how to deploy, how to use, but where is their feedback in this cycle of product development? >>Oh, look, my, this, this is, this is my life. I'm, this is, this is why I'm here. This is like, you know, all day long I'm meeting with customers and, and I share what we're doing. 
But, but it's, it's a, it's a 50 50, I'm half the time I'm listening as well to understand what they're trying to do, what they're trying to accomplish, and how, what they need us to do better in order to help them solve the problem. So the, the, and, and so my entire organization is oriented around not just telling customers, here's what we did, but listening and understanding and bringing that feedback in and constantly making the products better. That's, that's the, the main way in which we do this. Now there's a second way, which is we also allow our products to be customized. You know, I can say, here's our best practices, we see it, but then allowing our customer to, to customize that and tailor it to their environment, because there are going to be uniquenesses for different customers in parti, we need more complex environments. Explain >>Why fire firewalls won't go away >>From your perspective. Oh, Nikesh actually did a great job of explaining this yesterday, and although he gave me credit for it, so this is like a, a circular kind of reference here. But if you think about the firewalls slightly more abstract, and you basically say a NextGen firewalls job is to inspect every connection in order to make sure the connection should be allowed. And then if it is allowed to make sure that it's secure, >>Which that is the definition of an NextGen firewall, by the way, exactly what I just said. Now what you noticed is, I didn't describe it as a hardware device, right? It can be delivered in hardware because there are environments where you need super high throughput, low latency, guess what? Hardware is the best way of delivering that functionality. There's other use cases cloud where you can't, you, you can't ship hardware to a cloud provider and say, can you install this hardware in front of my cloud? No, no, no. You deployed in a software. So you take that same functionality, you instantly in a software, then you have other use cases, branch offices, remote workforce, et cetera, where you say, actually, I just want it delivered from the cloud. This is what sassy is. So when I, when I look at and say, the firewall's not going away, what, what, what I see is the functionality needed is not only not going away, it's actually expanding. But how we deliver it is going to be across these three form factors. And then the customer's going to decide how they need to intermix these form factors for their environment. >>We put forth this notion of super cloud a while about a year ago. And the idea being you're gonna leverage the hyperscale infrastructure and you're gonna build a, a, you're gonna solve a common problem across clouds and even on-prem, super cloud above the cloud. Not Superman, but super as in Latin. But it turned into this sort of, you know, superlative, which is fun. But the, my, my question to you is, is, is, is Palo Alto essentially building a common cross-cloud on-prem, presumably out to the edge consistent experience that we would call a super cloud? >>Yeah, I don't know that we've ever used the term surfer cloud to describe it. Oh, you don't have to, but yeah. But yes, based on how you describe it, absolutely. And it has three main benefits that I describe to customers all the time. The first is the end user experience. So imagine your employee, and you might work from the office, you might work from home, you might work while from, from traveling and hotels and conferences. And, and by the way, in one day you might actually work from all of those places. 
So, so the first part is the end user experience becomes way better when it doesn't matter where they're working from. They always get the same experience, huge benefit from productivity perspective, no second benefit security operations. You think about the, the people who are actually administering these policies and analyzing the security events. >>Imagine how much better it is for them when it's all common and consistent across everywhere that has to happen. Cloud, on-prem branch, remote workforce, et cetera. So there's a operational benefit that is super valuable. Third, security benefit. Imagine if in this, this platform-based approach, if we come out with some new amazing innovation that is able to detect and block, you know, new types of attacks, guess what, we can deliver that across hardware, software, and sassi uniformly and keep it all up to date. So from a security perspective, way better than trying to figure out, okay, there's some new technology, you know, does my hardware provider have that technology or not? Does my soft provider? So it's bringing that in to one place. >>From a developer perspective, is there a, a, a PAs layer, forgive me super PAs, that a allows the developers to have a common experience across irrespective of physical location with the explicit purpose of serving the objective of your platform. >>So normally when I think of the context of developers, I'm thinking of the context of, of the people who are building the applications that are being deployed. And those applications may be deployed in a data center, increasing the data centers, depending private clouds might be deployed into, into public cloud. It might even be hybrid in nature. And so if you think about what the developer wants, the developer actually wants to not have to think about security, quite frankly. Yeah. They want to think about how do I develop the functionality I need as quickly as possible with the highest quality >>Possible, but they are being forced to think about it more and more. Well, but anyway, I didn't mean to >>Interrupt you. No, it's a, it is a good, it's a, it's, it's a great point. The >>Well we're trying to do is we're trying to enable our security capabilities to work in a way that actually enables what the developer wants that actually allows them to develop faster that actually allows them to focus on the things they want to focus. And, and the way we do that is by actually surfacing the security information that they need to know in the tools that they use as opposed to trying to bring them to our tools. So you think about this, so our customer is a security customer. Yet in the application development lifecycle, the developer is often the user. So we, we we're selling, we're so providing a solution to security and then we're enabling them to surface it in the developer tools. And by, by doing this, we actually make life easier for the developers such that they're not actually thinking about security so much as they're just saying, oh, I pulled down the wrong open source package, it's outdated, it has vulnerabilities. I was notified the second I did it, and I was told which one I should pull down. So I pulled down the right one. Now, if you're a developer, do you think that's security getting your way? Not at all. No. If you're a developer, you're thinking, thank god, thank you, thank, thank you. Yeah. You told me at a point where it was easy as opposed to waiting a week or two and then telling me where it's gonna be really hard to fix it. Yeah. 
Nothing more than that. So maybe you'd be talking to Terraform or some other HashiCorp, you know, environment. I got it. Okay. >>Absolutely. >>We're 30 seconds, we're almost out of time, but I'd love to get your snapshot. Here we are at the end of calendar 2022. We know you're optimistic, and this threat landscape is obviously gonna see more dynamics next year. What kind of nuggets can you drop about what we might hear and see in '23? >>You're gonna see, across everything we do, a lot more focus on the use of AI and machine learning to drive automated outcomes for our customers. That's going to be the big transformation. It'll be a multi-year transformation, but you're gonna see significant progress in the next 12 months. >>All right, well, what will be the sign of that progress, if I had to make a prediction? >>Better security with less effort. >>Okay, great. I feel like that's something we can measure. >>I feel like that's a mic drop moment. Lee, it's been great having you on the program. Thank you for walking us through in such great detail what's going on in the organization, what you're doing for customers, and how you're meeting the developers where they are. We'll have to have you back; there's just too much to unpack. Thank you both so much. >>Actually, our pleasure. >>For Lee Klarich and Dave Vellante, I'm Lisa Martin. You're watching theCUBE live from Palo Alto Networks Ignite 22, theCUBE, the leader in live, emerging and enterprise tech coverage.
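The shift-left workflow Lee describes, where a developer is told about a vulnerable open source package the moment they pull it in rather than a week later, can be approximated in any CI pipeline with generic tooling. A minimal sketch, assuming the open-source pip-audit scanner is installed and the project pins dependencies in requirements.txt; Prisma Cloud's own developer integrations would surface the same signal inside the tools developers already use:

```python
# Minimal shift-left dependency gate for CI: scan pinned Python dependencies for
# known vulnerabilities and fail the build if anything is flagged.
# Assumes `pip install pip-audit` and a requirements.txt at the repo root; a
# commercial scanner would slot into the same place in the pipeline.
import subprocess
import sys

scan = subprocess.run(
    ["pip-audit", "--requirement", "requirements.txt"],
    capture_output=True,
    text=True,
)

print(scan.stdout)
# pip-audit exits non-zero when it finds vulnerabilities (or cannot complete the scan).
if scan.returncode != 0:
    print("Vulnerable or unscannable dependencies found -- fix before merging.", file=sys.stderr)
    sys.exit(1)
print("No known vulnerabilities in pinned dependencies.")
```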

Published Date : Dec 14 2022

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
Dave Valante | PERSON | 0.99+
Lee Claridge | PERSON | 0.99+
Lee Klarich | PERSON | 0.99+
Dave | PERSON | 0.99+
Palo Alto Networks | ORGANIZATION | 0.99+
Lee Cler | PERSON | 0.99+
Nash | PERSON | 0.99+
Steven | PERSON | 0.99+
Lee | PERSON | 0.99+
Amazon | ORGANIZATION | 0.99+
AWS | ORGANIZATION | 0.99+
Steven Schmidt | PERSON | 0.99+
Palo Alto Networks | ORGANIZATION | 0.99+
yesterday | DATE | 0.99+
30 | QUANTITY | 0.99+
a week | QUANTITY | 0.99+
30 seconds | QUANTITY | 0.99+
three platforms | QUANTITY | 0.99+
Second | QUANTITY | 0.99+
one platform | QUANTITY | 0.99+
two pieces | QUANTITY | 0.99+
two | QUANTITY | 0.99+
next year | DATE | 0.99+
third | QUANTITY | 0.99+
first | QUANTITY | 0.99+
first part | QUANTITY | 0.99+
50 | QUANTITY | 0.99+
five letters | QUANTITY | 0.99+
one problem | QUANTITY | 0.99+
three | QUANTITY | 0.99+
six | QUANTITY | 0.99+
two separate problems | QUANTITY | 0.99+
two things | QUANTITY | 0.99+
third piece | QUANTITY | 0.99+
both | QUANTITY | 0.99+
NextGen | ORGANIZATION | 0.99+
one | QUANTITY | 0.99+
10 | QUANTITY | 0.99+
Third | QUANTITY | 0.99+
Terraform | ORGANIZATION | 0.99+
second challenge | QUANTITY | 0.98+
second way | QUANTITY | 0.98+
second | QUANTITY | 0.98+
20 startups | QUANTITY | 0.98+
400 | QUANTITY | 0.98+
seven | QUANTITY | 0.98+
second cloud | QUANTITY | 0.98+
One | QUANTITY | 0.97+
The Cube Live | TITLE | 0.97+
over 400 different cybersecurity products | QUANTITY | 0.97+
one place | QUANTITY | 0.96+
one day | QUANTITY | 0.96+
day two | QUANTITY | 0.96+
today | DATE | 0.96+
40 | QUANTITY | 0.96+
one simple example | QUANTITY | 0.95+
three fundamental areas | QUANTITY | 0.94+
next 12 months | DATE | 0.94+
earlier this year | DATE | 0.93+
three main benefits | QUANTITY | 0.93+
Wendy | PERSON | 0.91+

Subbu Iyer, Aerospike | AWS re:Invent 2022


 

>>Hey everyone, welcome to theCUBE's coverage of AWS re:Invent 2022. Lisa Martin here with you with Subbu Iyer, one of our alumni, who's now the CEO of Aerospike. Subbu, great to have you on the program. Thank you for joining us. >>Great as always to be on theCUBE. Lisa, good to meet you. >>So, you know, every company these days has got to be a data company, whether it's a retailer, a manufacturer, a grocer, an automotive company. But for a lot of companies, data is underutilized, yet it's a huge asset that is value added. Why do you think companies are struggling so much to make data a value-added asset? >>Well, you know, we see this across the board when I talk to customers and prospects. There's a desire from the business and from IT to actually leverage data to really fuel newer applications, newer services, newer business lines, if you will, for companies. I think the struggle is, one, the plethora of data that is created. You know, surveys say that over the next three years, data is gonna grow, you know, by 2025, to around 175 zettabytes, right? A hundred and seventy-five zettabytes of data is gonna be created. And that's really a growth of north of 30% year over year. But the more important and the interesting thing is the real-time component of that data is actually growing at, you know, 35% CAGR. And what enterprises desire is decisions that are made in real time or near real time. >>And a lot of the challenges that do exist today is that either the infrastructure that enterprises have in place was never built to actually manipulate data in real time. The second is really the ability to actually put something in place which can handle spikes yet be cost efficient, if you will. So you can build for really peak loads, but then it's very expensive to operate that particular service at normal loads. So how do you build something which actually works for you in both cases, so to speak? And the last point that we see out there is, even if you're able to, you know, bring all that data, you don't have the processing capability to run through that data. So as a result, most enterprises struggle with, one, capturing the data, you know, making decisions from it in real time, and really operating it at the cost point that they need to operate it at. >>You know, you bring up a great point with respect to real-time data access. And I think one of the things that we've learned the last couple of years is that access to real-time data, it's not a nice-to-have anymore. It's business critical for organizations in any industry. Talk about that as one of the challenges that organizations are facing. >>Yeah. When we started Aerospike, right when the company started, it started with the premise that data is gonna grow, number one, exponentially. Two, when applications open up to the internet, there's gonna be a flood of users and demands on those applications. And that was true primarily when we started the company in the ad tech vertical. So ad tech was the first vertical where there was a lot of data both on the supply side and the demand side, from an inventory of ads that were available. And on the other hand, they had like microseconds or milliseconds in which they could make a decision on which ad to put in front of you and I so that we would click or engage with that particular ad.
But over the last three to five years, what we've seen is as digitization has actually permeated every industry out there, the need to harness data in real time is pretty much present in every industry. >>Whether that's retail, whether that's financial services, telecommunications, e-commerce, gaming and entertainment. Every industry has a desire. One, the innovative companies, the small companies rather, are innovating at a pace and standing up new businesses to compete with the larger companies in each of these verticals. And the larger companies don't wanna be left behind. So they're standing up their own competing services or getting into new lines of business that really harness and are driven by real time data. So this compelling pressures, one, the customer exp you know, customer experience is paramount and we as customers expect answers in, you know, an instant in real time. And on the other hand, the way they make decisions is based on a large data set because you know, larger data sets actually propel better decisions. So there's competing pressures here, which essentially drive the need. One from a business perspective, two from a customer perspective to harness all of this data in real time. So that's what's driving an inces need to actually make decisions in real or near real time. >>You know, I think one of the things that's been in short supply over the last couple of years is patients we do expect as consumers, whether we're in our business lives, our personal lives that we're going to be getting, be given information and data that's relevant, it's personal to help us make those real time decisions. So having access to real time data is really business critical for organizations across any industries. Talk about some of the main capabilities that modern data applications and data platforms need to have. What are some of the key capabilities of a modern data platform that need to be delivered to meet demanding customer expectations? >>So, you know, going back to your initial question Lisa, around why is data really a high value but underutilized or underleveraged asset? One of the reasons we see is a lot of the data platforms that, you know, some of these applications were built on have been then around for a decade plus and they were never built for the needs of today, which is really driving a lot of data and driving insight in real time from a lot of data. So there are four major capabilities that we see that are essential ingredients of any modern data platform. One is really the ability to, you know, operate at unlimited scale. So what we mean by that is really the ability to scale from gigabytes to even petabytes without any degradation in performance or latency or throughput. The second is really, you know, predictable performance. So can you actually deliver predictable performance as your data size grows or your throughput grows or your concurrent user on that application of service grows? >>It's really easy to build an application that operates at low scale or low throughput or low concurrency, but performance usually starts degrading as you start scaling one of these attributes. The third thing is the ability to operate and always on globally resilient application. And that requires a, a really robust data platform that can be up on a five, nine basis globally, can support global distribution because a lot of these applications have global users. And the last point is, goes back to my first answer, which is, can you operate all of this at a cost point? 
Which is not prohibitive, but it makes sense from a TCO perspective. Cuz a lot of times what we see is people make choices of data platforms and as ironically their service or applications become more successful and more users join their journey, the revenue starts going up, the user base starts going up, but the cost basis starts crossing over the revenue and they're losing money on the service, ironically, as the service becomes more popular. So really unlimited scale, predictable performance always on, on a globally resilient basis and low tco. These are the four essential capabilities of any modern data platform. >>So then talk to me with those as the four main core functionalities of a modern data platform. How does aerospace deliver that? >>So we were built, as I said, from the from day one to operate at unlimited scale and deliver predictable performance. And then over the years as we work with customers, we build this incredible high availability capability which helps us deliver the always on, you know, operations. So we have customers who are, who have been on the platform 10 years with no downtime for example, right? So we are talking about an amazing continuum of high availability that we provide for customers who operate these, you know, globally resilient services. The key to our innovation here is what we call the hybrid memory architecture. So, you know, going a little bit technically deep here, essentially what we built out in our architecture is the ability on each node or each server to treat a bank of SSDs or solid state devices as essentially extended memory. So you're getting memory performance, but you're accessing these SSDs, you're not paying memory prices, but you're getting memory performance as a result of that. >>You can attach a lot more data to each node or each server in your distributed cluster. And when you kind of scale that across basically a distributed cluster you can do with aerospike, the same things at 60 to 80% lower server count and as a result 60 to 80% lower TCO compared to some of the other options that are available in the market. Then basically, as I said, that's the key kind of starting point to the innovation. We layer around capabilities like, you know, replication change, data notification, you know, synchronous and asynchronous replication. The ability to actually stretch a single cluster across multiple regions. So for example, if you're operating a global service, you can have a single aerospace cluster with one node in San Francisco, one northern New York, another one in London. And this would be basically seamlessly operating. So that, you know, this is strongly consistent. >>Very few no SQL data platforms are strongly consistent or if they are strongly consistent, they will actually suffer performance degradation. And what strongly consistent means is, you know, all your data is always available, it's guaranteed to be available, there is no data lost anytime. So in this configuration that I talked about, if the node in London goes down, your application still continues to operate, right? Your users see no kind of downtime and you know, when London comes up, it rejoins the cluster and everything is back to kind of the way it was before, you know, London left the cluster so to speak. So the op, the ability to do this globally resilient, highly available kind of model is really, really powerful. A lot of our customers actually use that kind of a scenario and we offer other deployment scenarios from a higher availability perspective. 
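For readers who want to see what this looks like from the application side, here is a minimal sketch using the open-source Aerospike Python client (`pip install aerospike`) against a locally reachable cluster. The hybrid memory architecture and the strong-consistency mode described here are namespace-level server settings; the client-side read/write path is the same either way, and the namespace, set, and bin names below are just placeholders:

```python
# Minimal Aerospike read/write path. Indexes live in RAM and records on SSD
# (the hybrid memory architecture), but that is server configuration; client
# code is unchanged. Assumes a cluster on localhost:3000 with a "test" namespace.
import aerospike

config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

# A record key is (namespace, set, user key).
key = ("test", "profiles", "user-123")

# Write a small profile record, then read it back.
client.put(key, {"name": "Alice", "last_score": 0.87, "segments": ["gaming", "retail"]})
_, meta, record = client.get(key)
print(meta["gen"], record)

client.close()
```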
So everything starts with HMA or hybrid memory architecture and then we start building out a lot of these other capabilities around the platform. >>And then over the years, what our customers have guided us to do is as they're putting together a modern kind of data infrastructure, we don't live in a silo. So aerospace gets deployed with other technologies like streaming technologies or analytics technologies. So we built connectors into Kafka, pulsar, so that as you're ingesting data from a variety of data sources, you can ingest them at very high ingest speeds and store them persistently into Aerospike. Once the data is in Aerospike, you can actually run spark jobs across that data in a, in a multithreaded parallel fashion to get really insight from that data at really high, high throughput and high speed, >>High throughput, high speed, incredibly important, especially as today's landscape is increasingly distributed. Data centers, multiple public clouds, edge IOT devices, the workforce embracing more and more hybrid these days. How are you ex helping customers to extract more value from data while also lowering costs? Go into some customer examples cause I know you have some great ones. >>Yeah, you know, I think we have, we have built an amazing set of customers and customers actually use us for some really mission critical applications. So, you know, before I get into specific customer examples, let me talk to you about some of kind of the use cases which we see out there. We see a lot of aerospace being used in fraud detection. We see us being used in recommendations and since we use get used in customer data profiles or customer profiles, customer 360 stores, you know, multiplayer gaming and entertainment, these are kind of the repeated use case digital payments. We power most of the digital payment systems across the globe. Specific example from a, from a specific example perspective, the first one I would love to talk about is PayPal. So if you use PayPal today, then you know when you actually paying somebody your transaction is, you know, being sent through aero spike to really decide whether this is a fraudulent transaction or not. >>And when you do that, you know, you and I as a customer not gonna wait around for 10 seconds for PayPal to say yay or me, we expect, you know, the decision to be made in an instant. So we are powering that fraud detection engine at PayPal for every transaction that goes through PayPal before us, you know, PayPal was missing out on about 2% of their SLAs, which was essentially millions of dollars, which they were losing because, you know, they were letting transactions go through and taking the risk that it, it's not a fraudulent transaction with the aerospace. They can now actually get a much better sla and the data set on which they compute the fraud score has gone up by, you know, several factors. So by 30 x if you will. So not only has the data size that is powering the fraud engine actually grown up 30 x with Aerospike. Yeah. But they're actually making decisions in an instant for, you know, 99.95% of their transactions. So that's, >>And that's what we expect as consumers, right? We want to know that there's fraud detection on the swipe regardless of who we're interacting with. >>Yes. And so that's a, that's a really powerful use case and you know, it's, it's a great customer, great customer success story. The other one I would talk about is really Wayfair, right? From retail and you know, from e-commerce. 
So everybody knows Wayfair global leader in really, you know, online home furnishings and they use us to power their recommendations engine and you know, it's basically if you're purchasing this, people who bought this but also bought these five other things, so on and so forth, they have actually seen the card size at checkout go by up to 30% as a result of actually powering their recommendations in G by through Aerospike. And they, they were able to do this by reducing the server count by nine x. So on one ninth of the servers that were there before aerospace, they're now powering their recommendation engine and seeing card size checkout go up by 30%. Really, really powerful in terms of the business outcome and what we are able to, you know, drive at Wayfair >>Hugely powerful as a business outcome. And that's also what the consumer wants. The consumer is expecting these days to have a very personalized, relevant experience that's gonna show me if I bought this, show me something else that's related to that. We have this expectation that needs to be really fueled by technology. >>Exactly. And you know, another great example you asked about, you know, customer stories, Adobe, who doesn't know Adobe, you know, they, they're on a, they're on a mission to deliver the best customer experience that they can and they're talking about, you know, great customer 360 experience at scale and they're modernizing their entire edge compute infrastructure to support this. With Aerospike going to Aerospike, basically what they have seen is their throughput go up by 70%, their cost has been reduced by three x. So essentially doing it at one third of the cost while their annual data growth continues at, you know, about north of 30%. So not only is their data growing, they're able to actually reduce their cost to actually deliver this great customer experience by one third to one third and continue to deliver great customer 360 experience at scale. Really, really powerful example of how you deliver Customer 360 in a world which is dynamic and you know, on a dataset which is constantly growing at north, north of 30% in this case. >>Those are three great examples, PayPal, Wayfair, Adobe talking about, especially with Wayfair when you talk about increasing their cart checkout sizes, but also with Adobe increasing throughput by over 70%. I'm looking at my notes here. While data is growing at 32%, that's something that every organization has to contend with data growth is continuing to scale and scale and scale. >>Yep. I, I'll give you a fun one here. So, you know, you may not have heard about this company, it's called Dream 11 and it's a company based out of India, but it's a very, you know, it's a fun story because it's the world's largest fantasy sports platform and you know, India is a nation which is cricket crazy. So you know, when, when they have their premier league going on, you know, there's millions of users logged onto the dream alone platform building their fantasy lead teams and you know, playing on that particular platform, it has a hundred million users, a hundred million plus users on the platform, 5.5 million concurrent users and they have been growing at 30%. So they are considered a, an amazing success story in, in terms of what they have accomplished and the way they have architected their platform to operate at scale. 
And all of that is really powered by Aerospike. Think about that: they are able to deliver all of this and support a hundred million users, 5.5 million concurrent users, all with, you know, 99-plus percent of their transactions completing in less than one millisecond. Just an incredible success story. Not a brand that is, you know, world renowned, but at least, you know, from what we see out there, it's an amazing success story of operating at scale. >>Amazing success story, huge business outcomes. Last question for you, as we're almost out of time, is talk a little bit about the Aerospike-AWS partnership, Graviton2, better together. What are you guys doing together there? >>Great partnership. AWS has multiple layers in terms of partnerships. So, you know, we engage with AWS at the executive level. They plan out, really, the rollout of new instances in partnership with us, making sure that, you know, those instance types work well for us. And then we just released support for Aerospike on the Graviton platform, and we just announced a benchmark of Aerospike running on Graviton on AWS. And what we see out there with the benchmark is a 1.6x improvement in price performance and, you know, about an 18% increase in throughput while maintaining a 27% reduction in cost, you know, on Graviton. So this is an amazing story from a price-performance perspective, and performance per watt for greater energy efficiency, which a lot of our customers are starting to talk to us about leveraging to further meet their sustainability targets. So a great story from Aerospike and AWS, not just from a partnership perspective on a technology and an executive level, but also in terms of what joint outcomes we are able to deliver for our customers. >>And it sounds like a great sustainability story. I wish we had more time so we could talk about this, but thank you so much for talking about the main capabilities of a modern data platform, what's needed, why, and how you guys are delivering that. We appreciate your insights and appreciate your time. >>Thank you very much. I mean, if folks are at re:Invent next week or this week, come on and see us at our booth. We are in the data analytics pavilion. You can find us pretty easily. Would love to talk to you. >>Perfect. We'll send them there. So, Subbu, thank you so much for joining me on the program today. We appreciate your insights. >>Thank you, Lisa. >>I'm Lisa Martin. You're watching theCUBE's coverage of AWS re:Invent 2022. Thanks for watching.
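The fraud-check pattern described earlier in this interview, where PayPal has only a few milliseconds to say yes or no, comes down to a latency-budgeted lookup plus a fallback decision. A sketch of that shape, reusing the same open-source Python client as above; the total_timeout read-policy value, the bin names, and the fallback rules are all illustrative assumptions, not PayPal's or Aerospike's actual logic:

```python
# Illustrative latency-budgeted lookup in the spirit of the fraud-check use case:
# give the profile read a few milliseconds and fall back to a conservative
# decision if the budget is blown. Thresholds and bin names are made up.
import aerospike
from aerospike import exception as aero_ex

client = aerospike.client({"hosts": [("127.0.0.1", 3000)]}).connect()

def score_transaction(account_id: str, amount: float) -> str:
    key = ("test", "risk_profiles", account_id)
    try:
        # total_timeout is the overall budget for this read, in milliseconds.
        _, _, profile = client.get(key, policy={"total_timeout": 5})
    except aero_ex.RecordNotFound:
        return "review"   # no history for this account: route to manual review
    except aero_ex.AerospikeError:
        return "approve"  # budget blown or cluster hiccup: fail open (illustrative only)
    if amount > profile.get("max_typical_amount", 0) * 10:
        return "review"
    return "approve"

print(score_transaction("acct-42", 129.99))
client.close()
```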

Published Date : Dec 7 2022

SENTIMENT ANALYSIS :

ENTITIES

Entity | Category | Confidence
Lisa Martin | PERSON | 0.99+
AWS | ORGANIZATION | 0.99+
London | LOCATION | 0.99+
Ira | PERSON | 0.99+
Lisa | PERSON | 0.99+
60 | QUANTITY | 0.99+
Luisa | PERSON | 0.99+
Adobe | ORGANIZATION | 0.99+
San Francisco | LOCATION | 0.99+
PayPal | ORGANIZATION | 0.99+
30% | QUANTITY | 0.99+
70% | QUANTITY | 0.99+
10 seconds | QUANTITY | 0.99+
Wayfair | ORGANIZATION | 0.99+
35% | QUANTITY | 0.99+
Aerospike | ORGANIZATION | 0.99+
each server | QUANTITY | 0.99+
One | QUANTITY | 0.99+
India | LOCATION | 0.99+
27% | QUANTITY | 0.99+
nine | QUANTITY | 0.99+
10 years | QUANTITY | 0.99+
30 x | QUANTITY | 0.99+
32% | QUANTITY | 0.99+
99.95% | QUANTITY | 0.99+
two | QUANTITY | 0.99+
one | QUANTITY | 0.99+
aws | ORGANIZATION | 0.99+
each node | QUANTITY | 0.99+
next week | DATE | 0.99+
2025 | DATE | 0.99+
five | QUANTITY | 0.99+
less than one millisecond | QUANTITY | 0.99+
millions of users | QUANTITY | 0.99+
Subaru | ORGANIZATION | 0.99+
both | QUANTITY | 0.99+
second | QUANTITY | 0.99+
first answer | QUANTITY | 0.99+
one third | QUANTITY | 0.99+
this week | DATE | 0.99+
millions of dollars | QUANTITY | 0.99+
over 70% | QUANTITY | 0.99+
Sabu | PERSON | 0.99+
both users | QUANTITY | 0.99+
three | QUANTITY | 0.98+
today | DATE | 0.98+
80% | QUANTITY | 0.98+
Kafka | TITLE | 0.98+
1.6 x | QUANTITY | 0.98+
northern New York | LOCATION | 0.98+
5.5 million concurrent users | QUANTITY | 0.98+
GRAVITON | ORGANIZATION | 0.98+
hundred million users | QUANTITY | 0.97+
Dream 11 | ORGANIZATION | 0.97+
Two | QUANTITY | 0.97+
each | QUANTITY | 0.97+
Aerospike | TITLE | 0.97+
third thing | QUANTITY | 0.96+
hundred million users | QUANTITY | 0.96+
The Cubes | TITLE | 0.95+
around 175 zetabytes | QUANTITY | 0.95+

AWS re:Invent Show Wrap | AWS re:Invent 2022


 

foreign welcome back to re invent 2022 we're wrapping up four days well one evening and three solid days wall-to-wall of cube coverage I'm Dave vellante John furrier's birthday is today he's on a plane to London to go see his nephew get married his his great Sister Janet awesome family the furriers uh spanning the globe and uh and John I know you wanted to be here you're watching in Newark or you were waiting to uh to get in the plane so all the best to you happy birthday one year the Amazon PR people brought a cake out to celebrate John's birthday because he's always here at AWS re invented his birthday so I'm really pleased to have two really special guests uh former Cube host Cube Alum great wikibon contributor Stu miniman now with red hat still good to see you again great to be here Dave yeah I was here for that cake uh the twitterverse uh was uh really helping to celebrate John's birthday today and uh you know always great to be here with you and then with this you know Awesome event this week and friend of the cube of many time Cube often Cube contributor as here's a cube analyst this week as his own consultancy sarbj johal great to see you thanks for coming on good to see you Dave uh great to see you stu I'm always happy to participate in these discussions and um I enjoy the discussion every time so this is kind of cool because you know usually the last day is a getaway day and this is a getaway day but this place is still packed I mean it's I mean yeah it's definitely lighter you can at least walk and not get slammed but I subjit I'm going to start with you I I wanted to have you as the the tail end here because cause you participated in the analyst sessions you've been watching this event from from the first moment and now you've got four days of the Kool-Aid injection but you're also talking to customers developers Partners the ecosystem where do you want to go what's your big takeaways I think big takeaways that Amazon sort of innovation machine is chugging along they are I was listening to some of the accessions and when I was back to my room at nine so they're filling the holes in some areas but in some areas they're moving forward there's a lot to fix still it doesn't seem like that it seems like we are done with the cloud or The Innovation is done now we are building at the millisecond level so where do you go next there's a lot of room to grow on the storage side on the network side uh the improvements we need and and also making sure that the software which is you know which fits the hardware like there's a specialized software um sorry specialized hardware for certain software you know so there was a lot of talk around that and I attended some of those sessions where I asked the questions around like we have a specialized database for each kind of workload specialized processes processors for each kind of workload yeah the graviton section and actually the the one interesting before I forget that the arbitration was I asked that like why there are so many so many databases and IRS for the egress costs and all that stuff can you are you guys thinking about reducing that you know um the answer was no egress cost is not a big big sort of uh um show stopper for many of the customers but but the from all that sort of little discussion with with the folks sitting who build these products over there was that the plethora of choice is given to the customers to to make them feel that there's no vendor lock-in so if you are using some open source you know um soft software it can be 
on the you know platform side or can be database side you have database site you have that option at AWS so this is a lot there because I always thought that that AWS is the mother of all lock-ins but it's got an ecosystem and we're going to talk about exactly we'll talk about Stu what's working within AWS when you talk to customers and where are the challenges yeah I I got a comment on open source Dave of course there because I mean look we criticized to Amazon for years about their lack of contribution they've gotten better they're doing more in open source but is Amazon the mother of all lock-ins many times absolutely there's certain people inside Amazon I'm saying you know many of us talk Cloud native they're like well let's do Amazon native which means you're like full stack is things from Amazon and do things the way that we want to do things and you know I talk to a lot of customers they use more than one Cloud Dave and therefore certain things absolutely I want to Leverage The Innovation that Amazon has brought I do think we're past building all the main building blocks in many ways we are like in day two yes Amazon is fanatically customer focused and will always stay that way but you know there wasn't anything that jumped out at me last year or this year that was like Wow new category whole new way of thinking about something we're in a vocals last year Dave said you know we have over 200 services and if we listen to you the customer we'd have over two thousand his session this week actually got some great buzz from my friends in the serverless ecosystem they love some of the things tying together we're using data the next flywheel that we're going to see for the next 10 years Amazon's at the center of the cloud ecosystem in the IT world so you know there's a lot of good things here and to your point Dave the ecosystem one of the things I always look at is you know was there a booth that they're all going to be crying in their beer after Amazon made an announcement there was not a tech vendor that I saw this week that was like oh gosh there was an announcement and all of a sudden our business is gone where I did hear some rumbling is Amazon might be the next GSI to really move forward and we've seen all the gsis pushing really deep into supporting Cloud bringing workloads to the cloud and there's a little bit of rumbling as to that balance between what Amazon will do and their uh their go to market so a couple things so I think I think we all agree that a lot of the the announcements here today were taping seams right I call it and as it relates to the mother of all lock-in the reason why I say that it's it's obviously very much a pejorative compare Oracle company you know really well with Amazon's lock-in for Amazon's lock-in is about bringing this ecosystem together so that you actually have Choice Within the the house so you don't have to leave you know there's a there's a lot to eat at the table yeah you look at oracle's ecosystem it's like yeah you know oracle is oracle's ecosystem so so that is how I think they do lock in customers by incenting them not to leave because there's so much Choice Dave I agree with you a thousand I mean I'm here I'm a I'm a good partner of AWS and all of the partners here want to be successful with Amazon and Amazon is open to that it's not our way or get out which Oracle tries how much do you extract from the overall I.T budget you know are you a YouTube where you give the people that help you create a large sum of the money YouTube hasn't been 
all that profitable. Amazon, I think, is doing a good balance where the ecosystem makes money. You know, we used to talk, Dave, about how many dollars VMware makes versus the ecosystem. I think Amazon is a much bigger, you know, VMware 2.0. We used to talk about all the time that with VMware, for every dollar spent on VMware licenses, 15, or 12, or 20 were spent in the ecosystem. I would think the ratio is even higher here, Sarbjeet, and with Oracle I would say it's, I don't know, maybe 1 to 0.5, I don't know. >> But I want to pick up on your discussion about the ecosystem. The partner ecosystem is robust and strong because it's wider. I was not saying that there's no lock-in with Amazon, right? With AWS there's lock-in; there's lock-in with everything, there's lock-in with open source as well. But the point is that the circle is so big you don't feel locked in. And they're playing smart as well. They're bringing in the software, the platforms from open source, they're picking up those packages and saying, we'll bring it in and cater that to you through AWS, make it perform better, and also throw in their custom chips on top of that: hey, this MySQL runs better here. So, like, what do you do? Do you say, oh, Oracle, because it's Oracle's product, if you will, right? So they are, I think, firing on all cylinders from their go-to-market strategy, from their engineering, and they're listening to customers very closely, and that has sort of side effects as well. Listening to customers creates a sprawl of services. They have so many services, and I criticized them last year for calling everything a new service. I said, don't call it a new service, it's a feature of an existing service. >> Sure, a lot of features, a lot of features. Is egress, are egress costs, a real problem, or is it just the on-prem guys picking at the scab? I mean, what do you hear from customers? >> So I mean, Dave, you know, I look at what Corey Quinn talks about all the time, and Amazon's charges on that are more expensive than any of the other cloud providers, partly because Amazon is, and that's probably not a word they'd use, dominant when it comes to the infrastructure space, and therefore they do want to make it a little bit harder to do that. They can get away with it because, you know, we've seen some of the cloud providers have special partnerships where you can actually leave and you're not going to be charged, and Amazon, they've been a little bit more flexible, but absolutely I've heard customers say that they wish... >> Some good tuning and tongue-in-cheek stuff. What else you got? Lay it on us. >> So, okay, this year I think the focus, which had been on the app side, is shifting gradually; this was more focused on the ops side. There was less talk of developers from the main stage, from all sorts of quadrants, if you will, from all keynotes, right? So even Werner this morning, he had a little bit; he was talking about, you know, his job is to rally up the builders, right? So he talks about the "go build." AWS Pipes I thought was kind of cool, and making Glue easier, I thought that was good. You know, I know some folks don't use that; I couldn't attend the whole session, but I heard it in between, right? So it is really adopt or die. You know, I have been a cloud pro for the last 10 years, and I think it's the best model for technology consumption, right? Because of economies of scale, but more importantly because of
division of labor, because of specialization, because you can't afford to hire the best security people, the best, you know, Arm chip designers; you can't, you know. Actually, I came up with a bumper sticker, you guys talked about bumper stickers, I came up with it like the last couple of weeks: innovation favors scale. They have scale, they have innovation, so that's where the innovation is. And again, they actually say the market sets the price. You as a customer don't set the price, the vendor doesn't set the price, the market sets the price. So if somebody's complaining about their margins or egress and all that, I think that's BS. Yeah, I have a few more notes on the partners, if you concur. >> Yeah, Dave, you know, just coming back to some of this commentary about, can Amazon actually enable something we used to call community clouds, with companies like, you know, Goldman and NASDAQ and the like, where industries will actually be able to share data and, you know, expand the usage, and Amazon's going to help drive that API economy forward some. So it's good to see those things, because, you know, we all know all of us are smarter than any single company, together. So again, some of that's open source, but some of that is, you know, I think Amazon allowing innovation to thrive. >> I think the word you're looking for is supercloud there. >> Well, yeah, I mean, Dave, if you want to go there with the supercloud, because you know, there's a metaphor for exactly what you described. NASDAQ, Goldman Sachs, and, you know, a number of other companies. A few weeks ago there was the Berkeley Sky Computing paper, you know, that's a form of supercloud. Dave Linthicum calls it metacloud. I'm not really careful about the naming. I mean, you know, I go back to the challenge we've been working at for a decade, which is the distributed architecture. You know, if you talk about AI architectures, what lives in the cloud, what lives at the edge, where do we train things, where do we do inferences? Locations should matter a lot less. Amazon, you know, I didn't hear a lot about it at this show, but when they came out with things like Local Zones and, oh my gosh, you know, all the things that Amazon is building to push out to the edge, and also enabling that technology and software, and the partner ecosystem helps expand that and pull it in. It's no longer, you know, Dave, it was Hotel California: all of the data eventually is going to end up in the public cloud and lock it in. I don't think that's going to be the case. We know that there will be so much data out at the edge. Amazon absolutely is super important. And some of those examples we're giving, it's not necessarily multi-cloud, but there's collaboration happening, like in the healthcare world, you know, universities and hospitals can all share what they're doing, regardless of, you know, where they live. >> Well, Stephen Armstrong in the analyst session did say that, you know, we're going to talk about multi-cloud, we're not going to lead with it necessarily, but we are going to actually talk about it. And that's different, to your point too, than "in the fullness of time all the data will be in the cloud." That's a new narrative. But go ahead. >> Yeah, actually, Amazon is a leader in the cloud, so if they push the cloud, even if they don't say AWS or Amazon with it, they benefit from it, right? And the narrative is that way, and the proof is there, right? So again, innovation favors scale. There are chips which are
being made for high scale, their software being tweaked for high scale. You, as a Bank of America or a Chrysler, as a typical enterprise, you cannot afford to do those things in-house, but cloud providers can. I'm not saying just AWS; Google Cloud is there, the Azure guys are there, and a few others who are behind them. And you guys are there as well, so IBM, by the way, congratulations on your Red Hat, I know, but IBM won the award, right? You know, a very good partner. But yeah, people are dragging their feet. People usually do on change, and they are in denial. They drag their feet. IBM dragged their feet, they caved in; Dell dragged their feet, they caved in. >> You mean, by dragging their feet, the cloud deniers? >> Cloud deniers, right, server huggers I call them. But they actually are sitting in the Amazon Cloud Marketplace; everybody is buying stuff from there. The marketplace is the new model, okay? Amazon created the marketplace for B2C; they are leading the marketplace for B2B as well on the technology side, and other people are copying it, so there are multiple marketplaces now. So now, actually, it's like, if you're in mobile app development, there are two main platforms, Android and Apple. You first write the application for Apple, right, then for Android. Same here: as a technology provider, you put your stuff on AWS first, then you go anywhere else later. >> Yeah, the enterprise app store is what we've wanted for a long time. The question is, is Amazon alone the enterprise app store, or are they a partner in a larger portfolio? Because there's a lot of SaaS companies out there that play into what we need. >> Well, and this is what you're talking about, the future, but I just want to make a point about the past, you talking about dragging their feet, because theCUBE's been following this, and Stu, you remember this: in 2013, IBM actually, you know, got in a big fight with Amazon over the CIA deal. You know, it all became public, Judge Wheeler eviscerated IBM, and IBM ended up buying SoftLayer, and then we know what happened there. And Joe Tucci thought the cloud was Mozy, right? So it's just amazing to see. We have the booksellers, you know, VMware called them booksellers; now all of them are talking about what great partnerships they have. It's amazing, like you said, Sarbjeet, IBM with the GSI Partnership of the Year. But what you guys were just talking about was the future, and that's what I wanted to get to, because, you know, Amazon's been leading the way. I was listening to Werner this morning, and it just reminded me of back in the days when we used to listen to IBM educate us, give us a master class on system design and decoupled systems and IO and everything else. Now Amazon is, you know, the master educator, and it got me thinking, how long will that last? You know, will they go the way of the other incumbents, will they be disrupted, or will they keep innovating? Maybe it's going to take 10 or 20 years, I don't know. >> Yeah, I mean, Dave, you actually did some research, I believe it was a year or so ago, on what will stop Amazon. And the one thing that worries me a little bit is the two-pizza teams. When you have over 200 two-pizza teams, the amount of things that each one of those groups needs to take care of is more than any human could take care of. People burn out, they run out of people. How many Amazonians only last two or three years and then leave, because it is tough? I
bumped into plenty of friends of mine that have been, you know, six, ten years at Amazon and love it, but it is a tough culture and they are driving. Werner's keynote, I thought, did look, from a product standpoint, to, you could say, tape over some of the seams, with some of those solutions going beyond just a single product, bringing them together and leveraging data. So there are some signs that they might be able to get past some of those limitations, but I still worry that structurally, culturally, there could be some challenges for Amazon to keep the momentum going, especially with the global economic impact that we are likely to see in the next year. >> Bring us home. >> I think on the future side, like, we could talk about the vendors all day, right? To serve the community out there, I think we should talk about what the future of technology consumption looks like from the consumer side. So from the supplier side, just a quick note: I think the only danger AWS has is that, you know, the Feds go after them, you know, too big, you know, like, we will break you up, and that can cause some disruption there. Other than that, I think they have some more steam to go for a few more years at least before we start thinking about, like, oh, this thing is falling apart or anything like that. So they have momentum and it's continuing. >> Okay, I think the game is on. Retail, by the way, is going to get disrupted before AWS. Yeah, go ahead. >> From the buyer's side, I think the future of technology consumption is based on pay-per-use, and they actually are turning all their services to, they are sort of becoming serverless behind the scenes, right? In analytics they had one service left, and they did that this year, so every service is serverless. So that means you pay exactly for the amount you use: the compute, the IOPS, the storage, all these three layers. Of course, network, we talked about the egress stuff, and that's a problem there because of the network design, mainly because Google has a flatter design and they have lower cost. So they are actually designing their services in a way that you don't waste any resources as a buyer. A very simple example: earlier in this cloud era you would get a VM, right, in the cloud, that's how we started, and you could use 20 percent of the VM while 80 is getting wasted. That's not happening now; that has been reduced to a great extent. Now your VM grows as you grow the usage, and if you go higher than the tier you picked, they will charge you, otherwise they will not charge you extra. That's why there are still a lot of instances, like many different types, and you have to pick one. I think the future is that those instances will go away; the instance will be formed for you on the fly. So that is the future: serverless. >> All right, give us a bumper sticker, Stu, and then Sarbjeet, and I'll give you my quick one and then we'll wrap. >> Yeah, so just, Dave, to play off of Sarbjeet and to wrap it up, you actually wrote about it in your preview post for here: serverless. We're talking about how developers think about things, and you know, Amazon in many ways is the new default server, you know, for the cloud, and containerization fits into the whole serverless paradigm. It's the space that I live in every day here, and you know, I was happy to see the last few years, with serverless and containers, there's a blurring of the line. And, you know, Sarbjeet, we're still going to see VMs for a long time. >> Yeah, yeah, we will see that. >> So, give us
your bumper sticker. >> My bumper sticker is: innovation favors scale. That's my bumper sticker, and Amazon has that. But also I want everybody else, the viewers, to take a look at Google Cloud as well, as well as IBM, with others; maybe you have a better price-to-performance there for certain workloads. And by the way, one vendor cannot do it alone, we know that for sure. The market is so big, there's a lot of room for the Red Hats of the world and the Microsofts of the world to innovate, so keep an eye on them. We need the competition, actually, and that's why competition will keep us in a place where the market sets the price, one vendor doesn't. So the only danger is, if AWS is a monopoly, then I will be worried. >> I think ecosystems are the hallmark of a great cloud company, and Amazon's got the biggest and baddest ecosystem. And I think the other thing to watch for is industries building on top of the cloud. You mentioned Goldman Sachs, NASDAQ, Capital One, and Warner Media; all these industries are building their own clouds, and that's where the real money is going to be made in the latter half of the 2020s. All right, we're a wrap. This is Dave Vellante. I want to, first of all, say thanks to our great sponsors: AWS for having us here, this is our 10th year at theCUBE; AMD, you know, sponsoring theCUBE here as well; Accenture, sponsor of the third set upstairs on the fifth floor; and all the ecosystem partners that came on theCUBE this week and supported our mission for free content. Our content is always free. We try to give more to the community than we take back. So go to thecube.net and you'll see all these videos, go to siliconangle.com for all the news, and wikibon.com, where I publish a weekly Breaking Analysis series. I want to thank our amazing crew here, you guys, we have probably 30, 35 people, unbelievable. Our awesome hosts this last session: John Walls, Paul Gillin, Lisa Martin, Savannah Peterson, John Furrier, who's on a plane. We appreciate Andrew and Leonard in our ear, and all of our crew in Palo Alto, Boston, and across the country. Thank you so much, really appreciate it. All right, we are a wrap on AWS re:Invent 2022. We'll see you in two weeks, we'll see you in two weeks at Palo Alto Ignite, back here in Vegas. Thanks for watching theCUBE, the leader in enterprise and emerging tech coverage. [Music]

Published Date : Dec 2 2022


Ankur Shah, Palo Alto Networks | AWS re:Invent 2022


 

>>Good afternoon from the Venetian Expo, center, hall, whatever you wanna call it, in Las Vegas. Lisa Martin here. It's day four. I'm not sure what this place is called. Wait, >>What? >>Lisa Martin here with Dave Vellante. This is theCUBE. This is day four of a ton of coverage that we've been delivering to you, which, you know, cause you've been watching since Monday night, Dave, we are almost at the end, we're almost at the show wrap. Excited to bring back, we've been talking about security, a lot about security. Excited to bring back an alumni to talk about that. But what's your final thoughts? >>Well, so just in the context of security, we've had just three in a row talking about cyber, which is like the most important topic. And I love that we're having Palo Alto Networks on. Palo Alto Networks is the gold standard in security. Talk to CISOs, they wanna work with them. And it's interesting because I've been following them for a little bit now, watched them move to the cloud and a couple of little stumbling points. But I said at the time, they're gonna figure it out and come rocking back. And they have, and the company's just performing unbelievably well despite, you know, all the macro headwinds that we love to >>Talk about. So. Right. And we're gonna be unpacking all of that with one of our alumni. As I mentioned, Ankur Shah is with us, the SVP and GM of Palo Alto Networks. Ankur, welcome back to theCUBE. It's great to see you. It's been a while. >>It's good to be here after a couple years. Yeah, >>Yeah. I think three. >>Yeah, yeah, for sure. Yeah. Yeah. It's a bit of a blur after Covid. >>Everyone's saying that. Yeah. Are you surprised that there are still this many people on the show floor? Cuz I am. >>I am. Yeah. Look, I am not, this is my fourth, last year was probably one third or one fourth of this size. Yeah. But pre Covid, this is what re:Invent looked like. And it's energizing, it's exciting. It's just good to be doing the good old things. So many people and yeah. Amazing technology and innovation. It's been incredible. >>Let's talk about innovation. I know you guys, Palo Alto Networks, recently acquired Cider Security. Talk to us a little bit about that. How is it gonna complement Prisma? Give us all the scoop on that. >>Yeah, for sure. Look, some of the recent cybersecurity attacks that we have seen are related to supply chain: the Colonial Pipeline, many, many supply chain attacks. And the reason for that is the modern software supply chain, not the physical supply chain, the one that AWS announced, but the software supply chain, is really incredibly complicated. Developers are building and shipping code faster than ever before. And the Cider acquisition, at the center, the heart of that was securing the entire supply chain. The White House came with a new initiative on supply chain security and SBOM, software bill of materials. And we needed a technology, a company, and a set of people who can really deliver to that. And that's why we acquired them, for supply chain security, otherwise known as CI/CD security. >>CI/CD security. Yeah. So how will that complement Prisma Cloud? >>Yeah, so look, if you look at our history, at least over the last four years, our mission has been to build a single code to cloud platform. As you may know, there are over 3000 security vendors in the industry. And we said enough is enough. We need a platform player who can really deliver a unified, cohesive platform solution for our customers, because they're sick and tired of buying point products. So our mission has been to deliver that code to cloud platform. Supply chain security was a missing piece, and we acquired them; it fits really nicely into our portfolio of products and solutions that customers have. And they'll have a single pane of glass with this. >>Yeah. So there's a lot going on. You've got an adversary that is incredibly capable. Yeah. These days and highly motivated and extremely sophisticated. You mentioned supply chain. It's caused a shift in CISO strategies. Talking about the pandemic, of course we know work from home, that changed things. You've mentioned public policy. Yeah. And so, and as well you have the cloud, you know, relatively new. I mean, it's not that new, but still. Yeah. But you've got the shared responsibility model, and not only do you have the shared responsibility model, you have the shared responsibility across clouds and on-prem. So yes, the cloud helps with security, but the CISO has to worry about all these other things. The app dev team is being asked to shift left, you know, secure, and they're not security pros. Yeah. And you know, kind of audit is like the last line of defense. So I love this event, I love the cloud, but customers need help in making their lives simpler. Yeah. And the cloud in and of itself, because, you know, shared responsibility, doesn't do that. Yeah. That's where Palo Alto and firms like yours come in. >>Absolutely. So look, Jim, this is an untenable situation for a lot of the CISOs, simply because there are over 26 million developers and less than 3 million security professionals. If you just look at all the announcements AWS made, I bet you there were like probably over 2000 features. Yeah. I mean, they're shipping faster than ever before. Developers are moving really, really fast, and there are just not enough security people to keep up with the velocity and the innovation. So you are right, while AWS will guarantee securing the infrastructure layer, everything that is built on top of it, the new machine learning stuff, the new applications, the new supply chain applications that are developed, that's the responsibility of the CISO. They stay up at night, they don't know what's going on, because developers are bringing new services and new technology. And that's why, you know, we've always taken a platform approach where customers and the CISOs don't have to worry about it. Whatever new service AWS has, it's covered, it's secured. And that's why they adopt Prisma Cloud and Palo Alto Networks, because regardless of what developers bring, security is always there by their side. And so security teams need just a simple one click solution. They don't have to worry about it. They can sleep at night, keep the bad actors away. And that's where Palo Alto Networks has been innovating in this area. AWS is one of our biggest partners, and you know, we've integrated with a lot of their services. We launched about three integrations with their services. And we've been doing this historically for more and >>More. Are you still having conversations with the security folks? Or because security is a board level conversation, are your conversations going up a stack because this is a C-suite problem, this is a board level initiative? >>Absolutely. 
Look, you know, there was a time about four years ago when, like, the best we could do is the director of security. Now it's a CEO level conversation, a board level conversation, to your point, simply because, I mean, if all your financial stuff is going to public cloud, all your healthcare data, all your supply chain data is going to public cloud, the board is asking a very simple question: what are you doing to secure that? And to be honest, the question is simple; the answer's not, because of all the stuff that we talked about: too many applications, lots and lots of different services, different threat vectors, and the bad actors, the bad guys, are always a step ahead of the curve. And that's why this has become a board level conversation. They wanna make sure that things are secure from the get go before, you know, the enterprises go too deep into public cloud adoption. >>Let me shift topics a little bit. There was hope, kind of early this year, that cyber was somewhat insulated from the sort of macro pressures. Nobody's safe. Even the cloud is sort of, you know, facing those headwinds, people optimizing costs. But one thing when you talk to customers, and I always like to talk about that Optiv graph, we've all seen it, right? It's just this eye test of tools, and it's a beautiful taxonomy, but there's just too many tools. So we're seeing a shift from point tools to platforms, because obviously a platform play is a way to address that. So what are you seeing in the field with customers trying to optimize their infrastructure costs with regard to consolidating to >>Platforms? Yeah. Look, you rightly pointed out one thing: the cybersecurity industry in general, and Palo Alto Networks, knock on wood, the stock's doing well. The macro headwinds haven't impacted the security spend so far, right? Like, time will tell, we'll see how things go. And one of the primary reasons is that when, you know, the economy starts to slow down, the customers again want to invest in platforms. It's simple to deploy, simple to operationalize. They want a security partner of choice that they know is gonna be by them through the entire journey from code to cloud. And so that's why platforms, especially in times like these, are more important than they've ever been before. You know, customers are investing in the product I lead at Palo Alto Networks called Prisma Cloud. It's in the cloud-native application protection platform, CNAPP, space, where once again, customers are investing in a platform from code to cloud and avoiding all the point products for sure. >>Yeah. Yeah. And you've seen it in Palo Alto's performance. I mean, not every cyber firm has, is, is, >>You know, I know. Ouch. CrowdStrike. Yeah. >>Was not. Well, you saw that. I mean, and it was, and you know, the large customers were continuing to spend, it was the small and mid-size businesses Yeah. that were a little bit soft. Yeah. You know, it's really, I mean, you see Okta now, you know, after they had some troubles, announcing that, you know, their visibility's a little bit better. So it's very hard to predict right now. And of course, if Thoma Bravo is buying you, then your stock price has been up and steady. That's, >>Yeah. Look, I think the key is to have a diversified portfolio of products. Four years ago, before our CEO, Nikesh, took over the reins of the company, we were a single product firewall company. Right. 
And over time we have added XDR, we were the first one to introduce that, and recently launched XSIAM, you know, to make sure we build a next-gen SIEM. Cloud security is a completely net new investment; zero trust access as workers started working remotely and enterprises needed to make sure that they're accessing the applications securely. So we've added a lot of portfolio products over time. So you have to remain incredibly diversified, stay strong, because there will be stuff like remote work that slows down. But if you've got other portfolio products like cloud security, those secular tailwinds continue to grow. I mean, look how fast AWS is growing: 35, 40%, like an $80 billion run rate. Crazy at that scale. So luckily we've got the portfolio of products to ensure that regardless of what the customer's journey is, whatever the macro headwinds are, we've got a portfolio of solutions to help our customers. >>Talk a little bit about the AWS partnership. You talked about the run rate, and I was reading a few days ago, you're right, it's an $82 billion ARR, massive run rate. It's crazy. Well, what is Palo Alto Networks doing with AWS, and what's the value in it to help your customers on a secure digital transformation journey? >>Well, absolutely. We have been doing business with AWS; we've been one of their security partners of choice for many years now. We have a presence in the marketplace, where customers can, through one click, deploy the several Palo Alto Networks security solutions. So that's available. Like I said, we have been a launch partner for many, many new products and innovations that AWS comes up with, always the day one partner. Adam was talking about some of those announcements in his keynote; the security data lake was one of those, and there were a bunch of others related to compute and other areas. So we have been a partner for a long time, and look, AWS is an incredibly customer obsessed company. They've got their own security products, but if the customer says, hey, I'd like to pick this from yours, but there's three other things from Palo Alto Networks or Prisma Cloud or whatever else that may be, they're open to it. And that's the great thing about AWS: it doesn't have to be a walled garden, it's an open ecosystem, let the customer pick the best. >>And that's, I mean, there's examples where AWS is directly competitive. I mean, my favorite example is Redshift and Snowflake. I mean, those are directly competitive products, but Snowflake has an unbelievably great relationship with AWS. They do cyber, I think, differently. I mean, yeah, you got GuardDuty and you got some other stuff there, but generally speaking, and correct me if I'm wrong, the ecosystem has more room to play on AWS than it may on some other clouds. >>A hundred percent. Yeah. Once again, you know, GuardDuty, for example: we've got a lot of customers who use GuardDuty and Prisma Cloud and other Palo Alto Networks products, and we also ingest the data from GuardDuty. So if customers want a single pane of glass, they can use the best of AWS in terms of GuardDuty threat detection, but leverage other technology suites from, you know, a platform provider like Palo Alto Networks. So you know, look, the world is a complicated place. Some like blue, some like red, whatever that may be. But we believe in giving customers that choice, just like AWS customers want that. Not a >>Problem. 
And at least today they're not like directly, you know, in your space. Yeah. You know, and even if they were, you've got such a much mature stack. Absolutely. And my, my frankly Microsoft's different, right? I mean, you see, I mean even the analysts were saying that some of the CrowdStrike's troubles for, cuz Microsoft's got the good enough, right? So >>Yeah. Endpoint security. Yeah. And >>Yeah, for sure. So >>Do you have a favorite example of a customer where Palo Alto Networks has really helped them come in and, and enable that secure business transformation? Anything come to mind that you think really shines a light on Palo Alto Networks and what it's able to do? >>Yeah, look, we have customers across, and I'm gonna speak to public cloud in general, right? Like Palo Alto has over 60,000 customers. So we've been helping with that business transformation for years now. But because it's reinvented aws, the Prisma cloud product has been helping customers across different industry verticals. Some of the largest credit card processing companies, they can process transactions because we are running security on top of the workloads, the biggest financial services, biggest healthcare customers. They're able to put the patient health records in public cloud because Palo Alto Networks is helping them get there. So we are helping accelerated that digital journey. We've been an enabler. Security is often perceived as a blocker, but we have always treated our role as enabler. How can we get developers and enterprises to move as fast as possible? And like, my favorite thing is that, you know, moving fast and going digital is not a monopoly of just a tech company. Every company is gonna be a tech company Oh absolutely. To public cloud. Yes. And we want to help them get there. Yeah. >>So the other thing too, I mean, I'll just give you some data. I love data. I have a, ETR is our survey partner and I'm looking at Data 395. They do a survey every quarter, 1,250 respondents on this survey. 395 were Palo Alto customers, fortune 500 s and P 500, you know, big global 2000 companies as well. Some small companies. Single digit churn. Yeah. Okay. Yeah. Very, very low replacement >>Rates. Absolutely. >>And still high single digit new adoption. Yeah. Right. So you've got that tailwind going for you. Yeah, >>Right. It's, it's sticky because especially our, our main business firewall, once you deploy the firewall, we are inspecting all the network traffic. It's just so hard to rip and replace. Customers are getting value every second, every minute because we are thwarting attacks from public cloud. And look, we, we, we provide solutions not just product, we just don't leave the product and ask the customers to deploy it. We help them with deployment consumption of the product. And we've been really fortunate with that kind of gross dollar and netten rate for our customers. >>Now, before we wrap, I gotta tease, the cube is gonna be at Palo Alto Ignite. Yeah. In two weeks back here. I think we're at D mgm, right? We >>Were at D MGM December 13th and >>14th. So give us a little, show us a little leg if you would. What could we expect? >>Hey, look, I mean, a lot of exciting new things coming. Obviously I can't talk about it right now. The PR Inc is still not dry yet. But lots of, lots of new innovation across our three main businesses. Network security, public cloud, security, as well as XDR X. Im so stay tuned. You know, you'll, you'll see a lot of new exciting things coming up. >>Looking forward to it. 
>>We are looking forward to it. Last question, Ankur. If you had a billboard to place in New York Times Square. Yeah. You're gonna take over the Times Square Nasdaq. What does the billboard say about why organizations should be working with Palo Alto Networks? Yeah. To really embed security into their DNA. Yeah. >>You know, when Jim said Palo Alto Networks is the gold standard for security, I thought I was gonna steal it. I think that's pretty good: the gold standard for security. But I'm gonna go with our mission: cybersecurity partner of choice. We want to be known as that, and that's who we are. >>Beautifully said. Ankur, thank you so much for joining Dave and me on the program. We really appreciate your insights, your time. We look forward to seeing you in a couple weeks back here in Vegas. >>Absolutely. Can't have enough of Vegas. Thank you. Lisa. >>Can't have enough of Vegas? >>I dunno about that. By this time of the year, I think we can have had enough of Vegas, but we're gonna be able to see you on theCUBE's coverage, which you can catch of Palo Alto Networks' show Ignite, December, I believe 13th and 14th, on thecube.net. We want to thank Ankur Shah for joining us. For Dave Vellante, this is Lisa Martin. You're watching theCUBE, the leader in live enterprise and emerging tech coverage.

Published Date : Dec 2 2022


Chris DeMars & Pierre-Alexandre Masse, Split Software | AWS re:Invent 2022


 

(bright upbeat music) >> Hey, friends. Welcome back to theCUBE's Live coverage of AWS re:Invent 2022 in Sin City. We are so excited to be here with tens of thousands of people. This is our third day of coverage, really the second full day of the show, but we started Monday night. You're going to get wall-to-wall coverage on theCUBE. You probably know that because you've been watching. I'm Lisa Martin and I'm here with Paul Gillin. Paul, this is great. We have had such great conversations. We've been talking a lot about data. Every company is a data company, has to be a data company. We've been talking about developers, the developer experience, and how that's so influential in business decisions for businesses in every industry. >> And it's a key element of what's going on here on the floor at re:Invent: developers. The theme of developers just permeates the show. Lots and lots of booths here devoted to DevOps and Agile approaches. And certainly one of the things that the cloud enables is for your team to rethink the way they develop software, and that's what we're going to talk about next. >> That is what we're going to talk about next. We have two guests from Split. split.io is the URL if you want to check it out. Chris DeMars joins us, Developer Advocate. Chris, great to have you, and Pierre-Alexandre Masse, VP of Engineering. Guys, thank you so much for joining us on the program. >> Thank you for having us. >> Thank you for having us. >> Talk to us, Pierre, we'll start with you. For the audience that might not know Split, what does the company do? What's the value in it for customers? What are you all about? >> Sure. So in very simple terms, for those who are familiar, we do feature flags, feature management, and experimentation. Those are essentially two essential features of the Agile transformation, as you were mentioning, and elements that really help get as much as we can from the team in terms of productivity and in terms of impact. And we basically help with those elements. And so that's a very short... >> Excellent, very nice. Chris, you were saying before we went live you do a lot of speaking at conferences, you're often in front of large audiences. As the developer advocate, what are some of the key requirements you're hearing from the developer community that organizations need to be encompassing? >> I think community is key. Like, community is at the forefront of developer advocacy and developer relations. Like, you want to go where the developers are, and developers want to hear those stories and those personalized pieces of the puzzle. And when you're able to talk about modern Web and software technology and loop in product with that and still keep talking about those things and bring that to them, like, that is on top of the list when it comes to developer advocacy and being embedded within the developer community. >> Lisa: Yeah. >> Tell us about feature flags, because I would assume that for our viewers who are not developers, who are not familiar with Agile technologies, the Agile approaches, that might be a new term. What are feature flags? How do you use them? >> Sure, I can start with that. So a feature flag is a tool that you embed in your code that allows you to control the activation of your code, essentially. And that allows you to really validate things in a much better and safer way and also attach measurement to it. 
So, when you're writing your new feature, you just put essentially an if statement around it, if my feature flag is on, then I actually do all those things with soft, then I don't do any of those things and then within our platform, then you can control the activation. Do you want to turn it on for yourself just to try it out? Do you want your QA team to start validating it? Do you want 5% of your users 10%? And start seeing how they interacting with the product. That's what feature flag is. >> It's an amazing piece of any part of the stack, right? 'Cause I'm a Web accessibility and an UI specialist and being able to control the UI with a feature flag and being able to turn on and off those features based on percentage, locale, all of those things. It's very, very powerful. >> What are some of the scenarios which you would use feature flags? You have been testing? >> Yeah, yeah. We actually, you can imagine we use it for pretty much everything. So, as Chris was saying, in the front-end, everything you want to change, you basically can validate and attach measurements. So you can do AB testing, so you can see the impact, you can see if there is a change in performance. We use it also for a lot of backend services and changes and a lot of even infrastructure changes where we can control the traffic and where it goes. So we can validate that things are operating the way that they should before we fully done the market I think. >> 'It can be as small as, you know having a checkout button here and then writing an AB test and running an experiment and moving that checkout button somewhere else because then you can get conversion rates and see which one performed better to a certain amount of people and whatever performed better, that's the feature you would go with. >> Chris, talk about the value of the impact in feature flags for the developer from a developer experience perspective, a productivity perspective. >> So I think that having that feature and being able to write that UI, let's say that you have a checkout button, right? And there's specific content there's verbiage on that checkout button. And then let's say that another team within the organization wants to change that because the conversion is different. You can make those changes, still have it in production and then have it tested. So you don't have to cut specific branches or like test URLs to give to QA, you can do all of that behind that flag. And then once everything is good to go, push it out there and then based on those metrics and that data, see which one performs better and then that's the one that you would go with. >> One of the things with feature flag and it goes to like our main theme of 'What a Release, What a Relief' is that it gives autonomy to the teams and to the developers, enable them to move independently from others. So the deployment can go but their code is not activated until they decide to. And so, they are not impeding anybody else. It makes releases a lot safer, a lot simpler and it gives a lot more speed to everybody because when you do releases with five teams, 10 teams, pushing the code at the same time, you have such a high-risk of breaking something that it's you know... So it's a huge effort and it requires a lot of attention from a lot of people. If anything happens, all those teams needs to investigate. When you decouple all those things, the deployments are essentially not doing anything per se until every individual team activate those things independently. 
So if anything goes wrong, only them are affected and they don't have to depend on anybody else to get their thing out. So it really helps them making their life a lot safer and gives them a lot more speed because they have autonomy. >> So, why come to re:Invent? What do you get with this audience that you don't get elsewhere? >> Why to re:Invent? I think like re:Invent in the Cloud and AWS is a lot about getting speed to companies to build better product and faster. And essentially like the tool we provide and the technology and the platform we provide is really at the heart of that in itself. And so that's why we feel we have really great conversation with all the people on the floor. >> 'the people who have the right mindset for adopting... >> For me, it's very much community and networking, I love developer community and just community in general is my lifeblood. That's why I travel so much and I talk about these things and I'm with people and if it's not about the products, the story and the story is what gets people. That's why I love being here and being with my team and it's amazing. >> And what is that story? If you had an elevator pitch to give, what would you tell me? >> Hoo, if you were in a late release or deploy at night. I've been there, I'm sure you've been there, it doesn't matter what you're doing. We don't want be up until two, three in the morning doing those things, right? Our product helps alleviate those stresses. And you talking about accessibility, what I do, you know, a big piece of that are hidden impairments like anxiety will stress and anxiety go hand in hand and you want to alleviate that all across the board for everybody involved. >> As you see organizations shift Agile technologies and to parallel development and continuous release cycles, what are some of the biggest barriers they encounter in changing that mindset? >> Ooh, what do you think? >> It depends on where they are in the organization. The Agile transformation is a journey and it's also a change of mindset, it's a change of process. So depending on where they are then they might have some areas where they need a little bit more effort in those directions. What we see is that feature flag just the control of the layout. It's usually something that's fairly easily adopted. Thinking about measurement and attaching measurement to it is often something that requires a little bit more thinking. Like engineers are not really used to thinking about AB testing. It feels like more of a product management thing but AB testing is important also for performance informations like errors and all those things. There is a lot of risk management to be done. We do that through monitoring with APMs, but with feature flag and with Split, you can do that at a feature level and it really gives a great insight. And that's usually something that takes a little bit more digestion from the developers to really get their mind around it and get to it. But there's a lot of value to it. >> I'm looking at the split I/O website and I like the tagline shorten time from code to customer. As customers in any industry, as consumers, we have this expectation that we can get whatever we want anytime 24 by 7 and it's going to be a relevant experience. So it sounds to me like from a speed perspective, there's a lot of business impact that Split can help organizations make from getting releases faster, getting cut faster time-to-market, delivering what customers expect because we all expect real-time these days. Nobody wants to wait. 
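To make the feature-flag mechanics the guests describe concrete, here is a minimal sketch in TypeScript of gating a new checkout button behind a flag, roughly in the spirit of Split's JavaScript SDK. The package name, configuration fields, and the flag name 'new_checkout_button' are illustrative assumptions on the editor's part, not details from the interview; check Split's documentation for the exact API.

import { SplitFactory } from '@splitsoftware/splitio';

// Placeholder key and user identifier; real values come from your Split account.
const factory = SplitFactory({
  core: {
    authorizationKey: 'YOUR_SDK_KEY',
    key: 'user-123', // the user (or account) being evaluated
  },
});
const client = factory.client();

async function renderCheckout(): Promise<void> {
  // Wait until the rollout rules have been fetched and cached locally.
  await client.ready();

  // "If the flag is on, do the new thing; if it's off, do the old thing."
  const treatment = client.getTreatment('new_checkout_button');
  if (treatment === 'on') {
    renderNewCheckoutButton();     // served to whatever percentage is targeted
  } else {
    renderClassicCheckoutButton(); // everyone else keeps the current experience
  }
}

function renderNewCheckoutButton(): void { /* new UI variant */ }
function renderClassicCheckoutButton(): void { /* existing UI */ }

renderCheckout();

Ramping that flag from 5% to 100%, or switching it off, then happens in the flag platform without another deployment, which is the "decouple deploy from release" idea the conversation returns to below.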
>> Yeah, that's right. Yeah, I think that has to do with the going back to the decoupling of things that, you know... Not having to go through so many teams to have it tested and getting away from all the meetings about meetings to review the metrics, right? We all love meetings about meetings. >> No. (laughs loudly) >> Right, exactly, exactly. So being able to take that away and being able to push all of that stuff into production, getting it tested while it's in production and then being able to turn those features on, it's already there without having to do another deployment. And I think, like that's really powerful to me at least. >> Does your solution have value at the security level as well? >> Yes. So that's one of the particularity on the way we do things is like the way you control the feature flag, you have kind of two ways of doing it. Either the piece of code, the SDKs that we provide, the library we provide, you that you put in your code could come back to our platform and check. The way we do it is we send the rules back to the SDK so the whole evaluation is local. The evaluation is extremely fast and it's very secure because it's all happening within your environment. You never have to share any information, no PI whatsoever, contrary it to some of the other tools that you might find on the market. >> So the theme of the booth is 'What a Release, What a Relief'. What are some of the things that you're hearing as you're engaging folks on the show floor this week? >> Oh, what is Aura Photography and can I take a picture of. (everyone laughs loudly) I think just a lot of the stresses of... They're like the release cycle and you know, having to go through so many teams. I feel like that's a common theme that I've heard of. >> Yeah, we see a number of teams organization that still have like really big deployments with like a lot of teams basically coming together, pushing the code together, and there's a lot of pain in it. It's like, it's a huge effort by huge teams. You get 10, 20 people that have to have watch over it at always weird hours, and I think there is a lot of pain to that and that resonate a lot with people. And when we talk about monitoring at the future level, that also helps a lot. Like I was part of organizations before where we had a dedicated staff engineer to just monitor and fix performance on a daily basis because it's such a huge problem and it affects so much the performance of the company. And so essentially, you have this person that tries to look at is a performance being degraded today with the deployment of yesterday and what went out yesterday and you have so many things that went out. It's so hard to control. With what we provide, we tell you exactly which feature flag is responsible for the degradation. And so, you don't need that person to focus on that anymore. And you can focus on delivering value a lot better. >> I think it also might take away the need for extensive release notebooks and playbooks, right? 'Cause when you do bring all those teams together, it's certain people that are in that meeting and there's a PDF saying, all right, we check this off the list, we check this off the list. I think that might alleviate some of that overhead as well. >> Streamlining processes, process efficiencies, workforce productivity improvements, big impact. >> And that gets code quicker to the user. >> You talk about decoupling deploy from release. What do you mean by that? What's the value? 
>> So the deployment in my definition is essentially getting the code out to production. The release is activating the code in production. And often people do both of those things at the same time, right? But there's a huge risk when you do that because if anything goes wrong, now you need to revert everything which is not a short operation often and takes a lot of effort. And so now, if you can basically push your code to production but separate the activation of it, the release of it, then it goes a lot faster. It's a lot. You have a lot of autonomy and decoupling and if anything goes wrong, it's the click of a button and it's off. So like there's a lot of safety that comes with it and we know that any outages as a high cost for all the companies. So it's like, if you can reduce the outage to like five seconds... >> Right. >> It's a lot better than basically several hours. >> Can you talk about the value out of Split versus DIY and where are most of your customers in this process? Do they have a bunch of tools, a bunch of processes, a bunch of teams, and you're really helping them consolidate streamline? >> The one thing I hear a lot is we rolled our own AB testing and feature flagging system, but some of the issues I've seen and I've heard are that they don't have all those metrics or they have to work with a specific data team to get those metrics. And then you go back to having those meetings about meetings... >> Lisa: Dependencies. >> Right, you have a data team that's putting together a report that is then presented to you and then that's got to be presented to a stakeholder and then that stakeholder makes a decision whether to turn on feature A or feature B, right? Our product from my understanding is we have those metrics already built in and you can have that at your disposal. >> Yeah, the other thing I would add to that is like we see a number of people, they start on the feature flag journey just because they have a high risk thing that they need to put out. So they do the minimal thing to basically control it somehow, but it works only in one part of the stacks. They can't basically leverage it anywhere else and it's very limited in capability so that it just serve the purpose that was needed at that time. They don't have a dedicated team to manage it. So it just there, but it's very constrained and it's not supported effectively. The other thing is like for those companies is like they have a question to ask themselves. It's like do they want to invest resources in managing that kind of tool or is it not so core to their business that they want essentially to have vendor deal with it at a much lower price and they would have to invest resources for them to support it, and... >> Sounds like feature flags are kind of a team building. Have you have a team building dimension to them? >> Yeah. >> Yeah. >> It takes a team for sure. >> Yeah, and then once you add like AB testing and the feature flag, it's the collaboration between product management and engineering. It can go even further. Like two executives like to basically, you know, view the impact, understand the impact. So it goes from the control to the risk management to the product and to the impact and measuring the flow of delivery and the communication around it. >> Here we are at re:Invent, so many thousands of people as I mentioned, we're on the second full-day of the event. What have you heard from AWS that really excites you about being in their ecosystem? 
Any news in particular that jumps out at you that really speaks to improving that developer experience as if we've heard a lot of focus on the developer? >> Chris: Yeah, I haven't heard much, have you? >> So, I arrived yesterday, I haven't followed yet all the announcement, I'm just like, >> there's so many- >> on the news, yeah, yeah. >> So I'm on the booth at the same time. >> I stopped counting at 15 during the Keynote this morning. >> Many of them just can't keep up, there's so much happening at one time's so much. >> This event is a can of content, can of news re:Invent. It is hard. But yesterday they were spent so much time talking about data and how... And I always think every company today has to be a data company, have to be a software company, we were just talking with Capital One and they think of themselves as a technology company that does banking. And sometimes, I'll talk with retailers that think of themselves as technology companies that do retail and they love that but that's what companies like Split have to enable these days. It's companies to become technology companies, deliver code faster to customer because the customer's demanding it. We're not going to want less stuff slower. >> Yeah, I mean it's so essential I think for me like I joined Split because of that premises. Like every company now is a software company and every company has really to compete in innovation. You know all those banks, Capital One like we see it a lot in the financial industry where our message resonates extremely strongly is really in a high-competitive environment and they have to be innovative and innovation comes when people have speed and autonomy. And if you basically provide that to teams and the tools to basically get some signals and some quick feedback loop, that's how you get innovation. Like you can't decide what to build but you can basically provide the tools to enable them to think about. >> Right, you can experiment more flexibly right, faster. >> And developers have to be empowered, right? >> Yes. >> I think that's the probably one of the number one messages I've heard at all the shows we've done this year. How influential the developer is in the direction of the business. >> Autonomy and empowerment are two main factors 'cause I'm a front end developer at heart and I want to work on cool stuff and we're doing cool stuff. Like we are doing cool stuff. We can't talk about all of it, right? But I think we're doing a lot of cool things at Split and I'm really stoked to be a part of the team and grow developer relations, grow developer advocacy and be along for the journey. >> Yeah, I love that. Last question for both of you, same question. If you had a bumper sticker and you were going to put it on a fancy shiny new car, car of your choice about Split, what would it say? Pierre I'll start with you then Chris. >> Bumper sticker. >> On the spot question. >> On the question, (everyone laughs happily) I mean the easy answer is probably written on my t-shirt. Like, you know, 'What a Release, What a Relief'. I think that the first step for teams is like, you can have a message that's very like even further, you know, the Agile transformation is a journey and I basically tell people, you need to first crawl, walk and run and I think the 'What a Release, What a Relief' is a good step to like getting to the working. And I think like that would be the first bumper sticker before I get to the further one about AP testing and innovative. >> Love it. 
Chris, what would your bumper sticker say? >> It would say Split software, feature flags for the masses. Hard stop. >> Mic drop. >> Done. >> Awesome guys, thank you so much for joining Paul and me on the program. It's been outstanding introducing Split to our audience, what you do, how you're impacting the developer experience and ultimately, the business and the end customer on the backend who just wants things to work. We appreciate your insights, we appreciate your time. >> Thanks so much for having us. >> Appreciate it. >> Our pleasure. For our guests and Paul Gillin, I'm Lisa Martin. You're watching theCUBE, which you know is the leader in live enterprise and emerging tech coverage. (bright upbeat music)
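To make the two mechanics Pierre describes above a bit more concrete, the local evaluation of targeting rules inside the SDK and a feature that is deployed dark but only released when the flag is flipped, here is a minimal, hypothetical sketch in Python. It is not Split's actual SDK; the rule format, function names, flag name, and rollout percentage are made-up assumptions used only to illustrate the idea.

```python
import hashlib

# Hypothetical targeting rules. In an SDK like the one described above, these
# are pushed to the application and cached in-process, so every check below is
# a local computation: no network call, no user data leaves the environment.
RULES = {
    "new-checkout-flow": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Evaluate a flag locally against the cached rules."""
    rule = RULES.get(flag)
    if not rule or not rule["enabled"]:
        return False
    # Deterministic percentage rollout: hash the user id into a 0-99 bucket so
    # the same user always gets the same treatment.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < rule["rollout_percent"]

def old_checkout(user_id: str) -> str:
    return f"legacy checkout for {user_id}"

def new_checkout(user_id: str) -> str:
    return f"new checkout for {user_id}"

def checkout(user_id: str) -> str:
    # The new path is already deployed, but it stays dark until the flag is
    # turned on; turning it off again is the "five second" rollback, with no
    # redeploy and no reverting of code.
    if is_enabled("new-checkout-flow", user_id):
        return new_checkout(user_id)
    return old_checkout(user_id)

if __name__ == "__main__":
    print(checkout("user-42"))
```

Flipping "enabled" or raising "rollout_percent" in the rules is the release; the deployment that shipped the new code path never has to change.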

Published Date : Nov 30 2022

Dan Kogan, Pure Storage & Venkat Ramakrishnan, Portworx by Pure Storage | AWS re:Invent 2022


 

(upbeat music) >> Welcome back to Vegas. Lisa Martin and Dave Vellante here with theCUBE live on the Venetian Expo Hall Floor, talking all things AWS re:Invent 2022. This is the first full day of coverage. It is jam-packed here. People are back. They are ready to hear all the new innovations from AWS. Dave, how does it feel to be back yet again in Vegas? >> Yeah, Vegas. I think it's my 10th time in Vegas this year. So, whatever. >> This year alone. You must have a favorite steak restaurant then. >> There are several. The restaurants in Vegas are actually really good. >> You know? >> They are good. >> They used to be terrible. But I'll tell you. My favorite? The place that closed. >> Oh! >> Yeah, closed. In between where we are in the Wynn and the Venetian. Anyway. >> Was it CUT? >> No, I forget what the name was. >> Something else, okay. >> It was like a Greek sort of steak place. Anyway. >> Now, I'm hungry. >> We were at Pure Accelerate a couple years ago. >> Yes, we were. >> When they announced Cloud Block Store. >> That's right. >> Pure was the first- >> In Austin. >> To do that. >> Yup. >> And then they made the acquisition of Portworx which was pretty prescient given that containers have been going through the roof. >> Yeah. >> So I'm sort of excited to have these guys on and talk about that. >> We're going to unpack all of this. We've got one of our alumni back with us, Venkat Ramakrishna, VP of Product, Portworx by Pure Storage. And Dan Kogan joins us for the first time, VP of Product Management and Product Marketing, FlashArray at Pure Storage. Guys, welcome to the program. >> Thank you. >> Hey, guys. >> Dan: Thanks for having us. >> Do you have a favorite steak restaurant in Vegas? Dave said there's a lot of good choices. >> There's a lot of good steak restaurants here. >> I like SDK. >> Yeah, that's a good one. >> That's the good one. >> That's a good one. >> Which one? >> SDK. >> SDK. >> Where's that? >> It's, I think, in Cosmopolitan. >> Ooh. >> Yeah. >> Oh, yeah, yeah, yeah. >> It's pretty good, yeah. >> There's one of the Western too that's pretty. >> I'm an Herbs and Rye guy. Have you ever been there? >> No. >> No. >> Herbs and Rye is off strip, but it's fantastic. It's kind of like a locals joint. >> I have to dig through all of this great stuff today and then check that out. Talk to me. This is our first day, obviously. First main day. I want to get both of your perspectives. Dan, we'll start with you since you're closest to me. How are you finding this year's event so far? Obviously, tons of people. >> Busy. >> Busy, yeah. >> Yeah, it is. It is old times. Bigger, right? Last re:Invent I was at was 2019 right before everything shut down and it's probably half the size of this which is a different trend than I feel like most other tech conferences have gone where they've come back, but a little bit smaller. re:Invent seems to be the IT show. >> It really does. Venkat, are you finding the same? In terms of what you're experiencing so far on day one of the events? >> Yeah, I mean... There's tremendous excitement. Overall, I think it's good to be back. Very good crowd, great turnout, lot of excitement around some of the new offerings we've announced. The booth traffic has been pretty good. And just the quality of the conversations, the customer meetings, have been really good. There's very interesting use cases shaping up and customers really looking to solve real large scale problems. Yeah, it's been a phenomenal first day. 
>> Venkat, talk a little bit about, and then we'll get to you Dan as well, the relationship that Portworx by Pure Storage has with AWS. Maybe some joint customers. >> Yeah, so we... Definitely, we have been a partner of AWS for quite some time, right? Earlier this year, we signed what is called a strategic investment letter with AWS where we kind of put some joint effort together like to better integrate our products. Plus, kind of get in front of our customers more together and educate them on how going to how they can deploy and build vision critical apps on EKS and EKS anywhere and Outpost. So that partnership has grown a lot over the last year. We have a lot of significant mutual customer wins together both on the public cloud on EKS as well as on EKS anywhere, right? And there are some exciting use cases around Edge and Edge deployments and different levels of Edge as well with EKS anywhere. And there are pretty good wins on the Outpost as well. So that partnership I think is kind of like growing across not just... We started off with the one product line. Now our Portworx backup as a service is also available on EKS and along with the Portworx Data Services. So, it is also expanded across the product lanes as well. >> And then Dan, you want to elaborate a bit on AWS Plus Pure? >> Yeah, it's for kind of what we'll call the core Pure business or the traditional Pure business. As Dave mentioned, Cloud Block Store is kind of where things started and we're seeing that move and evolve from predominantly being a DR site and kind of story into now more and more production applications being lifted and shifted and running now natively in AWS honor storage software. And then we have a new product called Pure Fusion which is our storage as code automation product essentially. It takes you from moving and managing of individual arrays, now obfuscates a fleet level allows you to build a very cloud-like backend and consume storage as code. Very, very similar to how you do with AWS, with an EBS. That product is built in AWS. So it's a SaaS product built in AWS, really allowing you to turn your traditional Pure storage into an AWS-like experience. >> Lisa: Got it. >> What changed with Cloud Block Store? 'Cause if I recall, am I right that you basically did it on S3 originally? >> S3 is a big... It's a number of components. >> And you had a high performance EC2 instances. >> Dan: Yup, that's right. >> On top of lower cost object store. Is that still the case? >> That's still the architecture. Yeah, at least for AWS. It's a different architecture in Azure where we leverage their disc storage more. But in AWS were just based on essentially that backend. >> And then what's the experience when you go from, say, on-prem to AWS to sort of a cross cloud? >> Yeah, very, very simple. It's our replication technology built in. So our sync rep, our async rep, our active cluster technology is essentially allowing you to move the data really, really seamlessly there and then again back to Fusion, now being that kind of master control plan. You can have availability zones, running Cloud Block Store instances in AWS. You can be running your own availability zones in your data centers wherever those may happen to be, and that's kind of a unification layer across it all. >> It looks the same to the customer. >> To the customer, at the end of the day, it's... What the customer sees is the purity operating system. We have FlashArray proprietary hardware on premises. We have AWS's hardware that we run it on here. 
But to the customer, it's just the FlashArray. >> That's a data super cloud actually. Yeah, it's a data super cloud. >> I'd agree. >> It spans multiple clouds- >> Multiple clouds on premises. >> It extracts all the complexity of the underlying muck and the primitives and presents a common experience. >> Yeah, and it's the same APIs, same management console. >> Dave: Yeah, awesome. >> Everything's the same. >> See? It's real. It's a thing, On containers, I have a question. So we're in this environment, everybody wants to be more efficient, what's happening with containers? Is there... The intersection of containers and serverless, right? You think about all the things you have to do to run containers in VMs, configure everything, configure the memory, et cetera, and then serverless simplifies all that. I guess Knative in between or I guess Fargate. What are you seeing with customers between stateless apps, stateful apps, and how it all relates to containers? >> That's a great question, right? I think that one of the things that what we are seeing is that as people run more and more workloads in the cloud, right? There's this huge movement towards being the ability to bring these applications to run anywhere, right? Not just in one public cloud, but in the data centers and sometimes the Edge clouds. So there's a lot of portability requirements for the applications, right? I mean, yesterday morning I was having breakfast with a customer who is a big AWS customer but has to go into an on-prem air gap deployment for one of their large customers and is kind of re-platforming some other apps into containers in Kubernetes because it makes it so much easier for them to deploy. So there is no longer the debate of, is it stateless versus it stateful, it's pretty much all applications are moving to containers, right? And in that, you see people are building on Kubernetes and containers is because they wanted multicloud portability for their applications. Now the other big aspect is cost, right? You can significantly run... You know, like lower cost by running with Kubernetes and Portworx and by on the public cloud or on a private cloud, right? Because it lets you get more out of your infrastructure. You're not all provisioning your infrastructure. You are like just deploying the just-enough infrastructure for your application to run with Kubernetes and scale it dynamically as your application load scales. So, customers are better able to manage costs. >> Does serverless play in here though? Right? Because if I'm running serverless, I'm not paying for the compute the whole time. >> Yeah. >> Right? But then stateless and stateful come into play. >> Serverless has a place, but it is more for like quick event-driven decision. >> Dave: The stateless apps. >> You know, stuff that needs to happen. The serverless has a place, but majority of the applications have need compute and more compute to run because there's like a ton of processing you have to do, you're serving a whole bunch of users, you're serving up media, right? Those are not typically good serverless apps, right? The several less apps do definitely have a place. There's a whole bunch of minor code snippets or events you need to process every now and then to make some decisions. In that, yeah, you see serverless. But majority of the apps are still requiring a lot of compute and scaling the compute and scaling storage requirements at a time. >> So what Venkat was talking about is cost. 
That is probably our biggest tailwind from a cloud adoption standpoint. I think initially for on-premises vendors like Pure Storage or historically on-premises vendors, the move to the cloud was a concern, right? In that we're getting out the data center business, we're going all in on the cloud, what are you going to do? That's kind of why we got ahead of that with Cloud Block Store. But as customers have matured in their adoption of cloud and actually moved more applications, they're becoming much more aware of the costs. And so anywhere you can help them save money seems to drive adoption. So they see that on the Kubernetes side, on our side, just by adding in things that we do really well: Data reduction, thin provisioning, low cost snaps. Those kind of things, massive cost savings. And so it's actually brought a lot of customers who thought they weren't going to be using our storage moving forward back into the fold. >> Dave: Got it. >> So cost saving is great, huge business outcomes potentially for customers. But what are some of the barriers that you're helping customers to overcome on the storage side and also in terms of moving applications to Kubernetes? What are some of those barriers that you could help us? >> Yeah, I mean, I can answer it simply from a core FlashArray side, it's enabling migration of applications without having to refactor them entirely, right? That's Kubernetes side is when they think about changing their applications and building them, we'll call quote unquote more cloud native, but there are a lot of customers that can't or won't or just aren't doing that, but they want to run those applications in the cloud. So the movement is easier back to your data super cloud kind of comment, and then also eliminating this high cost associated with it. >> I'm kind of not a huge fan of the whole repatriation narrative. You know, you look at the numbers and it's like, "Yeah, there's something going on." But the one use case that looks like it's actually valid is, "I'm going to test in the cloud and I'm going to deploy on-prem." Now, I dunno if that's even called repatriation, but I'm looking to help the repatriation narrative because- >> Venkat: I think it's- >> But that's a real thing, right? >> Yeah, it's more than repatriation, right? It's more about the ability to run your app, right? It's not just even test, right? I mean, you're going to have different kinds of governance and compliance and regulatory requirements have to run your apps in different kinds of cloud environments, right? There are certain... Certain regions may not have all of the compliance and regulatory requirements implemented in that cloud provider, right? So when you run with Kubernetes and containers, I mean, you kind of do the transformation. So now you can take that app and run an infrastructure that allows you to deliver under those requirements as well, right? So that portability is the major driver than repatriation. >> And you would do that for latency reasons? >> For latency, yeah. >> Or data sovereign? >> Data sovereignty. >> Data sovereignty. >> Control. >> I mean, yeah. Availability of your application and data just in that region, right? >> Okay, so if the capability is not there in the cloud region, you come in and say, "Hey, we can do that on-prem or in a colo and get you what you need to comply to your EDX." >> Yeah, or potentially moves to a different cloud provider. It's just a lot more control that you're providing on customer at the end of the day. >> What's that move like? 
I mean, now you're moving data and everybody's going to complain about egress fees. >> Well, you shouldn't be... I think it's more of a one-time move. You're probably not going to be moving data between cloud providers regularly. But if for whatever reasons you decide that I'm going to stop running in X Cloud and I'm going to move to this cloud, what's the most seamless way to do? >> So a customer might say, "Okay, that's certification's not going to be available in this region or gov cloud or whatever for a year, I need this now." >> Yeah, or various commercial. Whatever it might be. >> "And I'm going to make the call now, one-way door, and I'm going to keep it on-prem." And then worry about it down the road. Okay, makes sense. >> Dan, I got to talk to you about the sustainability element there because it's increasingly becoming a priority for organizations in every industry where they need to work with companies that really have established sustainability programs. What are some of the factors that you talk with customers about as they have choice in all FlashArray between Pure and competitors where sustainability- >> Yeah, I mean we've leaned very heavily into that from a marketing standpoint recently because it has become so top of mind for so many customers. But at the end of the day, sustainability was built into the core of the Purity operating system in FlashArray back before it was FlashArray, right? In our early generation of products. The things that drive that sustainability of high density, high data reduction, small footprint, we needed to build that for Pure to exist as a company. And we are maybe kind of the last all-flash vendor standing that came ground up all-flash, not just the disc vendor that's refactored, right? And so that's sort of engineering from the ground up that's deeply, deeply into our software as a huge sustainability payout now. And we see that and that message is really, really resonating with customers. >> I haven't thought about that in a while. You actually are. I don't think there's any other... Nobody else made it through the knothole. And you guys hit escape velocity and then some. >> So we hit escape velocity and it hasn't slowed down, right? Earnings will be tomorrow, but the last many quarters have been pretty good. >> Yeah, we follow you pretty closely. I mean, there was one little thing in the pandemic and then boom! It's just kept cranking since, so. >> So at the end of the day though, right? We needed that level to be economically viable as a flash bender going against disc. And now that's really paying off in a sustainability equation as well because we consume so much less footprint, power cooling, all those factors. >> And there's been some headwinds with none pricing up until recently too that you've kind of blown right through. You know, you dealt with the supply issues and- >> Yeah, 'cause the overall... One, we've been, again, one of the few vendors that's been able to navigate supply really well. We've had no major delays in disruptions, but the TCO argument's real. Like at the end of the day, when you look at the cost of running on Pure, it's very, very compelling. >> Adam Selipsky made the statement, "If you're looking to tighten your belt, the cloud is the place to do it." Yeah, okay. It might be that, but... Maybe. >> Maybe, but you can... So again, we are seeing cloud customers that are traditional Pure data center customers that a few years ago said, "We're moving these applications into the cloud. 
You know, it's been great working with you. We love Pure. We'll have some on-prem footprint, but most of everything we're going to do is in the cloud." Those customers are coming back to us to keep running in the cloud. Because again, when you start to factor in things like thin provisioning, data reduction, those don't exist in the cloud. >> So, it's not repatriation. >> It's not repatriation. >> It's we want Pure in the cloud. >> Correct. We want your software. So that's why we built CBS, and we're seeing that come all the way through. >> There's another cost savings is on the... You know, with what we are doing with Kubernetes and containers and Portworx Data Services, right? So when we run Portworx Data Services, typically customers spend a lot of money in running the cloud managed services, right? Where there is obviously a sprawl of those, right? And then they end up spending a lot of item costs. So when we move that, like when they run their data, like when they move their databases to Portworx Data Services on Kubernetes, because of all of the other cost savings we deliver plus the licensing costs are a lot lower, we deliver 5X to 10X savings to our customers. >> Lisa: Significant. >> You know, significant savings on cloud as well. >> The operational things he's talking about, too. My Fusion engineering team is one of his largest customers from Portworx Data Services. Because we don't have DBAs on that team, it's just developers. But they need databases. They need to run those databases. We turn to PDS. >> This is why he pays my bills. >> And that's why you guys have to come back 'cause we're out of time, but I do have one final question for each of you. Same question. We'll start with you Dan, the Venkat we'll go to you. Billboard. Billboard or a bumper sticker. We'll say they're going to put a billboard on Castor Street in Mountain View near the headquarters about Pure, what does it say? >> The best container for containers. (Dave and Lisa laugh) >> Venkat, Portworx, what's your bumper sticker? >> Well, I would just have one big billboard that goes and says, "Got PX?" With the question mark, right? And let people start thinking about, "What is PX?" >> I love that. >> Dave: Got Portworx, beautiful. >> You've got a side career in marketing, I can tell. >> I think they moved him out of the engineering. >> Ah, I see. We really appreciate you joining us on the program this afternoon talking about Pure, Portworx, AWS. Really compelling stories about how you're helping customers just really make big decisions and save considerable costs. We appreciate your insights. >> Awesome. Great. Thanks for having us. >> Thanks, guys. >> Thank you. >> For our guests and for Dave Vellante, I'm Lisa Martin. You're watching theCUBE, the leader in live enterprise and emerging tech coverage. (upbeat music)
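The stateful-workloads discussion above largely comes down to how storage is declared on Kubernetes: a platform team publishes a storage class with a replication policy, and application teams simply request volumes from it, which is how "just enough" infrastructure gets provisioned per database. The sketch below uses the official Kubernetes Python client to show the shape of that workflow; the provisioner name and the "repl" parameter follow Portworx's conventions as best I recall them, so treat them, along with the names and sizes, as illustrative assumptions rather than a verified configuration.

```python
from kubernetes import client, config

def create_replicated_storage(namespace: str = "default") -> None:
    # Assumes a kubeconfig pointing at a cluster where a container-native
    # storage driver (e.g. Portworx) is already installed.
    config.load_kube_config()

    # Platform team: publish a storage class whose volumes are replicated
    # across nodes/zones. Provisioner and parameters are assumptions here;
    # check the storage vendor's docs for the real values.
    sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="px-replicated"),
        provisioner="pxd.portworx.com",
        parameters={"repl": "3"},
        allow_volume_expansion=True,
    )
    client.StorageV1Api().create_storage_class(sc)

    # Application team: request storage for a database pod by referencing the
    # class; the driver handles placement and replication underneath.
    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="orders-db-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="px-replicated",
            resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace, pvc)

if __name__ == "__main__":
    create_replicated_storage()
```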

Published Date : Nov 29 2022

Rick Clark, Veritas | AWS re:Invent 2022


 

>>Hey everyone, and welcome back to theCUBE's live coverage of AWS re:Invent 2022, live from the Venetian Expo in Las Vegas. We're happy to be back. This is the first full day of coverage; we were over here last night. We've got three full days of coverage in addition to last night, and there's about 50,000 people here. This event is ready, people are ready to be back, which is so exciting. Lisa Martin here with Paul Gillin, and Paul, it's great to be back in person. Great to be hosting with you. >>And likewise with you, Lisa. I think it's the first time we've hosted together. >>It is our first time, exactly. >>And we come here to the biggest event that theCUBE ever does during the year. >>It's the Super Bowl of theCUBE. It's elbow to elbow out there. It's full tackle football on the floor of re:Invent. And very exciting. You know, I've been to a lot of conferences going back 40 years, as long as I can remember, been going to tech conferences. This one, the intensity, the excitement around this is really unusual. People are jazzed, they're excited to be here, and that's great to see, particularly coming back from two years of isolation. >>Absolutely. The energy is so palpable. Even yesterday evening and afternoon when I was walking in, you just feel it with all the people here. You know, we talk to so many different companies on theCUBE, Paul. Every company these days has to be a data company. The most important thing about data is making sure that it's backed up and it's protected, that it's secure, that it can be recovered if anything happens. So we're gonna be having a great conversation next about data resiliency with one of our alumni. >>And that would be Rick Scott... Rick, excuse me. >>Rick Clark. Rick Clark, not Rick Scott, cloud sales at Veritas. Rick, welcome back to the program. >>Thank you. Thank you so much. It's a pleasure being here, you know, thank you so much. There's definitely a lot of excitement: myself and 40,000 of my closest cousins and friends all in one place. Yep. What could possibly go wrong, right? >>Yeah, absolutely nothing. So, Rick, Veritas has made some exciting announcements. Talk to us about some of the new things that you've unveiled. >>Yeah, we've been incredibly busy, and, you know, the journey that we've been on... One of the big announcements that we made about three or four weeks ago is the introduction, really, of a brand new cloud native data management platform that we call Veritas Alta. And this is a journey that we've been on for the better part of seven years. We actually started it with our Flex appliances. That was a containerization of our traditional NetBackup business into a highly secured appliance that was loved by our customers. And we continued that theme and that investment into what we call a scale-out and scale-up form factor appliance as well, what we call Flex Scale. And then we continued on that investment theme, basically spending over a billion dollars over that seven-year journey on our cloud native platform. And we call that the Veritas Alta platform. And I think if you really look at what that is, it truly is a data management platform. And I emphasize the term cloud native. And so our traditional technologies around data protection, obviously application resiliency, and digital compliance or data compliance and governance. 
We are the only, the first and only company in the world to provide really a cloud optimized, cloud native platform, really, that addresses that. So it's been fun, it's been a fun journey. >>Talk a little bit about the customer experience. I see over 85% of the Fortune 100 trust Veritas with their data management. That's >>A big number. Yeah. Yeah. It's, it is incredible actually. And it really comes back to the Veritas older platform. We sort of built that with, with four tenants in mind, all driving back to this very similar to AWS's customer obsession. Everything we do each and every day of our waiting moments is a Veritas employee is really surrounds the customer. So it starts with the customer experience on how do they find us to, how do they procure our solutions through things like AWS marketplace and how do they deploy it? And the second thing is around really cost optimization, as we know, you know, to, to say that companies are going through a digital transformation and moving workloads to the cloud. I mean, I've got customers that literally were 20% in cloud a year ago and 80% a year later, we've never seen that kind of velocity. >>And so we've doubled down on this notion of cost optimization. You can only do that with these huge investments that I talked about. And so we're a very profitable company. We've been around, got a great heritage of over 30 years, and we've really taken those investments in r and d to provide that sort of cloud native technology to ultimately make it elastic. And so everything from will spin up and spin down services to optimize the cloud bill for our customers, but we'll also provide the greatest workload support. You know, obviously on-prem workloads are very different from cloud workloads and it's almost like turning the clock back 20 years to see all of those new systems. There's no standard API like s and MP on the network. And so we have to talk to every single PAs service, every single DB PAs, and we capture that information and protect it. So it's really has been a phenomenal journey. It's been great. >>You said this, that that al represents a shift from clouds from flex scale to cloud native. What is the difference there? >>The, the main difference really is we took, you know, obviously our traditional product that you've known for many media years, net backup. It's got, you know, tens of millions of lines of code in that. And we knew if we lifted and shifted it up into the cloud, into an I AEs infrastructure, it's just not, it obviously would perform extremely well, but it wasn't cost optimized for our customer. It was too expensive to to run. And so what we did is we rewrote with microservices and containerization, Kubernetes huge parts of that particular product to really optimize it for the cloud. And not only have we done it for that technology, what we now call alter data protection, but we've done it across our entire port portfolio. That was really the main change that we made as part of this particular transition. And >>What have you done to prepare customers for that shift? Is this gonna be a, a drop in simple upgrade for them? >>Absolutely. Yeah. In fact, one of the things that we introduced is we, we invest still very heavily with regards to our OnPrem solutions. We're certainly not abandoning, we're still innovating. There's a lot of data still OnPrem that needs to move to the cloud. And so we have a unique advantage of all of the different workload supports that we provide OnPrem. We continue that expansion into the cloud. 
So we, we create it as part of the Veritas AL Vision, a technology, we call it AL view. So it's a single painter glass across both OnPrem and cloud for our customers. And so now they can actually see all of their data protection, all our application availability, single collect, all through that single unified interface, which is really game changing in the industry for us. >>It's game changing for customers too, because customers have what generally six to seven different backup technologies in their environment that they're having to individually manage and provision. So the, the workforce productivity improvements I can imagine are, are huge with Veritas. >>Yeah. You you nailed it, right? You must have seen my script, but Absolutely. I mean, I look at the analogy of, you think about the airlines, what's one of the first things airlines do with efficiency? South Southwest Airlines was the best example, a standardized on the 7 37, right? And so all of their pilots, all of their mechanics, all know how to operate the 7 37. So we are doing the same thing with enterprise data protection. So whether you're OnPrem at the edge or in the cloud or even multi-cloud, we can provide that single painter glass. We've done it for our customers for 30 plus years. We'll continue to do it for another 30 something years. And so it's really the first time with Veritas altar that, that we're, we're coming out with something that we've invested for so long and put, put such a huge investment on that can create those changes and that compelling solution for our customers. So as you can see, we're pretty pumped and excited about it. >>Yes, I can >>Use the term data management to describe Alta, and I want to ask about that term because I hear it a lot these days. Data management used to be database, now data management is being applied to all kinds of different functions across the spectrum. How do you define data management in Veritas >>Perspective? Yeah, there's a, we, we see it as really three main pillars across the environment. So one is protection, and we'll talk a little bit about this notion of ransomware is probably the number one use case. So the ability to take the most complex and the biggest, most vast applications. SAP is an example with hundreds of different moving parts to it and being able to protect that. The second is application resiliency. If, if you look at the cloud, there's this notion of, of responsibility, shared responsibility in the cloud. You've heard it, right? Yep. Every single one of the cloud service providers, certainly AWS has up on their website, this is what we protect, here's the demarcation line, the line in the sand, and you, the customer are responsible for that other level. And so we've had a technology, you previously knew it as InfoScale, we now call it alter application resiliency. >>And it can provide availability zone to availability zone, real time replication, high availability of your mission critical applications, right? So not only do we do the traditional backups, but we can also provide application resiliency for mission critical. And then the third thing really from a data management standpoint is all around governance and compliance. You know, ac a lot of our customers need to keep data for five, 10 years or forever. They're audited. There's regulations and different geographies around the world. And, and those regulations require them to be able to really take control of their cloud, take control of their data. 
And so we have a whole portfolio of solutions under that data compliance, data government. So back to your, your question Paul, it's really the integration and the intersection of those three main pillars. We're not a one trick pony. We've been at this for a long time, and they're not just new products that we invented a couple of months ago and brought to market. They're tried and tested with eight 80,000 customers and the most complex early solutions on the planet that we've been supporting. >>I gotta ask you, you know, we talked about those three pillars and you talked about the shared responsibility model. And think of that where you mentioned aws, Salesforce, Microsoft 365, Google workspace, whatnot. Are you finding that most customers aren't aware of that and haven't been protecting those workloads and then come to you and saying, Hey guys, guess what, this is what this is what they're responsible for. The data is >>You Yeah, I, it's, it's our probably biggest challenge is, is one of awareness, you know, with the cloud, I mean, how many times have you spoken to someone? You just put it in the cloud. Your applications, like the cloud providers like aws, they'll protect everything. Nothing will ever go down. And it's kind like if you, unless your house was ever broken into, you're probably not gonna install that burglar alarm or that fire alarm, right? Hopefully that won't be an event that you guys have to suffer through. So yeah, it's definitely, it wasn't till the last year or so the cloud service providers really published jointly as to where is their responsibility, right? So a great example is an attack vector for a lot of corporations is their SAS applications. So, you know, whether it it's your traditional SA applications that is available that's available on the web to their customers as a sas. >>And so it's certainly available to the bad actors. They're gonna, where there's, there's gonna be a point they're gonna try to get in. And so no matter what your resiliency plan is, at the end of the day, you really need to protect it. And protection isn't just, for example, with M 365 having a snapshot or a recycle bin, that's just not good enough. And so we actually have some pretty compelling technology, what we call ALTA SAS protection, which covers the, pretty much the, the gamut of the major SAS technologies to protect those and make it available for our customers. So yeah, certainly it's a big part of it is awareness. Yeah. >>Well, I understand that the shared responsibility model, I, I realize there's a lot of confusion about that still, but in the SaaS world that's somewhat different. The responsibility of the SaaS provider for protecting data is somewhat different. How, how should, what should customers know about that? >>I think, you know, the, the related to that, if, if you look at OnPrem, you know, approximately 35 to 40% of OnPrem enterprise data is protected. It's kind of in a long traditional problem. Everyone's aware of it. You know, I remember going to a presentation from IBM 20 something years ago, and someone held their push hand up in the room about the dis drives and says, you need to back it up. And the IBM sales guy said, no, IBM dis drives never crash. Right? And so fast forward to here we are today, things have changed. So we're going through almost a similar sort of changes and culture in the cloud. 8% of the data in the cloud is protected today, 8%. That's incredible. 
Meaning >>That there is independent backup devoted >>To that data in some cases, not at all. And something many cases, the customer just assumes that it's in the cloud, therefore it's always available. I never have to worry about protecting it, right? And so that's a big problem that we're obviously trying to, trying to solve. And we do that all under the umbrella of ransomware. That's a huge theme, huge investment that, that Veritas does with regards to providing that resiliency for our >>Customers. Ransomware is scary. It is becoming so prolific. The bad actors have access to technologies. Obviously companies are fighting them, but now ransomware has evolved into, no longer are we gonna get hit, it's when, yeah, it's how often it's what's the damage going to be. So the ability to help customers recover from ransomware, that resiliency is table stakes for businesses in any industry these days. Does that, that one of the primary pain points that your customers are coming to you with? >>It's the number one pain point. Yeah, it's, it's incredible. I mean, there's not a single briefing that our teams are doing customer meetings where that term ransomware doesn't come up as, as their number one use case. Just to give you something, a couple of statistics. There's a ransomware attack attack that happens 11 times a second right around the globe. And this isn't just, you know, minor stuff, right? I've got friends that are, you know, executives of large company that have been hit that have that some, you know, multimillion dollar ransom attack. So our, our play on this is, when you think about it, is data protection is the last line of defense. Yes. And so if they break through, it's not a case, Lisa, as you mentioned, if it's a case of when Yeah. And so it's gonna happen. So one of the most important things is knowing how do you know you have a gold copy, a clean copy, and you can recover at speed in some cases. >>We're talking about tens of thousands of systems to do that at speed. That's in our dna. We've been doing it for many, many years. And we spoke through a lot of the cyber insurance companies on this particular topic as well. And what really came back from that is that they're actually now demanding things like immutable storage, malware detection, air gaping, right? Anomaly detection is sort of core technologies tick the box that they literally won't ensure you unless you have those core components. And so what we've done is we've doubled down on that investment. We use AI in ML technologies, particularly around the anomaly detection. One of the, the, the unique and ne differentiators that Verto provides is a ransomware resiliency scorecard. Imagine the ability to save uran a corporation. We can come in and run our analytics on your environment and kind of give you a grade, right? Wouldn't you prefer that than waiting for the event to take place to see where your vulnerability really is? And so these are some of the advantages that we can actually provide for our customers, really, really >>To help. Just a final quick question. There is a, a common perception, I believe that ransomware is an on premise problem. In fact, it is also a cloud problem. Is that not right? >>Oh, absolutely. I I think that probably the biggest attack vector is in the cloud. If it's, if it's OnPrem, you've certainly got a certain line of defense that's trying to break through. But, you know, you're in the open world there. 
Obviously with SAS applications in the cloud, it's not a case of if, but when, and it's, and it's gonna continue to get, you know, more and more prevalent within corporations. There's always gonna be those attack factors that they find the, the flash wounds that they can attack to break through. What we are concentrating on is that resiliency, that ability for customers to recover at speed. We've done that with our traditional appliances from our heritage OnPrem. We continue to do that with regard to resiliency at speed with our customers in the cloud, with partners like aws >>For sure. Almost done. Give me your 30 seconds on AWS and Veritas. >>We've had a partnership for the better part of 10 years. It's incredible when you think about aws, where they released the elastic compute back in 2006, right? We've been delivering data protection, a data management solutions for, for the better part of 30 years, right? So, so we're, we're Junos in our space. We're the leader in, in data protection and enterprise data protection. We were on-prem. We, we continue to be in the cloud as AWS was with the cloud service provided. So the synergies are incredible. About 80 to 85% of our, our joint customers are the same. We take core unique superpowers of aws, like AWS outposts and AWS Glacier Instant retrieval, for example, those core technologies and incorporate them into our products as we go to Mark. And so we released a core technology a few months ago, we call it ultra recovery vault. And it's an air gap, a mutable storage, worm storage, right Once, right? You can't change it even when the bad actors try to get in. They're independent from the customer's tenant and aws. So we manage it as a managed backup service for our customers. Got it. And so our customers are using that to really help them with their ransomware. So it's been a tremendous partnership with AWS >>Standing 10 years of accounting. Last question for you, Rick. You got a billboard on the 1 0 1 in Santa Clara, right? By the fancy Verto >>1 0 1? >>Yeah. Right. Well, there's no traffic. What does that billboard say? What's that bumper sticker about? Vertus, >>I think, I think the billboard would say, welcome to the new Veritas. This is not your grandfather's old mobile. We've done a phenomenal job in, in the last, particularly the last three or four years, to really reinvent ourselves in the cloud and the investments that we made are really paying off for our customers today. So I'm excited to be part of this journey and excited to talk to you guys today. >>Love it. Not your grandfather's Veritas. Rick, thank you so much for joining Paula, me on the forgot talking about what you guys are doing, how you're helping customers, really established that cyber of resiliency, which is absolutely critical these days. We appreciate your >>Time. My pleasure. Thank you so much. >>All right, for our guest and Paul Gilland, I'm Lisa Martin, you're watching the Queue, which as you know is the leader in live enterprise and emerging check coverage.

Published Date : Nov 29 2022

Ajay Patel, VMware | AWS re:Invent 2022


 

>>Hello everyone. Welcome back to theCUBE Live at AWS re:Invent 2022. This is our first day of three and a half days of wall-to-wall coverage on theCUBE. Lisa Martin here with Dave Vellante. Dave, it's getting louder and louder behind us. People are back. They're excited. >>You know what somebody told me today? Hm? They said that less than 15% of the audience is developers. I'm like, no way. I don't believe it. But now maybe there's a redefinition of developers, because it's all about the data and it's all about the developers in my mind. And that'll never change. >>It is. And one of the things we're gonna be talking about is app modernization, as customers really navigate the journey to do that so that they can be competitive and meet the demands of customers. We've got an alumni back with us to talk about that. Ajay Patel joins us, the SVP and GM of the Modern Apps and Management business group at VMware. Ajay, welcome back. >>Thank you. It's always great to be here, so thank you, David. Good to see you. >>Isn't it great? It's great to be back in person. So the VMware Tanzu team is here, back at re:Invent, on the show floor. There we go. Talk about some of the things that you guys are doing together, innovating with AWS. >>Yeah, so it's great to be back in person after multiple years, and the energy level continues to amaze me. The partnership with AWS started on the infrastructure side with VMware Cloud on AWS. And with Tanzu, we're extending it to the application space. And the work here is really about how do you make developers productive. To your earlier point, it's all about developers. It's all about getting applications into production securely, safely, continuously. And Tanzu is all about making that bridge between great applications being built, getting them deployed, and running and operating at scale. And EKS is a dominant Kubernetes platform. And so the better-together story of Tanzu and EKS is a great one for us, and we're excited to announce some innovations in that area. >>Well, Tanzu was so front and center at VMware Explore. I wasn't at VMware Explore Europe. Right. But I'm sure it was a similar kind of focus. When are customers choosing Tanzu? Why are they choosing Tanzu? What's the update since last August, when
So this is becoming mainstream and customers are really focusing on delivering in value to making developers productive. >>Now. And, and, and the other nuance that I see, and you kinda see it here in the ecosystem, but when you talk about your customers with platform engineering, they're actually building their, they're pointing their business. They gonna page outta aws, pointing their businesses to their customers, right? Becoming software companies, becoming cloud companies and really generating new forms of revenue. >>You know, the interesting thing is, some of my customers I would never have thought as leading edge are retailers. Yeah. And not your typical Starbucks that you get a great example. I have an auto parts company that's completely modernizing how they deliver point of sale all the way to the supply chain. All built on ES at scale. You're typically think of that a financial services or a telco leading the pack. But I'm seeing innovation in India. I'm seeing the innovation in AMEA coming out of there, across the board. Every industry is becoming a product company. A digital twin as we would call it. Yeah. And means they become software houses. Yeah. They behave more like you and I in this event versus a, a traditional enterprise. >>And they're building their own ecosystems and that ecosystem's generating data that's generating more value. And it's just this cycle. It's, >>It's a amazing, it's a flywheel. So innovation continues to grow. Talk about really unlocking the developer experience and delivering to them what they need to modernize apps to move as fast and quickly as they want to. >>So, you know, I think AWS coin this word undifferentiated heavy lifting. If you think of a typical developer today, how much effort does he have to put in before he can get a single line of code out in production? If you can take away all the complexity, typically security compliance is a big headache for them, right? Developer doesn't wanna worry about that. Infrastructure provisioning, getting all the configurations right, is a headache for them. Being able to understand what size of infrastructure or resource to use cost effectively. How do you run it operationally? Cuz the application team is responsible for the operational cost of the product or service. So these are the un you know, heavy lifting that developers want to get away from. So they wanna write great code, build great experiences. And we've always talked about frameworks a way to abstract with the complexity. And so for us, there's a massive opportunity to say, how do I simplify and take away all the heavy lifting to get an idea into production seamlessly, continuously, securely. >>Is that part of your partnership? Because you think about a aws, they're really not about frameworks, they're about primitives. I mean, Warner Vos even talks about that in his, in his speech, you know, but, but that makes it more challenging for developers. >>No, actually, if you look at some of their initial investments around proton and et cetera work, they're starting to do, they're recognized, you know, PS is a bad, bad word, but the outcomes a platform as a service offers is what everybody wants. Just talking to the AWS leaders, responsible area, he actually has a separate build team. He didn't know what to call the third team. He has a Kubernetes team, he has a serverless team and has a build team. And that build team is everything above Kubernetes to make the developer productive. Right. 
And the ecosystem to bring together to make that happen. So I think AWS is recognizing that primitives are great for the elite developers, but if they want to get the mass scale and adoption in the business, it, if you will, they're gonna have to provide richer set of building blocks and reduce the complex and partnership like ours. Make that a reality. And what I'm excited about is there's a clear gap here, and t's the best platform to kind of fill that gap. Well, >>And I, I think that, you know, they're gonna double down triple, I just wrote about this double down, triple down on the primitives. Yes. They have to have the best, you know, servers and storage and database. And I think the way they, they, I call it taping the seams is with the ecosystem. Correct. You know, and they, nobody has a, a better ecosystem. I mean, you guys are, you know, the, the postage child for the ecosystem and now this even exceeds that. But partnering up, that's how they >>Continue to, and they're looking for someone who's open, right? Yeah. Yeah. And so one of the first question is, you know, are you proprie or open? Because one of the things they're fighting against is the lock in. So they can find a friendly partner who is open source, led, you know, upstream committing to the code, delivering that innovation, and bring the ecosystem into orchestrated choreography. It's like singing a music, right? They're running a, running an application delivery team is like running a, a musical orchestra. There's so many moving parts here, right? How do you make them sing together? And so if Tan Zoo and our platform can help them sing and drive more of their services, it's only more valuable for them. And >>I think the partners would generally say, you know, AWS always talking about customer obsession. It's like becomes this bromine, you go, yeah, yeah. But I actually think in the field, the the sellers would say, yeah, we're gonna do what the customer, if that means we're gonna partner up. Yeah. And I think AWS's comp structure makes it sort >>Of, I learned today how, how incentives with marketplaces work. Yeah. And it is powerful. It's very powerful. Yeah. Right. So you line up the sales incentive, you line up the customer and the benefits, you line up bringing the ecosystem to drive business results and everybody, and so everybody wins. And which is what you're seeing here, the excitement and the crowd is really the whole, all boats are rising. Yeah. Yeah. Right, right. And it's driven by the fact that customers are getting true value out of it. >>Oh, absolutely. Tremendous value. Speaking of customers, give us an example of a customer story that you think really articulates the value of what Tanzi was delivering, especially making that developer experience far simpler. What are some of those big business outcomes that that delivers? >>You know, at Explorer we had the CIO of cvs and with their acquisition of Aetna and CVS Health, they're transforming the, the health industry. And they talked about the whole covid and then how they had to deliver the number of, you know, vaccines to u i and how quickly they had to deliver on that. It talked about Tanu and how they leverage, leverage a Tanza platform to get those new applications out and start to build that. And Ro was basically talking about his number one prior is how does he get his developers more productive? Number to priority? How does he make sure the apps are secure? Number three, priority, how does he do it cost effectively in the world? 
Particularly where we're heading, where, you know, budgets are gonna get tighter. So how do I move more dollars to innovation while I continue to drive more efficiency in my platform? And so cloud is the future: how does he make the best use of the cloud, both for his developers and his operations team? Right? >>What's happening in serverless? In 2017, Andy Jassy was on theCUBE. He said if AWS, if Amazon, had to build itself all over again, it would build using serverless. And that was a big quote; we've mined that for years. And as you were talking about developer productivity, I started writing down all the things developers have to do. They gotta build a container image, they gotta deploy an EC2 instance, they gotta allocate memory, they gotta fence off the apps in a virtual machine, they gotta run the compute against the app, they gotta pay for all that. So, okay, what's your story? What's the market asking for in terms of serverless? Because there's still some people who want control over the runtime. Help us sift through that. >>And it really comes back to the application pattern, or the type you're running. If it's a stateless application that you need to spin up and spin down, serverless is awesome. Why would I wanna worry about scaling it up? I wanna set up some SLAs, SLIs, service level objectives or indicators, and then let the system bring the resources I need as I need them. That's a perfect example for serverless, right? On the other hand, if you have more of a workflow-type application, there's a sequence, there's state. Try building an application using serverless where you have to maintain state between two steps in the process. Not so much fun, right? So I don't think serverless is the answer for everything, but for many use cases the scale-to-zero is a tremendous benefit. Events happen, you wanna process something, the work is done, you quietly go away. I don't wanna shut the server down and start it up; I want that to happen magically. So I think there's a role for serverless. I believe Kubernetes and serverless are the new runtime platform. It's not one or the other; it's about marrying that around the application patterns. DevOps shouldn't care about it. That's an infrastructure concern. Let me just run the application and let the infrastructure manage the operations of it, whether it's serverless, whether it's Kubernetes clusters, whether it's orchestration. That's details, right? I shouldn't worry about it. >>So we shouldn't think of those as separate architectures. We should think of it as an architecture, >>A continuum, in some ways, of different application workload types. And that's a toolkit that the operator has at his disposal to configure, saying, where should that application run? Do I want control? You can run it on a Kubernetes cluster. Can I just run it on serverless infrastructure and leave it to the cloud provider to do it all for me? Sure. What was PaaS? PaaS was exactly that: write the code, we do the rest. Those are just elements of that. >>And then Knative is kind of in the middle, >>Right? Knative is just a technology that's starting to build that capability out in a standards-based way, to make serverless available consistently across all clouds. So I'm not building to a Lambda or a particular technology type; I'm building in a standard way, in a standard programming model.
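
To make the stateless, scale-to-zero pattern described above concrete, here is a minimal sketch of an event handler in the AWS Lambda style. It is a generic illustration, not code discussed in the interview, and the event shape and field names are hypothetical placeholders.

```python
import json

def handler(event, context):
    # Each invocation is independent: no state is carried between calls,
    # which is what makes scale-to-zero a good fit for this workload shape.
    records = event.get("Records", [])
    processed = [r.get("eventName", "unknown") for r in records]
    # A multi-step, stateful workflow would instead need an external store
    # or an orchestrator to carry state between steps, the case described
    # above as a poor fit for plain serverless functions.
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}
```
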
And the infrastructure just... >>Works for me on any cloud. >>The whole idea: portability, consistency. >>Right. Powerful. Yep. >>What are some of the things folks can expect to learn from VMware Tanzu and AWS this week at the show? >>Yeah, so there are some really great announcements. First of all, we're excited to extend our partnership with AWS in the area of EKS. What I mean by that is, traditionally we would manage an EKS cluster and give you visibility into what's running in there, but we weren't able to manage the lifecycle. With this announcement, we can give you full lifecycle management of EKS workloads. Our customers have 400-plus EKS clusters, multiple teams sharing those in a multi-tenanted way with common policy, and they wanna manage the full lifecycle, including all the upstream open source components that make up Kubernetes. People think EKS is one thing; it's a collection of a lot of open source packages. We're making it simple to manage it consistently from a single place. On the security front, we're now making Tanzu Service Mesh available in the marketplace. >>And if you look at what a service mesh is, it's an overlay, it's an abstraction. I can create the idea of a global namespace that cuts across multiple VPCs. I'm hearing Amazon's gonna make some announcements around VPCs and how they stitch VPCs together. It's all moving towards this idea of abstractions. I can set policy at a logical level; I don't have to worry about data security and the communication between services. These are the things we're now enabling, which really make EKS even more productive, enterprise grade, enterprise ready. And so there's a lot of excitement from the EKS development teams as well to partner closely with us to make this an end-to-end solution for our customers. >>Yeah. So I mean, under Jassy it was really driving those primitives and helping developers, and they're continuing that path, but also recognizing the need for solutions. And that's where the ecosystem comes in, >>Right? And the question is, what is that box? As you said last time, right? For the supercloud, there is a cloud infrastructure which is becoming the new palette, but how do you make sense of the 300-plus primitives? How do you bring them together? What are the best practices and patterns? How do I manage that when something goes wrong? These are real problems that we're looking to solve. >>And if you're gonna have deeper business integration with the cloud and technology in general, you have to have that >>Abstraction. You know, one of the simple questions I ask is, how do you know you're getting value from your cloud investment? That's a very hard question. What's your trade-off between performance and cost? Do you know where your security stands? When a Log4j happens, do you know all the open source packages you need to patch? These are very simple questions, but imagine having to answer them today when everybody's doing it in a bespoke manner using the set of primitives. You need a platform. The industry has shown, at scale, you have to start standardizing and building a consistent way of delivering and abstracting stuff. And that's where the next stage of the cloud journey is. >>And with the economic environment, I think people are also saying, okay, how do we get more? Exactly. We're in the cloud now. How do we get more >>Value out of the cloud? >>Exactly. Totally. >>How do we transform the business? Last question, AJ, for you: if you had a bumper sticker and you're gonna put it on your fancy car, what would it say about VMware Tanzu on AWS? >>I would say Tanzu accelerates apps. >>Love it. Thank you so much. >>Thank you. Thank you so much for joining us. >>Appreciate it. Always great to be here. >>Pleasure. Likewise. For our guest, I'm Dave Vellante. I'm Lisa Martin. You're watching theCUBE, the leader in emerging and enterprise tech coverage.

Published Date : Nov 29 2022

SUMMARY :

Lisa Martin and Dave Vellante open theCUBE's first day at AWS re:Invent 2022 with AJ Patel, SVP and GM of VMware's Modern Apps and Management business group. They discuss app modernization, the rise of platform engineering teams, the better-together story of VMware Tanzu and Amazon EKS, securing the software supply chain, where serverless and Kubernetes each fit, and new Tanzu capabilities for EKS lifecycle management and service mesh announced at the show.


Subbu Iyer


 

>> And it'll be the fastest 15 minutes of your day from there. >> In three- >> We go Lisa. >> Wait. >> Yes >> Wait, wait, wait. I'm sorry I didn't pin the right speed. >> Yap, no, no rush. >> There we go. >> The beauty of not being live. >> I think, in the background. >> Fantastic, you all ready to go there, Lisa? >> Yeah. >> We are speeding around the horn and we are coming to you in five, four, three, two. >> Hey everyone, welcome to theCUBE's coverage of AWS re:Invent 2022. Lisa Martin here with you with Subbu Iyer one of our alumni who's now the CEO of Aerospike. Subbu, great to have you on the program. Thank you for joining us. >> Great as always to be on theCUBE Lisa, good to meet you. >> So, you know, every company these days has got to be a data company, whether it's a retailer, a manufacturer, a grocer, a automotive company. But for a lot of companies, data is underutilized yet a huge asset that is value added. Why do you think companies are struggling so much to make data a value added asset? >> Well, you know, we see this across the board. When I talk to customers and prospects there is a desire from the business and from IT actually to leverage data to really fuel newer applications, newer services newer business lines if you will, for companies. I think the struggle is one, I think one the, the plethora of data that is created. Surveys say that over the next three years data is going to be you know by 2025 around 175 zettabytes, right? A hundred and zettabytes of data is going to be created. And that's really a growth of north of 30% year over year. But the more important and the interesting thing is the real time component of that data is actually growing at, you know 35% CAGR. And what enterprises desire is decisions that are made in real time or near real time. And a lot of the challenges that do exist today is that either the infrastructure that enterprises have in place was never built to actually manipulate data in real time. The second is really the ability to actually put something in place which can handle spikes yet be cost efficient to fuel. So you can build for really peak loads, but then it's very expensive to operate that particular service at normal loads. So how do you build something which actually works for you for both users, so to speak. And the last point that we see out there is even if you're able to, you know bring all that data you don't have the processing capability to run through that data. So as a result, most enterprises struggle with one capturing the data, making decisions from it in real time and really operating it at the cost point that they need to operate it at. >> You know, you bring up a great point with respect to real time data access. And I think one of the things that we've learned the last couple of years is that access to real time data it's not a nice to have anymore. It's business critical for organizations in any industry. Talk about that as one of the challenges that organizations are facing. >> Yeah, when we started Aerospike, right? When the company started, it started with the premise that data is going to grow, number one exponentially. Two, when applications open up to the internet there's going to be a flood of users and demands on those applications. And that was true primarily when we started the company in the ad tech vertical. So ad tech was the first vertical where there was a lot of data both on the supply set and the demand side from an inventory of ads that were available. 
And on the other hand, they had like microseconds or milliseconds in which they could make a decision on which ad to put in front of you and I so that we would click or engage with that particular ad. But over the last three to five years what we've seen is as digitization has actually permeated every industry out there the need to harness data in real time is pretty much present in every industry. Whether that's retail, whether that's financial services telecommunications, e-commerce, gaming and entertainment. Every industry has a desire. One, the innovative companies, the small companies rather are innovating at a pace and standing up new businesses to compete with the larger companies in each of these verticals. And the larger companies don't want to be left behind. So they're standing up their own competing services or getting into new lines of business that really harness and are driven by real time data. So this compelling pressures, one, you know customer experience is paramount and we as customers expect answers in you know an instant, in real time. And on the other hand, the way they make decisions is based on a large data set because you know larger data sets actually propel better decisions. So there's competing pressures here which essentially drive the need one from a business perspective, two from a customer perspective to harness all of this data in real time. So that's what's driving an incessant need to actually make decisions in real or near real time. >> You know, I think one of the things that's been in short supply over the last couple of years is patience. We do expect as consumers whether we're in our business lives our personal lives that we're going to be getting be given information and data that's relevant it's personal to help us make those real time decisions. So having access to real time data is really business critical for organizations across any industries. Talk about some of the main capabilities that modern data applications and data platforms need to have. What are some of the key capabilities of a modern data platform that need to be delivered to meet demanding customer expectations? >> So, you know, going back to your initial question Lisa around why is data really a high value but underutilized or under-leveraged asset? One of the reasons we see is a lot of the data platforms that, you know, some of these applications were built on have been then around for a decade plus. And they were never built for the needs of today, which is really driving a lot of data and driving insight in real time from a lot of data. So there are four major capabilities that we see that are essential ingredients of any modern data platform. One is really the ability to, you know, operate at unlimited scale. So what we mean by that is really the ability to scale from gigabytes to even petabytes without any degradation in performance or latency or throughput. The second is really, you know, predictable performance. So can you actually deliver predictable performance as your data size grows or your throughput grows or your concurrent user on that application of service grows? It's really easy to build an application that operates at low scale or low throughput or low concurrency but performance usually starts degrading as you start scaling one of these attributes. The third thing is the ability to operate and always on globally resilient application. 
And that requires a really robust data platform that can be up on a five nine basis globally, can support global distribution because a lot of these applications have global users. And the last point is, goes back to my first answer which is, can you operate all of this at a cost point which is not prohibitive but it makes sense from a TCO perspective. 'Cause a lot of times what we see is people make choices of data platforms and as ironically their service or applications become more successful and more users join their journey the revenue starts going up, the user base starts going up but the cost basis starts crossing over the revenue and they're losing money on the service, ironically as the service becomes more popular. So really unlimited scale predictable performance always on a globally resilient basis and low TCO. These are the four essential capabilities of any modern data platform. >> So then talk to me with those as the four main core functionalities of a modern data platform, how does Aerospike deliver that? >> So we were built, as I said from day one to operate at unlimited scale and deliver predictable performance. And then over the years as we work with customers we build this incredible high availability capability which helps us deliver the always on, you know, operations. So we have customers who are who have been on the platform 10 years with no downtime for example, right? So we are talking about an amazing continuum of high availability that we provide for customers who operate these, you know globally resilient services. The key to our innovation here is what we call the hybrid memory architecture. So, you know, going a little bit technically deep here essentially what we built out in our architecture is the ability on each node or each server to treat a bank of SSDs or solid-state devices as essentially extended memory. So you're getting memory performance but you're accessing these SSDs. You're not paying memory prices but you're getting memory performance. As a result of that you can attach a lot more data to each node or each server in a distributed cluster. And when you kind of scale that across basically a distributed cluster you can do with Aerospike the same things at 60 to 80% lower server count. And as a result 60 to 80% lower TCO compared to some of the other options that are available in the market. Then basically, as I said that's the key kind of starting point to the innovation. We lay around capabilities like, you know replication, change data notification, you know synchronous and asynchronous replication. The ability to actually stretch a single cluster across multiple regions. So for example, if you're operating a global service you can have a single Aerospike cluster with one node in San Francisco one node in New York, another one in London and this would be basically seamlessly operating. So that, you know, this is strongly consistent, very few no SQL data platforms are strongly consistent or if they are strongly consistent they will actually suffer performance degradation. And what strongly consistent means is, you know all your data is always available it's guaranteed to be available there is no data lost any time. So in this configuration that I talked about if the node in London goes down your application still continues to operate, right? Your users see no kind of downtime and you know, when London comes up it rejoins the cluster and everything is back to kind of the way it was before, you know London left the cluster so to speak. 
So the ability to do this globally resilient highly available kind of model is really, really powerful. A lot of our customers actually use that kind of a scenario and we offer other deployment scenarios from a higher availability perspective. So everything starts with HMA or Hybrid Memory Architecture and then we start building a lot of these other capabilities around the platform. And then over the years what our customers have guided us to do is as they're putting together a modern kind of data infrastructure, we don't live in the silo. So Aerospike gets deployed with other technologies like streaming technologies or analytics technologies. So we built connectors into Kafka, Pulsar, so that as you're ingesting data from a variety of data sources you can ingest them at very high ingest speeds and store them persistently into Aerospike. Once the data is in Aerospike you can actually run Spark jobs across that data in a multi-threaded parallel fashion to get really insight from that data at really high throughput and high speed. >> High throughput, high speed, incredibly important especially as today's landscape is increasingly distributed. Data centers, multiple public clouds, Edge, IoT devices, the workforce embracing more and more hybrid these days. How are you helping customers to extract more value from data while also lowering costs? Go into some customer examples 'cause I know you have some great ones. >> Yeah, you know, I think, we have built an amazing set of customers and customers actually use us for some really mission critical applications. So, you know, before I get into specific customer examples let me talk to you about some of kind of the use cases which we see out there. We see a lot of Aerospike being used in fraud detection. We see us being used in recommendations engines we get used in customer data profiles, or customer profiles, Customer 360 stores, you know multiplayer gaming and entertainment. These are kind of the repeated use case, digital payments. We power most of the digital payment systems across the globe. Specific example from a specific example perspective the first one I would love to talk about is PayPal. So if you use PayPal today, then you know when you're actually paying somebody your transaction is, you know being sent through Aerospike to really decide whether this is a fraudulent transaction or not. And when you do that, you know, you and I as a customer are not going to wait around for 10 seconds for PayPal to say yay or nay. We expect, you know, the decision to be made in an instant. So we are powering that fraud detection engine at PayPal. For every transaction that goes through PayPal. Before us, you know, PayPal was missing out on about 2% of their SLAs which was essentially millions of dollars which they were losing because, you know, they were letting transactions go through and taking the risk that it's not a fraudulent transaction. With Aerospike they can now actually get a much better SLA and the data set on which they compute the fraud score has gone up by you know, several factors. So by 30X if you will. So not only has the data size that is powering the fraud engine actually gone up 30X with Aerospike but they're actually making decisions in an instant for, you know, 99.95% of their transactions. So that's- >> And that's what we expect as consumers, right? We want to know that there's fraud detection on the swipe regardless of who we're interacting with. 
>> Yes, and so that's a really powerful use case and you know, it's a great customer success story. The other one I would talk about is really Wayfair, right, from retail and you know from e-commerce. So everybody knows Wayfair global leader in really in online home furnishings and they use us to power their recommendations engine. And you know it's basically if you're purchasing this, people who bought this also bought these five other things, so on and so forth. They have actually seen their cart size at checkout go up by up to 30%, as a result of actually powering their recommendations engine through Aerospike. And they were able to do this by reducing the server count by 9X. So on one ninth of the servers that were there before Aerospike, they're now powering their recommendations engine and seeing cart size checkout go up by 30%. Really, really powerful in terms of the business outcome and what we are able to, you know, drive at Wayfair. >> Hugely powerful as a business outcome. And that's also what the consumer wants. The consumer is expecting these days to have a very personalized relevant experience that's going to show me if I bought this show me something else that's related to that. We have this expectation that needs to be really fueled by technology. >> Exactly, and you know, another great example you asked about you know, customer stories, Adobe. Who doesn't know Adobe, you know. They're on a mission to deliver the best customer experience that they can. And they're talking about, you know great Customer 360 experience at scale and they're modernizing their entire edge compute infrastructure to support this with Aerospike. Going to Aerospike basically what they have seen is their throughput go up by 70%, their cost has been reduced by 3X. So essentially doing it at one third of the cost while their annual data growth continues at, you know about north of 30%. So not only is their data growing they're able to actually reduce their cost to actually deliver this great customer experience by one third to one third and continue to deliver great Customer 360 experience at scale. Really, really powerful example of how you deliver Customer 360 in a world which is dynamic and you know on a data set which is constantly growing at north of 30% in this case. >> Those are three great examples, PayPal, Wayfair, Adobe, talking about, especially with Wayfair when you talk about increasing their cart checkout sizes but also with Adobe increasing throughput by over 70%. I'm looking at my notes here. While data is growing at 32%, that's something that every organization has to contend with data growth is continuing to scale and scale and scale. >> Yap, I'll give you a fun one here. So, you know, you may not have heard about this company it's called Dream11 and it's a company based out of India but it's a very, you know, it's a fun story because it's the world's largest fantasy sports platform. And you know, India is a nation which is cricket crazy. So you know, when they have their premier league going on and there's millions of users logged onto the Dream11 platform building their fantasy league teams and you know, playing on that particular platform, it has a hundred million users a hundred million plus users on the platform, 5.5 million concurrent users and they have been growing at 30%. So they are considered an amazing success story in terms of what they have accomplished and the way they have architected their platform to operate at scale. And all of that is really powered by Aerospike. 
Think about that they're able to deliver all of this and support a hundred million users 5.5 million concurrent users all with, you know 99 plus percent of their transactions completing in less than one millisecond. Just incredible success story. Not a brand that is, you know, world renowned but at least you know from what we see out there it's an amazing success story of operating at scale. >> Amazing success story, huge business outcomes. Last question for you as we're almost out of time is talk a little bit about Aerospike AWS the partnership Graviton2 better together. What are you guys doing together there? >> Great partnership. AWS has multiple layers in terms of partnerships. So, you know, we engage with AWS at the executive level. They plan out, really roll out of new instances in partnership with us, making sure that, you know those instance types work well for us. And then we just released support for Aerospike on the Graviton platform and we just announced a benchmark of Aerospike running on Graviton on AWS. And what we see out there is with the benchmark a 1.6X improvement in price performance. And you know about 18% increase in throughput while maintaining a 27% reduction in cost, you know, on Graviton. So this is an amazing story from a price performance perspective, performance per watt for greater energy efficiencies, which basically a lot of our customers are starting to kind of talk to us about leveraging this to further meet their sustainability target. So great story from Aerospike and AWS not just from a partnership perspective on a technology and an executive level, but also in terms of what joint outcomes we are able to deliver for our customers. >> And it sounds like a great sustainability story. I wish we had more time so we would talk about this but thank you so much for talking about the main capabilities of a modern data platform, what's needed, why, and how you guys are delivering that. We appreciate your insights and appreciate your time. >> Thank you very much. I mean, if folks are at re:Invent next week or this week come on and see us at our booth and we are in the data analytics pavilion and you can find us pretty easily. Would love to talk to you. >> Perfect, we'll send them there. Subbu Iyer, thank you so much for joining me on the program today. We appreciate your insights. >> Thank you Lisa. >> I'm Lisa Martin, you're watching theCUBE's coverage of AWS re:Invent 2022. Thanks for watching. >> Clear- >> Clear cutting. >> Nice job, very nice job.
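
To picture the simple key-value access pattern behind the hybrid memory architecture Subbu describes above, here is a minimal sketch using the open-source Aerospike Python client. It is an illustrative example only, not code from the interview or from any customer mentioned; the host address, namespace, and set names are placeholders.

```python
import aerospike

# Placeholder seed node; point this at a real cluster.
config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

# Records are addressed by a (namespace, set, user_key) tuple.
key = ("test", "profiles", "user-42")
client.put(key, {"name": "Ada", "visits": 7})   # write a record

_, meta, bins = client.get(key)                 # read it back
print(bins)                                     # {'name': 'Ada', 'visits': 7}

client.close()
```
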

Published Date : Nov 25 2022

SUMMARY :

Lisa Martin talks with Aerospike CEO Subbu Iyer at AWS re:Invent 2022 about why so much data goes underutilized, the four capabilities of a modern real-time data platform (unlimited scale, predictable performance, always-on global resilience, and low TCO), Aerospike's hybrid memory architecture, customer results at PayPal, Wayfair, Adobe, and Dream11, and the Aerospike benchmark on AWS Graviton showing improved price performance.


Anne Zaremba, AWS & Steven White, EdgeML | AWS re:Invent 2022


 

>>Welcome to theCUBE's AWS re:Invent coverage. I'm John Furrier, here with theCUBE. We've got a great guest lineup here talking about computer vision at the edge: Anne Zaremba, product lead for the AWS Events mobile app, and Steven White, solution architect for edge ML. Thanks for joining me today, computer vision at the edge with AWS Panorama. Thanks for coming on. >>Happy to be here. >>So what is AWS Panorama? Let's get that out there right away. What's the focus? Let's define what it is, and then we'll get into this computer vision at the edge story. >>Yeah, so thanks, John. AWS Panorama is our managed computer vision at the edge service. To put that in perspective, imagine with me the last time you've been in a restaurant, or maybe your favorite retail store, or even an office building, and didn't notice a camera. We were talking to customers and trying to understand what it is they do with all of this video content they're collecting, and surprisingly we found out that a large part of this data just sits on a hard drive somewhere and never gets used. As we dug in a little deeper to better understand why this data is just sitting there, there were three main themes that continued to come up across the board. One is around privacy, right, privacy and security: a lot of the data being captured with these cameras tends to be either intellectual property, focused on the manufacturing process or their products, that they don't want to get out there, or it could be private PII data related to their employee workforce and maybe even customers. So privacy is a big concern. Second was just the amount of bandwidth that cameras create and produce, which tends to be prohibitive to send back to a centralized location for processing. Each camera stream tends to generate about a couple of megabytes of data, so it can get very voluminous when you've got tons of cameras at your location. And the other issue was around the latency required to take action on the data. A lot of times, especially in the manufacturing space, you've got a manufacturing line of products coming through and you need to take action in milliseconds, so latency is extremely important, from processing time to taking action. So those were the three main drivers, and we ended up developing this AWS service called Panorama that addresses these three main challenges with analyzing video content. AWS Panorama in particular has two main components. There's the compute platform, which is about the size of a sheet of paper, your standard eight and a half by eleven sheet of paper, so the platform itself is extremely compact. It runs video and deep learning algorithms; it sits at the customer premise and directly interfaces with video cameras using standard IP protocols, collects that data, processes it, and then immediately deletes the data, so there isn't any information actually stored at the location, and basically the only thing left over is metadata that describes that data. And then the other key component is the cloud service component, which helps manage the fleet of devices. For all of these Panorama appliances sitting at your premise, there's a cloud component that helps you configure and operationalize them, check their health, and deploy applications and configure cameras. So the service is really focused on helping customers make use of all of their video data at the edge.
>>You know, the theme here at re:Invent this year is applications. We've seen things like Connect add value to customers. This is one of those situations where everyone's got cameras, it's easy to connect to an IP address, and the cloud kind of gives you all those services. There are a lot of real-world applications that people can implement with this, because with the cloud you have this ability to stand it up and get value out of that data. What are some of the real-world applications that AWS customers are implementing with the cameras? I can see a lot of use cases here where I don't have to build the cloud; it's there for me, I can stand it up and start getting value. What kind of use cases do you see from your customers? >>Yeah, our customers are really amazing with the different types of problems and opportunities they bring to us for using computer vision at the edge on their data. We've got everything from animal welfare use cases, using video to make sure that food processing and the health of the animals is sufficient; we've got cases in manufacturing doing visual inspection and anomaly detection, looking at products on the conveyor belt as they're being manufactured and put together to make sure they're put together in the right way; and we've got different port authorities and airports that use it for security and cargo tracking, to make sure products get where they're supposed to go in a timely and efficient manner. And then finally, one of the use cases we're really showcasing at re:Invent this year is part of our retail analytics portfolio, which is line counting. In particular, we see a lot of customers in the retail space, such as quick service restaurants, apparel retail, and convenience stores, that want to better understand whether their product is being made to the customer specification. We've got french fry use cases, to see what the quality of that french fry is over time and whether they need to make a new batch when they've got an influx of customers coming in, and use cases for understanding the employee-to-customer ratio, so maybe they need to put somebody on the cash register at a busy time. So there's really a big number of customer opportunities that we're solving with the computer vision service. >>Looks like a great service; Panorama is looking good. And I want to get your thoughts, Anne: you have the events app, you're the product lead. Take us through the app. I know you decided to use AWS Panorama and it was a fit for you this year at re:Invent 2022, but you've been doing this event app for a while now. Take us through the app: when it started, how it's evolved, and what's the focus this year. >>Of course. John, the app started in 2016 for re:Invent, and we've really expanded since: this year we've actually supported up to 34 events for AWS, and we'll continue to expand that in future years. For this year specifically, though, we wanted to contribute to the overall event experience at re:Invent by helping people go through the process of checking in and picking up their badge in a more informed and efficient way. So we decided that the AWS Panorama team and their computer vision and edge capabilities were the best fit to analyze the lines at the registration kiosks we have on site at both the Venetian and MGM and at the airport. We'll have digital signage showcasing our badge pickup wait times, which will help attendees select which badge pickup location they want to go to and see the current wait times live on those signs as well as through the mobile app.
>>So I can basically get a feel for the line size and when to come in. Does it give me a little recognition of who I am when I get there? Does it pull up my records? Is there a little intelligence behind the scenes? Give us a peek under the covers: what does the solution look like? >>You do have to sign into the mobile app with your registration, and with that we will have your QR code, specific to your check-in experience, available to you. You'll see it at the top of the screen, and once you've checked in it will disappear. But if you haven't checked in, that banner is at the top of the event screen, and when you tap it you can see all the different options for where you can go and pick up your badge. We have five locations this year for badge pickup, and the app will help you navigate which one of those options will be best for you, given that maybe you want to pick it up right away at the airport, or you may want to go to one of the other hotel options we'll have to pick it up at. >>Okay, now I gotta ask you about the app: what's the coolest thing you've got going on this year? What's new? Every year there seems to be a new feature. What's the focus this year? Can you share a peek at some of the key features? >>Yeah, our biggest and most popular features are always around the session catalog and calendar, as you can use both to organize your event schedule, stay on top of what you want to do on site, and get the most out of your re:Invent experience. This year we have a few new exciting features. Of course, badge pickup line counting is one of our biggest, but we also have a one-way calendar sync, so you can sync all of your calendar activities to your native device calendar, as well as PeerTalk, our newest feature that we launched at the start of November, where you can interact with other attendees who have opted in and even set up time on site to meet one-on-one with them. We've also filled that experience with PeerTalk experts, including AWS experts who are ready to meet and interact with attendees who have an interest on site. >>You know, I love this topic. It's very cool: video. We love video, we're doing this remote video, I'm getting ready for all the action and analyzing it. Video's cool, and who knows, we might even have body cams in the future. Video is great, people love video, it's very engaging, but there are always people who say, what about my privacy? So how do you guys put mechanisms in place to preserve attendee privacy? >>Yeah, I think you and our customers share the same concern, and so we have built AWS Panorama foundationally to address both the privacy and security concerns associated with all this video content. In particular, the AWS Panorama appliance sits at the customer premise, it interfaces directly with video cameras, and all the video that's processed is immediately deleted.
Nothing is stored, and the outcome of the processing is just simple metadata, text data. As an example, in the case of the AWS line counting solution we're demoing this year with Panorama, along with the events team, it's simply a count of the number of people in the video at any given time. So we do take privacy to heart and have made every effort to address those concerns. >>And what are some of the things that you're doing with the event app? I'm imagining you're probably looking at space; there are fire marshal issues around people. Do you take it to that level? How far are you pushing the envelope on Panorama? What are some of the things you guys are doing besides check-ins, or anything you can share on what's happening? >>That's the area where we're utilizing anonymous attendee data. Otherwise, other things in the app are very anonymous just in nature. You do sign in, but besides that, everything we collect is anonymous, and we don't collect unless you consent with the cookie consent that appears right when you first launch the app experience. Besides that, we do have, as I mentioned, PeerTalk, and that's just where you're sharing information you want to share with other attendees on site. And then we do have session surveys, where you can provide whatever information you wish about how the sessions you attended on site went. >>Yeah. Steven, your title has you as the solution architect for edge ML. This is the ultimate edge use case you're seeing here. It's a big part of the future of how companies are going to use video and data. What's your reaction to all this? We're at a very interesting time in the history of the industry, and this is a really big part of the future with video and edge. Like I mentioned, users are involved, people are involved, spaces are involved. Kind of a fun area. What's your reaction to where this is right now? >>So personally, I'm very passionate about this particular solution and service. I've been doing computer vision now for 12 years. I started doing it in the cloud, but when I heard about customers really looking for an edge component solution, and this at AWS was still in the early stages, I knew I had to be a part of it. I work with some amazing, talented engineers and scientists putting this solution together, and of course our customers continue to bring us these amazing use cases that I wouldn't get an opportunity to witness without their support. So we've got some amazing projects, and I just love to experience that with our customers and partners. >>Yeah, and Steven, this is like one of those times where the industry has always been scratching at this somewhere, but then cloud and scale and data come in and just accelerate some of these areas that were, I won't say not growing fast, but very interesting, like computer vision. Video and events technology in the cloud is changing some of these areas in a good way, and we're seeing that with computer vision, as you mentioned, Steven. So Anne, same thing with the event app: I can imagine this event app will blow up to probably be all things Amazon events and be the touchstone for all customers and attendees. I'm thinking the roadmap there is looking pretty
interesting, with all the vision you have there. What's your reaction to cloud scale meets events? >>Absolutely. We have a lot of events that happen at AWS, and our goal is to have as many of them in the app as possible, where it makes sense, right? We have everything from partial-day events to multi-day events, and the multi-day events are definitely the area where it's harder for an attendee to organize everything they have going on on site, as well as everything surrounding the event pre-event: topics and sessions, looking up what they want to do to make sure they're getting the most out of their time on site. So we really want to make sure that's something an attendee can do with our app, and also that it showcases as many AWS services as we can, like we are doing here with Panorama. We have a few other services in the app as well, Amazon Location Service and Amazon Connect to name a couple, and we hope to include more and more with each year, as well as more events as time goes on. >>I'm sure your roadmap's looking great. The computer vision is awesome. This is a mashup; integration APIs are going to come around the corner. So much excitement after re:Invent; I'd love to follow up with you guys and find out more. I think this is a super interesting area, the convergence of what you guys are working on. To kind of wrap up, where do you see AWS Panorama going, and where can people learn more about how to get involved, how to use the service, how to test it out? Where's this going, and how do people learn more? >>First off, customers can get more information about Panorama from our website, aws.amazon.com/panorama. And I think where we're going is super exciting. We continue to improve the product to add support for, as an example, containers. We've added support for hardware acceleration to improve the number of cameras we can support, so we can now support up to 30 to 40 cameras with a single device. We continue to expand the interface types that we support, and we're even adding sensors and expanding to sensor fusion, so not just computer vision: we've learned from customers that they actually want to incorporate other sensor types and other interfaces. So we're bringing in the ability to handle computer vision and video, but also many other data types as well. >>All right. Steven, thank you for sharing. Great stuff: computer vision at the edge with Panorama. Thanks for coming on theCUBE, appreciate it. >>Thanks for coming on. >>Thank you. >>Okay, AWS coverage here on theCUBE. I'm John Furrier, your host. Thanks for watching.
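
As a rough illustration of the "only metadata leaves the device" design Steven describes, here is a simplified Python sketch of a line-counting step. This is not the AWS Panorama application SDK; the detector and publish callables are hypothetical stand-ins used only to show the shape of the idea.

```python
import json
import time

def process_frame(frame, detect_people, publish):
    """detect_people: callable returning a person count for one frame (stand-in
    for a model running on the appliance). publish: callable that ships a small
    JSON record off the device, e.g. to a wait-time dashboard."""
    count = detect_people(frame)                 # inference stays local
    record = {"ts": time.time(), "people_in_line": count}
    publish(json.dumps(record))                  # only metadata is emitted
    # The raw frame is simply dropped here; no video is persisted.

# Toy usage with stand-ins:
process_frame(frame=b"...", detect_people=lambda f: 7, publish=print)
```
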

Published Date : Nov 23 2022


Mike Thompson & Ali Zafar | AWS re:Invent 2022


 

(intro upbeat music) >> Hello everyone and welcome to our continued coverage of AWS re:Invent here on theCUBE. My name is Savannah Peterson and I am very excited about the conversation coming up. Not only are we joined by two brilliant minds in the cloud, one of them happens to be a CUBE alumni. Please welcome Mike from AMD and Ali from Dropbox. Ali, welcome back to the show, how you been? >> Thanks Savannah. I'm doing great and really excited to be back on theCUBE. It was great discussion last time and really excited for both re:Invent and also to see how this video turns out. >> Hey, that makes two of us and probably three of us. How are you doing today, Mike? >> Doing great. It's really nice to be getting back to in-person events again and to be out solving problems with customers and partners like Dropbox. >> I know, isn't it? We've all missed each other. Was a lonely couple of years. Mike, I'm going to open it up with you. I'm sure a lot of people are curious. What's new at AMD? >> Well, there's a lot that's new at AMD, so I'll share a subset of what's new and what we've been working on. We've expanded our global coverage in Amazon EC2 with new regions and instance types. So users can deploy any application pretty much anywhere AWS has a presence. Our partner ecosystems for solutions and services has expanded quite a bit. We're currently focused on enabling partners and solutions that focus on cloud cost optimization, modernizing infrastructure, and pushing performance to the limit, especially for HPC. But the biggest buzz, of course, is around AMD's new fourth generation of our EPYC CPU Genoa. It's the world's fastest data center CPU with transformative energy efficiency and that's a really interesting combination, highest performance and most efficient. So on launch day, AWS announced their plans to roll out AMD EPYC Genoa processor-based EC2 instances. So we're pretty excited about that and that's what we'll be working on in the near term. >> Wow, that's a big deal and certainly not a casual announcement. Obviously, power and efficiency hot topics here at re:Invent but also looking at the greater impact on the planet is a big conversation we've been having here as well. So this is exciting and timely and congratulations to you and the team on all that seems to be going on. Ali, what's going on at Dropbox? >> Yeah, thanks Savannah. The Q3 2022 was actually a very strong quarter for Dropbox during a very difficult macroeconomic backdrop. Our focus has continued to be on innovation and this is around both new products and also driving multi-product adoption which is paying a lot of dividends for us, so essentially, bringing products like Dropbox Sign, DocSend, Capture, and other exciting products to our customers. On the infra side, it's all about how do we scale our infrastructure to meet the business needs, right? How do we keep up with the accelerated growth during the pandemic and also leveraging both AMD and AWS for investments in our public cloud? >> Let's talk about the cloud a bit. You are both cloud experts and I'm glad that you brought that up. We'll keep it there with Ali. When, why, and how should users leverage public cloud? >> Yeah, so Dropbox is hybrid cloud which means we are running applications both in private and public cloud and within a unique position to leverage the best of both worlds. And Savannah, this is a decision we continue to reevaluate on a regular basis. And there are really three key factors that come into play here. 
First is scale and scale, are we operating at a scale where customization is cost-efficient for us? Next is uniqueness. Is our workload unique compared to what the public cloud supports? And lastly, innovation. Do we have the expertise to innovate faster than public cloud or not? So based on these three key factors, we try and balance all of them and then come up with the best option for us at Dropbox. And kind of elaborating over here, things like international storage, we're leveraging public cloud, things like AI and ML, we're leveraging public cloud, but when we talk about Magic Pocket, which is our multi-exabyte storage system, that has the scale which is why we are doing that on our own private cloud. >> Wow, I think you just gave everybody a fantastic framework for thinking about their decision matrix there if nothing else. Mike, is there anything that you'd like to add to that? Anything that AMD considers when contemplating public cloud versus private? >> Yeah, so there's really three main drivers that I see when users consider when, why, and how should they leverage public cloud. Three main drivers: establishing a global footprint, accelerating product release cycles, and efficiently rightsizing infrastructure. So customers looking to establish a global footprint often turn to public cloud deployments to quickly reach their clients in workforces around the world, most importantly with minimal capital expense. I understand Dropbox uses public cloud to establish their global presence scaling out from their core data centers in North America. And then a lot of industries have tremendous pressure to accelerate product release cycles. With public cloud, organizations can immediately deploy new applications without a long site and hardware acquisition cycle and then the associated ongoing maintenance and operational overhead. And the third thing is customers that need to rightsize and dynamically scale their infrastructure and application deployments are drawn to public cloud, for example, customers that have cyclical compute or application load peaks can efficiently deploy in the cloud without overdeploying their on-prem infrastructure for most of the year which is off-peak during those off-peak times. That infrastructure idle time is a waste of resources and OPEX. So scalable rightsizing draws a lot of users to cloud deployment. >> Yeah, wow. I think there's a lot of factors to consider but also it seems like a pretty streamlined process for navigating that or at least you two both made it sound that way. Another hot topic in the space right now is security. Mike, let's start with you a little bit. What are the most important security issues for AMD right now that you can talk about? >> Yeah, sure. So, well, first of all, AWS provides a wide variety of really good security services to protect customers that are working in the cloud. Like from a processor technology perspective, there's three main security aspects to consider, two of which are common practice today and one of which AMD brings significant differentiation and value. The first two are protecting data at rest and data in transit. And these two are part of the prevalent security models of today where AMD provides distinct value and differentiation is in protecting data in use. 
So EPYC Milan and Genoa processors support a function called SEV-SNP, and this enables users and their applications to reside within their own cryptographic context and environment, with data integrity protection, to accomplish what's called comprehensive confidential computing. EPYC's confidential computing solution is hardware-based, so it's easy to leverage: there's no code rewrite required, unlike comparable solutions that are software-based, which require recoding to a proprietary SDK and come with a significant performance trade-off. So with EPYC processors, you can protect your data at rest, in transit, and most importantly, in use. >> Everybody needs to protect their data everywhere it is, so I love that. That's fantastic to hear, and I'm sure it gives your customers a lot of confidence. What about over at Dropbox? What security issues are you facing, Ali? >> Yeah, so the first company value at Dropbox is actually being worthy of trust, and what this really means from a security perspective is: how do we keep all of our users' content safe? And this means keeping everything, down to all of the infrastructure hardware, secure. So partnering with AMD, which is one of our strongest partners out there, the new security features that AMD has in the hardware are critical for us, and we are able to take advantage of some of these best security practices within our compute infrastructure by leveraging AMD's secure chip architecture. >> How important, you just touched on it a little bit, and I want to ask, how important are partnerships like the one you have with each other as you innovate at scale? Ali, you're nodding, I'm going to go to you first. >> Yeah, so like I mentioned, the partnership with AMD is one of the strongest that we have, and it goes beyond a regular partnership where it's just buy and sell. We talk about technology together, we talk about innovation together, we talk about partnership together, and for us, as I look at our hybrid cloud strategy, we would not be able to get the benefits in terms of efficiency, scale, reliability, or performance without having a strong partner like AMD. >> That's awesome. Mike, anything you want to add there? >> I'd reiterate some of what Ali had to say. One of my favorite parts about my job is getting together with partners and customers to figure out how to optimize their applications and deployments around the world to get the most efficient use of the cloud infrastructure for servers that are based on AMD technology. In many cases, we can find 10% or better performance or cost optimization by working closely with partners like Dropbox. And then in addition, if we keep in lockstep together to look at what's coming on the roadmap, by the time the latest and greatest technology is finally deployed, our customers and our partners are ready to take advantage of it. So that's the fun part of the job, and I really appreciate Dropbox's cooperation in optimizing their infrastructure and using AMD products. >> Well, what a synergistic relationship of mutual admiration and support. We love to hear it here in the tech world. Mike, last question for you. What's next for AMD? >> Well, heading into 2023, considering the current challenging macroeconomic environment and geopolitical instability, doing more with less will be top of mind for many CFOs and CEOs in 2023. And AMD can help accomplish that.
AMD EPYC processors' leadership performance and lower EC2 retail costs can help users reduce costs without impacting performance, or, the flip side of that, they can scale capacity without increasing costs. And because of EPYC's higher core counts and really high core density, applications can be deployed with fewer servers or smaller instances, which has both economic and environmental benefits, reducing usage costs as well as environmental impact. And that allows customers to optimize their application and infrastructure spend. And then the second thing that I've seen over the last couple of years, and I see this trajectory continuing, is that increased geographic distribution of our colleagues and workforces is here to stay; people work from everywhere. Modern cross-platform collaboration platforms that bring teams, tools, and content together have a really important role to play in enabling that new, more flexible style of working, and those tools need to be really agile and easy to use. I think Dropbox is really well positioned to enable this new style of working, and AMD's really happy to work closely with Dropbox to enable these modern work styles, both on premises, hybrid, and fully in the public cloud. >> Well, it sounds like a very exciting and, optimistically, bright future for you all at AMD. We love to hear that here at theCUBE. Ali, what about you? What is 2023 going to hold for Dropbox? >> Yeah, so I think we're going to continue on this journey of transformation where our focus is on new products and also multi-product adoption. And from a cloud perspective, how do we continue to evolve our hybrid cloud so that it remains a competitive advantage for our business and also for our customers? I think right now, Savannah, we're in a very unique position to utilize some of the best AMD technology that's out there, and that's both on premise and in the cloud. The AMD EPYC processors deliver the performance that we need for our hybrid cloud, and we want to continue to leverage them in the public cloud as well, which is the EC2 instances that are powered by AMD. So overall, Dropbox is looking forward to continuing to evaluate some of AMD's Genoa CPUs that are coming out, but also wants to continue to grow our EC2 footprint powered by AMD in the long run. >> Fantastic. Well, it sounds like this second showing here on theCUBE is just the tee-up for your third, and we'll definitely have to have Mike back on for a second time around to hear how things are going. Thank you both so much for taking the time today to join me here. Mike and Ali, it was fantastic getting to chat with you, and thank you to our audience for tuning into theCUBE's special coverage of AWS re:Invent. My name's Savannah Peterson, and I hope we can learn together soon. (outro upbeat music)
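Mike's point about protecting data in use rests on the SEV family of features in EPYC hardware. A rough way to see whether a Linux machine even reports those capabilities is to look at the CPU flags; the sketch below does only that first-pass check. The flag names and their availability vary by CPU generation, kernel version, and host versus guest, so treat them as assumptions rather than a definitive test that SEV-SNP is actually active.

```python
# Rough sketch: report which AMD memory-encryption flags (SEV / SEV-ES /
# SEV-SNP) the Linux kernel exposes in /proc/cpuinfo. A missing flag here
# does not prove the feature is absent, and a present flag does not prove
# it is enabled for a given VM; this is a first-pass probe only.
from pathlib import Path

SEV_FLAGS = ("sev", "sev_es", "sev_snp")  # assumed flag spellings on recent kernels

def cpu_flags():
    """Return the set of CPU feature flags from the first 'flags' line."""
    for line in Path("/proc/cpuinfo").read_text().splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    for name in SEV_FLAGS:
        print(f"{name:8s} {'present' if name in flags else 'not reported'}")
```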

Published Date : Nov 21 2022


The Truth About MySQL HeatWave


 

>>When Oracle acquired MySQL via the Sun acquisition, nobody really thought the company would put much effort into the platform, preferring to put all the wood behind its leading Oracle database — arrow pun intended. But two years ago, Oracle surprised many folks by announcing MySQL HeatWave, a new database as a service with a massively parallel, hybrid columnar, in-memory architecture that brings together transactional and analytic data in a single platform. Welcome to our latest database power panel on theCUBE. My name is Dave Vellante, and today we're gonna discuss Oracle's MySQL HeatWave with a who's who of cloud database industry analysts. Holger Mueller is with Constellation Research. Marc Staimer is the Dragon Slayer and a Wikibon contributor. And Ron Westfall is with Futurum Research. Gentlemen, welcome back to theCUBE. Always a pleasure to have you on. Thanks for having us. Great to be here. 
>>So we've had a number of deep-dive interviews on theCUBE with Nipun Agarwal. You guys know him? He's the senior vice president of MySQL HeatWave Development at Oracle. I think you just saw him at Oracle Cloud World, and he's come on to describe what I'll call shock-and-awe feature additions to HeatWave. The company's clearly putting R&D into the platform, and I think at Cloud World we saw the fifth major release since 2020, when they first announced MySQL HeatWave. So just listing a few: they've brought in analytics and machine learning, they've got Autopilot for machine learning, which is automation on top of the basic OLTP functionality of the database. And it's been interesting to watch Oracle's converged database strategy. We've contrasted that amongst ourselves. Love to get your thoughts on Amazon's get-the-right-tool-for-the-right-job approach. 
>>Are they gonna have to change that? Amazon's got the specialized databases, and both companies are doing well; it just shows there are a lot of ways to skin a cat, because you see traction in the market in both approaches. So today we're gonna focus on the latest HeatWave announcements, and we're gonna talk about multi-cloud, with a native MySQL HeatWave implementation that's available on AWS and MySQL HeatWave for Azure via the Oracle-Microsoft interconnect — this kind of cool hybrid action they've got going; sometimes we call it supercloud. And then we're gonna dive into MySQL HeatWave Lakehouse, which allows users to process and query data across MySQL databases and HeatWave databases, as well as object stores. HeatWave has been announced on AWS and Azure — they're available now — and Lakehouse, I believe, is in beta, and I think it's coming out in the second half of next year. So again, all of our guests are fresh off of Oracle Cloud World in Las Vegas, so they've got the latest scoop. Guys, I'm done talking, let's get into it. Mark, maybe you could start us off: what's your opinion of MySQL HeatWave's competitive position? When you think about what AWS is doing — Google, we heard Google Cloud Next recently, we heard about all their data innovations — you've got, obviously, Azure with a big portfolio, and Snowflake's doing well in the market. What's your take? 
>>Well, first let's look at it from the point of view that AWS is the market leader in cloud and cloud services. They own somewhere between 30% and 50% of the market, depending on who you read. And then you have Azure as number two, and after that it falls off.
There's gcp, Google Cloud platform, which is further way down the list and then Oracle and IBM and Alibaba. So when you look at AWS and you and Azure saying, hey, these are the market leaders in the cloud, then you start looking at it and saying, if I am going to provide a service that competes with the service they have, if I can make it available in their cloud, it means that I can be more competitive. And if I'm compelling and compelling means at least twice the performance or functionality or both at half the price, I should be able to gain market share. >>And that's what Oracle's done. They've taken a superior product in my SQL heat wave, which is faster, lower cost does more for a lot less at the end of the day and they make it available to the users of those clouds. You avoid this little thing called egress fees, you avoid the issue of having to migrate from one cloud to another and suddenly you have a very compelling offer. So I look at what Oracle's doing with MyQ and it feels like, I'm gonna use a word term, a flanking maneuver to their competition. They're offering a better service on their platforms. >>All right, so thank you for that. Holger, we've seen this sort of cadence, I sort of referenced it up front a little bit and they sat on MySQL for a decade, then all of a sudden we see this rush of announcements. Why did it take so long? And and more importantly is Oracle, are they developing the right features that cloud database customers are looking for in your view? >>Yeah, great question, but first of all, in your interview you said it's the edit analytics, right? Analytics is kind of like a marketing buzzword. Reports can be analytics, right? The interesting thing, which they did, the first thing they, they, they crossed the chasm between OTP and all up, right? In the same database, right? So major engineering feed very much what customers want and it's all about creating Bellevue for customers, which, which I think is the part why they go into the multi-cloud and why they add these capabilities. And they certainly with the AI capabilities, it's kind of like getting it into an autonomous field, self-driving field now with the lake cost capabilities and meeting customers where they are, like Mark has talked about the e risk costs in the cloud. So that that's a significant advantage, creating value for customers and that's what at the end of the day matters. >>And I believe strongly that long term it's gonna be ones who create better value for customers who will get more of their money From that perspective, why then take them so long? I think it's a great question. I think largely he mentioned the gentleman Nial, it's largely to who leads a product. I used to build products too, so maybe I'm a little fooling myself here, but that made the difference in my view, right? So since he's been charged, he's been building things faster than the rest of the competition, than my SQL space, which in hindsight we thought was a hot and smoking innovation phase. It kind of like was a little self complacent when it comes to the traditional borders of where, where people think, where things are separated between OTP and ola or as an example of adjacent support, right? Structured documents, whereas unstructured documents or databases and all of that has been collapsed and brought together for building a more powerful database for customers. 
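Holger's point about crossing the chasm between OLTP and OLAP in the same database is easiest to see from the client side: one MySQL connection serves both a transactional write and an analytic aggregate, with no ETL into a separate warehouse. The sketch below is illustrative only — the endpoint, credentials, and orders schema are placeholders — and it assumes a HeatWave-enabled MySQL instance, using the MySQL 8.0 session variable use_secondary_engine, which HeatWave relies on to route eligible queries to its in-memory analytics engine.

```python
# Minimal sketch: one connection, one schema, both transactional and analytic
# work. Host, credentials, and table are placeholder assumptions.
import mysql.connector

conn = mysql.connector.connect(
    host="heatwave.example.com",  # placeholder HeatWave-enabled endpoint
    user="app", password="***", database="shop",
)
cur = conn.cursor()

# Transactional path: a normal row-level write.
cur.execute("INSERT INTO orders (customer_id, total) VALUES (%s, %s)", (42, 19.99))
conn.commit()

# Analytic path: the same table, no ETL into a separate warehouse.
# use_secondary_engine lets HeatWave offload eligible queries; set OFF to compare.
cur.execute("SET SESSION use_secondary_engine = ON")
cur.execute("""
    SELECT customer_id, SUM(total) AS lifetime_value
    FROM orders
    GROUP BY customer_id
    ORDER BY lifetime_value DESC
    LIMIT 10
""")
for customer_id, lifetime_value in cur.fetchall():
    print(customer_id, lifetime_value)

cur.close()
conn.close()
```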
>>So, certainly, when Oracle talks about the competitors — I always say, if Oracle talks about you, it knows you're doing well — they talk a lot about AWS, talk a little bit about Snowflake, sort of Google, and they have partnerships with Azure. So I'm presuming that the response in MySQL HeatWave was really in response to what they were seeing from those big competitors. But then you had MariaDB coming out the day that Oracle acquired Sun, launching and going after the MySQL base. So I'm interested, and we'll talk about this later, in what you guys think AWS and Google and Azure and Snowflake will do and how they're gonna respond. But before I do that, Ron, I want to ask you: you can get pretty technical, and you've probably seen the benchmarks.
Plus there is the usual fear and uncertainty about moving from one platform to another. But I think, you know, the traction, the momentum is, is shifting an Oracle's favor. I think we saw that in the Q1 efforts, for example, where Oracle cloud grew 44% and that it generated, you know, 4.8 billion and revenue if I recall correctly. And so, so all these are demonstrating that's Oracle is making, I think many of the right moves, publishing these figures for anybody to look at from their own perspective is something that is, I think, good for the market and I think it's just gonna continue to pay dividends for Oracle down the horizon as you know, competition intens plots. So if I were in, >>Dave, can I, Dave, can I interject something and, and what Ron just said there? Yeah, please go ahead. A couple things here, one discounting, which is a common practice when you have a real threat, as Ron pointed out, isn't going to help much in this situation simply because you can't discount to the point where you improve your performance and the performance is a huge differentiator. You may be able to get your price down, but the problem that most of them have is they don't have an integrated product service. They don't have an integrated O L T P O L A P M L N data lake. Even if you cut out two of them, they don't have any of them integrated. They have multiple services that are required separate integration and that can't be overcome with discounting. And the, they, you have to pay for each one of these. And oh, by the way, as you grow, the discounts go away. So that's a, it's a minor important detail. >>So, so that's a TCO question mark, right? And I know you look at this a lot, if I had that kind of price performance advantage, I would be pounding tco, especially if I need two separate databases to do the job. That one can do, that's gonna be, the TCO numbers are gonna be off the chart or maybe down the chart, which you want. Have you looked at this and how does it compare with, you know, the big cloud guys, for example, >>I've looked at it in depth, in fact, I'm working on another TCO on this arena, but you can find it on Wiki bod in which I compared TCO for MySEQ Heat wave versus Aurora plus Redshift plus ML plus Blue. I've compared it against gcps services, Azure services, Snowflake with other services. And there's just no comparison. The, the TCO differences are huge. More importantly, thefor, the, the TCO per performance is huge. We're talking in some cases multiple orders of magnitude, but at least an order of magnitude difference. So discounting isn't gonna help you much at the end of the day, it's only going to lower your cost a little, but it doesn't improve the automation, it doesn't improve the performance, it doesn't improve the time to insight, it doesn't improve all those things that you want out of a database or multiple databases because you >>Can't discount yourself to a higher value proposition. >>So what about, I wonder ho if you could chime in on the developer angle. You, you followed that, that market. How do these innovations from heatwave, I think you used the term developer velocity. I've heard you used that before. Yeah, I mean, look, Oracle owns Java, okay, so it, it's, you know, most popular, you know, programming language in the world, blah, blah blah. But it does it have the, the minds and hearts of, of developers and does, where does heatwave fit into that equation? >>I think heatwave is gaining quickly mindshare on the developer side, right? 
It's not the traditional no sequel database which grew up, there's a traditional mistrust of oracles to developers to what was happening to open source when gets acquired. Like in the case of Oracle versus Java and where my sql, right? And, but we know it's not a good competitive strategy to, to bank on Oracle screwing up because it hasn't worked not on Java known my sequel, right? And for developers, it's, once you get to know a technology product and you can do more, it becomes kind of like a Swiss army knife and you can build more use case, you can build more powerful applications. That's super, super important because you don't have to get certified in multiple databases. You, you are fast at getting things done, you achieve fire, develop velocity, and the managers are happy because they don't have to license more things, send you to more trainings, have more risk of something not being delivered, right? >>So it's really the, we see the suite where this best of breed play happening here, which in general was happening before already with Oracle's flagship database. Whereas those Amazon as an example, right? And now the interesting thing is every step away Oracle was always a one database company that can be only one and they're now generally talking about heat web and that two database company with different market spaces, but same value proposition of integrating more things very, very quickly to have a universal database that I call, they call the converge database for all the needs of an enterprise to run certain application use cases. And that's what's attractive to developers. >>It's, it's ironic isn't it? I mean I, you know, the rumor was the TK Thomas Curian left Oracle cuz he wanted to put Oracle database on other clouds and other places. And maybe that was the rift. Maybe there was, I'm sure there was other things, but, but Oracle clearly is now trying to expand its Tam Ron with, with heatwave into aws, into Azure. How do you think Oracle's gonna do, you were at a cloud world, what was the sentiment from customers and the independent analyst? Is this just Oracle trying to screw with the competition, create a little diversion? Or is this, you know, serious business for Oracle? What do you think? >>No, I think it has lakes. I think it's definitely, again, attriting to Oracle's overall ability to differentiate not only my SQL heat wave, but its overall portfolio. And I think the fact that they do have the alliance with the Azure in place, that this is definitely demonstrating their commitment to meeting the multi-cloud needs of its customers as well as what we pointed to in terms of the fact that they're now offering, you know, MySQL capabilities within AWS natively and that it can now perform AWS's own offering. And I think this is all demonstrating that Oracle is, you know, not letting up, they're not resting on its laurels. That's clearly we are living in a multi-cloud world, so why not just make it more easy for customers to be able to use cloud databases according to their own specific, specific needs. And I think, you know, to holder's point, I think that definitely lines with being able to bring on more application developers to leverage these capabilities. 
>>I think one important announcement that's related to all this was the JSON relational duality capabilities where now it's a lot easier for application developers to use a language that they're very familiar with a JS O and not have to worry about going into relational databases to store their J S O N application coding. So this is, I think an example of the innovation that's enhancing the overall Oracle portfolio and certainly all the work with machine learning is definitely paying dividends as well. And as a result, I see Oracle continue to make these inroads that we pointed to. But I agree with Mark, you know, the short term discounting is just a stall tag. This is not denying the fact that Oracle is being able to not only deliver price performance differentiators that are dramatic, but also meeting a wide range of needs for customers out there that aren't just limited device performance consideration. >>Being able to support multi-cloud according to customer needs. Being able to reach out to the application developer community and address a very specific challenge that has plagued them for many years now. So bring it all together. Yeah, I see this as just enabling Oracles who ring true with customers. That the customers that were there were basically all of them, even though not all of them are going to be saying the same things, they're all basically saying positive feedback. And likewise, I think the analyst community is seeing this. It's always refreshing to be able to talk to customers directly and at Oracle cloud there was a litany of them and so this is just a difference maker as well as being able to talk to strategic partners. The nvidia, I think partnerships also testament to Oracle's ongoing ability to, you know, make the ecosystem more user friendly for the customers out there. >>Yeah, it's interesting when you get these all in one tools, you know, the Swiss Army knife, you expect that it's not able to be best of breed. That's the kind of surprising thing that I'm hearing about, about heatwave. I want to, I want to talk about Lake House because when I think of Lake House, I think data bricks, and to my knowledge data bricks hasn't been in the sites of Oracle yet. Maybe they're next, but, but Oracle claims that MySQL, heatwave, Lakehouse is a breakthrough in terms of capacity and performance. Mark, what are your thoughts on that? Can you double click on, on Lakehouse Oracle's claims for things like query performance and data loading? What does it mean for the market? Is Oracle really leading in, in the lake house competitive landscape? What are your thoughts? >>Well, but name in the game is what are the problems you're solving for the customer? More importantly, are those problems urgent or important? If they're urgent, customers wanna solve 'em. Now if they're important, they might get around to them. So you look at what they're doing with Lake House or previous to that machine learning or previous to that automation or previous to that O L A with O ltp and they're merging all this capability together. If you look at Snowflake or data bricks, they're tacking one problem. You look at MyQ heat wave, they're tacking multiple problems. So when you say, yeah, their queries are much better against the lake house in combination with other analytics in combination with O ltp and the fact that there are no ETLs. So you're getting all this done in real time. So it's, it's doing the query cross, cross everything in real time. 
>>You're solving multiple user and developer problems, you're increasing their ability to get insight faster, you're having shorter response times. So yeah, they really are solving urgent problems for customers. And by putting it where the customer lives, this is the brilliance of actually being multicloud. And I know I'm backing up here a second, but by making it work in AWS and Azure where people already live, where they already have applications, what they're saying is, we're bringing it to you. You don't have to come to us to get these, these benefits, this value overall, I think it's a brilliant strategy. I give Nip and Argo wallet a huge, huge kudos for what he's doing there. So yes, what they're doing with the lake house is going to put notice on data bricks and Snowflake and everyone else for that matter. Well >>Those are guys that whole ago you, you and I have talked about this. Those are, those are the guys that are doing sort of the best of breed. You know, they're really focused and they, you know, tend to do well at least out of the gate. Now you got Oracle's converged philosophy, obviously with Oracle database. We've seen that now it's kicking in gear with, with heatwave, you know, this whole thing of sweets versus best of breed. I mean the long term, you know, customers tend to migrate towards suite, but the new shiny toy tends to get the growth. How do you think this is gonna play out in cloud database? >>Well, it's the forever never ending story, right? And in software right suite, whereas best of breed and so far in the long run suites have always won, right? So, and sometimes they struggle again because the inherent problem of sweets is you build something larger, it has more complexity and that means your cycles to get everything working together to integrate the test that roll it out, certify whatever it is, takes you longer, right? And that's not the case. It's a fascinating part of what the effort around my SQL heat wave is that the team is out executing the previous best of breed data, bringing us something together. Now if they can maintain that pace, that's something to to, to be seen. But it, the strategy, like what Mark was saying, bring the software to the data is of course interesting and unique and totally an Oracle issue in the past, right? >>Yeah. But it had to be in your database on oci. And but at, that's an interesting part. The interesting thing on the Lake health side is, right, there's three key benefits of a lakehouse. The first one is better reporting analytics, bring more rich information together, like make the, the, the case for silicon angle, right? We want to see engagements for this video, we want to know what's happening. That's a mixed transactional video media use case, right? Typical Lakehouse use case. The next one is to build more rich applications, transactional applications which have video and these elements in there, which are the engaging one. And the third one, and that's where I'm a little critical and concerned, is it's really the base platform for artificial intelligence, right? To run deep learning to run things automatically because they have all the data in one place can create in one way. >>And that's where Oracle, I know that Ron talked about Invidia for a moment, but that's where Oracle doesn't have the strongest best story. Nonetheless, the two other main use cases of the lake house are very strong, very well only concern is four 50 terabyte sounds long. It's an arbitrary limitation. Yeah, sounds as big. 
So for the start, and it's the first word, they can make that bigger. You don't want your lake house to be limited and the terabyte sizes or any even petabyte size because you want to have the certainty. I can put everything in there that I think it might be relevant without knowing what questions to ask and query those questions. >>Yeah. And you know, in the early days of no schema on right, it just became a mess. But now technology has evolved to allow us to actually get more value out of that data. Data lake. Data swamp is, you know, not much more, more, more, more logical. But, and I want to get in, in a moment, I want to come back to how you think the competitors are gonna respond. Are they gonna have to sort of do a more of a converged approach? AWS in particular? But before I do, Ron, I want to ask you a question about autopilot because I heard Larry Ellison's keynote and he was talking about how, you know, most security issues are human errors with autonomy and autonomous database and things like autopilot. We take care of that. It's like autonomous vehicles, they're gonna be safer. And I went, well maybe, maybe someday. So Oracle really tries to emphasize this, that every time you see an announcement from Oracle, they talk about new, you know, autonomous capabilities. It, how legit is it? Do people care? What about, you know, what's new for heatwave Lakehouse? How much of a differentiator, Ron, do you really think autopilot is in this cloud database space? >>Yeah, I think it will definitely enhance the overall proposition. I don't think people are gonna buy, you know, lake house exclusively cause of autopilot capabilities, but when they look at the overall picture, I think it will be an added capability bonus to Oracle's benefit. And yeah, I think it's kind of one of these age old questions, how much do you automate and what is the bounce to strike? And I think we all understand with the automatic car, autonomous car analogy that there are limitations to being able to use that. However, I think it's a tool that basically every organization out there needs to at least have or at least evaluate because it goes to the point of it helps with ease of use, it helps make automation more balanced in terms of, you know, being able to test, all right, let's automate this process and see if it works well, then we can go on and switch on on autopilot for other processes. >>And then, you know, that allows, for example, the specialists to spend more time on business use cases versus, you know, manual maintenance of, of the cloud database and so forth. So I think that actually is a, a legitimate value proposition. I think it's just gonna be a case by case basis. Some organizations are gonna be more aggressive with putting automation throughout their processes throughout their organization. Others are gonna be more cautious. But it's gonna be, again, something that will help the overall Oracle proposition. And something that I think will be used with caution by many organizations, but other organizations are gonna like, hey, great, this is something that is really answering a real problem. And that is just easing the use of these databases, but also being able to better handle the automation capabilities and benefits that come with it without having, you know, a major screwup happened and the process of transitioning to more automated capabilities. >>Now, I didn't attend cloud world, it's just too many red eyes, you know, recently, so I passed. 
But one of the things I like to do at those events is talk to customers, you know, in the spirit of the truth, you know, they, you know, you'd have the hallway, you know, track and to talk to customers and they say, Hey, you know, here's the good, the bad and the ugly. So did you guys, did you talk to any customers my SQL Heatwave customers at, at cloud world? And and what did you learn? I don't know, Mark, did you, did you have any luck and, and having some, some private conversations? >>Yeah, I had quite a few private conversations. The one thing before I get to that, I want disagree with one point Ron made, I do believe there are customers out there buying the heat wave service, the MySEQ heat wave server service because of autopilot. Because autopilot is really revolutionary in many ways in the sense for the MySEQ developer in that it, it auto provisions, it auto parallel loads, IT auto data places it auto shape predictions. It can tell you what machine learning models are going to tell you, gonna give you your best results. And, and candidly, I've yet to meet a DBA who didn't wanna give up pedantic tasks that are pain in the kahoo, which they'd rather not do and if it's long as it was done right for them. So yes, I do think people are buying it because of autopilot and that's based on some of the conversations I had with customers at Oracle Cloud World. >>In fact, it was like, yeah, that's great, yeah, we get fantastic performance, but this really makes my life easier and I've yet to meet a DBA who didn't want to make their life easier. And it does. So yeah, I've talked to a few of them. They were excited. I asked them if they ran into any bugs, were there any difficulties in moving to it? And the answer was no. In both cases, it's interesting to note, my sequel is the most popular database on the planet. Well, some will argue that it's neck and neck with SQL Server, but if you add in Mariah DB and ProCon db, which are forks of MySQL, then yeah, by far and away it's the most popular. And as a result of that, everybody for the most part has typically a my sequel database somewhere in their organization. So this is a brilliant situation for anybody going after MyQ, but especially for heat wave. And the customers I talk to love it. I didn't find anybody complaining about it. And >>What about the migration? We talked about TCO earlier. Did your t does your TCO analysis include the migration cost or do you kind of conveniently leave that out or what? >>Well, when you look at migration costs, there are different kinds of migration costs. By the way, the worst job in the data center is the data migration manager. Forget it, no other job is as bad as that one. You get no attaboys for doing it. Right? And then when you screw up, oh boy. So in real terms, anything that can limit data migration is a good thing. And when you look at Data Lake, that limits data migration. So if you're already a MySEQ user, this is a pure MySQL as far as you're concerned. It's just a, a simple transition from one to the other. You may wanna make sure nothing broke and every you, all your tables are correct and your schema's, okay, but it's all the same. So it's a simple migration. So it's pretty much a non-event, right? When you migrate data from an O LTP to an O L A P, that's an ETL and that's gonna take time. >>But you don't have to do that with my SQL heat wave. 
So that's gone when you start talking about machine learning, again, you may have an etl, you may not, depending on the circumstances, but again, with my SQL heat wave, you don't, and you don't have duplicate storage, you don't have to copy it from one storage container to another to be able to be used in a different database, which by the way, ultimately adds much more cost than just the other service. So yeah, I looked at the migration and again, the users I talked to said it was a non-event. It was literally moving from one physical machine to another. If they had a new version of MySEQ running on something else and just wanted to migrate it over or just hook it up or just connect it to the data, it worked just fine. >>Okay, so every day it sounds like you guys feel, and we've certainly heard this, my colleague David Foyer, the semi-retired David Foyer was always very high on heatwave. So I think you knows got some real legitimacy here coming from a standing start, but I wanna talk about the competition, how they're likely to respond. I mean, if your AWS and you got heatwave is now in your cloud, so there's some good aspects of that. The database guys might not like that, but the infrastructure guys probably love it. Hey, more ways to sell, you know, EC two and graviton, but you're gonna, the database guys in AWS are gonna respond. They're gonna say, Hey, we got Redshift, we got aqua. What's your thoughts on, on not only how that's gonna resonate with customers, but I'm interested in what you guys think will a, I never say never about aws, you know, and are they gonna try to build, in your view a converged Oola and o LTP database? You know, Snowflake is taking an ecosystem approach. They've added in transactional capabilities to the portfolio so they're not standing still. What do you guys see in the competitive landscape in that regard going forward? Maybe Holger, you could start us off and anybody else who wants to can chime in, >>Happy to, you mentioned Snowflake last, we'll start there. I think Snowflake is imitating that strategy, right? That building out original data warehouse and the clouds tasking project to really proposition to have other data available there because AI is relevant for everybody. Ultimately people keep data in the cloud for ultimately running ai. So you see the same suite kind of like level strategy, it's gonna be a little harder because of the original positioning. How much would people know that you're doing other stuff? And I just, as a former developer manager of developers, I just don't see the speed at the moment happening at Snowflake to become really competitive to Oracle. On the flip side, putting my Oracle hat on for a moment back to you, Mark and Iran, right? What could Oracle still add? Because the, the big big things, right? The traditional chasms in the database world, they have built everything, right? >>So I, I really scratched my hat and gave Nipon a hard time at Cloud world say like, what could you be building? Destiny was very conservative. Let's get the Lakehouse thing done, it's gonna spring next year, right? And the AWS is really hard because AWS value proposition is these small innovation teams, right? That they build two pizza teams, which can be fit by two pizzas, not large teams, right? And you need suites to large teams to build these suites with lots of functionalities to make sure they work together. 
They're consistent, they have the same UX on the administration side, they can consume the same way, they have the same API registry, can't even stop going where the synergy comes to play over suite. So, so it's gonna be really, really hard for them to change that. But AWS super pragmatic. They're always by themselves that they'll listen to customers if they learn from customers suite as a proposition. I would not be surprised if AWS trying to bring things closer together, being morely together. >>Yeah. Well how about, can we talk about multicloud if, if, again, Oracle is very on on Oracle as you said before, but let's look forward, you know, half a year or a year. What do you think about Oracle's moves in, in multicloud in terms of what kind of penetration they're gonna have in the marketplace? You saw a lot of presentations at at cloud world, you know, we've looked pretty closely at the, the Microsoft Azure deal. I think that's really interesting. I've, I've called it a little bit of early days of a super cloud. What impact do you think this is gonna have on, on the marketplace? But, but both. And think about it within Oracle's customer base, I have no doubt they'll do great there. But what about beyond its existing install base? What do you guys think? >>Ryan, do you wanna jump on that? Go ahead. Go ahead Ryan. No, no, no, >>That's an excellent point. I think it aligns with what we've been talking about in terms of Lakehouse. I think Lake House will enable Oracle to pull more customers, more bicycle customers onto the Oracle platforms. And I think we're seeing all the signs pointing toward Oracle being able to make more inroads into the overall market. And that includes garnishing customers from the leaders in, in other words, because they are, you know, coming in as a innovator, a an alternative to, you know, the AWS proposition, the Google cloud proposition that they have less to lose and there's a result they can really drive the multi-cloud messaging to resonate with not only their existing customers, but also to be able to, to that question, Dave's posing actually garnish customers onto their platform. And, and that includes naturally my sequel but also OCI and so forth. So that's how I'm seeing this playing out. I think, you know, again, Oracle's reporting is indicating that, and I think what we saw, Oracle Cloud world is definitely validating the idea that Oracle can make more waves in the overall market in this regard. >>You know, I, I've floated this idea of Super cloud, it's kind of tongue in cheek, but, but there, I think there is some merit to it in terms of building on top of hyperscale infrastructure and abstracting some of the, that complexity. And one of the things that I'm most interested in is industry clouds and an Oracle acquisition of Cerner. I was struck by Larry Ellison's keynote, it was like, I don't know, an hour and a half and an hour and 15 minutes was focused on healthcare transformation. Well, >>So vertical, >>Right? And so, yeah, so you got Oracle's, you know, got some industry chops and you, and then you think about what they're building with, with not only oci, but then you got, you know, MyQ, you can now run in dedicated regions. You got ADB on on Exadata cloud to customer, you can put that OnPrem in in your data center and you look at what the other hyperscalers are, are doing. I I say other hyperscalers, I've always said Oracle's not really a hyperscaler, but they got a cloud so they're in the game. 
But you can't get, you know, big query OnPrem, you look at outposts, it's very limited in terms of, you know, the database support and again, that that will will evolve. But now you got Oracle's got, they announced Alloy, we can white label their cloud. So I'm interested in what you guys think about these moves, especially the industry cloud. We see, you know, Walmart is doing sort of their own cloud. You got Goldman Sachs doing a cloud. Do you, you guys, what do you think about that and what role does Oracle play? Any thoughts? >>Yeah, let me lemme jump on that for a moment. Now, especially with the MyQ, by making that available in multiple clouds, what they're doing is this follows the philosophy they've had the past with doing cloud, a customer taking the application and the data and putting it where the customer lives. If it's on premise, it's on premise. If it's in the cloud, it's in the cloud. By making the mice equal heat wave, essentially a plug compatible with any other mice equal as far as your, your database is concern and then giving you that integration with O L A P and ML and Data Lake and everything else, then what you've got is a compelling offering. You're making it easier for the customer to use. So I look the difference between MyQ and the Oracle database, MyQ is going to capture market more market share for them. >>You're not gonna find a lot of new users for the Oracle debate database. Yeah, there are always gonna be new users, don't get me wrong, but it's not gonna be a huge growth. Whereas my SQL heatwave is probably gonna be a major growth engine for Oracle going forward. Not just in their own cloud, but in AWS and in Azure and on premise over time that eventually it'll get there. It's not there now, but it will, they're doing the right thing on that basis. They're taking the services and when you talk about multicloud and making them available where the customer wants them, not forcing them to go where you want them, if that makes sense. And as far as where they're going in the future, I think they're gonna take a page outta what they've done with the Oracle database. They'll add things like JSON and XML and time series and spatial over time they'll make it a, a complete converged database like they did with the Oracle database. The difference being Oracle database will scale bigger and will have more transactions and be somewhat faster. And my SQL will be, for anyone who's not on the Oracle database, they're, they're not stupid, that's for sure. >>They've done Jason already. Right. But I give you that they could add graph and time series, right. Since eat with, Right, Right. Yeah, that's something absolutely right. That's, that's >>A sort of a logical move, right? >>Right. But that's, that's some kid ourselves, right? I mean has worked in Oracle's favor, right? 10 x 20 x, the amount of r and d, which is in the MyQ space, has been poured at trying to snatch workloads away from Oracle by starting with IBM 30 years ago, 20 years ago, Microsoft and, and, and, and didn't work, right? Database applications are extremely sticky when they run, you don't want to touch SIM and grow them, right? So that doesn't mean that heat phase is not an attractive offering, but it will be net new things, right? 
And what also works in MySQL HeatWave's favor a little bit is that it's not the massive enterprise applications, where you might only be running 30% on Oracle but the connections and the interfaces into it are like 70, 80% of your enterprise. If you take that out, it's the spaghetti ball, and you say, ah, no, I really don't want to do all that, right? You don't have that problem with the MySQL HeatWave kind of databases, which are smaller and more tactical in comparison. But still, I don't see them taking that much share. They will be growing, and quickly, because of an attractive value proposition on the multicloud side, right? Though I think it's not really multicloud. If you give people the chance to run your offering on different clouds, fine, you can run it there. The multicloud advantage comes when the uber offering arrives, the one that allows you to do things across those installations, right? I can migrate data, I can query data across clouds, something like Google has done with BigQuery Omni; I can run predictive models, or even train models in different places and distribute them, right? And Oracle is paving the road for that by being available on these clouds. But the multicloud capability of a database that knows it is running on different clouds, that is still yet to be built.

>> Yeah.

>> That's the supercloud concept that I floated, and I've always said Snowflake, with a single global instance, is sort of headed in that direction and maybe has a lead. What's the issue with that, Mark?

>> Yeah, the problem with that version of multicloud is that clouds charge egress fees. As long as they charge egress fees to move data between clouds, it's going to be very difficult to do a real multicloud implementation. Even Snowflake, which runs multicloud, has to pass the egress fees on to their customers when data moves between clouds, and that's really expensive. There is one customer I talked to who is beta testing MySQL HeatWave on AWS for them. The only reason they didn't want to do it until it was running on AWS is that the egress fees to move the data to OCI were so great that they couldn't afford it. Egress fees are the big issue.

>> But Mark, the point might be that you want to run the query remotely and only get the result set back, which is much, much smaller; that has been the answer before for the low-latency-between-clouds problem, which we sometimes still have but mostly don't have, right? And I think in general, with egress fees coming down, and with Oracle's general move on egress fees, it's very hard to justify those, right? But it's not about moving data as the multicloud high-value use case. It's about doing intelligent things with that data, right? Putting it into other places, replicating it, and, saying the same thing you said before, running remote queries on it, analyzing it, running AI models on it. That's the interesting thing. Administering it across clouds in the same way, taking things out, making sure compliance happens, making sure that when Ron says, I don't want to be in the American cloud anymore, I want to be in the European cloud, it gets migrated, right? Those are the interesting high-value use cases, which are really, really hard for enterprises to program by hand with developers, and which they would love to have out of the box. That innovation is yet to come; we have yet to see it.
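The remote-query idea above, running the aggregation where the data lives and shipping back only the result set, is also what blunts the egress-fee problem Mark raises, because result sets are usually orders of magnitude smaller than raw tables. Below is a minimal, hypothetical sketch of that pattern; the endpoints, credentials, data sizes, and per-gigabyte egress rate are illustrative assumptions, not vendor figures or numbers from the panel.

```python
# Minimal sketch: query two MySQL endpoints in different clouds, pull back only the
# aggregated rows, and combine them locally, instead of copying raw tables across clouds.
import mysql.connector

QUERY = "SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region"

# Placeholder endpoints for the same schema deployed in two clouds.
ENDPOINTS = {
    "oci": {"host": "mysql.oci.example.com", "user": "ro_user", "password": "ro_password"},
    "aws": {"host": "mysql.aws.example.com", "user": "ro_user", "password": "ro_password"},
}

def run_remote(cfg):
    # Execute the query in the remote cloud and return only the small result set.
    conn = mysql.connector.connect(database="sales", **cfg)
    try:
        cur = conn.cursor()
        cur.execute(QUERY)
        return cur.fetchall()
    finally:
        conn.close()

# Combine the small per-cloud result sets locally.
combined = {}
for cloud, cfg in ENDPOINTS.items():
    for region, revenue in run_remote(cfg):
        combined[region] = combined.get(region, 0) + revenue
print(sorted(combined.items(), key=lambda kv: -kv[1]))

# Back-of-the-envelope egress comparison with assumed numbers (not vendor pricing):
raw_table_gb = 500        # size of the raw table you would otherwise copy
result_set_gb = 0.001     # size of the aggregated result set
egress_per_gb = 0.09      # assumed per-GB egress rate in USD
print("copy raw data:   $%.2f" % (raw_table_gb * egress_per_gb))
print("ship result set: $%.5f" % (result_set_gb * egress_per_gb))
```

This is the distinction Holger is drawing: being installable on several clouds is table stakes, while a database layer that plans and runs work across them, in the BigQuery Omni style, is the part that is still being built.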
But the first step to get there is that your software runs in multiple clouds, and that's what Oracle is doing so well with MySQL.

>> Guys, amazing. An amazing amount of data knowledge and brain power in this market. I really want to thank you for coming on theCUBE. Ron, Holger, Mark, always a pleasure to have you on. Really appreciate your time.

>> Thanks, Dave, for moderating us.

>> All right, we'll see you guys around. Safe travels to all, and thank you for watching this power panel, The Truth About MySQL HeatWave, on theCUBE, your leader in enterprise and emerging tech coverage.

Published Date : Nov 1 2022

